piqp-octave/0000755000175100001770000000000014635516624012667 5ustar runnerdockerpiqp-octave/src/0000755000175100001770000000000014635516624013456 5ustar runnerdockerpiqp-octave/src/eigen/0000755000175100001770000000000014107270226014532 5ustar runnerdockerpiqp-octave/src/eigen/COPYING.MPL20000644000175100001770000004052614107270226016305 0ustar runnerdockerMozilla Public License Version 2.0 ================================== 1. Definitions -------------- 1.1. "Contributor" means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software. 1.2. "Contributor Version" means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor's Contribution. 1.3. "Contribution" means Covered Software of a particular Contributor. 1.4. "Covered Software" means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof. 1.5. "Incompatible With Secondary Licenses" means (a) that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or (b) that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License. 1.6. "Executable Form" means any form of the work other than Source Code Form. 1.7. "Larger Work" means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software. 1.8. "License" means this document. 1.9. "Licensable" means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License. 1.10. "Modifications" means any of the following: (a) any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or (b) any new file in Source Code Form that contains any Covered Software. 1.11. "Patent Claims" of a Contributor means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version. 1.12. "Secondary License" means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses. 1.13. "Source Code Form" means the form of the work preferred for making modifications. 1.14. "You" (or "Your") means an individual or a legal entity exercising rights under this License. For legal entities, "You" includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, "control" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. 2. License Grants and Conditions -------------------------------- 2.1. 
Grants Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license: (a) under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and (b) under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version. 2.2. Effective Date The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution. 2.3. Limitations on Grant Scope The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor: (a) for any code that a Contributor has removed from Covered Software; or (b) for infringements caused by: (i) Your and any other third party's modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or (c) under Patent Claims infringed by Covered Software in the absence of its Contributions. This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4). 2.4. Subsequent Licenses No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3). 2.5. Representation Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License. 2.6. Fair Use This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents. 2.7. Conditions Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. 3. Responsibilities ------------------- 3.1. Distribution of Source Form All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients' rights in the Source Code Form. 3.2. 
Distribution of Executable Form If You distribute Covered Software in Executable Form then: (a) such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and (b) You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients' rights in the Source Code Form under this License. 3.3. Distribution of a Larger Work You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s). 3.4. Notices You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies. 3.5. Application of Additional Terms You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction. 4. Inability to Comply Due to Statute or Regulation --------------------------------------------------- If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. 5. Termination -------------- 5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. 
However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice. 5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. 5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination. ************************************************************************ * * * 6. Disclaimer of Warranty * * ------------------------- * * * * Covered Software is provided under this License on an "as is" * * basis, without warranty of any kind, either expressed, implied, or * * statutory, including, without limitation, warranties that the * * Covered Software is free of defects, merchantable, fit for a * * particular purpose or non-infringing. The entire risk as to the * * quality and performance of the Covered Software is with You. * * Should any Covered Software prove defective in any respect, You * * (not any Contributor) assume the cost of any necessary servicing, * * repair, or correction. This disclaimer of warranty constitutes an * * essential part of this License. No use of any Covered Software is * * authorized under this License except under this disclaimer. * * * ************************************************************************ ************************************************************************ * * * 7. Limitation of Liability * * -------------------------- * * * * Under no circumstances and under no legal theory, whether tort * * (including negligence), contract, or otherwise, shall any * * Contributor, or anyone who distributes Covered Software as * * permitted above, be liable to You for any direct, indirect, * * special, incidental, or consequential damages of any character * * including, without limitation, damages for lost profits, loss of * * goodwill, work stoppage, computer failure or malfunction, or any * * and all other commercial damages or losses, even if such party * * shall have been informed of the possibility of such damages. This * * limitation of liability shall not apply to liability for death or * * personal injury resulting from such party's negligence to the * * extent applicable law prohibits such limitation. Some * * jurisdictions do not allow the exclusion or limitation of * * incidental or consequential damages, so this exclusion and * * limitation may not apply to You. * * * ************************************************************************ 8. 
Litigation ------------- Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party's ability to bring cross-claims or counter-claims. 9. Miscellaneous ---------------- This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor. 10. Versions of the License --------------------------- 10.1. New Versions Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number. 10.2. Effect of New Versions You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward. 10.3. Modified Versions If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License). 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached. Exhibit A - Source Code Form License Notice ------------------------------------------- This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. You may add additional accurate notices of copyright ownership. Exhibit B - "Incompatible With Secondary Licenses" Notice --------------------------------------------------------- This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public License, v. 2.0. piqp-octave/src/eigen/lapack/0000755000175100001770000000000014107270226015765 5ustar runnerdockerpiqp-octave/src/eigen/lapack/sladiv.f0000644000175100001770000000552114107270226017421 0ustar runnerdocker*> \brief \b SLADIV * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download SLADIV + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE SLADIV( A, B, C, D, P, Q ) * * .. Scalar Arguments .. * REAL A, B, C, D, P, Q * .. 
* * *> \par Purpose: * ============= *> *> \verbatim *> *> SLADIV performs complex division in real arithmetic *> *> a + i*b *> p + i*q = --------- *> c + i*d *> *> The algorithm is due to Robert L. Smith and can be found *> in D. Knuth, The art of Computer Programming, Vol.2, p.195 *> \endverbatim * * Arguments: * ========== * *> \param[in] A *> \verbatim *> A is REAL *> \endverbatim *> *> \param[in] B *> \verbatim *> B is REAL *> \endverbatim *> *> \param[in] C *> \verbatim *> C is REAL *> \endverbatim *> *> \param[in] D *> \verbatim *> D is REAL *> The scalars a, b, c, and d in the above expression. *> \endverbatim *> *> \param[out] P *> \verbatim *> P is REAL *> \endverbatim *> *> \param[out] Q *> \verbatim *> Q is REAL *> The scalars p and q in the above expression. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup auxOTHERauxiliary * * ===================================================================== SUBROUTINE SLADIV( A, B, C, D, P, Q ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. REAL A, B, C, D, P, Q * .. * * ===================================================================== * * .. Local Scalars .. REAL E, F * .. * .. Intrinsic Functions .. INTRINSIC ABS * .. * .. Executable Statements .. * IF( ABS( D ).LT.ABS( C ) ) THEN E = D / C F = C + D*E P = ( A+B*E ) / F Q = ( B-A*E ) / F ELSE E = C / D F = D + C*E P = ( B+A*E ) / F Q = ( -A+B*E ) / F END IF * RETURN * * End of SLADIV * END piqp-octave/src/eigen/lapack/lu.cpp0000644000175100001770000000513714107270226017117 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2010-2011 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
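The Smith update documented for SLADIV above translates almost line for line into C++. The sketch below is a standalone illustration, not part of the package (the helper name smith_div is invented): it computes p + i*q = (a + i*b)/(c + i*d) in real arithmetic with the same branch on whether |d| < |c|.

#include <cmath>
#include <cstdio>

// Smith's algorithm, as described in the SLADIV comments above: pick the
// branch that keeps the intermediate quotient e well scaled.
void smith_div(double a, double b, double c, double d, double& p, double& q) {
  if (std::fabs(d) < std::fabs(c)) {
    double e = d / c, f = c + d * e;
    p = (a + b * e) / f;
    q = (b - a * e) / f;
  } else {
    double e = c / d, f = d + c * e;
    p = (b + a * e) / f;
    q = (-a + b * e) / f;
  }
}

int main() {
  double p, q;
  smith_div(1.0, 2.0, 3.0, 4.0, p, q);  // (1+2i)/(3+4i) = 0.44 + 0.08i
  std::printf("p = %g, q = %g\n", p, q);
  return 0;
}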
#include "common.h" #include // computes an LU factorization of a general M-by-N matrix A using partial pivoting with row interchanges EIGEN_LAPACK_FUNC(getrf,(int *m, int *n, RealScalar *pa, int *lda, int *ipiv, int *info)) { *info = 0; if(*m<0) *info = -1; else if(*n<0) *info = -2; else if(*lda(pa); int nb_transpositions; int ret = int(Eigen::internal::partial_lu_impl ::blocked_lu(*m, *n, a, *lda, ipiv, nb_transpositions)); for(int i=0; i=0) *info = ret+1; return 0; } //GETRS solves a system of linear equations // A * X = B or A' * X = B // with a general N-by-N matrix A using the LU factorization computed by GETRF EIGEN_LAPACK_FUNC(getrs,(char *trans, int *n, int *nrhs, RealScalar *pa, int *lda, int *ipiv, RealScalar *pb, int *ldb, int *info)) { *info = 0; if(OP(*trans)==INVALID) *info = -1; else if(*n<0) *info = -2; else if(*nrhs<0) *info = -3; else if(*lda(pa); Scalar* b = reinterpret_cast(pb); MatrixType lu(a,*n,*n,*lda); MatrixType B(b,*n,*nrhs,*ldb); for(int i=0; i<*n; ++i) ipiv[i]--; if(OP(*trans)==NOTR) { B = PivotsType(ipiv,*n) * B; lu.triangularView().solveInPlace(B); lu.triangularView().solveInPlace(B); } else if(OP(*trans)==TR) { lu.triangularView().transpose().solveInPlace(B); lu.triangularView().transpose().solveInPlace(B); B = PivotsType(ipiv,*n).transpose() * B; } else if(OP(*trans)==ADJ) { lu.triangularView().adjoint().solveInPlace(B); lu.triangularView().adjoint().solveInPlace(B); B = PivotsType(ipiv,*n).transpose() * B; } for(int i=0; i<*n; ++i) ipiv[i]++; return 0; } piqp-octave/src/eigen/lapack/dlarfb.f0000644000175100001770000005433514107270226017400 0ustar runnerdocker*> \brief \b DLARFB * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download DLARFB + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE DLARFB( SIDE, TRANS, DIRECT, STOREV, M, N, K, V, LDV, * T, LDT, C, LDC, WORK, LDWORK ) * * .. Scalar Arguments .. * CHARACTER DIRECT, SIDE, STOREV, TRANS * INTEGER K, LDC, LDT, LDV, LDWORK, M, N * .. * .. Array Arguments .. * DOUBLE PRECISION C( LDC, * ), T( LDT, * ), V( LDV, * ), * $ WORK( LDWORK, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> DLARFB applies a real block reflector H or its transpose H**T to a *> real m by n matrix C, from either the left or the right. *> \endverbatim * * Arguments: * ========== * *> \param[in] SIDE *> \verbatim *> SIDE is CHARACTER*1 *> = 'L': apply H or H**T from the Left *> = 'R': apply H or H**T from the Right *> \endverbatim *> *> \param[in] TRANS *> \verbatim *> TRANS is CHARACTER*1 *> = 'N': apply H (No transpose) *> = 'T': apply H**T (Transpose) *> \endverbatim *> *> \param[in] DIRECT *> \verbatim *> DIRECT is CHARACTER*1 *> Indicates how H is formed from a product of elementary *> reflectors *> = 'F': H = H(1) H(2) . . . H(k) (Forward) *> = 'B': H = H(k) . . . H(2) H(1) (Backward) *> \endverbatim *> *> \param[in] STOREV *> \verbatim *> STOREV is CHARACTER*1 *> Indicates how the vectors which define the elementary *> reflectors are stored: *> = 'C': Columnwise *> = 'R': Rowwise *> \endverbatim *> *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix C. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix C. 
*> \endverbatim *> *> \param[in] K *> \verbatim *> K is INTEGER *> The order of the matrix T (= the number of elementary *> reflectors whose product defines the block reflector). *> \endverbatim *> *> \param[in] V *> \verbatim *> V is DOUBLE PRECISION array, dimension *> (LDV,K) if STOREV = 'C' *> (LDV,M) if STOREV = 'R' and SIDE = 'L' *> (LDV,N) if STOREV = 'R' and SIDE = 'R' *> The matrix V. See Further Details. *> \endverbatim *> *> \param[in] LDV *> \verbatim *> LDV is INTEGER *> The leading dimension of the array V. *> If STOREV = 'C' and SIDE = 'L', LDV >= max(1,M); *> if STOREV = 'C' and SIDE = 'R', LDV >= max(1,N); *> if STOREV = 'R', LDV >= K. *> \endverbatim *> *> \param[in] T *> \verbatim *> T is DOUBLE PRECISION array, dimension (LDT,K) *> The triangular k by k matrix T in the representation of the *> block reflector. *> \endverbatim *> *> \param[in] LDT *> \verbatim *> LDT is INTEGER *> The leading dimension of the array T. LDT >= K. *> \endverbatim *> *> \param[in,out] C *> \verbatim *> C is DOUBLE PRECISION array, dimension (LDC,N) *> On entry, the m by n matrix C. *> On exit, C is overwritten by H*C or H**T*C or C*H or C*H**T. *> \endverbatim *> *> \param[in] LDC *> \verbatim *> LDC is INTEGER *> The leading dimension of the array C. LDC >= max(1,M). *> \endverbatim *> *> \param[out] WORK *> \verbatim *> WORK is DOUBLE PRECISION array, dimension (LDWORK,K) *> \endverbatim *> *> \param[in] LDWORK *> \verbatim *> LDWORK is INTEGER *> The leading dimension of the array WORK. *> If SIDE = 'L', LDWORK >= max(1,N); *> if SIDE = 'R', LDWORK >= max(1,M). *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup doubleOTHERauxiliary * *> \par Further Details: * ===================== *> *> \verbatim *> *> The shape of the matrix V and the storage of the vectors which define *> the H(i) is best illustrated by the following example with n = 5 and *> k = 3. The elements equal to 1 are not stored; the corresponding *> array elements are modified but restored on exit. The rest of the *> array is not used. *> *> DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': *> *> V = ( 1 ) V = ( 1 v1 v1 v1 v1 ) *> ( v1 1 ) ( 1 v2 v2 v2 ) *> ( v1 v2 1 ) ( 1 v3 v3 ) *> ( v1 v2 v3 ) *> ( v1 v2 v3 ) *> *> DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': *> *> V = ( v1 v2 v3 ) V = ( v1 v1 1 ) *> ( v1 v2 v3 ) ( v2 v2 v2 1 ) *> ( 1 v2 v3 ) ( v3 v3 v3 v3 1 ) *> ( 1 v3 ) *> ( 1 ) *> \endverbatim *> * ===================================================================== SUBROUTINE DLARFB( SIDE, TRANS, DIRECT, STOREV, M, N, K, V, LDV, $ T, LDT, C, LDC, WORK, LDWORK ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. CHARACTER DIRECT, SIDE, STOREV, TRANS INTEGER K, LDC, LDT, LDV, LDWORK, M, N * .. * .. Array Arguments .. DOUBLE PRECISION C( LDC, * ), T( LDT, * ), V( LDV, * ), $ WORK( LDWORK, * ) * .. * * ===================================================================== * * .. Parameters .. DOUBLE PRECISION ONE PARAMETER ( ONE = 1.0D+0 ) * .. * .. Local Scalars .. CHARACTER TRANST INTEGER I, J, LASTV, LASTC * .. * .. External Functions .. LOGICAL LSAME INTEGER ILADLR, ILADLC EXTERNAL LSAME, ILADLR, ILADLC * .. * .. External Subroutines .. 
EXTERNAL DCOPY, DGEMM, DTRMM * .. * .. Executable Statements .. * * Quick return if possible * IF( M.LE.0 .OR. N.LE.0 ) $ RETURN * IF( LSAME( TRANS, 'N' ) ) THEN TRANST = 'T' ELSE TRANST = 'N' END IF * IF( LSAME( STOREV, 'C' ) ) THEN * IF( LSAME( DIRECT, 'F' ) ) THEN * * Let V = ( V1 ) (first K rows) * ( V2 ) * where V1 is unit lower triangular. * IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**T * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILADLR( M, K, V, LDV ) ) LASTC = ILADLC( LASTV, N, C, LDC ) * * W := C**T * V = (C1**T * V1 + C2**T * V2) (stored in WORK) * * W := C1**T * DO 10 J = 1, K CALL DCOPY( LASTC, C( J, 1 ), LDC, WORK( 1, J ), 1 ) 10 CONTINUE * * W := W * V1 * CALL DTRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2**T *V2 * CALL DGEMM( 'Transpose', 'No transpose', $ LASTC, K, LASTV-K, $ ONE, C( K+1, 1 ), LDC, V( K+1, 1 ), LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T**T or W * T * CALL DTRMM( 'Right', 'Upper', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V * W**T * IF( LASTV.GT.K ) THEN * * C2 := C2 - V2 * W**T * CALL DGEMM( 'No transpose', 'Transpose', $ LASTV-K, LASTC, K, $ -ONE, V( K+1, 1 ), LDV, WORK, LDWORK, ONE, $ C( K+1, 1 ), LDC ) END IF * * W := W * V1**T * CALL DTRMM( 'Right', 'Lower', 'Transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W**T * DO 30 J = 1, K DO 20 I = 1, LASTC C( J, I ) = C( J, I ) - WORK( I, J ) 20 CONTINUE 30 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**T where C = ( C1 C2 ) * LASTV = MAX( K, ILADLR( N, K, V, LDV ) ) LASTC = ILADLR( M, LASTV, C, LDC ) * * W := C * V = (C1*V1 + C2*V2) (stored in WORK) * * W := C1 * DO 40 J = 1, K CALL DCOPY( LASTC, C( 1, J ), 1, WORK( 1, J ), 1 ) 40 CONTINUE * * W := W * V1 * CALL DTRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2 * V2 * CALL DGEMM( 'No transpose', 'No transpose', $ LASTC, K, LASTV-K, $ ONE, C( 1, K+1 ), LDC, V( K+1, 1 ), LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**T * CALL DTRMM( 'Right', 'Upper', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V**T * IF( LASTV.GT.K ) THEN * * C2 := C2 - W * V2**T * CALL DGEMM( 'No transpose', 'Transpose', $ LASTC, LASTV-K, K, $ -ONE, WORK, LDWORK, V( K+1, 1 ), LDV, ONE, $ C( 1, K+1 ), LDC ) END IF * * W := W * V1**T * CALL DTRMM( 'Right', 'Lower', 'Transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W * DO 60 J = 1, K DO 50 I = 1, LASTC C( I, J ) = C( I, J ) - WORK( I, J ) 50 CONTINUE 60 CONTINUE END IF * ELSE * * Let V = ( V1 ) * ( V2 ) (last K rows) * where V2 is unit upper triangular. 
* IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**T * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILADLR( M, K, V, LDV ) ) LASTC = ILADLC( LASTV, N, C, LDC ) * * W := C**T * V = (C1**T * V1 + C2**T * V2) (stored in WORK) * * W := C2**T * DO 70 J = 1, K CALL DCOPY( LASTC, C( LASTV-K+J, 1 ), LDC, $ WORK( 1, J ), 1 ) 70 CONTINUE * * W := W * V2 * CALL DTRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1**T*V1 * CALL DGEMM( 'Transpose', 'No transpose', $ LASTC, K, LASTV-K, ONE, C, LDC, V, LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T**T or W * T * CALL DTRMM( 'Right', 'Lower', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V * W**T * IF( LASTV.GT.K ) THEN * * C1 := C1 - V1 * W**T * CALL DGEMM( 'No transpose', 'Transpose', $ LASTV-K, LASTC, K, -ONE, V, LDV, WORK, LDWORK, $ ONE, C, LDC ) END IF * * W := W * V2**T * CALL DTRMM( 'Right', 'Upper', 'Transpose', 'Unit', $ LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) * * C2 := C2 - W**T * DO 90 J = 1, K DO 80 I = 1, LASTC C( LASTV-K+J, I ) = C( LASTV-K+J, I ) - WORK(I, J) 80 CONTINUE 90 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**T where C = ( C1 C2 ) * LASTV = MAX( K, ILADLR( N, K, V, LDV ) ) LASTC = ILADLR( M, LASTV, C, LDC ) * * W := C * V = (C1*V1 + C2*V2) (stored in WORK) * * W := C2 * DO 100 J = 1, K CALL DCOPY( LASTC, C( 1, N-K+J ), 1, WORK( 1, J ), 1 ) 100 CONTINUE * * W := W * V2 * CALL DTRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1 * V1 * CALL DGEMM( 'No transpose', 'No transpose', $ LASTC, K, LASTV-K, ONE, C, LDC, V, LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**T * CALL DTRMM( 'Right', 'Lower', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V**T * IF( LASTV.GT.K ) THEN * * C1 := C1 - W * V1**T * CALL DGEMM( 'No transpose', 'Transpose', $ LASTC, LASTV-K, K, -ONE, WORK, LDWORK, V, LDV, $ ONE, C, LDC ) END IF * * W := W * V2**T * CALL DTRMM( 'Right', 'Upper', 'Transpose', 'Unit', $ LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) * * C2 := C2 - W * DO 120 J = 1, K DO 110 I = 1, LASTC C( I, LASTV-K+J ) = C( I, LASTV-K+J ) - WORK(I, J) 110 CONTINUE 120 CONTINUE END IF END IF * ELSE IF( LSAME( STOREV, 'R' ) ) THEN * IF( LSAME( DIRECT, 'F' ) ) THEN * * Let V = ( V1 V2 ) (V1: first K columns) * where V1 is unit upper triangular. 
* IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**T * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILADLC( K, M, V, LDV ) ) LASTC = ILADLC( LASTV, N, C, LDC ) * * W := C**T * V**T = (C1**T * V1**T + C2**T * V2**T) (stored in WORK) * * W := C1**T * DO 130 J = 1, K CALL DCOPY( LASTC, C( J, 1 ), LDC, WORK( 1, J ), 1 ) 130 CONTINUE * * W := W * V1**T * CALL DTRMM( 'Right', 'Upper', 'Transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2**T*V2**T * CALL DGEMM( 'Transpose', 'Transpose', $ LASTC, K, LASTV-K, $ ONE, C( K+1, 1 ), LDC, V( 1, K+1 ), LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T**T or W * T * CALL DTRMM( 'Right', 'Upper', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V**T * W**T * IF( LASTV.GT.K ) THEN * * C2 := C2 - V2**T * W**T * CALL DGEMM( 'Transpose', 'Transpose', $ LASTV-K, LASTC, K, $ -ONE, V( 1, K+1 ), LDV, WORK, LDWORK, $ ONE, C( K+1, 1 ), LDC ) END IF * * W := W * V1 * CALL DTRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W**T * DO 150 J = 1, K DO 140 I = 1, LASTC C( J, I ) = C( J, I ) - WORK( I, J ) 140 CONTINUE 150 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**T where C = ( C1 C2 ) * LASTV = MAX( K, ILADLC( K, N, V, LDV ) ) LASTC = ILADLR( M, LASTV, C, LDC ) * * W := C * V**T = (C1*V1**T + C2*V2**T) (stored in WORK) * * W := C1 * DO 160 J = 1, K CALL DCOPY( LASTC, C( 1, J ), 1, WORK( 1, J ), 1 ) 160 CONTINUE * * W := W * V1**T * CALL DTRMM( 'Right', 'Upper', 'Transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2 * V2**T * CALL DGEMM( 'No transpose', 'Transpose', $ LASTC, K, LASTV-K, $ ONE, C( 1, K+1 ), LDC, V( 1, K+1 ), LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**T * CALL DTRMM( 'Right', 'Upper', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V * IF( LASTV.GT.K ) THEN * * C2 := C2 - W * V2 * CALL DGEMM( 'No transpose', 'No transpose', $ LASTC, LASTV-K, K, $ -ONE, WORK, LDWORK, V( 1, K+1 ), LDV, $ ONE, C( 1, K+1 ), LDC ) END IF * * W := W * V1 * CALL DTRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W * DO 180 J = 1, K DO 170 I = 1, LASTC C( I, J ) = C( I, J ) - WORK( I, J ) 170 CONTINUE 180 CONTINUE * END IF * ELSE * * Let V = ( V1 V2 ) (V2: last K columns) * where V2 is unit lower triangular. 
* IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**T * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILADLC( K, M, V, LDV ) ) LASTC = ILADLC( LASTV, N, C, LDC ) * * W := C**T * V**T = (C1**T * V1**T + C2**T * V2**T) (stored in WORK) * * W := C2**T * DO 190 J = 1, K CALL DCOPY( LASTC, C( LASTV-K+J, 1 ), LDC, $ WORK( 1, J ), 1 ) 190 CONTINUE * * W := W * V2**T * CALL DTRMM( 'Right', 'Lower', 'Transpose', 'Unit', $ LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1**T * V1**T * CALL DGEMM( 'Transpose', 'Transpose', $ LASTC, K, LASTV-K, ONE, C, LDC, V, LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T**T or W * T * CALL DTRMM( 'Right', 'Lower', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V**T * W**T * IF( LASTV.GT.K ) THEN * * C1 := C1 - V1**T * W**T * CALL DGEMM( 'Transpose', 'Transpose', $ LASTV-K, LASTC, K, -ONE, V, LDV, WORK, LDWORK, $ ONE, C, LDC ) END IF * * W := W * V2 * CALL DTRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) * * C2 := C2 - W**T * DO 210 J = 1, K DO 200 I = 1, LASTC C( LASTV-K+J, I ) = C( LASTV-K+J, I ) - WORK(I, J) 200 CONTINUE 210 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**T where C = ( C1 C2 ) * LASTV = MAX( K, ILADLC( K, N, V, LDV ) ) LASTC = ILADLR( M, LASTV, C, LDC ) * * W := C * V**T = (C1*V1**T + C2*V2**T) (stored in WORK) * * W := C2 * DO 220 J = 1, K CALL DCOPY( LASTC, C( 1, LASTV-K+J ), 1, $ WORK( 1, J ), 1 ) 220 CONTINUE * * W := W * V2**T * CALL DTRMM( 'Right', 'Lower', 'Transpose', 'Unit', $ LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1 * V1**T * CALL DGEMM( 'No transpose', 'Transpose', $ LASTC, K, LASTV-K, ONE, C, LDC, V, LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**T * CALL DTRMM( 'Right', 'Lower', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V * IF( LASTV.GT.K ) THEN * * C1 := C1 - W * V1 * CALL DGEMM( 'No transpose', 'No transpose', $ LASTC, LASTV-K, K, -ONE, WORK, LDWORK, V, LDV, $ ONE, C, LDC ) END IF * * W := W * V2 * CALL DTRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) * * C1 := C1 - W * DO 240 J = 1, K DO 230 I = 1, LASTC C( I, LASTV-K+J ) = C( I, LASTV-K+J ) - WORK(I, J) 230 CONTINUE 240 CONTINUE * END IF * END IF END IF * RETURN * * End of DLARFB * END piqp-octave/src/eigen/lapack/ilaclc.f0000644000175100001770000000561514107270226017372 0ustar runnerdocker*> \brief \b ILACLC * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ILACLC + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * INTEGER FUNCTION ILACLC( M, N, A, LDA ) * * .. Scalar Arguments .. * INTEGER M, N, LDA * .. * .. Array Arguments .. * COMPLEX A( LDA, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> ILACLC scans A for its last non-zero column. *> \endverbatim * * Arguments: * ========== * *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix A. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix A. *> \endverbatim *> *> \param[in] A *> \verbatim *> A is COMPLEX array, dimension (LDA,N) *> The m by n matrix A. *> \endverbatim *> *> \param[in] LDA *> \verbatim *> LDA is INTEGER *> The leading dimension of the array A. 
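The scan ILACLC documents above -- report the last column of A holding a non-zero entry, after a quick test of the corners of column N -- is easy to restate outside Fortran. The following is a hypothetical C++ rendering (the function name last_nonzero_column is invented) for a column-major array with leading dimension lda; it returns a 1-based column index, or 0 for an all-zero matrix:

#include <complex>

int last_nonzero_column(int m, int n, const std::complex<float>* a, int lda) {
  if (n == 0) return 0;
  // Quick test of the two corners of the last column, as in the Fortran.
  if (a[0 + (n - 1) * lda] != 0.0f || a[(m - 1) + (n - 1) * lda] != 0.0f) return n;
  // Otherwise scan the columns from the end, returning at the first non-zero.
  for (int j = n; j >= 1; --j)
    for (int i = 0; i < m; ++i)
      if (a[i + (j - 1) * lda] != std::complex<float>(0.0f)) return j;
  return 0;
}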
LDA >= max(1,M). *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup complexOTHERauxiliary * * ===================================================================== INTEGER FUNCTION ILACLC( M, N, A, LDA ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. INTEGER M, N, LDA * .. * .. Array Arguments .. COMPLEX A( LDA, * ) * .. * * ===================================================================== * * .. Parameters .. COMPLEX ZERO PARAMETER ( ZERO = (0.0E+0, 0.0E+0) ) * .. * .. Local Scalars .. INTEGER I * .. * .. Executable Statements .. * * Quick test for the common case where one corner is non-zero. IF( N.EQ.0 ) THEN ILACLC = N ELSE IF( A(1, N).NE.ZERO .OR. A(M, N).NE.ZERO ) THEN ILACLC = N ELSE * Now scan each column from the end, returning with the first non-zero. DO ILACLC = N, 1, -1 DO I = 1, M IF( A(I, ILACLC).NE.ZERO ) RETURN END DO END DO END IF RETURN END piqp-octave/src/eigen/lapack/zladiv.f0000644000175100001770000000447414107270226017436 0ustar runnerdocker*> \brief \b ZLADIV * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ZLADIV + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * COMPLEX*16 FUNCTION ZLADIV( X, Y ) * * .. Scalar Arguments .. * COMPLEX*16 X, Y * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> ZLADIV := X / Y, where X and Y are complex. The computation of X / Y *> will not overflow on an intermediary step unless the results *> overflows. *> \endverbatim * * Arguments: * ========== * *> \param[in] X *> \verbatim *> X is COMPLEX*16 *> \endverbatim *> *> \param[in] Y *> \verbatim *> Y is COMPLEX*16 *> The complex scalars X and Y. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup complex16OTHERauxiliary * * ===================================================================== COMPLEX*16 FUNCTION ZLADIV( X, Y ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. COMPLEX*16 X, Y * .. * * ===================================================================== * * .. Local Scalars .. DOUBLE PRECISION ZI, ZR * .. * .. External Subroutines .. EXTERNAL DLADIV * .. * .. Intrinsic Functions .. INTRINSIC DBLE, DCMPLX, DIMAG * .. * .. Executable Statements .. * CALL DLADIV( DBLE( X ), DIMAG( X ), DBLE( Y ), DIMAG( Y ), ZR, $ ZI ) ZLADIV = DCMPLX( ZR, ZI ) * RETURN * * End of ZLADIV * END piqp-octave/src/eigen/lapack/iladlr.f0000644000175100001770000000567014107270226017413 0ustar runnerdocker*> \brief \b ILADLR * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ILADLR + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * INTEGER FUNCTION ILADLR( M, N, A, LDA ) * * .. 
Scalar Arguments .. * INTEGER M, N, LDA * .. * .. Array Arguments .. * DOUBLE PRECISION A( LDA, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> ILADLR scans A for its last non-zero row. *> \endverbatim * * Arguments: * ========== * *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix A. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix A. *> \endverbatim *> *> \param[in] A *> \verbatim *> A is DOUBLE PRECISION array, dimension (LDA,N) *> The m by n matrix A. *> \endverbatim *> *> \param[in] LDA *> \verbatim *> LDA is INTEGER *> The leading dimension of the array A. LDA >= max(1,M). *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date April 2012 * *> \ingroup auxOTHERauxiliary * * ===================================================================== INTEGER FUNCTION ILADLR( M, N, A, LDA ) * * -- LAPACK auxiliary routine (version 3.4.1) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * April 2012 * * .. Scalar Arguments .. INTEGER M, N, LDA * .. * .. Array Arguments .. DOUBLE PRECISION A( LDA, * ) * .. * * ===================================================================== * * .. Parameters .. DOUBLE PRECISION ZERO PARAMETER ( ZERO = 0.0D+0 ) * .. * .. Local Scalars .. INTEGER I, J * .. * .. Executable Statements .. * * Quick test for the common case where one corner is non-zero. IF( M.EQ.0 ) THEN ILADLR = M ELSE IF( A(M, 1).NE.ZERO .OR. A(M, N).NE.ZERO ) THEN ILADLR = M ELSE * Scan up each column tracking the last zero row seen. ILADLR = 0 DO J = 1, N I=M DO WHILE((A(MAX(I,1),J).EQ.ZERO).AND.(I.GE.1)) I=I-1 ENDDO ILADLR = MAX( ILADLR, I ) END DO END IF RETURN END piqp-octave/src/eigen/lapack/ilaslc.f0000644000175100001770000000557514107270226017417 0ustar runnerdocker*> \brief \b ILASLC * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ILASLC + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * INTEGER FUNCTION ILASLC( M, N, A, LDA ) * * .. Scalar Arguments .. * INTEGER M, N, LDA * .. * .. Array Arguments .. * REAL A( LDA, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> ILASLC scans A for its last non-zero column. *> \endverbatim * * Arguments: * ========== * *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix A. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix A. *> \endverbatim *> *> \param[in] A *> \verbatim *> A is REAL array, dimension (LDA,N) *> The m by n matrix A. *> \endverbatim *> *> \param[in] LDA *> \verbatim *> LDA is INTEGER *> The leading dimension of the array A. LDA >= max(1,M). *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup realOTHERauxiliary * * ===================================================================== INTEGER FUNCTION ILASLC( M, N, A, LDA ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. 
of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. INTEGER M, N, LDA * .. * .. Array Arguments .. REAL A( LDA, * ) * .. * * ===================================================================== * * .. Parameters .. REAL ZERO PARAMETER ( ZERO = 0.0D+0 ) * .. * .. Local Scalars .. INTEGER I * .. * .. Executable Statements .. * * Quick test for the common case where one corner is non-zero. IF( N.EQ.0 ) THEN ILASLC = N ELSE IF( A(1, N).NE.ZERO .OR. A(M, N).NE.ZERO ) THEN ILASLC = N ELSE * Now scan each column from the end, returning with the first non-zero. DO ILASLC = N, 1, -1 DO I = 1, M IF( A(I, ILASLC).NE.ZERO ) RETURN END DO END DO END IF RETURN END piqp-octave/src/eigen/lapack/dsecnd_NONE.f0000644000175100001770000000240214107270226020211 0ustar runnerdocker*> \brief \b DSECND returns nothing * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * * Definition: * =========== * * DOUBLE PRECISION FUNCTION DSECND( ) * * *> \par Purpose: * ============= *> *> \verbatim *> *> DSECND returns nothing instead of returning the user time for a process in seconds. *> If you are using that routine, it means that neither EXTERNAL ETIME, *> EXTERNAL ETIME_, INTERNAL ETIME, INTERNAL CPU_TIME is available on *> your machine. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup auxOTHERauxiliary * * ===================================================================== DOUBLE PRECISION FUNCTION DSECND( ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * ===================================================================== * DSECND = 0.0D+0 RETURN * * End of DSECND * END piqp-octave/src/eigen/lapack/clarfg.f0000644000175100001770000001234014107270226017372 0ustar runnerdocker*> \brief \b CLARFG * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download CLARFG + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE CLARFG( N, ALPHA, X, INCX, TAU ) * * .. Scalar Arguments .. * INTEGER INCX, N * COMPLEX ALPHA, TAU * .. * .. Array Arguments .. * COMPLEX X( * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> CLARFG generates a complex elementary reflector H of order n, such *> that *> *> H**H * ( alpha ) = ( beta ), H**H * H = I. *> ( x ) ( 0 ) *> *> where alpha and beta are scalars, with beta real, and x is an *> (n-1)-element complex vector. H is represented in the form *> *> H = I - tau * ( 1 ) * ( 1 v**H ) , *> ( v ) *> *> where tau is a complex scalar and v is a complex (n-1)-element *> vector. Note that H is not hermitian. *> *> If the elements of x are all zero and alpha is real, then tau = 0 *> and H is taken to be the unit matrix. *> *> Otherwise 1 <= real(tau) <= 2 and abs(tau-1) <= 1 . *> \endverbatim * * Arguments: * ========== * *> \param[in] N *> \verbatim *> N is INTEGER *> The order of the elementary reflector. *> \endverbatim *> *> \param[in,out] ALPHA *> \verbatim *> ALPHA is COMPLEX *> On entry, the value alpha. *> On exit, it is overwritten with the value beta. 
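To see the reflector CLARFG's purpose section describes in action, here is a standalone sketch (not part of the package; it leans on Eigen's makeHouseholder helper and uses the real case for brevity). It builds H = I - tau*v*v^T for a random vector x and checks that H*x has the form (beta, 0, ..., 0)^T:

#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::VectorXd x = Eigen::VectorXd::Random(5);

  Eigen::VectorXd essential(x.size() - 1);      // the entries of v below the implicit leading 1
  double tau, beta;
  x.makeHouseholder(essential, tau, beta);

  Eigen::VectorXd v(x.size());
  v << 1.0, essential;                          // v = (1, essential)
  Eigen::VectorXd Hx = x - tau * v * v.dot(x);  // apply H = I - tau*v*v^T

  std::cout << "beta = " << beta << "\nH*x  = " << Hx.transpose() << "\n";
  return 0;
}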
*> \endverbatim *> *> \param[in,out] X *> \verbatim *> X is COMPLEX array, dimension *> (1+(N-2)*abs(INCX)) *> On entry, the vector x. *> On exit, it is overwritten with the vector v. *> \endverbatim *> *> \param[in] INCX *> \verbatim *> INCX is INTEGER *> The increment between elements of X. INCX > 0. *> \endverbatim *> *> \param[out] TAU *> \verbatim *> TAU is COMPLEX *> The value tau. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup complexOTHERauxiliary * * ===================================================================== SUBROUTINE CLARFG( N, ALPHA, X, INCX, TAU ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. INTEGER INCX, N COMPLEX ALPHA, TAU * .. * .. Array Arguments .. COMPLEX X( * ) * .. * * ===================================================================== * * .. Parameters .. REAL ONE, ZERO PARAMETER ( ONE = 1.0E+0, ZERO = 0.0E+0 ) * .. * .. Local Scalars .. INTEGER J, KNT REAL ALPHI, ALPHR, BETA, RSAFMN, SAFMIN, XNORM * .. * .. External Functions .. REAL SCNRM2, SLAMCH, SLAPY3 COMPLEX CLADIV EXTERNAL SCNRM2, SLAMCH, SLAPY3, CLADIV * .. * .. Intrinsic Functions .. INTRINSIC ABS, AIMAG, CMPLX, REAL, SIGN * .. * .. External Subroutines .. EXTERNAL CSCAL, CSSCAL * .. * .. Executable Statements .. * IF( N.LE.0 ) THEN TAU = ZERO RETURN END IF * XNORM = SCNRM2( N-1, X, INCX ) ALPHR = REAL( ALPHA ) ALPHI = AIMAG( ALPHA ) * IF( XNORM.EQ.ZERO .AND. ALPHI.EQ.ZERO ) THEN * * H = I * TAU = ZERO ELSE * * general case * BETA = -SIGN( SLAPY3( ALPHR, ALPHI, XNORM ), ALPHR ) SAFMIN = SLAMCH( 'S' ) / SLAMCH( 'E' ) RSAFMN = ONE / SAFMIN * KNT = 0 IF( ABS( BETA ).LT.SAFMIN ) THEN * * XNORM, BETA may be inaccurate; scale X and recompute them * 10 CONTINUE KNT = KNT + 1 CALL CSSCAL( N-1, RSAFMN, X, INCX ) BETA = BETA*RSAFMN ALPHI = ALPHI*RSAFMN ALPHR = ALPHR*RSAFMN IF( ABS( BETA ).LT.SAFMIN ) $ GO TO 10 * * New BETA is at most 1, at least SAFMIN * XNORM = SCNRM2( N-1, X, INCX ) ALPHA = CMPLX( ALPHR, ALPHI ) BETA = -SIGN( SLAPY3( ALPHR, ALPHI, XNORM ), ALPHR ) END IF TAU = CMPLX( ( BETA-ALPHR ) / BETA, -ALPHI / BETA ) ALPHA = CLADIV( CMPLX( ONE ), ALPHA-BETA ) CALL CSCAL( N-1, ALPHA, X, INCX ) * * If ALPHA is subnormal, it may lose relative accuracy * DO 20 J = 1, KNT BETA = BETA*SAFMIN 20 CONTINUE ALPHA = BETA END IF * RETURN * * End of CLARFG * END piqp-octave/src/eigen/lapack/slapy3.f0000644000175100001770000000521514107270226017352 0ustar runnerdocker*> \brief \b SLAPY3 * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download SLAPY3 + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * REAL FUNCTION SLAPY3( X, Y, Z ) * * .. Scalar Arguments .. * REAL X, Y, Z * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> SLAPY3 returns sqrt(x**2+y**2+z**2), taking care not to cause *> unnecessary overflow. *> \endverbatim * * Arguments: * ========== * *> \param[in] X *> \verbatim *> X is REAL *> \endverbatim *> *> \param[in] Y *> \verbatim *> Y is REAL *> \endverbatim *> *> \param[in] Z *> \verbatim *> Z is REAL *> X, Y and Z specify the values x, y and z. 
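SLAPY3's overflow guard -- factor out the largest magnitude before squaring -- fits in a few lines of C++. A hedged sketch only (the helper name lapy3 is invented), not the routine itself:

#include <algorithm>
#include <cmath>
#include <cstdio>

double lapy3(double x, double y, double z) {
  double xa = std::fabs(x), ya = std::fabs(y), za = std::fabs(z);
  double w = std::max({xa, ya, za});
  if (w == 0.0)
    return xa + ya + za;  // as the Fortran comments note, this keeps a NaN from disappearing
  double xs = xa / w, ys = ya / w, zs = za / w;
  return w * std::sqrt(xs * xs + ys * ys + zs * zs);  // the squares are now at most 1
}

int main() {
  std::printf("%g\n", lapy3(3e300, 4e300, 0.0));  // 5e300, with no intermediate overflow
  return 0;
}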
*> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup auxOTHERauxiliary * * ===================================================================== REAL FUNCTION SLAPY3( X, Y, Z ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. REAL X, Y, Z * .. * * ===================================================================== * * .. Parameters .. REAL ZERO PARAMETER ( ZERO = 0.0E0 ) * .. * .. Local Scalars .. REAL W, XABS, YABS, ZABS * .. * .. Intrinsic Functions .. INTRINSIC ABS, MAX, SQRT * .. * .. Executable Statements .. * XABS = ABS( X ) YABS = ABS( Y ) ZABS = ABS( Z ) W = MAX( XABS, YABS, ZABS ) IF( W.EQ.ZERO ) THEN * W can be zero for max(0,nan,0) * adding all three entries together will make sure * NaN will not disappear. SLAPY3 = XABS + YABS + ZABS ELSE SLAPY3 = W*SQRT( ( XABS / W )**2+( YABS / W )**2+ $ ( ZABS / W )**2 ) END IF RETURN * * End of SLAPY3 * END piqp-octave/src/eigen/lapack/ilazlc.f0000644000175100001770000000562214107270226017417 0ustar runnerdocker*> \brief \b ILAZLC * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ILAZLC + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * INTEGER FUNCTION ILAZLC( M, N, A, LDA ) * * .. Scalar Arguments .. * INTEGER M, N, LDA * .. * .. Array Arguments .. * COMPLEX*16 A( LDA, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> ILAZLC scans A for its last non-zero column. *> \endverbatim * * Arguments: * ========== * *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix A. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix A. *> \endverbatim *> *> \param[in] A *> \verbatim *> A is COMPLEX*16 array, dimension (LDA,N) *> The m by n matrix A. *> \endverbatim *> *> \param[in] LDA *> \verbatim *> LDA is INTEGER *> The leading dimension of the array A. LDA >= max(1,M). *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup complex16OTHERauxiliary * * ===================================================================== INTEGER FUNCTION ILAZLC( M, N, A, LDA ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. INTEGER M, N, LDA * .. * .. Array Arguments .. COMPLEX*16 A( LDA, * ) * .. * * ===================================================================== * * .. Parameters .. COMPLEX*16 ZERO PARAMETER ( ZERO = (0.0D+0, 0.0D+0) ) * .. * .. Local Scalars .. INTEGER I * .. * .. Executable Statements .. * * Quick test for the common case where one corner is non-zero. IF( N.EQ.0 ) THEN ILAZLC = N ELSE IF( A(1, N).NE.ZERO .OR. A(M, N).NE.ZERO ) THEN ILAZLC = N ELSE * Now scan each column from the end, returning with the first non-zero. 
DO ILAZLC = N, 1, -1 DO I = 1, M IF( A(I, ILAZLC).NE.ZERO ) RETURN END DO END DO END IF RETURN END piqp-octave/src/eigen/lapack/complex_single.cpp0000644000175100001770000000110114107270226021472 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #define SCALAR std::complex #define SCALAR_SUFFIX c #define SCALAR_SUFFIX_UP "C" #define REAL_SCALAR_SUFFIX s #define ISCOMPLEX 1 #include "cholesky.cpp" #include "lu.cpp" #include "svd.cpp" piqp-octave/src/eigen/lapack/clarft.f0000644000175100001770000002432214107270226017412 0ustar runnerdocker*> \brief \b CLARFT * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download CLARFT + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE CLARFT( DIRECT, STOREV, N, K, V, LDV, TAU, T, LDT ) * * .. Scalar Arguments .. * CHARACTER DIRECT, STOREV * INTEGER K, LDT, LDV, N * .. * .. Array Arguments .. * COMPLEX T( LDT, * ), TAU( * ), V( LDV, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> CLARFT forms the triangular factor T of a complex block reflector H *> of order n, which is defined as a product of k elementary reflectors. *> *> If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; *> *> If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. *> *> If STOREV = 'C', the vector which defines the elementary reflector *> H(i) is stored in the i-th column of the array V, and *> *> H = I - V * T * V**H *> *> If STOREV = 'R', the vector which defines the elementary reflector *> H(i) is stored in the i-th row of the array V, and *> *> H = I - V**H * T * V *> \endverbatim * * Arguments: * ========== * *> \param[in] DIRECT *> \verbatim *> DIRECT is CHARACTER*1 *> Specifies the order in which the elementary reflectors are *> multiplied to form the block reflector: *> = 'F': H = H(1) H(2) . . . H(k) (Forward) *> = 'B': H = H(k) . . . H(2) H(1) (Backward) *> \endverbatim *> *> \param[in] STOREV *> \verbatim *> STOREV is CHARACTER*1 *> Specifies how the vectors which define the elementary *> reflectors are stored (see also Further Details): *> = 'C': columnwise *> = 'R': rowwise *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The order of the block reflector H. N >= 0. *> \endverbatim *> *> \param[in] K *> \verbatim *> K is INTEGER *> The order of the triangular factor T (= the number of *> elementary reflectors). K >= 1. *> \endverbatim *> *> \param[in] V *> \verbatim *> V is COMPLEX array, dimension *> (LDV,K) if STOREV = 'C' *> (LDV,N) if STOREV = 'R' *> The matrix V. See further details. *> \endverbatim *> *> \param[in] LDV *> \verbatim *> LDV is INTEGER *> The leading dimension of the array V. *> If STOREV = 'C', LDV >= max(1,N); if STOREV = 'R', LDV >= K. *> \endverbatim *> *> \param[in] TAU *> \verbatim *> TAU is COMPLEX array, dimension (K) *> TAU(i) must contain the scalar factor of the elementary *> reflector H(i). *> \endverbatim *> *> \param[out] T *> \verbatim *> T is COMPLEX array, dimension (LDT,K) *> The k by k triangular factor T of the block reflector. 
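The compact WY form CLARFT's purpose section describes, H = I - V*T*V**H with T accumulated one column at a time, can be checked numerically. The sketch below is illustrative only and assumes Eigen (it mirrors the DIRECT = 'F', STOREV = 'C' recurrence in the real case, but it is not the package's code): it takes Householder vectors from a QR factorization, builds T, and verifies that the assembled H is orthogonal.

#include <Eigen/Dense>
#include <iostream>

int main() {
  const int n = 5, k = 3;
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, k);
  Eigen::HouseholderQR<Eigen::MatrixXd> qr(A);

  // Unit lower-triangular V (n x k): ones on the diagonal, essential parts below.
  Eigen::MatrixXd V = qr.matrixQR().triangularView<Eigen::StrictlyLower>();
  V.diagonal().setOnes();
  Eigen::VectorXd tau = qr.hCoeffs();

  // Forward accumulation of the k x k factor T, column by column:
  // T(1:i-1,i) = -tau(i) * T(1:i-1,1:i-1) * (V(:,1:i-1)^T * V(:,i)), then T(i,i) = tau(i).
  Eigen::MatrixXd T = Eigen::MatrixXd::Zero(k, k);
  for (int i = 0; i < k; ++i) {
    if (i > 0)
      T.col(i).head(i) = -tau(i) * (T.topLeftCorner(i, i) * (V.leftCols(i).transpose() * V.col(i)));
    T(i, i) = tau(i);
  }

  Eigen::MatrixXd H = Eigen::MatrixXd::Identity(n, n) - V * T * V.transpose();
  std::cout << "||H^T*H - I|| = "
            << (H.transpose() * H - Eigen::MatrixXd::Identity(n, n)).norm() << "\n";
  return 0;
}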
*> If DIRECT = 'F', T is upper triangular; if DIRECT = 'B', T is *> lower triangular. The rest of the array is not used. *> \endverbatim *> *> \param[in] LDT *> \verbatim *> LDT is INTEGER *> The leading dimension of the array T. LDT >= K. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date April 2012 * *> \ingroup complexOTHERauxiliary * *> \par Further Details: * ===================== *> *> \verbatim *> *> The shape of the matrix V and the storage of the vectors which define *> the H(i) is best illustrated by the following example with n = 5 and *> k = 3. The elements equal to 1 are not stored. *> *> DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': *> *> V = ( 1 ) V = ( 1 v1 v1 v1 v1 ) *> ( v1 1 ) ( 1 v2 v2 v2 ) *> ( v1 v2 1 ) ( 1 v3 v3 ) *> ( v1 v2 v3 ) *> ( v1 v2 v3 ) *> *> DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': *> *> V = ( v1 v2 v3 ) V = ( v1 v1 1 ) *> ( v1 v2 v3 ) ( v2 v2 v2 1 ) *> ( 1 v2 v3 ) ( v3 v3 v3 v3 1 ) *> ( 1 v3 ) *> ( 1 ) *> \endverbatim *> * ===================================================================== SUBROUTINE CLARFT( DIRECT, STOREV, N, K, V, LDV, TAU, T, LDT ) * * -- LAPACK auxiliary routine (version 3.4.1) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * April 2012 * * .. Scalar Arguments .. CHARACTER DIRECT, STOREV INTEGER K, LDT, LDV, N * .. * .. Array Arguments .. COMPLEX T( LDT, * ), TAU( * ), V( LDV, * ) * .. * * ===================================================================== * * .. Parameters .. COMPLEX ONE, ZERO PARAMETER ( ONE = ( 1.0E+0, 0.0E+0 ), $ ZERO = ( 0.0E+0, 0.0E+0 ) ) * .. * .. Local Scalars .. INTEGER I, J, PREVLASTV, LASTV * .. * .. External Subroutines .. EXTERNAL CGEMV, CLACGV, CTRMV * .. * .. External Functions .. LOGICAL LSAME EXTERNAL LSAME * .. * .. Executable Statements .. * * Quick return if possible * IF( N.EQ.0 ) $ RETURN * IF( LSAME( DIRECT, 'F' ) ) THEN PREVLASTV = N DO I = 1, K PREVLASTV = MAX( PREVLASTV, I ) IF( TAU( I ).EQ.ZERO ) THEN * * H(i) = I * DO J = 1, I T( J, I ) = ZERO END DO ELSE * * general case * IF( LSAME( STOREV, 'C' ) ) THEN * Skip any trailing zeros. DO LASTV = N, I+1, -1 IF( V( LASTV, I ).NE.ZERO ) EXIT END DO DO J = 1, I-1 T( J, I ) = -TAU( I ) * CONJG( V( I , J ) ) END DO J = MIN( LASTV, PREVLASTV ) * * T(1:i-1,i) := - tau(i) * V(i:j,1:i-1)**H * V(i:j,i) * CALL CGEMV( 'Conjugate transpose', J-I, I-1, $ -TAU( I ), V( I+1, 1 ), LDV, $ V( I+1, I ), 1, $ ONE, T( 1, I ), 1 ) ELSE * Skip any trailing zeros. DO LASTV = N, I+1, -1 IF( V( I, LASTV ).NE.ZERO ) EXIT END DO DO J = 1, I-1 T( J, I ) = -TAU( I ) * V( J , I ) END DO J = MIN( LASTV, PREVLASTV ) * * T(1:i-1,i) := - tau(i) * V(1:i-1,i:j) * V(i,i:j)**H * CALL CGEMM( 'N', 'C', I-1, 1, J-I, -TAU( I ), $ V( 1, I+1 ), LDV, V( I, I+1 ), LDV, $ ONE, T( 1, I ), LDT ) END IF * * T(1:i-1,i) := T(1:i-1,1:i-1) * T(1:i-1,i) * CALL CTRMV( 'Upper', 'No transpose', 'Non-unit', I-1, T, $ LDT, T( 1, I ), 1 ) T( I, I ) = TAU( I ) IF( I.GT.1 ) THEN PREVLASTV = MAX( PREVLASTV, LASTV ) ELSE PREVLASTV = LASTV END IF END IF END DO ELSE PREVLASTV = 1 DO I = K, 1, -1 IF( TAU( I ).EQ.ZERO ) THEN * * H(i) = I * DO J = I, K T( J, I ) = ZERO END DO ELSE * * general case * IF( I.LT.K ) THEN IF( LSAME( STOREV, 'C' ) ) THEN * Skip any leading zeros. 
DO LASTV = 1, I-1 IF( V( LASTV, I ).NE.ZERO ) EXIT END DO DO J = I+1, K T( J, I ) = -TAU( I ) * CONJG( V( N-K+I , J ) ) END DO J = MAX( LASTV, PREVLASTV ) * * T(i+1:k,i) = -tau(i) * V(j:n-k+i,i+1:k)**H * V(j:n-k+i,i) * CALL CGEMV( 'Conjugate transpose', N-K+I-J, K-I, $ -TAU( I ), V( J, I+1 ), LDV, V( J, I ), $ 1, ONE, T( I+1, I ), 1 ) ELSE * Skip any leading zeros. DO LASTV = 1, I-1 IF( V( I, LASTV ).NE.ZERO ) EXIT END DO DO J = I+1, K T( J, I ) = -TAU( I ) * V( J, N-K+I ) END DO J = MAX( LASTV, PREVLASTV ) * * T(i+1:k,i) = -tau(i) * V(i+1:k,j:n-k+i) * V(i,j:n-k+i)**H * CALL CGEMM( 'N', 'C', K-I, 1, N-K+I-J, -TAU( I ), $ V( I+1, J ), LDV, V( I, J ), LDV, $ ONE, T( I+1, I ), LDT ) END IF * * T(i+1:k,i) := T(i+1:k,i+1:k) * T(i+1:k,i) * CALL CTRMV( 'Lower', 'No transpose', 'Non-unit', K-I, $ T( I+1, I+1 ), LDT, T( I+1, I ), 1 ) IF( I.GT.1 ) THEN PREVLASTV = MIN( PREVLASTV, LASTV ) ELSE PREVLASTV = LASTV END IF END IF T( I, I ) = TAU( I ) END IF END DO END IF RETURN * * End of CLARFT * END piqp-octave/src/eigen/lapack/ilaslr.f0000644000175100001770000000565414107270226017434 0ustar runnerdocker*> \brief \b ILASLR * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ILASLR + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * INTEGER FUNCTION ILASLR( M, N, A, LDA ) * * .. Scalar Arguments .. * INTEGER M, N, LDA * .. * .. Array Arguments .. * REAL A( LDA, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> ILASLR scans A for its last non-zero row. *> \endverbatim * * Arguments: * ========== * *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix A. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix A. *> \endverbatim *> *> \param[in] A *> \verbatim *> A is REAL array, dimension (LDA,N) *> The m by n matrix A. *> \endverbatim *> *> \param[in] LDA *> \verbatim *> LDA is INTEGER *> The leading dimension of the array A. LDA >= max(1,M). *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date April 2012 * *> \ingroup realOTHERauxiliary * * ===================================================================== INTEGER FUNCTION ILASLR( M, N, A, LDA ) * * -- LAPACK auxiliary routine (version 3.4.1) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * April 2012 * * .. Scalar Arguments .. INTEGER M, N, LDA * .. * .. Array Arguments .. REAL A( LDA, * ) * .. * * ===================================================================== * * .. Parameters .. REAL ZERO PARAMETER ( ZERO = 0.0E+0 ) * .. * .. Local Scalars .. INTEGER I, J * .. * .. Executable Statements .. * * Quick test for the common case where one corner is non-zero. IF( M.EQ.0 ) THEN ILASLR = M ELSEIF( A(M, 1).NE.ZERO .OR. A(M, N).NE.ZERO ) THEN ILASLR = M ELSE * Scan up each column tracking the last zero row seen. ILASLR = 0 DO J = 1, N I=M DO WHILE((A(MAX(I,1),J).EQ.ZERO).AND.(I.GE.1)) I=I-1 ENDDO ILASLR = MAX( ILASLR, I ) END DO END IF RETURN END piqp-octave/src/eigen/lapack/complex_double.cpp0000644000175100001770000000110214107270226021464 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. 
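For illustration only, a compact C++ analogue (hypothetical helper, not part of the sources) of the ILASLR scan listed just above; the ILACLR and ILAZLR routines later in this directory follow the same pattern for complex data:

#include <algorithm>

// Last non-zero row of a column-major m-by-n REAL matrix with leading
// dimension lda: walk each column upwards and keep the largest row index
// holding a non-zero. Returns a 1-based index as in LAPACK, 0 if all zero.
int last_nonzero_row(int m, int n, const float* a, int lda) {
    if (m == 0) return 0;
    // Quick test for the common case where one corner is non-zero.
    if (a[m - 1] != 0.0f || a[(m - 1) + (n - 1) * lda] != 0.0f) return m;
    int last = 0;
    for (int j = 0; j < n; ++j) {
        int i = m;                                         // scan this column upwards
        while (i >= 1 && a[(i - 1) + j * lda] == 0.0f) --i;
        last = std::max(last, i);
    }
    return last;
}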
// // Copyright (C) 2009-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #define SCALAR std::complex #define SCALAR_SUFFIX z #define SCALAR_SUFFIX_UP "Z" #define REAL_SCALAR_SUFFIX d #define ISCOMPLEX 1 #include "cholesky.cpp" #include "lu.cpp" #include "svd.cpp" piqp-octave/src/eigen/lapack/slamch.f0000644000175100001770000001221514107270226017404 0ustar runnerdocker*> \brief \b SLAMCH * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * * Definition: * =========== * * REAL FUNCTION SLAMCH( CMACH ) * * .. Scalar Arguments .. * CHARACTER CMACH * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> SLAMCH determines single precision machine parameters. *> \endverbatim * * Arguments: * ========== * *> \param[in] CMACH *> \verbatim *> Specifies the value to be returned by SLAMCH: *> = 'E' or 'e', SLAMCH := eps *> = 'S' or 's , SLAMCH := sfmin *> = 'B' or 'b', SLAMCH := base *> = 'P' or 'p', SLAMCH := eps*base *> = 'N' or 'n', SLAMCH := t *> = 'R' or 'r', SLAMCH := rnd *> = 'M' or 'm', SLAMCH := emin *> = 'U' or 'u', SLAMCH := rmin *> = 'L' or 'l', SLAMCH := emax *> = 'O' or 'o', SLAMCH := rmax *> where *> eps = relative machine precision *> sfmin = safe minimum, such that 1/sfmin does not overflow *> base = base of the machine *> prec = eps*base *> t = number of (base) digits in the mantissa *> rnd = 1.0 when rounding occurs in addition, 0.0 otherwise *> emin = minimum exponent before (gradual) underflow *> rmin = underflow threshold - base**(emin-1) *> emax = largest exponent before overflow *> rmax = overflow threshold - (base**emax)*(1-eps) *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup auxOTHERauxiliary * * ===================================================================== REAL FUNCTION SLAMCH( CMACH ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. CHARACTER CMACH * .. * * ===================================================================== * * .. Parameters .. REAL ONE, ZERO PARAMETER ( ONE = 1.0E+0, ZERO = 0.0E+0 ) * .. * .. Local Scalars .. REAL RND, EPS, SFMIN, SMALL, RMACH * .. * .. External Functions .. LOGICAL LSAME EXTERNAL LSAME * .. * .. Intrinsic Functions .. INTRINSIC DIGITS, EPSILON, HUGE, MAXEXPONENT, $ MINEXPONENT, RADIX, TINY * .. * .. Executable Statements .. * * * Assume rounding, not chopping. Always. * RND = ONE * IF( ONE.EQ.RND ) THEN EPS = EPSILON(ZERO) * 0.5 ELSE EPS = EPSILON(ZERO) END IF * IF( LSAME( CMACH, 'E' ) ) THEN RMACH = EPS ELSE IF( LSAME( CMACH, 'S' ) ) THEN SFMIN = TINY(ZERO) SMALL = ONE / HUGE(ZERO) IF( SMALL.GE.SFMIN ) THEN * * Use SMALL plus a bit, to avoid the possibility of rounding * causing overflow when computing 1/sfmin. 
* SFMIN = SMALL*( ONE+EPS ) END IF RMACH = SFMIN ELSE IF( LSAME( CMACH, 'B' ) ) THEN RMACH = RADIX(ZERO) ELSE IF( LSAME( CMACH, 'P' ) ) THEN RMACH = EPS * RADIX(ZERO) ELSE IF( LSAME( CMACH, 'N' ) ) THEN RMACH = DIGITS(ZERO) ELSE IF( LSAME( CMACH, 'R' ) ) THEN RMACH = RND ELSE IF( LSAME( CMACH, 'M' ) ) THEN RMACH = MINEXPONENT(ZERO) ELSE IF( LSAME( CMACH, 'U' ) ) THEN RMACH = tiny(zero) ELSE IF( LSAME( CMACH, 'L' ) ) THEN RMACH = MAXEXPONENT(ZERO) ELSE IF( LSAME( CMACH, 'O' ) ) THEN RMACH = HUGE(ZERO) ELSE RMACH = ZERO END IF * SLAMCH = RMACH RETURN * * End of SLAMCH * END ************************************************************************ *> \brief \b SLAMC3 *> \details *> \b Purpose: *> \verbatim *> SLAMC3 is intended to force A and B to be stored prior to doing *> the addition of A and B , for use in situations where optimizers *> might hold one of these in a register. *> \endverbatim *> \author LAPACK is a software package provided by Univ. of Tennessee, Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd.. *> \date November 2011 *> \ingroup auxOTHERauxiliary *> *> \param[in] A *> \verbatim *> \endverbatim *> *> \param[in] B *> \verbatim *> The values A and B. *> \endverbatim *> * REAL FUNCTION SLAMC3( A, B ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * Univ. of Tennessee, Univ. of California Berkeley and NAG Ltd.. * November 2010 * * .. Scalar Arguments .. REAL A, B * .. * ===================================================================== * * .. Executable Statements .. * SLAMC3 = A + B * RETURN * * End of SLAMC3 * END * ************************************************************************ piqp-octave/src/eigen/lapack/slapy2.f0000644000175100001770000000467214107270226017357 0ustar runnerdocker*> \brief \b SLAPY2 * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download SLAPY2 + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * REAL FUNCTION SLAPY2( X, Y ) * * .. Scalar Arguments .. * REAL X, Y * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> SLAPY2 returns sqrt(x**2+y**2), taking care not to cause unnecessary *> overflow. *> \endverbatim * * Arguments: * ========== * *> \param[in] X *> \verbatim *> X is REAL *> \endverbatim *> *> \param[in] Y *> \verbatim *> Y is REAL *> X and Y specify the values x and y. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup auxOTHERauxiliary * * ===================================================================== REAL FUNCTION SLAPY2( X, Y ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. REAL X, Y * .. * * ===================================================================== * * .. Parameters .. REAL ZERO PARAMETER ( ZERO = 0.0E0 ) REAL ONE PARAMETER ( ONE = 1.0E0 ) * .. * .. Local Scalars .. REAL W, XABS, YABS, Z * .. * .. Intrinsic Functions .. INTRINSIC ABS, MAX, MIN, SQRT * .. * .. Executable Statements .. 
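Stepping back to SLAMCH and SLAMC3 above: as an aside (not part of the sources, and only an analogy under the assumption of IEEE single precision), the quantities SLAMCH reports map onto std::numeric_limits<float> roughly as follows; the Fortran code obtains them from the EPSILON/TINY/HUGE/RADIX/DIGITS intrinsics instead, and SLAMC3 merely forces its two arguments through memory before adding them.

#include <cstdio>
#include <limits>

int main() {
    typedef std::numeric_limits<float> fl;
    float eps   = fl::epsilon() * 0.5f;      // 'E': rounding is assumed, hence the factor 0.5
    float sfmin = fl::min();                 // 'S': safe minimum, 1/sfmin must not overflow
    float rmax  = fl::max();                 // 'O': overflow threshold
    if (1.0f / rmax >= sfmin)                // the same guard SLAMCH applies to sfmin
        sfmin = (1.0f / rmax) * (1.0f + eps);
    std::printf("eps=%g sfmin=%g base=%d digits=%d rmax=%g\n",
                eps, sfmin, fl::radix, fl::digits, rmax);
    return 0;
}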
* XABS = ABS( X ) YABS = ABS( Y ) W = MAX( XABS, YABS ) Z = MIN( XABS, YABS ) IF( Z.EQ.ZERO ) THEN SLAPY2 = W ELSE SLAPY2 = W*SQRT( ONE+( Z / W )**2 ) END IF RETURN * * End of SLAPY2 * END piqp-octave/src/eigen/lapack/zlarft.f0000644000175100001770000002432514107270226017444 0ustar runnerdocker*> \brief \b ZLARFT * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ZLARFT + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE ZLARFT( DIRECT, STOREV, N, K, V, LDV, TAU, T, LDT ) * * .. Scalar Arguments .. * CHARACTER DIRECT, STOREV * INTEGER K, LDT, LDV, N * .. * .. Array Arguments .. * COMPLEX*16 T( LDT, * ), TAU( * ), V( LDV, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> ZLARFT forms the triangular factor T of a complex block reflector H *> of order n, which is defined as a product of k elementary reflectors. *> *> If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; *> *> If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. *> *> If STOREV = 'C', the vector which defines the elementary reflector *> H(i) is stored in the i-th column of the array V, and *> *> H = I - V * T * V**H *> *> If STOREV = 'R', the vector which defines the elementary reflector *> H(i) is stored in the i-th row of the array V, and *> *> H = I - V**H * T * V *> \endverbatim * * Arguments: * ========== * *> \param[in] DIRECT *> \verbatim *> DIRECT is CHARACTER*1 *> Specifies the order in which the elementary reflectors are *> multiplied to form the block reflector: *> = 'F': H = H(1) H(2) . . . H(k) (Forward) *> = 'B': H = H(k) . . . H(2) H(1) (Backward) *> \endverbatim *> *> \param[in] STOREV *> \verbatim *> STOREV is CHARACTER*1 *> Specifies how the vectors which define the elementary *> reflectors are stored (see also Further Details): *> = 'C': columnwise *> = 'R': rowwise *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The order of the block reflector H. N >= 0. *> \endverbatim *> *> \param[in] K *> \verbatim *> K is INTEGER *> The order of the triangular factor T (= the number of *> elementary reflectors). K >= 1. *> \endverbatim *> *> \param[in] V *> \verbatim *> V is COMPLEX*16 array, dimension *> (LDV,K) if STOREV = 'C' *> (LDV,N) if STOREV = 'R' *> The matrix V. See further details. *> \endverbatim *> *> \param[in] LDV *> \verbatim *> LDV is INTEGER *> The leading dimension of the array V. *> If STOREV = 'C', LDV >= max(1,N); if STOREV = 'R', LDV >= K. *> \endverbatim *> *> \param[in] TAU *> \verbatim *> TAU is COMPLEX*16 array, dimension (K) *> TAU(i) must contain the scalar factor of the elementary *> reflector H(i). *> \endverbatim *> *> \param[out] T *> \verbatim *> T is COMPLEX*16 array, dimension (LDT,K) *> The k by k triangular factor T of the block reflector. *> If DIRECT = 'F', T is upper triangular; if DIRECT = 'B', T is *> lower triangular. The rest of the array is not used. *> \endverbatim *> *> \param[in] LDT *> \verbatim *> LDT is INTEGER *> The leading dimension of the array T. LDT >= K. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. 
* *> \date April 2012 * *> \ingroup complex16OTHERauxiliary * *> \par Further Details: * ===================== *> *> \verbatim *> *> The shape of the matrix V and the storage of the vectors which define *> the H(i) is best illustrated by the following example with n = 5 and *> k = 3. The elements equal to 1 are not stored. *> *> DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': *> *> V = ( 1 ) V = ( 1 v1 v1 v1 v1 ) *> ( v1 1 ) ( 1 v2 v2 v2 ) *> ( v1 v2 1 ) ( 1 v3 v3 ) *> ( v1 v2 v3 ) *> ( v1 v2 v3 ) *> *> DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': *> *> V = ( v1 v2 v3 ) V = ( v1 v1 1 ) *> ( v1 v2 v3 ) ( v2 v2 v2 1 ) *> ( 1 v2 v3 ) ( v3 v3 v3 v3 1 ) *> ( 1 v3 ) *> ( 1 ) *> \endverbatim *> * ===================================================================== SUBROUTINE ZLARFT( DIRECT, STOREV, N, K, V, LDV, TAU, T, LDT ) * * -- LAPACK auxiliary routine (version 3.4.1) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * April 2012 * * .. Scalar Arguments .. CHARACTER DIRECT, STOREV INTEGER K, LDT, LDV, N * .. * .. Array Arguments .. COMPLEX*16 T( LDT, * ), TAU( * ), V( LDV, * ) * .. * * ===================================================================== * * .. Parameters .. COMPLEX*16 ONE, ZERO PARAMETER ( ONE = ( 1.0D+0, 0.0D+0 ), $ ZERO = ( 0.0D+0, 0.0D+0 ) ) * .. * .. Local Scalars .. INTEGER I, J, PREVLASTV, LASTV * .. * .. External Subroutines .. EXTERNAL ZGEMV, ZLACGV, ZTRMV * .. * .. External Functions .. LOGICAL LSAME EXTERNAL LSAME * .. * .. Executable Statements .. * * Quick return if possible * IF( N.EQ.0 ) $ RETURN * IF( LSAME( DIRECT, 'F' ) ) THEN PREVLASTV = N DO I = 1, K PREVLASTV = MAX( PREVLASTV, I ) IF( TAU( I ).EQ.ZERO ) THEN * * H(i) = I * DO J = 1, I T( J, I ) = ZERO END DO ELSE * * general case * IF( LSAME( STOREV, 'C' ) ) THEN * Skip any trailing zeros. DO LASTV = N, I+1, -1 IF( V( LASTV, I ).NE.ZERO ) EXIT END DO DO J = 1, I-1 T( J, I ) = -TAU( I ) * CONJG( V( I , J ) ) END DO J = MIN( LASTV, PREVLASTV ) * * T(1:i-1,i) := - tau(i) * V(i:j,1:i-1)**H * V(i:j,i) * CALL ZGEMV( 'Conjugate transpose', J-I, I-1, $ -TAU( I ), V( I+1, 1 ), LDV, $ V( I+1, I ), 1, ONE, T( 1, I ), 1 ) ELSE * Skip any trailing zeros. DO LASTV = N, I+1, -1 IF( V( I, LASTV ).NE.ZERO ) EXIT END DO DO J = 1, I-1 T( J, I ) = -TAU( I ) * V( J , I ) END DO J = MIN( LASTV, PREVLASTV ) * * T(1:i-1,i) := - tau(i) * V(1:i-1,i:j) * V(i,i:j)**H * CALL ZGEMM( 'N', 'C', I-1, 1, J-I, -TAU( I ), $ V( 1, I+1 ), LDV, V( I, I+1 ), LDV, $ ONE, T( 1, I ), LDT ) END IF * * T(1:i-1,i) := T(1:i-1,1:i-1) * T(1:i-1,i) * CALL ZTRMV( 'Upper', 'No transpose', 'Non-unit', I-1, T, $ LDT, T( 1, I ), 1 ) T( I, I ) = TAU( I ) IF( I.GT.1 ) THEN PREVLASTV = MAX( PREVLASTV, LASTV ) ELSE PREVLASTV = LASTV END IF END IF END DO ELSE PREVLASTV = 1 DO I = K, 1, -1 IF( TAU( I ).EQ.ZERO ) THEN * * H(i) = I * DO J = I, K T( J, I ) = ZERO END DO ELSE * * general case * IF( I.LT.K ) THEN IF( LSAME( STOREV, 'C' ) ) THEN * Skip any leading zeros. DO LASTV = 1, I-1 IF( V( LASTV, I ).NE.ZERO ) EXIT END DO DO J = I+1, K T( J, I ) = -TAU( I ) * CONJG( V( N-K+I , J ) ) END DO J = MAX( LASTV, PREVLASTV ) * * T(i+1:k,i) = -tau(i) * V(j:n-k+i,i+1:k)**H * V(j:n-k+i,i) * CALL ZGEMV( 'Conjugate transpose', N-K+I-J, K-I, $ -TAU( I ), V( J, I+1 ), LDV, V( J, I ), $ 1, ONE, T( I+1, I ), 1 ) ELSE * Skip any leading zeros. 
DO LASTV = 1, I-1 IF( V( I, LASTV ).NE.ZERO ) EXIT END DO DO J = I+1, K T( J, I ) = -TAU( I ) * V( J, N-K+I ) END DO J = MAX( LASTV, PREVLASTV ) * * T(i+1:k,i) = -tau(i) * V(i+1:k,j:n-k+i) * V(i,j:n-k+i)**H * CALL ZGEMM( 'N', 'C', K-I, 1, N-K+I-J, -TAU( I ), $ V( I+1, J ), LDV, V( I, J ), LDV, $ ONE, T( I+1, I ), LDT ) END IF * * T(i+1:k,i) := T(i+1:k,i+1:k) * T(i+1:k,i) * CALL ZTRMV( 'Lower', 'No transpose', 'Non-unit', K-I, $ T( I+1, I+1 ), LDT, T( I+1, I ), 1 ) IF( I.GT.1 ) THEN PREVLASTV = MIN( PREVLASTV, LASTV ) ELSE PREVLASTV = LASTV END IF END IF T( I, I ) = TAU( I ) END IF END DO END IF RETURN * * End of ZLARFT * END piqp-octave/src/eigen/lapack/ilaclr.f0000644000175100001770000000566514107270226017416 0ustar runnerdocker*> \brief \b ILACLR * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ILACLR + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * INTEGER FUNCTION ILACLR( M, N, A, LDA ) * * .. Scalar Arguments .. * INTEGER M, N, LDA * .. * .. Array Arguments .. * COMPLEX A( LDA, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> ILACLR scans A for its last non-zero row. *> \endverbatim * * Arguments: * ========== * *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix A. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix A. *> \endverbatim *> *> \param[in] A *> \verbatim *> A is array, dimension (LDA,N) *> The m by n matrix A. *> \endverbatim *> *> \param[in] LDA *> \verbatim *> LDA is INTEGER *> The leading dimension of the array A. LDA >= max(1,M). *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date April 2012 * *> \ingroup complexOTHERauxiliary * * ===================================================================== INTEGER FUNCTION ILACLR( M, N, A, LDA ) * * -- LAPACK auxiliary routine (version 3.4.1) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * April 2012 * * .. Scalar Arguments .. INTEGER M, N, LDA * .. * .. Array Arguments .. COMPLEX A( LDA, * ) * .. * * ===================================================================== * * .. Parameters .. COMPLEX ZERO PARAMETER ( ZERO = (0.0E+0, 0.0E+0) ) * .. * .. Local Scalars .. INTEGER I, J * .. * .. Executable Statements .. * * Quick test for the common case where one corner is non-zero. IF( M.EQ.0 ) THEN ILACLR = M ELSE IF( A(M, 1).NE.ZERO .OR. A(M, N).NE.ZERO ) THEN ILACLR = M ELSE * Scan up each column tracking the last zero row seen. ILACLR = 0 DO J = 1, N I=M DO WHILE((A(MAX(I,1),J).EQ.ZERO).AND.(I.GE.1)) I=I-1 ENDDO ILACLR = MAX( ILACLR, I ) END DO END IF RETURN END piqp-octave/src/eigen/lapack/clarf.f0000644000175100001770000001422714107270226017231 0ustar runnerdocker*> \brief \b CLARF * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download CLARF + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE CLARF( SIDE, M, N, V, INCV, TAU, C, LDC, WORK ) * * .. Scalar Arguments .. * CHARACTER SIDE * INTEGER INCV, LDC, M, N * COMPLEX TAU * .. * .. Array Arguments .. 
* COMPLEX C( LDC, * ), V( * ), WORK( * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> CLARF applies a complex elementary reflector H to a complex M-by-N *> matrix C, from either the left or the right. H is represented in the *> form *> *> H = I - tau * v * v**H *> *> where tau is a complex scalar and v is a complex vector. *> *> If tau = 0, then H is taken to be the unit matrix. *> *> To apply H**H (the conjugate transpose of H), supply conjg(tau) instead *> tau. *> \endverbatim * * Arguments: * ========== * *> \param[in] SIDE *> \verbatim *> SIDE is CHARACTER*1 *> = 'L': form H * C *> = 'R': form C * H *> \endverbatim *> *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix C. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix C. *> \endverbatim *> *> \param[in] V *> \verbatim *> V is COMPLEX array, dimension *> (1 + (M-1)*abs(INCV)) if SIDE = 'L' *> or (1 + (N-1)*abs(INCV)) if SIDE = 'R' *> The vector v in the representation of H. V is not used if *> TAU = 0. *> \endverbatim *> *> \param[in] INCV *> \verbatim *> INCV is INTEGER *> The increment between elements of v. INCV <> 0. *> \endverbatim *> *> \param[in] TAU *> \verbatim *> TAU is COMPLEX *> The value tau in the representation of H. *> \endverbatim *> *> \param[in,out] C *> \verbatim *> C is COMPLEX array, dimension (LDC,N) *> On entry, the M-by-N matrix C. *> On exit, C is overwritten by the matrix H * C if SIDE = 'L', *> or C * H if SIDE = 'R'. *> \endverbatim *> *> \param[in] LDC *> \verbatim *> LDC is INTEGER *> The leading dimension of the array C. LDC >= max(1,M). *> \endverbatim *> *> \param[out] WORK *> \verbatim *> WORK is COMPLEX array, dimension *> (N) if SIDE = 'L' *> or (M) if SIDE = 'R' *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup complexOTHERauxiliary * * ===================================================================== SUBROUTINE CLARF( SIDE, M, N, V, INCV, TAU, C, LDC, WORK ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. CHARACTER SIDE INTEGER INCV, LDC, M, N COMPLEX TAU * .. * .. Array Arguments .. COMPLEX C( LDC, * ), V( * ), WORK( * ) * .. * * ===================================================================== * * .. Parameters .. COMPLEX ONE, ZERO PARAMETER ( ONE = ( 1.0E+0, 0.0E+0 ), $ ZERO = ( 0.0E+0, 0.0E+0 ) ) * .. * .. Local Scalars .. LOGICAL APPLYLEFT INTEGER I, LASTV, LASTC * .. * .. External Subroutines .. EXTERNAL CGEMV, CGERC * .. * .. External Functions .. LOGICAL LSAME INTEGER ILACLR, ILACLC EXTERNAL LSAME, ILACLR, ILACLC * .. * .. Executable Statements .. * APPLYLEFT = LSAME( SIDE, 'L' ) LASTV = 0 LASTC = 0 IF( TAU.NE.ZERO ) THEN ! Set up variables for scanning V. LASTV begins pointing to the end ! of V. IF( APPLYLEFT ) THEN LASTV = M ELSE LASTV = N END IF IF( INCV.GT.0 ) THEN I = 1 + (LASTV-1) * INCV ELSE I = 1 END IF ! Look for the last non-zero row in V. DO WHILE( LASTV.GT.0 .AND. V( I ).EQ.ZERO ) LASTV = LASTV - 1 I = I - INCV END DO IF( APPLYLEFT ) THEN ! Scan for the last non-zero column in C(1:lastv,:). LASTC = ILACLC(LASTV, N, C, LDC) ELSE ! Scan for the last non-zero row in C(:,1:lastv). LASTC = ILACLR(M, LASTV, C, LDC) END IF END IF ! 
Note that lastc.eq.0 renders the BLAS operations null; no special ! case is needed at this level. IF( APPLYLEFT ) THEN * * Form H * C * IF( LASTV.GT.0 ) THEN * * w(1:lastc,1) := C(1:lastv,1:lastc)**H * v(1:lastv,1) * CALL CGEMV( 'Conjugate transpose', LASTV, LASTC, ONE, $ C, LDC, V, INCV, ZERO, WORK, 1 ) * * C(1:lastv,1:lastc) := C(...) - v(1:lastv,1) * w(1:lastc,1)**H * CALL CGERC( LASTV, LASTC, -TAU, V, INCV, WORK, 1, C, LDC ) END IF ELSE * * Form C * H * IF( LASTV.GT.0 ) THEN * * w(1:lastc,1) := C(1:lastc,1:lastv) * v(1:lastv,1) * CALL CGEMV( 'No transpose', LASTC, LASTV, ONE, C, LDC, $ V, INCV, ZERO, WORK, 1 ) * * C(1:lastc,1:lastv) := C(...) - w(1:lastc,1) * v(1:lastv,1)**H * CALL CGERC( LASTC, LASTV, -TAU, WORK, 1, V, INCV, C, LDC ) END IF END IF RETURN * * End of CLARF * END piqp-octave/src/eigen/lapack/ilazlr.f0000644000175100001770000000570214107270226017435 0ustar runnerdocker*> \brief \b ILAZLR * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ILAZLR + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * INTEGER FUNCTION ILAZLR( M, N, A, LDA ) * * .. Scalar Arguments .. * INTEGER M, N, LDA * .. * .. Array Arguments .. * COMPLEX*16 A( LDA, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> ILAZLR scans A for its last non-zero row. *> \endverbatim * * Arguments: * ========== * *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix A. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix A. *> \endverbatim *> *> \param[in] A *> \verbatim *> A is COMPLEX*16 array, dimension (LDA,N) *> The m by n matrix A. *> \endverbatim *> *> \param[in] LDA *> \verbatim *> LDA is INTEGER *> The leading dimension of the array A. LDA >= max(1,M). *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date April 2012 * *> \ingroup complex16OTHERauxiliary * * ===================================================================== INTEGER FUNCTION ILAZLR( M, N, A, LDA ) * * -- LAPACK auxiliary routine (version 3.4.1) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * April 2012 * * .. Scalar Arguments .. INTEGER M, N, LDA * .. * .. Array Arguments .. COMPLEX*16 A( LDA, * ) * .. * * ===================================================================== * * .. Parameters .. COMPLEX*16 ZERO PARAMETER ( ZERO = (0.0D+0, 0.0D+0) ) * .. * .. Local Scalars .. INTEGER I, J * .. * .. Executable Statements .. * * Quick test for the common case where one corner is non-zero. IF( M.EQ.0 ) THEN ILAZLR = M ELSE IF( A(M, 1).NE.ZERO .OR. A(M, N).NE.ZERO ) THEN ILAZLR = M ELSE * Scan up each column tracking the last zero row seen. 
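Returning to CLARF, which finished just above the ILAZLR listing: the SIDE = 'L' branch amounts to one matrix-vector product followed by a rank-one update, and H is never formed. A hypothetical Eigen sketch (illustrative only; the Fortran routine additionally trims trailing zeros of v and C via ILACLC/ILACLR before calling CGEMV and CGERC):

#include <Eigen/Dense>
#include <complex>

// C := H*C = C - tau * v * (v^H * C) for H = I - tau*v*v^H.
void apply_reflector_left(Eigen::MatrixXcf& C,
                          const Eigen::VectorXcf& v,
                          std::complex<float> tau) {
    if (tau == std::complex<float>(0.0f, 0.0f))
        return;                               // tau = 0 means H is the identity
    Eigen::RowVectorXcf w = v.adjoint() * C;  // w := v^H * C   (the CGEMV step)
    C.noalias() -= tau * v * w;               // rank-one update (the CGERC step)
}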
ILAZLR = 0 DO J = 1, N I=M DO WHILE((A(MAX(I,1),J).EQ.ZERO).AND.(I.GE.1)) I=I-1 ENDDO ILAZLR = MAX( ILAZLR, I ) END DO END IF RETURN END piqp-octave/src/eigen/lapack/iladlc.f0000644000175100001770000000561014107270226017366 0ustar runnerdocker*> \brief \b ILADLC * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ILADLC + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * INTEGER FUNCTION ILADLC( M, N, A, LDA ) * * .. Scalar Arguments .. * INTEGER M, N, LDA * .. * .. Array Arguments .. * DOUBLE PRECISION A( LDA, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> ILADLC scans A for its last non-zero column. *> \endverbatim * * Arguments: * ========== * *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix A. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix A. *> \endverbatim *> *> \param[in] A *> \verbatim *> A is DOUBLE PRECISION array, dimension (LDA,N) *> The m by n matrix A. *> \endverbatim *> *> \param[in] LDA *> \verbatim *> LDA is INTEGER *> The leading dimension of the array A. LDA >= max(1,M). *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup auxOTHERauxiliary * * ===================================================================== INTEGER FUNCTION ILADLC( M, N, A, LDA ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. INTEGER M, N, LDA * .. * .. Array Arguments .. DOUBLE PRECISION A( LDA, * ) * .. * * ===================================================================== * * .. Parameters .. DOUBLE PRECISION ZERO PARAMETER ( ZERO = 0.0D+0 ) * .. * .. Local Scalars .. INTEGER I * .. * .. Executable Statements .. * * Quick test for the common case where one corner is non-zero. IF( N.EQ.0 ) THEN ILADLC = N ELSE IF( A(1, N).NE.ZERO .OR. A(M, N).NE.ZERO ) THEN ILADLC = N ELSE * Now scan each column from the end, returning with the first non-zero. DO ILADLC = N, 1, -1 DO I = 1, M IF( A(I, ILADLC).NE.ZERO ) RETURN END DO END DO END IF RETURN END piqp-octave/src/eigen/lapack/slarft.f0000644000175100001770000002370714107270226017440 0ustar runnerdocker*> \brief \b SLARFT * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download SLARFT + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE SLARFT( DIRECT, STOREV, N, K, V, LDV, TAU, T, LDT ) * * .. Scalar Arguments .. * CHARACTER DIRECT, STOREV * INTEGER K, LDT, LDV, N * .. * .. Array Arguments .. * REAL T( LDT, * ), TAU( * ), V( LDV, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> SLARFT forms the triangular factor T of a real block reflector H *> of order n, which is defined as a product of k elementary reflectors. *> *> If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; *> *> If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. 
*> *> If STOREV = 'C', the vector which defines the elementary reflector *> H(i) is stored in the i-th column of the array V, and *> *> H = I - V * T * V**T *> *> If STOREV = 'R', the vector which defines the elementary reflector *> H(i) is stored in the i-th row of the array V, and *> *> H = I - V**T * T * V *> \endverbatim * * Arguments: * ========== * *> \param[in] DIRECT *> \verbatim *> DIRECT is CHARACTER*1 *> Specifies the order in which the elementary reflectors are *> multiplied to form the block reflector: *> = 'F': H = H(1) H(2) . . . H(k) (Forward) *> = 'B': H = H(k) . . . H(2) H(1) (Backward) *> \endverbatim *> *> \param[in] STOREV *> \verbatim *> STOREV is CHARACTER*1 *> Specifies how the vectors which define the elementary *> reflectors are stored (see also Further Details): *> = 'C': columnwise *> = 'R': rowwise *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The order of the block reflector H. N >= 0. *> \endverbatim *> *> \param[in] K *> \verbatim *> K is INTEGER *> The order of the triangular factor T (= the number of *> elementary reflectors). K >= 1. *> \endverbatim *> *> \param[in] V *> \verbatim *> V is REAL array, dimension *> (LDV,K) if STOREV = 'C' *> (LDV,N) if STOREV = 'R' *> The matrix V. See further details. *> \endverbatim *> *> \param[in] LDV *> \verbatim *> LDV is INTEGER *> The leading dimension of the array V. *> If STOREV = 'C', LDV >= max(1,N); if STOREV = 'R', LDV >= K. *> \endverbatim *> *> \param[in] TAU *> \verbatim *> TAU is REAL array, dimension (K) *> TAU(i) must contain the scalar factor of the elementary *> reflector H(i). *> \endverbatim *> *> \param[out] T *> \verbatim *> T is REAL array, dimension (LDT,K) *> The k by k triangular factor T of the block reflector. *> If DIRECT = 'F', T is upper triangular; if DIRECT = 'B', T is *> lower triangular. The rest of the array is not used. *> \endverbatim *> *> \param[in] LDT *> \verbatim *> LDT is INTEGER *> The leading dimension of the array T. LDT >= K. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date April 2012 * *> \ingroup realOTHERauxiliary * *> \par Further Details: * ===================== *> *> \verbatim *> *> The shape of the matrix V and the storage of the vectors which define *> the H(i) is best illustrated by the following example with n = 5 and *> k = 3. The elements equal to 1 are not stored. *> *> DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': *> *> V = ( 1 ) V = ( 1 v1 v1 v1 v1 ) *> ( v1 1 ) ( 1 v2 v2 v2 ) *> ( v1 v2 1 ) ( 1 v3 v3 ) *> ( v1 v2 v3 ) *> ( v1 v2 v3 ) *> *> DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': *> *> V = ( v1 v2 v3 ) V = ( v1 v1 1 ) *> ( v1 v2 v3 ) ( v2 v2 v2 1 ) *> ( 1 v2 v3 ) ( v3 v3 v3 v3 1 ) *> ( 1 v3 ) *> ( 1 ) *> \endverbatim *> * ===================================================================== SUBROUTINE SLARFT( DIRECT, STOREV, N, K, V, LDV, TAU, T, LDT ) * * -- LAPACK auxiliary routine (version 3.4.1) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * April 2012 * * .. Scalar Arguments .. CHARACTER DIRECT, STOREV INTEGER K, LDT, LDV, N * .. * .. Array Arguments .. REAL T( LDT, * ), TAU( * ), V( LDV, * ) * .. * * ===================================================================== * * .. Parameters .. REAL ONE, ZERO PARAMETER ( ONE = 1.0E+0, ZERO = 0.0E+0 ) * .. * .. 
Local Scalars .. INTEGER I, J, PREVLASTV, LASTV * .. * .. External Subroutines .. EXTERNAL SGEMV, STRMV * .. * .. External Functions .. LOGICAL LSAME EXTERNAL LSAME * .. * .. Executable Statements .. * * Quick return if possible * IF( N.EQ.0 ) $ RETURN * IF( LSAME( DIRECT, 'F' ) ) THEN PREVLASTV = N DO I = 1, K PREVLASTV = MAX( I, PREVLASTV ) IF( TAU( I ).EQ.ZERO ) THEN * * H(i) = I * DO J = 1, I T( J, I ) = ZERO END DO ELSE * * general case * IF( LSAME( STOREV, 'C' ) ) THEN * Skip any trailing zeros. DO LASTV = N, I+1, -1 IF( V( LASTV, I ).NE.ZERO ) EXIT END DO DO J = 1, I-1 T( J, I ) = -TAU( I ) * V( I , J ) END DO J = MIN( LASTV, PREVLASTV ) * * T(1:i-1,i) := - tau(i) * V(i:j,1:i-1)**T * V(i:j,i) * CALL SGEMV( 'Transpose', J-I, I-1, -TAU( I ), $ V( I+1, 1 ), LDV, V( I+1, I ), 1, ONE, $ T( 1, I ), 1 ) ELSE * Skip any trailing zeros. DO LASTV = N, I+1, -1 IF( V( I, LASTV ).NE.ZERO ) EXIT END DO DO J = 1, I-1 T( J, I ) = -TAU( I ) * V( J , I ) END DO J = MIN( LASTV, PREVLASTV ) * * T(1:i-1,i) := - tau(i) * V(1:i-1,i:j) * V(i,i:j)**T * CALL SGEMV( 'No transpose', I-1, J-I, -TAU( I ), $ V( 1, I+1 ), LDV, V( I, I+1 ), LDV, $ ONE, T( 1, I ), 1 ) END IF * * T(1:i-1,i) := T(1:i-1,1:i-1) * T(1:i-1,i) * CALL STRMV( 'Upper', 'No transpose', 'Non-unit', I-1, T, $ LDT, T( 1, I ), 1 ) T( I, I ) = TAU( I ) IF( I.GT.1 ) THEN PREVLASTV = MAX( PREVLASTV, LASTV ) ELSE PREVLASTV = LASTV END IF END IF END DO ELSE PREVLASTV = 1 DO I = K, 1, -1 IF( TAU( I ).EQ.ZERO ) THEN * * H(i) = I * DO J = I, K T( J, I ) = ZERO END DO ELSE * * general case * IF( I.LT.K ) THEN IF( LSAME( STOREV, 'C' ) ) THEN * Skip any leading zeros. DO LASTV = 1, I-1 IF( V( LASTV, I ).NE.ZERO ) EXIT END DO DO J = I+1, K T( J, I ) = -TAU( I ) * V( N-K+I , J ) END DO J = MAX( LASTV, PREVLASTV ) * * T(i+1:k,i) = -tau(i) * V(j:n-k+i,i+1:k)**T * V(j:n-k+i,i) * CALL SGEMV( 'Transpose', N-K+I-J, K-I, -TAU( I ), $ V( J, I+1 ), LDV, V( J, I ), 1, ONE, $ T( I+1, I ), 1 ) ELSE * Skip any leading zeros. DO LASTV = 1, I-1 IF( V( I, LASTV ).NE.ZERO ) EXIT END DO DO J = I+1, K T( J, I ) = -TAU( I ) * V( J, N-K+I ) END DO J = MAX( LASTV, PREVLASTV ) * * T(i+1:k,i) = -tau(i) * V(i+1:k,j:n-k+i) * V(i,j:n-k+i)**T * CALL SGEMV( 'No transpose', K-I, N-K+I-J, $ -TAU( I ), V( I+1, J ), LDV, V( I, J ), LDV, $ ONE, T( I+1, I ), 1 ) END IF * * T(i+1:k,i) := T(i+1:k,i+1:k) * T(i+1:k,i) * CALL STRMV( 'Lower', 'No transpose', 'Non-unit', K-I, $ T( I+1, I+1 ), LDT, T( I+1, I ), 1 ) IF( I.GT.1 ) THEN PREVLASTV = MIN( PREVLASTV, LASTV ) ELSE PREVLASTV = LASTV END IF END IF T( I, I ) = TAU( I ) END IF END DO END IF RETURN * * End of SLARFT * END piqp-octave/src/eigen/lapack/dlapy3.f0000644000175100001770000000526114107270226017334 0ustar runnerdocker*> \brief \b DLAPY3 * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download DLAPY3 + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * DOUBLE PRECISION FUNCTION DLAPY3( X, Y, Z ) * * .. Scalar Arguments .. * DOUBLE PRECISION X, Y, Z * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> DLAPY3 returns sqrt(x**2+y**2+z**2), taking care not to cause *> unnecessary overflow. *> \endverbatim * * Arguments: * ========== * *> \param[in] X *> \verbatim *> X is DOUBLE PRECISION *> \endverbatim *> *> \param[in] Y *> \verbatim *> Y is DOUBLE PRECISION *> \endverbatim *> *> \param[in] Z *> \verbatim *> Z is DOUBLE PRECISION *> X, Y and Z specify the values x, y and z. 
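A minimal sketch (hypothetical helper, not part of the sources) of the scaling DLAPY3 uses: divide by the largest magnitude so the squares cannot overflow, and fall back to the plain sum when that maximum is zero so a NaN argument is not silently turned into 0.

#include <algorithm>
#include <cmath>

double lapy3_like(double x, double y, double z) {
    double xabs = std::fabs(x), yabs = std::fabs(y), zabs = std::fabs(z);
    double w = std::max({xabs, yabs, zabs});
    if (w == 0.0)
        return xabs + yabs + zabs;            // w can be zero for max(0, NaN, 0)
    double xs = xabs / w, ys = yabs / w, zs = zabs / w;
    return w * std::sqrt(xs * xs + ys * ys + zs * zs);
}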
*> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup auxOTHERauxiliary * * ===================================================================== DOUBLE PRECISION FUNCTION DLAPY3( X, Y, Z ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. DOUBLE PRECISION X, Y, Z * .. * * ===================================================================== * * .. Parameters .. DOUBLE PRECISION ZERO PARAMETER ( ZERO = 0.0D0 ) * .. * .. Local Scalars .. DOUBLE PRECISION W, XABS, YABS, ZABS * .. * .. Intrinsic Functions .. INTRINSIC ABS, MAX, SQRT * .. * .. Executable Statements .. * XABS = ABS( X ) YABS = ABS( Y ) ZABS = ABS( Z ) W = MAX( XABS, YABS, ZABS ) IF( W.EQ.ZERO ) THEN * W can be zero for max(0,nan,0) * adding all three entries together will make sure * NaN will not disappear. DLAPY3 = XABS + YABS + ZABS ELSE DLAPY3 = W*SQRT( ( XABS / W )**2+( YABS / W )**2+ $ ( ZABS / W )**2 ) END IF RETURN * * End of DLAPY3 * END piqp-octave/src/eigen/lapack/second_NONE.f0000644000175100001770000000235214107270226020230 0ustar runnerdocker*> \brief \b SECOND returns nothing * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * * Definition: * =========== * * REAL FUNCTION SECOND( ) * * *> \par Purpose: * ============= *> *> \verbatim *> *> SECOND returns nothing instead of returning the user time for a process in seconds. *> If you are using that routine, it means that neither EXTERNAL ETIME, *> EXTERNAL ETIME_, INTERNAL ETIME, INTERNAL CPU_TIME is available on *> your machine. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup auxOTHERauxiliary * * ===================================================================== REAL FUNCTION SECOND( ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * ===================================================================== * SECOND = 0.0E+0 RETURN * * End of SECOND * END piqp-octave/src/eigen/lapack/double.cpp0000644000175100001770000000106214107270226017742 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #define SCALAR double #define SCALAR_SUFFIX d #define SCALAR_SUFFIX_UP "D" #define ISCOMPLEX 0 #include "cholesky.cpp" #include "lu.cpp" #include "eigenvalues.cpp" #include "svd.cpp" piqp-octave/src/eigen/lapack/single.cpp0000644000175100001770000000106114107270226017750 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. 
If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #define SCALAR float #define SCALAR_SUFFIX s #define SCALAR_SUFFIX_UP "S" #define ISCOMPLEX 0 #include "cholesky.cpp" #include "lu.cpp" #include "eigenvalues.cpp" #include "svd.cpp" piqp-octave/src/eigen/lapack/dladiv.f0000644000175100001770000000563114107270226017404 0ustar runnerdocker*> \brief \b DLADIV * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download DLADIV + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE DLADIV( A, B, C, D, P, Q ) * * .. Scalar Arguments .. * DOUBLE PRECISION A, B, C, D, P, Q * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> DLADIV performs complex division in real arithmetic *> *> a + i*b *> p + i*q = --------- *> c + i*d *> *> The algorithm is due to Robert L. Smith and can be found *> in D. Knuth, The art of Computer Programming, Vol.2, p.195 *> \endverbatim * * Arguments: * ========== * *> \param[in] A *> \verbatim *> A is DOUBLE PRECISION *> \endverbatim *> *> \param[in] B *> \verbatim *> B is DOUBLE PRECISION *> \endverbatim *> *> \param[in] C *> \verbatim *> C is DOUBLE PRECISION *> \endverbatim *> *> \param[in] D *> \verbatim *> D is DOUBLE PRECISION *> The scalars a, b, c, and d in the above expression. *> \endverbatim *> *> \param[out] P *> \verbatim *> P is DOUBLE PRECISION *> \endverbatim *> *> \param[out] Q *> \verbatim *> Q is DOUBLE PRECISION *> The scalars p and q in the above expression. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup auxOTHERauxiliary * * ===================================================================== SUBROUTINE DLADIV( A, B, C, D, P, Q ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. DOUBLE PRECISION A, B, C, D, P, Q * .. * * ===================================================================== * * .. Local Scalars .. DOUBLE PRECISION E, F * .. * .. Intrinsic Functions .. INTRINSIC ABS * .. * .. Executable Statements .. * IF( ABS( D ).LT.ABS( C ) ) THEN E = D / C F = C + D*E P = ( A+B*E ) / F Q = ( B-A*E ) / F ELSE E = C / D F = D + C*E P = ( B+A*E ) / F Q = ( -A+B*E ) / F END IF * RETURN * * End of DLADIV * END piqp-octave/src/eigen/lapack/dlamch.f0000644000175100001770000001221314107270226017363 0ustar runnerdocker*> \brief \b DLAMCH * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * * Definition: * =========== * * DOUBLE PRECISION FUNCTION DLAMCH( CMACH ) * * *> \par Purpose: * ============= *> *> \verbatim *> *> DLAMCH determines double precision machine parameters. 
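The Smith algorithm that DLADIV implements above can be transcribed directly; a hypothetical C++ version (illustrative only) is below. Choosing the branch by comparing |c| with |d| keeps the intermediate ratio e at magnitude at most 1, which is what avoids overflow in forming c*c + d*d.

#include <cmath>
#include <utility>

// (a + i*b) / (c + i*d) in real arithmetic, returning (p, q) with p + i*q.
std::pair<double, double> smith_div(double a, double b, double c, double d) {
    double p, q;
    if (std::fabs(d) < std::fabs(c)) {
        double e = d / c;
        double f = c + d * e;
        p = (a + b * e) / f;
        q = (b - a * e) / f;
    } else {
        double e = c / d;
        double f = d + c * e;
        p = (b + a * e) / f;
        q = (-a + b * e) / f;
    }
    return {p, q};
}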
*> \endverbatim * * Arguments: * ========== * *> \param[in] CMACH *> \verbatim *> Specifies the value to be returned by DLAMCH: *> = 'E' or 'e', DLAMCH := eps *> = 'S' or 's , DLAMCH := sfmin *> = 'B' or 'b', DLAMCH := base *> = 'P' or 'p', DLAMCH := eps*base *> = 'N' or 'n', DLAMCH := t *> = 'R' or 'r', DLAMCH := rnd *> = 'M' or 'm', DLAMCH := emin *> = 'U' or 'u', DLAMCH := rmin *> = 'L' or 'l', DLAMCH := emax *> = 'O' or 'o', DLAMCH := rmax *> where *> eps = relative machine precision *> sfmin = safe minimum, such that 1/sfmin does not overflow *> base = base of the machine *> prec = eps*base *> t = number of (base) digits in the mantissa *> rnd = 1.0 when rounding occurs in addition, 0.0 otherwise *> emin = minimum exponent before (gradual) underflow *> rmin = underflow threshold - base**(emin-1) *> emax = largest exponent before overflow *> rmax = overflow threshold - (base**emax)*(1-eps) *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup auxOTHERauxiliary * * ===================================================================== DOUBLE PRECISION FUNCTION DLAMCH( CMACH ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. CHARACTER CMACH * .. * * ===================================================================== * * .. Parameters .. DOUBLE PRECISION ONE, ZERO PARAMETER ( ONE = 1.0D+0, ZERO = 0.0D+0 ) * .. * .. Local Scalars .. DOUBLE PRECISION RND, EPS, SFMIN, SMALL, RMACH * .. * .. External Functions .. LOGICAL LSAME EXTERNAL LSAME * .. * .. Intrinsic Functions .. INTRINSIC DIGITS, EPSILON, HUGE, MAXEXPONENT, $ MINEXPONENT, RADIX, TINY * .. * .. Executable Statements .. * * * Assume rounding, not chopping. Always. * RND = ONE * IF( ONE.EQ.RND ) THEN EPS = EPSILON(ZERO) * 0.5 ELSE EPS = EPSILON(ZERO) END IF * IF( LSAME( CMACH, 'E' ) ) THEN RMACH = EPS ELSE IF( LSAME( CMACH, 'S' ) ) THEN SFMIN = TINY(ZERO) SMALL = ONE / HUGE(ZERO) IF( SMALL.GE.SFMIN ) THEN * * Use SMALL plus a bit, to avoid the possibility of rounding * causing overflow when computing 1/sfmin. * SFMIN = SMALL*( ONE+EPS ) END IF RMACH = SFMIN ELSE IF( LSAME( CMACH, 'B' ) ) THEN RMACH = RADIX(ZERO) ELSE IF( LSAME( CMACH, 'P' ) ) THEN RMACH = EPS * RADIX(ZERO) ELSE IF( LSAME( CMACH, 'N' ) ) THEN RMACH = DIGITS(ZERO) ELSE IF( LSAME( CMACH, 'R' ) ) THEN RMACH = RND ELSE IF( LSAME( CMACH, 'M' ) ) THEN RMACH = MINEXPONENT(ZERO) ELSE IF( LSAME( CMACH, 'U' ) ) THEN RMACH = tiny(zero) ELSE IF( LSAME( CMACH, 'L' ) ) THEN RMACH = MAXEXPONENT(ZERO) ELSE IF( LSAME( CMACH, 'O' ) ) THEN RMACH = HUGE(ZERO) ELSE RMACH = ZERO END IF * DLAMCH = RMACH RETURN * * End of DLAMCH * END ************************************************************************ *> \brief \b DLAMC3 *> \details *> \b Purpose: *> \verbatim *> DLAMC3 is intended to force A and B to be stored prior to doing *> the addition of A and B , for use in situations where optimizers *> might hold one of these in a register. *> \endverbatim *> \author LAPACK is a software package provided by Univ. of Tennessee, Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd.. 
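A loose C++ analogue of DLAMC3's intent (a sketch only, not part of the sources): routing the sum through a volatile forces it into memory, so an optimizer cannot keep the intermediate in a wider register.

double lamc3_like(double a, double b) {
    volatile double sum = a + b;  // the volatile store defeats register-width tricks
    return sum;
}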
*> \date November 2011 *> \ingroup auxOTHERauxiliary *> *> \param[in] A *> \verbatim *> A is a DOUBLE PRECISION *> \endverbatim *> *> \param[in] B *> \verbatim *> B is a DOUBLE PRECISION *> The values A and B. *> \endverbatim *> DOUBLE PRECISION FUNCTION DLAMC3( A, B ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * Univ. of Tennessee, Univ. of California Berkeley and NAG Ltd.. * November 2010 * * .. Scalar Arguments .. DOUBLE PRECISION A, B * .. * ===================================================================== * * .. Executable Statements .. * DLAMC3 = A + B * RETURN * * End of DLAMC3 * END * ************************************************************************ piqp-octave/src/eigen/lapack/cholesky.cpp0000644000175100001770000000423514107270226020316 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2010-2011 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #include "lapack_common.h" #include // POTRF computes the Cholesky factorization of a real symmetric positive definite matrix A. EIGEN_LAPACK_FUNC(potrf,(char* uplo, int *n, RealScalar *pa, int *lda, int *info)) { *info = 0; if(UPLO(*uplo)==INVALID) *info = -1; else if(*n<0) *info = -2; else if(*lda(pa); MatrixType A(a,*n,*n,*lda); int ret; if(UPLO(*uplo)==UP) ret = int(internal::llt_inplace::blocked(A)); else ret = int(internal::llt_inplace::blocked(A)); if(ret>=0) *info = ret+1; return 0; } // POTRS solves a system of linear equations A*X = B with a symmetric // positive definite matrix A using the Cholesky factorization // A = U**T*U or A = L*L**T computed by DPOTRF. EIGEN_LAPACK_FUNC(potrs,(char* uplo, int *n, int *nrhs, RealScalar *pa, int *lda, RealScalar *pb, int *ldb, int *info)) { *info = 0; if(UPLO(*uplo)==INVALID) *info = -1; else if(*n<0) *info = -2; else if(*nrhs<0) *info = -3; else if(*lda(pa); Scalar* b = reinterpret_cast(pb); MatrixType A(a,*n,*n,*lda); MatrixType B(b,*n,*nrhs,*ldb); if(UPLO(*uplo)==UP) { A.triangularView().adjoint().solveInPlace(B); A.triangularView().solveInPlace(B); } else { A.triangularView().solveInPlace(B); A.triangularView().adjoint().solveInPlace(B); } return 0; } piqp-octave/src/eigen/lapack/slarfb.f0000644000175100001770000005430714107270226017416 0ustar runnerdocker*> \brief \b SLARFB * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download SLARFB + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE SLARFB( SIDE, TRANS, DIRECT, STOREV, M, N, K, V, LDV, * T, LDT, C, LDC, WORK, LDWORK ) * * .. Scalar Arguments .. * CHARACTER DIRECT, SIDE, STOREV, TRANS * INTEGER K, LDC, LDT, LDV, LDWORK, M, N * .. * .. Array Arguments .. * REAL C( LDC, * ), T( LDT, * ), V( LDV, * ), * $ WORK( LDWORK, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> SLARFB applies a real block reflector H or its transpose H**T to a *> real m by n matrix C, from either the left or the right. 
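Illustrative only (hypothetical helper, under the assumptions DIRECT = 'F', STOREV = 'C', SIDE = 'L', TRANS = 'N', and V stored with its unit diagonal explicit and zeros above it), the net effect of SLARFB is C := (I - V*T*V^T)*C; a minimal Eigen sketch:

#include <Eigen/Dense>

// The Fortran routine reaches the same result blockwise through a workspace
// W = C^T * V using STRMM/SGEMM, so the m-by-m reflector H is never formed.
void apply_block_reflector_left(Eigen::MatrixXf& C,
                                const Eigen::MatrixXf& V,   // m x k
                                const Eigen::MatrixXf& T) { // k x k triangular factor
    Eigen::MatrixXf W = V.transpose() * C;   // k x n workspace
    C.noalias() -= V * (T * W);              // rank-k update of C
}

Applying T to the small k-by-n workspace first keeps the cost at O(m*n*k), which is the point of forming the block reflector from its triangular factor.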
*> \endverbatim * * Arguments: * ========== * *> \param[in] SIDE *> \verbatim *> SIDE is CHARACTER*1 *> = 'L': apply H or H**T from the Left *> = 'R': apply H or H**T from the Right *> \endverbatim *> *> \param[in] TRANS *> \verbatim *> TRANS is CHARACTER*1 *> = 'N': apply H (No transpose) *> = 'T': apply H**T (Transpose) *> \endverbatim *> *> \param[in] DIRECT *> \verbatim *> DIRECT is CHARACTER*1 *> Indicates how H is formed from a product of elementary *> reflectors *> = 'F': H = H(1) H(2) . . . H(k) (Forward) *> = 'B': H = H(k) . . . H(2) H(1) (Backward) *> \endverbatim *> *> \param[in] STOREV *> \verbatim *> STOREV is CHARACTER*1 *> Indicates how the vectors which define the elementary *> reflectors are stored: *> = 'C': Columnwise *> = 'R': Rowwise *> \endverbatim *> *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix C. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix C. *> \endverbatim *> *> \param[in] K *> \verbatim *> K is INTEGER *> The order of the matrix T (= the number of elementary *> reflectors whose product defines the block reflector). *> \endverbatim *> *> \param[in] V *> \verbatim *> V is REAL array, dimension *> (LDV,K) if STOREV = 'C' *> (LDV,M) if STOREV = 'R' and SIDE = 'L' *> (LDV,N) if STOREV = 'R' and SIDE = 'R' *> The matrix V. See Further Details. *> \endverbatim *> *> \param[in] LDV *> \verbatim *> LDV is INTEGER *> The leading dimension of the array V. *> If STOREV = 'C' and SIDE = 'L', LDV >= max(1,M); *> if STOREV = 'C' and SIDE = 'R', LDV >= max(1,N); *> if STOREV = 'R', LDV >= K. *> \endverbatim *> *> \param[in] T *> \verbatim *> T is REAL array, dimension (LDT,K) *> The triangular k by k matrix T in the representation of the *> block reflector. *> \endverbatim *> *> \param[in] LDT *> \verbatim *> LDT is INTEGER *> The leading dimension of the array T. LDT >= K. *> \endverbatim *> *> \param[in,out] C *> \verbatim *> C is REAL array, dimension (LDC,N) *> On entry, the m by n matrix C. *> On exit, C is overwritten by H*C or H**T*C or C*H or C*H**T. *> \endverbatim *> *> \param[in] LDC *> \verbatim *> LDC is INTEGER *> The leading dimension of the array C. LDC >= max(1,M). *> \endverbatim *> *> \param[out] WORK *> \verbatim *> WORK is REAL array, dimension (LDWORK,K) *> \endverbatim *> *> \param[in] LDWORK *> \verbatim *> LDWORK is INTEGER *> The leading dimension of the array WORK. *> If SIDE = 'L', LDWORK >= max(1,N); *> if SIDE = 'R', LDWORK >= max(1,M). *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup realOTHERauxiliary * *> \par Further Details: * ===================== *> *> \verbatim *> *> The shape of the matrix V and the storage of the vectors which define *> the H(i) is best illustrated by the following example with n = 5 and *> k = 3. The elements equal to 1 are not stored; the corresponding *> array elements are modified but restored on exit. The rest of the *> array is not used. 
*> *> DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': *> *> V = ( 1 ) V = ( 1 v1 v1 v1 v1 ) *> ( v1 1 ) ( 1 v2 v2 v2 ) *> ( v1 v2 1 ) ( 1 v3 v3 ) *> ( v1 v2 v3 ) *> ( v1 v2 v3 ) *> *> DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': *> *> V = ( v1 v2 v3 ) V = ( v1 v1 1 ) *> ( v1 v2 v3 ) ( v2 v2 v2 1 ) *> ( 1 v2 v3 ) ( v3 v3 v3 v3 1 ) *> ( 1 v3 ) *> ( 1 ) *> \endverbatim *> * ===================================================================== SUBROUTINE SLARFB( SIDE, TRANS, DIRECT, STOREV, M, N, K, V, LDV, $ T, LDT, C, LDC, WORK, LDWORK ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. CHARACTER DIRECT, SIDE, STOREV, TRANS INTEGER K, LDC, LDT, LDV, LDWORK, M, N * .. * .. Array Arguments .. REAL C( LDC, * ), T( LDT, * ), V( LDV, * ), $ WORK( LDWORK, * ) * .. * * ===================================================================== * * .. Parameters .. REAL ONE PARAMETER ( ONE = 1.0E+0 ) * .. * .. Local Scalars .. CHARACTER TRANST INTEGER I, J, LASTV, LASTC * .. * .. External Functions .. LOGICAL LSAME INTEGER ILASLR, ILASLC EXTERNAL LSAME, ILASLR, ILASLC * .. * .. External Subroutines .. EXTERNAL SCOPY, SGEMM, STRMM * .. * .. Executable Statements .. * * Quick return if possible * IF( M.LE.0 .OR. N.LE.0 ) $ RETURN * IF( LSAME( TRANS, 'N' ) ) THEN TRANST = 'T' ELSE TRANST = 'N' END IF * IF( LSAME( STOREV, 'C' ) ) THEN * IF( LSAME( DIRECT, 'F' ) ) THEN * * Let V = ( V1 ) (first K rows) * ( V2 ) * where V1 is unit lower triangular. * IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**T * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILASLR( M, K, V, LDV ) ) LASTC = ILASLC( LASTV, N, C, LDC ) * * W := C**T * V = (C1**T * V1 + C2**T * V2) (stored in WORK) * * W := C1**T * DO 10 J = 1, K CALL SCOPY( LASTC, C( J, 1 ), LDC, WORK( 1, J ), 1 ) 10 CONTINUE * * W := W * V1 * CALL STRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2**T *V2 * CALL SGEMM( 'Transpose', 'No transpose', $ LASTC, K, LASTV-K, $ ONE, C( K+1, 1 ), LDC, V( K+1, 1 ), LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T**T or W * T * CALL STRMM( 'Right', 'Upper', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V * W**T * IF( LASTV.GT.K ) THEN * * C2 := C2 - V2 * W**T * CALL SGEMM( 'No transpose', 'Transpose', $ LASTV-K, LASTC, K, $ -ONE, V( K+1, 1 ), LDV, WORK, LDWORK, ONE, $ C( K+1, 1 ), LDC ) END IF * * W := W * V1**T * CALL STRMM( 'Right', 'Lower', 'Transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W**T * DO 30 J = 1, K DO 20 I = 1, LASTC C( J, I ) = C( J, I ) - WORK( I, J ) 20 CONTINUE 30 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**T where C = ( C1 C2 ) * LASTV = MAX( K, ILASLR( N, K, V, LDV ) ) LASTC = ILASLR( M, LASTV, C, LDC ) * * W := C * V = (C1*V1 + C2*V2) (stored in WORK) * * W := C1 * DO 40 J = 1, K CALL SCOPY( LASTC, C( 1, J ), 1, WORK( 1, J ), 1 ) 40 CONTINUE * * W := W * V1 * CALL STRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2 * V2 * CALL SGEMM( 'No transpose', 'No transpose', $ LASTC, K, LASTV-K, $ ONE, C( 1, K+1 ), LDC, V( K+1, 1 ), LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**T * CALL STRMM( 'Right', 'Upper', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, 
LDWORK ) * * C := C - W * V**T * IF( LASTV.GT.K ) THEN * * C2 := C2 - W * V2**T * CALL SGEMM( 'No transpose', 'Transpose', $ LASTC, LASTV-K, K, $ -ONE, WORK, LDWORK, V( K+1, 1 ), LDV, ONE, $ C( 1, K+1 ), LDC ) END IF * * W := W * V1**T * CALL STRMM( 'Right', 'Lower', 'Transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W * DO 60 J = 1, K DO 50 I = 1, LASTC C( I, J ) = C( I, J ) - WORK( I, J ) 50 CONTINUE 60 CONTINUE END IF * ELSE * * Let V = ( V1 ) * ( V2 ) (last K rows) * where V2 is unit upper triangular. * IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**T * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILASLR( M, K, V, LDV ) ) LASTC = ILASLC( LASTV, N, C, LDC ) * * W := C**T * V = (C1**T * V1 + C2**T * V2) (stored in WORK) * * W := C2**T * DO 70 J = 1, K CALL SCOPY( LASTC, C( LASTV-K+J, 1 ), LDC, $ WORK( 1, J ), 1 ) 70 CONTINUE * * W := W * V2 * CALL STRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1**T*V1 * CALL SGEMM( 'Transpose', 'No transpose', $ LASTC, K, LASTV-K, ONE, C, LDC, V, LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T**T or W * T * CALL STRMM( 'Right', 'Lower', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V * W**T * IF( LASTV.GT.K ) THEN * * C1 := C1 - V1 * W**T * CALL SGEMM( 'No transpose', 'Transpose', $ LASTV-K, LASTC, K, -ONE, V, LDV, WORK, LDWORK, $ ONE, C, LDC ) END IF * * W := W * V2**T * CALL STRMM( 'Right', 'Upper', 'Transpose', 'Unit', $ LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) * * C2 := C2 - W**T * DO 90 J = 1, K DO 80 I = 1, LASTC C( LASTV-K+J, I ) = C( LASTV-K+J, I ) - WORK(I, J) 80 CONTINUE 90 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**T where C = ( C1 C2 ) * LASTV = MAX( K, ILASLR( N, K, V, LDV ) ) LASTC = ILASLR( M, LASTV, C, LDC ) * * W := C * V = (C1*V1 + C2*V2) (stored in WORK) * * W := C2 * DO 100 J = 1, K CALL SCOPY( LASTC, C( 1, N-K+J ), 1, WORK( 1, J ), 1 ) 100 CONTINUE * * W := W * V2 * CALL STRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1 * V1 * CALL SGEMM( 'No transpose', 'No transpose', $ LASTC, K, LASTV-K, ONE, C, LDC, V, LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**T * CALL STRMM( 'Right', 'Lower', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V**T * IF( LASTV.GT.K ) THEN * * C1 := C1 - W * V1**T * CALL SGEMM( 'No transpose', 'Transpose', $ LASTC, LASTV-K, K, -ONE, WORK, LDWORK, V, LDV, $ ONE, C, LDC ) END IF * * W := W * V2**T * CALL STRMM( 'Right', 'Upper', 'Transpose', 'Unit', $ LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) * * C2 := C2 - W * DO 120 J = 1, K DO 110 I = 1, LASTC C( I, LASTV-K+J ) = C( I, LASTV-K+J ) - WORK(I, J) 110 CONTINUE 120 CONTINUE END IF END IF * ELSE IF( LSAME( STOREV, 'R' ) ) THEN * IF( LSAME( DIRECT, 'F' ) ) THEN * * Let V = ( V1 V2 ) (V1: first K columns) * where V1 is unit upper triangular. 
* IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**T * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILASLC( K, M, V, LDV ) ) LASTC = ILASLC( LASTV, N, C, LDC ) * * W := C**T * V**T = (C1**T * V1**T + C2**T * V2**T) (stored in WORK) * * W := C1**T * DO 130 J = 1, K CALL SCOPY( LASTC, C( J, 1 ), LDC, WORK( 1, J ), 1 ) 130 CONTINUE * * W := W * V1**T * CALL STRMM( 'Right', 'Upper', 'Transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2**T*V2**T * CALL SGEMM( 'Transpose', 'Transpose', $ LASTC, K, LASTV-K, $ ONE, C( K+1, 1 ), LDC, V( 1, K+1 ), LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T**T or W * T * CALL STRMM( 'Right', 'Upper', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V**T * W**T * IF( LASTV.GT.K ) THEN * * C2 := C2 - V2**T * W**T * CALL SGEMM( 'Transpose', 'Transpose', $ LASTV-K, LASTC, K, $ -ONE, V( 1, K+1 ), LDV, WORK, LDWORK, $ ONE, C( K+1, 1 ), LDC ) END IF * * W := W * V1 * CALL STRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W**T * DO 150 J = 1, K DO 140 I = 1, LASTC C( J, I ) = C( J, I ) - WORK( I, J ) 140 CONTINUE 150 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**T where C = ( C1 C2 ) * LASTV = MAX( K, ILASLC( K, N, V, LDV ) ) LASTC = ILASLR( M, LASTV, C, LDC ) * * W := C * V**T = (C1*V1**T + C2*V2**T) (stored in WORK) * * W := C1 * DO 160 J = 1, K CALL SCOPY( LASTC, C( 1, J ), 1, WORK( 1, J ), 1 ) 160 CONTINUE * * W := W * V1**T * CALL STRMM( 'Right', 'Upper', 'Transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2 * V2**T * CALL SGEMM( 'No transpose', 'Transpose', $ LASTC, K, LASTV-K, $ ONE, C( 1, K+1 ), LDC, V( 1, K+1 ), LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**T * CALL STRMM( 'Right', 'Upper', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V * IF( LASTV.GT.K ) THEN * * C2 := C2 - W * V2 * CALL SGEMM( 'No transpose', 'No transpose', $ LASTC, LASTV-K, K, $ -ONE, WORK, LDWORK, V( 1, K+1 ), LDV, $ ONE, C( 1, K+1 ), LDC ) END IF * * W := W * V1 * CALL STRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W * DO 180 J = 1, K DO 170 I = 1, LASTC C( I, J ) = C( I, J ) - WORK( I, J ) 170 CONTINUE 180 CONTINUE * END IF * ELSE * * Let V = ( V1 V2 ) (V2: last K columns) * where V2 is unit lower triangular. 
* IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**T * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILASLC( K, M, V, LDV ) ) LASTC = ILASLC( LASTV, N, C, LDC ) * * W := C**T * V**T = (C1**T * V1**T + C2**T * V2**T) (stored in WORK) * * W := C2**T * DO 190 J = 1, K CALL SCOPY( LASTC, C( LASTV-K+J, 1 ), LDC, $ WORK( 1, J ), 1 ) 190 CONTINUE * * W := W * V2**T * CALL STRMM( 'Right', 'Lower', 'Transpose', 'Unit', $ LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1**T * V1**T * CALL SGEMM( 'Transpose', 'Transpose', $ LASTC, K, LASTV-K, ONE, C, LDC, V, LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T**T or W * T * CALL STRMM( 'Right', 'Lower', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V**T * W**T * IF( LASTV.GT.K ) THEN * * C1 := C1 - V1**T * W**T * CALL SGEMM( 'Transpose', 'Transpose', $ LASTV-K, LASTC, K, -ONE, V, LDV, WORK, LDWORK, $ ONE, C, LDC ) END IF * * W := W * V2 * CALL STRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) * * C2 := C2 - W**T * DO 210 J = 1, K DO 200 I = 1, LASTC C( LASTV-K+J, I ) = C( LASTV-K+J, I ) - WORK(I, J) 200 CONTINUE 210 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**T where C = ( C1 C2 ) * LASTV = MAX( K, ILASLC( K, N, V, LDV ) ) LASTC = ILASLR( M, LASTV, C, LDC ) * * W := C * V**T = (C1*V1**T + C2*V2**T) (stored in WORK) * * W := C2 * DO 220 J = 1, K CALL SCOPY( LASTC, C( 1, LASTV-K+J ), 1, $ WORK( 1, J ), 1 ) 220 CONTINUE * * W := W * V2**T * CALL STRMM( 'Right', 'Lower', 'Transpose', 'Unit', $ LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1 * V1**T * CALL SGEMM( 'No transpose', 'Transpose', $ LASTC, K, LASTV-K, ONE, C, LDC, V, LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**T * CALL STRMM( 'Right', 'Lower', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V * IF( LASTV.GT.K ) THEN * * C1 := C1 - W * V1 * CALL SGEMM( 'No transpose', 'No transpose', $ LASTC, LASTV-K, K, -ONE, WORK, LDWORK, V, LDV, $ ONE, C, LDC ) END IF * * W := W * V2 * CALL STRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) * * C1 := C1 - W * DO 240 J = 1, K DO 230 I = 1, LASTC C( I, LASTV-K+J ) = C( I, LASTV-K+J ) $ - WORK( I, J ) 230 CONTINUE 240 CONTINUE * END IF * END IF END IF * RETURN * * End of SLARFB * END piqp-octave/src/eigen/lapack/clacgv.f0000644000175100001770000000541714107270226017402 0ustar runnerdocker*> \brief \b CLACGV * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download CLACGV + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE CLACGV( N, X, INCX ) * * .. Scalar Arguments .. * INTEGER INCX, N * .. * .. Array Arguments .. * COMPLEX X( * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> CLACGV conjugates a complex vector of length N. *> \endverbatim * * Arguments: * ========== * *> \param[in] N *> \verbatim *> N is INTEGER *> The length of the vector X. N >= 0. *> \endverbatim *> *> \param[in,out] X *> \verbatim *> X is COMPLEX array, dimension *> (1+(N-1)*abs(INCX)) *> On entry, the vector of length N to be conjugated. *> On exit, X is overwritten with conjg(X). *> \endverbatim *> *> \param[in] INCX *> \verbatim *> INCX is INTEGER *> The spacing between successive elements of X. 
*> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup complexOTHERauxiliary * * ===================================================================== SUBROUTINE CLACGV( N, X, INCX ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. INTEGER INCX, N * .. * .. Array Arguments .. COMPLEX X( * ) * .. * * ===================================================================== * * .. Local Scalars .. INTEGER I, IOFF * .. * .. Intrinsic Functions .. INTRINSIC CONJG * .. * .. Executable Statements .. * IF( INCX.EQ.1 ) THEN DO 10 I = 1, N X( I ) = CONJG( X( I ) ) 10 CONTINUE ELSE IOFF = 1 IF( INCX.LT.0 ) $ IOFF = 1 - ( N-1 )*INCX DO 20 I = 1, N X( IOFF ) = CONJG( X( IOFF ) ) IOFF = IOFF + INCX 20 CONTINUE END IF RETURN * * End of CLACGV * END piqp-octave/src/eigen/lapack/eigenvalues.cpp0000644000175100001770000000344214107270226021003 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2011 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #include "lapack_common.h" #include // computes eigen values and vectors of a general N-by-N matrix A EIGEN_LAPACK_FUNC(syev,(char *jobz, char *uplo, int* n, Scalar* a, int *lda, Scalar* w, Scalar* /*work*/, int* lwork, int *info)) { // TODO exploit the work buffer bool query_size = *lwork==-1; *info = 0; if(*jobz!='N' && *jobz!='V') *info = -1; else if(UPLO(*uplo)==INVALID) *info = -2; else if(*n<0) *info = -3; else if(*lda eig(mat,computeVectors?ComputeEigenvectors:EigenvaluesOnly); if(eig.info()==NoConvergence) { make_vector(w,*n).setZero(); if(computeVectors) matrix(a,*n,*n,*lda).setIdentity(); //*info = 1; return 0; } make_vector(w,*n) = eig.eigenvalues(); if(computeVectors) matrix(a,*n,*n,*lda) = eig.eigenvectors(); return 0; } piqp-octave/src/eigen/lapack/dlarft.f0000644000175100001770000002375614107270226017425 0ustar runnerdocker*> \brief \b DLARFT * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download DLARFT + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE DLARFT( DIRECT, STOREV, N, K, V, LDV, TAU, T, LDT ) * * .. Scalar Arguments .. * CHARACTER DIRECT, STOREV * INTEGER K, LDT, LDV, N * .. * .. Array Arguments .. * DOUBLE PRECISION T( LDT, * ), TAU( * ), V( LDV, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> DLARFT forms the triangular factor T of a real block reflector H *> of order n, which is defined as a product of k elementary reflectors. *> *> If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; *> *> If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. 
*> *> If STOREV = 'C', the vector which defines the elementary reflector *> H(i) is stored in the i-th column of the array V, and *> *> H = I - V * T * V**T *> *> If STOREV = 'R', the vector which defines the elementary reflector *> H(i) is stored in the i-th row of the array V, and *> *> H = I - V**T * T * V *> \endverbatim * * Arguments: * ========== * *> \param[in] DIRECT *> \verbatim *> DIRECT is CHARACTER*1 *> Specifies the order in which the elementary reflectors are *> multiplied to form the block reflector: *> = 'F': H = H(1) H(2) . . . H(k) (Forward) *> = 'B': H = H(k) . . . H(2) H(1) (Backward) *> \endverbatim *> *> \param[in] STOREV *> \verbatim *> STOREV is CHARACTER*1 *> Specifies how the vectors which define the elementary *> reflectors are stored (see also Further Details): *> = 'C': columnwise *> = 'R': rowwise *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The order of the block reflector H. N >= 0. *> \endverbatim *> *> \param[in] K *> \verbatim *> K is INTEGER *> The order of the triangular factor T (= the number of *> elementary reflectors). K >= 1. *> \endverbatim *> *> \param[in] V *> \verbatim *> V is DOUBLE PRECISION array, dimension *> (LDV,K) if STOREV = 'C' *> (LDV,N) if STOREV = 'R' *> The matrix V. See further details. *> \endverbatim *> *> \param[in] LDV *> \verbatim *> LDV is INTEGER *> The leading dimension of the array V. *> If STOREV = 'C', LDV >= max(1,N); if STOREV = 'R', LDV >= K. *> \endverbatim *> *> \param[in] TAU *> \verbatim *> TAU is DOUBLE PRECISION array, dimension (K) *> TAU(i) must contain the scalar factor of the elementary *> reflector H(i). *> \endverbatim *> *> \param[out] T *> \verbatim *> T is DOUBLE PRECISION array, dimension (LDT,K) *> The k by k triangular factor T of the block reflector. *> If DIRECT = 'F', T is upper triangular; if DIRECT = 'B', T is *> lower triangular. The rest of the array is not used. *> \endverbatim *> *> \param[in] LDT *> \verbatim *> LDT is INTEGER *> The leading dimension of the array T. LDT >= K. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date April 2012 * *> \ingroup doubleOTHERauxiliary * *> \par Further Details: * ===================== *> *> \verbatim *> *> The shape of the matrix V and the storage of the vectors which define *> the H(i) is best illustrated by the following example with n = 5 and *> k = 3. The elements equal to 1 are not stored. *> *> DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': *> *> V = ( 1 ) V = ( 1 v1 v1 v1 v1 ) *> ( v1 1 ) ( 1 v2 v2 v2 ) *> ( v1 v2 1 ) ( 1 v3 v3 ) *> ( v1 v2 v3 ) *> ( v1 v2 v3 ) *> *> DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': *> *> V = ( v1 v2 v3 ) V = ( v1 v1 1 ) *> ( v1 v2 v3 ) ( v2 v2 v2 1 ) *> ( 1 v2 v3 ) ( v3 v3 v3 v3 1 ) *> ( 1 v3 ) *> ( 1 ) *> \endverbatim *> * ===================================================================== SUBROUTINE DLARFT( DIRECT, STOREV, N, K, V, LDV, TAU, T, LDT ) * * -- LAPACK auxiliary routine (version 3.4.1) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * April 2012 * * .. Scalar Arguments .. CHARACTER DIRECT, STOREV INTEGER K, LDT, LDV, N * .. * .. Array Arguments .. DOUBLE PRECISION T( LDT, * ), TAU( * ), V( LDV, * ) * .. * * ===================================================================== * * .. Parameters .. 
DOUBLE PRECISION ONE, ZERO PARAMETER ( ONE = 1.0D+0, ZERO = 0.0D+0 ) * .. * .. Local Scalars .. INTEGER I, J, PREVLASTV, LASTV * .. * .. External Subroutines .. EXTERNAL DGEMV, DTRMV * .. * .. External Functions .. LOGICAL LSAME EXTERNAL LSAME * .. * .. Executable Statements .. * * Quick return if possible * IF( N.EQ.0 ) $ RETURN * IF( LSAME( DIRECT, 'F' ) ) THEN PREVLASTV = N DO I = 1, K PREVLASTV = MAX( I, PREVLASTV ) IF( TAU( I ).EQ.ZERO ) THEN * * H(i) = I * DO J = 1, I T( J, I ) = ZERO END DO ELSE * * general case * IF( LSAME( STOREV, 'C' ) ) THEN * Skip any trailing zeros. DO LASTV = N, I+1, -1 IF( V( LASTV, I ).NE.ZERO ) EXIT END DO DO J = 1, I-1 T( J, I ) = -TAU( I ) * V( I , J ) END DO J = MIN( LASTV, PREVLASTV ) * * T(1:i-1,i) := - tau(i) * V(i:j,1:i-1)**T * V(i:j,i) * CALL DGEMV( 'Transpose', J-I, I-1, -TAU( I ), $ V( I+1, 1 ), LDV, V( I+1, I ), 1, ONE, $ T( 1, I ), 1 ) ELSE * Skip any trailing zeros. DO LASTV = N, I+1, -1 IF( V( I, LASTV ).NE.ZERO ) EXIT END DO DO J = 1, I-1 T( J, I ) = -TAU( I ) * V( J , I ) END DO J = MIN( LASTV, PREVLASTV ) * * T(1:i-1,i) := - tau(i) * V(1:i-1,i:j) * V(i,i:j)**T * CALL DGEMV( 'No transpose', I-1, J-I, -TAU( I ), $ V( 1, I+1 ), LDV, V( I, I+1 ), LDV, ONE, $ T( 1, I ), 1 ) END IF * * T(1:i-1,i) := T(1:i-1,1:i-1) * T(1:i-1,i) * CALL DTRMV( 'Upper', 'No transpose', 'Non-unit', I-1, T, $ LDT, T( 1, I ), 1 ) T( I, I ) = TAU( I ) IF( I.GT.1 ) THEN PREVLASTV = MAX( PREVLASTV, LASTV ) ELSE PREVLASTV = LASTV END IF END IF END DO ELSE PREVLASTV = 1 DO I = K, 1, -1 IF( TAU( I ).EQ.ZERO ) THEN * * H(i) = I * DO J = I, K T( J, I ) = ZERO END DO ELSE * * general case * IF( I.LT.K ) THEN IF( LSAME( STOREV, 'C' ) ) THEN * Skip any leading zeros. DO LASTV = 1, I-1 IF( V( LASTV, I ).NE.ZERO ) EXIT END DO DO J = I+1, K T( J, I ) = -TAU( I ) * V( N-K+I , J ) END DO J = MAX( LASTV, PREVLASTV ) * * T(i+1:k,i) = -tau(i) * V(j:n-k+i,i+1:k)**T * V(j:n-k+i,i) * CALL DGEMV( 'Transpose', N-K+I-J, K-I, -TAU( I ), $ V( J, I+1 ), LDV, V( J, I ), 1, ONE, $ T( I+1, I ), 1 ) ELSE * Skip any leading zeros. 
DO LASTV = 1, I-1 IF( V( I, LASTV ).NE.ZERO ) EXIT END DO DO J = I+1, K T( J, I ) = -TAU( I ) * V( J, N-K+I ) END DO J = MAX( LASTV, PREVLASTV ) * * T(i+1:k,i) = -tau(i) * V(i+1:k,j:n-k+i) * V(i,j:n-k+i)**T * CALL DGEMV( 'No transpose', K-I, N-K+I-J, $ -TAU( I ), V( I+1, J ), LDV, V( I, J ), LDV, $ ONE, T( I+1, I ), 1 ) END IF * * T(i+1:k,i) := T(i+1:k,i+1:k) * T(i+1:k,i) * CALL DTRMV( 'Lower', 'No transpose', 'Non-unit', K-I, $ T( I+1, I+1 ), LDT, T( I+1, I ), 1 ) IF( I.GT.1 ) THEN PREVLASTV = MIN( PREVLASTV, LASTV ) ELSE PREVLASTV = LASTV END IF END IF T( I, I ) = TAU( I ) END IF END DO END IF RETURN * * End of DLARFT * END piqp-octave/src/eigen/lapack/CMakeLists.txt0000644000175100001770000002517414107270226020536 0ustar runnerdocker project(EigenLapack CXX) include(CheckLanguage) check_language(Fortran) if(CMAKE_Fortran_COMPILER) enable_language(Fortran) set(EIGEN_Fortran_COMPILER_WORKS ON) else() set(EIGEN_Fortran_COMPILER_WORKS OFF) endif() add_custom_target(lapack) include_directories(../blas) set(EigenLapack_SRCS single.cpp double.cpp complex_single.cpp complex_double.cpp ../blas/xerbla.cpp ) if(EIGEN_Fortran_COMPILER_WORKS) set(EigenLapack_SRCS ${EigenLapack_SRCS} slarft.f dlarft.f clarft.f zlarft.f slarfb.f dlarfb.f clarfb.f zlarfb.f slarfg.f dlarfg.f clarfg.f zlarfg.f slarf.f dlarf.f clarf.f zlarf.f sladiv.f dladiv.f cladiv.f zladiv.f ilaslr.f iladlr.f ilaclr.f ilazlr.f ilaslc.f iladlc.f ilaclc.f ilazlc.f dlapy2.f dlapy3.f slapy2.f slapy3.f clacgv.f zlacgv.f slamch.f dlamch.f second_NONE.f dsecnd_NONE.f ) option(EIGEN_ENABLE_LAPACK_TESTS OFF "Enable the Lapack unit tests") if(EIGEN_ENABLE_LAPACK_TESTS) get_filename_component(eigen_full_path_to_reference_lapack "./reference/" ABSOLUTE) if(NOT EXISTS ${eigen_full_path_to_reference_lapack}) # Download lapack and install sources and testing at the right place message(STATUS "Download lapack_addons_3.4.1.tgz...") file(DOWNLOAD "http://downloads.tuxfamily.org/eigen/lapack_addons_3.4.1.tgz" "${CMAKE_CURRENT_SOURCE_DIR}/lapack_addons_3.4.1.tgz" INACTIVITY_TIMEOUT 15 TIMEOUT 240 STATUS download_status EXPECTED_MD5 ab5742640617e3221a873aba44bbdc93 SHOW_PROGRESS) message(STATUS ${download_status}) list(GET download_status 0 download_status_num) set(download_status_num 0) if(download_status_num EQUAL 0) message(STATUS "Setup lapack reference and lapack unit tests") execute_process(COMMAND tar xzf "lapack_addons_3.4.1.tgz" WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}) else() message(STATUS "Download of lapack_addons_3.4.1.tgz failed, LAPACK unit tests won't be enabled") set(EIGEN_ENABLE_LAPACK_TESTS false) endif() endif() get_filename_component(eigen_full_path_to_reference_lapack "./reference/" ABSOLUTE) if(EXISTS ${eigen_full_path_to_reference_lapack}) set(EigenLapack_funcfilenames ssyev.f dsyev.f csyev.f zsyev.f spotrf.f dpotrf.f cpotrf.f zpotrf.f spotrs.f dpotrs.f cpotrs.f zpotrs.f sgetrf.f dgetrf.f cgetrf.f zgetrf.f sgetrs.f dgetrs.f cgetrs.f zgetrs.f) file(GLOB ReferenceLapack_SRCS0 RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} "reference/*.f") foreach(filename1 IN LISTS ReferenceLapack_SRCS0) string(REPLACE "reference/" "" filename ${filename1}) list(FIND EigenLapack_SRCS ${filename} id1) list(FIND EigenLapack_funcfilenames ${filename} id2) if((id1 EQUAL -1) AND (id2 EQUAL -1)) set(ReferenceLapack_SRCS ${ReferenceLapack_SRCS} reference/${filename}) endif() endforeach() endif() endif() endif() set(EIGEN_LAPACK_TARGETS "") add_library(eigen_lapack_static ${EigenLapack_SRCS} ${ReferenceLapack_SRCS}) list(APPEND EIGEN_LAPACK_TARGETS 
eigen_lapack_static) if (EIGEN_BUILD_SHARED_LIBS) add_library(eigen_lapack SHARED ${EigenLapack_SRCS}) list(APPEND EIGEN_LAPACK_TARGETS eigen_lapack) target_link_libraries(eigen_lapack eigen_blas) endif() foreach(target IN LISTS EIGEN_LAPACK_TARGETS) if(EIGEN_STANDARD_LIBRARIES_TO_LINK_TO) target_link_libraries(${target} ${EIGEN_STANDARD_LIBRARIES_TO_LINK_TO}) endif() add_dependencies(lapack ${target}) install(TARGETS ${target} RUNTIME DESTINATION bin LIBRARY DESTINATION lib ARCHIVE DESTINATION lib) endforeach() get_filename_component(eigen_full_path_to_testing_lapack "./testing/" ABSOLUTE) if(EXISTS ${eigen_full_path_to_testing_lapack}) # The following comes from lapack/TESTING/CMakeLists.txt # Get Python find_package(PythonInterp) message(STATUS "Looking for Python found - ${PYTHONINTERP_FOUND}") if (PYTHONINTERP_FOUND) message(STATUS "Using Python version ${PYTHON_VERSION_STRING}") endif() set(LAPACK_SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}) set(LAPACK_BINARY_DIR ${CMAKE_CURRENT_BINARY_DIR}) set(BUILD_SINGLE true) set(BUILD_DOUBLE true) set(BUILD_COMPLEX true) set(BUILD_COMPLEX16E true) if(MSVC_VERSION) # string(REPLACE "/STACK:10000000" "/STACK:900000000000000000" # CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS}") string(REGEX REPLACE "(.*)/STACK:(.*) (.*)" "\\1/STACK:900000000000000000 \\3" CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS}") endif() file(MAKE_DIRECTORY "${LAPACK_BINARY_DIR}/TESTING") add_subdirectory(testing/MATGEN) add_subdirectory(testing/LIN) add_subdirectory(testing/EIG) macro(add_lapack_test output input target) set(TEST_INPUT "${LAPACK_SOURCE_DIR}/testing/${input}") set(TEST_OUTPUT "${LAPACK_BINARY_DIR}/TESTING/${output}") string(REPLACE "." "_" input_name ${input}) set(testName "${target}_${input_name}") if(EXISTS "${TEST_INPUT}") add_test(NAME LAPACK-${testName} COMMAND "${CMAKE_COMMAND}" -DTEST=$ -DINPUT=${TEST_INPUT} -DOUTPUT=${TEST_OUTPUT} -DINTDIR=${CMAKE_CFG_INTDIR} -P "${LAPACK_SOURCE_DIR}/testing/runtest.cmake") endif() endmacro() if (BUILD_SINGLE) add_lapack_test(stest.out stest.in xlintsts) # # ======== SINGLE RFP LIN TESTS ======================== add_lapack_test(stest_rfp.out stest_rfp.in xlintstrfs) # # # ======== SINGLE EIG TESTS =========================== # add_lapack_test(snep.out nep.in xeigtsts) add_lapack_test(ssep.out sep.in xeigtsts) add_lapack_test(ssvd.out svd.in xeigtsts) add_lapack_test(sec.out sec.in xeigtsts) add_lapack_test(sed.out sed.in xeigtsts) add_lapack_test(sgg.out sgg.in xeigtsts) add_lapack_test(sgd.out sgd.in xeigtsts) add_lapack_test(ssb.out ssb.in xeigtsts) add_lapack_test(ssg.out ssg.in xeigtsts) add_lapack_test(sbal.out sbal.in xeigtsts) add_lapack_test(sbak.out sbak.in xeigtsts) add_lapack_test(sgbal.out sgbal.in xeigtsts) add_lapack_test(sgbak.out sgbak.in xeigtsts) add_lapack_test(sbb.out sbb.in xeigtsts) add_lapack_test(sglm.out glm.in xeigtsts) add_lapack_test(sgqr.out gqr.in xeigtsts) add_lapack_test(sgsv.out gsv.in xeigtsts) add_lapack_test(scsd.out csd.in xeigtsts) add_lapack_test(slse.out lse.in xeigtsts) endif() if (BUILD_DOUBLE) # # ======== DOUBLE LIN TESTS =========================== add_lapack_test(dtest.out dtest.in xlintstd) # # ======== DOUBLE RFP LIN TESTS ======================== add_lapack_test(dtest_rfp.out dtest_rfp.in xlintstrfd) # # ======== DOUBLE EIG TESTS =========================== add_lapack_test(dnep.out nep.in xeigtstd) add_lapack_test(dsep.out sep.in xeigtstd) add_lapack_test(dsvd.out svd.in xeigtstd) add_lapack_test(dec.out dec.in xeigtstd) add_lapack_test(ded.out ded.in xeigtstd) 
add_lapack_test(dgg.out dgg.in xeigtstd) add_lapack_test(dgd.out dgd.in xeigtstd) add_lapack_test(dsb.out dsb.in xeigtstd) add_lapack_test(dsg.out dsg.in xeigtstd) add_lapack_test(dbal.out dbal.in xeigtstd) add_lapack_test(dbak.out dbak.in xeigtstd) add_lapack_test(dgbal.out dgbal.in xeigtstd) add_lapack_test(dgbak.out dgbak.in xeigtstd) add_lapack_test(dbb.out dbb.in xeigtstd) add_lapack_test(dglm.out glm.in xeigtstd) add_lapack_test(dgqr.out gqr.in xeigtstd) add_lapack_test(dgsv.out gsv.in xeigtstd) add_lapack_test(dcsd.out csd.in xeigtstd) add_lapack_test(dlse.out lse.in xeigtstd) endif() if (BUILD_COMPLEX) add_lapack_test(ctest.out ctest.in xlintstc) # # ======== COMPLEX RFP LIN TESTS ======================== add_lapack_test(ctest_rfp.out ctest_rfp.in xlintstrfc) # # ======== COMPLEX EIG TESTS =========================== add_lapack_test(cnep.out nep.in xeigtstc) add_lapack_test(csep.out sep.in xeigtstc) add_lapack_test(csvd.out svd.in xeigtstc) add_lapack_test(cec.out cec.in xeigtstc) add_lapack_test(ced.out ced.in xeigtstc) add_lapack_test(cgg.out cgg.in xeigtstc) add_lapack_test(cgd.out cgd.in xeigtstc) add_lapack_test(csb.out csb.in xeigtstc) add_lapack_test(csg.out csg.in xeigtstc) add_lapack_test(cbal.out cbal.in xeigtstc) add_lapack_test(cbak.out cbak.in xeigtstc) add_lapack_test(cgbal.out cgbal.in xeigtstc) add_lapack_test(cgbak.out cgbak.in xeigtstc) add_lapack_test(cbb.out cbb.in xeigtstc) add_lapack_test(cglm.out glm.in xeigtstc) add_lapack_test(cgqr.out gqr.in xeigtstc) add_lapack_test(cgsv.out gsv.in xeigtstc) add_lapack_test(ccsd.out csd.in xeigtstc) add_lapack_test(clse.out lse.in xeigtstc) endif() if (BUILD_COMPLEX16) # # ======== COMPLEX16 LIN TESTS ======================== add_lapack_test(ztest.out ztest.in xlintstz) # # ======== COMPLEX16 RFP LIN TESTS ======================== add_lapack_test(ztest_rfp.out ztest_rfp.in xlintstrfz) # # ======== COMPLEX16 EIG TESTS =========================== add_lapack_test(znep.out nep.in xeigtstz) add_lapack_test(zsep.out sep.in xeigtstz) add_lapack_test(zsvd.out svd.in xeigtstz) add_lapack_test(zec.out zec.in xeigtstz) add_lapack_test(zed.out zed.in xeigtstz) add_lapack_test(zgg.out zgg.in xeigtstz) add_lapack_test(zgd.out zgd.in xeigtstz) add_lapack_test(zsb.out zsb.in xeigtstz) add_lapack_test(zsg.out zsg.in xeigtstz) add_lapack_test(zbal.out zbal.in xeigtstz) add_lapack_test(zbak.out zbak.in xeigtstz) add_lapack_test(zgbal.out zgbal.in xeigtstz) add_lapack_test(zgbak.out zgbak.in xeigtstz) add_lapack_test(zbb.out zbb.in xeigtstz) add_lapack_test(zglm.out glm.in xeigtstz) add_lapack_test(zgqr.out gqr.in xeigtstz) add_lapack_test(zgsv.out gsv.in xeigtstz) add_lapack_test(zcsd.out csd.in xeigtstz) add_lapack_test(zlse.out lse.in xeigtstz) endif() if (BUILD_SIMPLE) if (BUILD_DOUBLE) # # ======== SINGLE-DOUBLE PROTO LIN TESTS ============== add_lapack_test(dstest.out dstest.in xlintstds) endif() endif() if (BUILD_COMPLEX) if (BUILD_COMPLEX16) # # ======== COMPLEX-COMPLEX16 LIN TESTS ======================== add_lapack_test(zctest.out zctest.in xlintstzc) endif() endif() # ============================================================================== execute_process(COMMAND ${CMAKE_COMMAND} -E copy ${LAPACK_SOURCE_DIR}/testing/lapack_testing.py ${LAPACK_BINARY_DIR}) add_test( NAME LAPACK_Test_Summary WORKING_DIRECTORY ${LAPACK_BINARY_DIR} COMMAND ${PYTHON_EXECUTABLE} "lapack_testing.py" ) endif() piqp-octave/src/eigen/lapack/dlarfg.f0000644000175100001770000001152214107270226017374 0ustar runnerdocker*> \brief \b DLARFG * * =========== 
DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download DLARFG + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE DLARFG( N, ALPHA, X, INCX, TAU ) * * .. Scalar Arguments .. * INTEGER INCX, N * DOUBLE PRECISION ALPHA, TAU * .. * .. Array Arguments .. * DOUBLE PRECISION X( * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> DLARFG generates a real elementary reflector H of order n, such *> that *> *> H * ( alpha ) = ( beta ), H**T * H = I. *> ( x ) ( 0 ) *> *> where alpha and beta are scalars, and x is an (n-1)-element real *> vector. H is represented in the form *> *> H = I - tau * ( 1 ) * ( 1 v**T ) , *> ( v ) *> *> where tau is a real scalar and v is a real (n-1)-element *> vector. *> *> If the elements of x are all zero, then tau = 0 and H is taken to be *> the unit matrix. *> *> Otherwise 1 <= tau <= 2. *> \endverbatim * * Arguments: * ========== * *> \param[in] N *> \verbatim *> N is INTEGER *> The order of the elementary reflector. *> \endverbatim *> *> \param[in,out] ALPHA *> \verbatim *> ALPHA is DOUBLE PRECISION *> On entry, the value alpha. *> On exit, it is overwritten with the value beta. *> \endverbatim *> *> \param[in,out] X *> \verbatim *> X is DOUBLE PRECISION array, dimension *> (1+(N-2)*abs(INCX)) *> On entry, the vector x. *> On exit, it is overwritten with the vector v. *> \endverbatim *> *> \param[in] INCX *> \verbatim *> INCX is INTEGER *> The increment between elements of X. INCX > 0. *> \endverbatim *> *> \param[out] TAU *> \verbatim *> TAU is DOUBLE PRECISION *> The value tau. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup doubleOTHERauxiliary * * ===================================================================== SUBROUTINE DLARFG( N, ALPHA, X, INCX, TAU ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. INTEGER INCX, N DOUBLE PRECISION ALPHA, TAU * .. * .. Array Arguments .. DOUBLE PRECISION X( * ) * .. * * ===================================================================== * * .. Parameters .. DOUBLE PRECISION ONE, ZERO PARAMETER ( ONE = 1.0D+0, ZERO = 0.0D+0 ) * .. * .. Local Scalars .. INTEGER J, KNT DOUBLE PRECISION BETA, RSAFMN, SAFMIN, XNORM * .. * .. External Functions .. DOUBLE PRECISION DLAMCH, DLAPY2, DNRM2 EXTERNAL DLAMCH, DLAPY2, DNRM2 * .. * .. Intrinsic Functions .. INTRINSIC ABS, SIGN * .. * .. External Subroutines .. EXTERNAL DSCAL * .. * .. Executable Statements .. 
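*
*     Worked example (illustrative sketch only, not part of the
*     reference LAPACK sources; the numbers are chosen for this note):
*     for N = 3, ALPHA = 3 and X = ( 4, 0 ) with INCX = 1, the code
*     below produces
*
*        BETA = -SIGN( SQRT( ALPHA**2 + XNORM**2 ), ALPHA ) = -5,
*        TAU  = ( BETA - ALPHA ) / BETA = 1.6,
*        V    = X / ( ALPHA - BETA ) = ( 0.5, 0 ),
*
*     returning ALPHA = -5 (the value beta), X = ( 0.5, 0 ) and
*     TAU = 1.6.  One can check that with v = ( 1, 0.5, 0 )**T,
*     ( I - TAU * v * v**T ) * ( 3, 4, 0 )**T = ( -5, 0, 0 )**T.
*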
* IF( N.LE.1 ) THEN TAU = ZERO RETURN END IF * XNORM = DNRM2( N-1, X, INCX ) * IF( XNORM.EQ.ZERO ) THEN * * H = I * TAU = ZERO ELSE * * general case * BETA = -SIGN( DLAPY2( ALPHA, XNORM ), ALPHA ) SAFMIN = DLAMCH( 'S' ) / DLAMCH( 'E' ) KNT = 0 IF( ABS( BETA ).LT.SAFMIN ) THEN * * XNORM, BETA may be inaccurate; scale X and recompute them * RSAFMN = ONE / SAFMIN 10 CONTINUE KNT = KNT + 1 CALL DSCAL( N-1, RSAFMN, X, INCX ) BETA = BETA*RSAFMN ALPHA = ALPHA*RSAFMN IF( ABS( BETA ).LT.SAFMIN ) $ GO TO 10 * * New BETA is at most 1, at least SAFMIN * XNORM = DNRM2( N-1, X, INCX ) BETA = -SIGN( DLAPY2( ALPHA, XNORM ), ALPHA ) END IF TAU = ( BETA-ALPHA ) / BETA CALL DSCAL( N-1, ONE / ( ALPHA-BETA ), X, INCX ) * * If ALPHA is subnormal, it may lose relative accuracy * DO 20 J = 1, KNT BETA = BETA*SAFMIN 20 CONTINUE ALPHA = BETA END IF * RETURN * * End of DLARFG * END piqp-octave/src/eigen/lapack/slarf.f0000644000175100001770000001374514107270226017255 0ustar runnerdocker*> \brief \b SLARF * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download SLARF + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE SLARF( SIDE, M, N, V, INCV, TAU, C, LDC, WORK ) * * .. Scalar Arguments .. * CHARACTER SIDE * INTEGER INCV, LDC, M, N * REAL TAU * .. * .. Array Arguments .. * REAL C( LDC, * ), V( * ), WORK( * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> SLARF applies a real elementary reflector H to a real m by n matrix *> C, from either the left or the right. H is represented in the form *> *> H = I - tau * v * v**T *> *> where tau is a real scalar and v is a real vector. *> *> If tau = 0, then H is taken to be the unit matrix. *> \endverbatim * * Arguments: * ========== * *> \param[in] SIDE *> \verbatim *> SIDE is CHARACTER*1 *> = 'L': form H * C *> = 'R': form C * H *> \endverbatim *> *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix C. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix C. *> \endverbatim *> *> \param[in] V *> \verbatim *> V is REAL array, dimension *> (1 + (M-1)*abs(INCV)) if SIDE = 'L' *> or (1 + (N-1)*abs(INCV)) if SIDE = 'R' *> The vector v in the representation of H. V is not used if *> TAU = 0. *> \endverbatim *> *> \param[in] INCV *> \verbatim *> INCV is INTEGER *> The increment between elements of v. INCV <> 0. *> \endverbatim *> *> \param[in] TAU *> \verbatim *> TAU is REAL *> The value tau in the representation of H. *> \endverbatim *> *> \param[in,out] C *> \verbatim *> C is REAL array, dimension (LDC,N) *> On entry, the m by n matrix C. *> On exit, C is overwritten by the matrix H * C if SIDE = 'L', *> or C * H if SIDE = 'R'. *> \endverbatim *> *> \param[in] LDC *> \verbatim *> LDC is INTEGER *> The leading dimension of the array C. LDC >= max(1,M). *> \endverbatim *> *> \param[out] WORK *> \verbatim *> WORK is REAL array, dimension *> (N) if SIDE = 'L' *> or (M) if SIDE = 'R' *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. 
* *> \date November 2011 * *> \ingroup realOTHERauxiliary * * ===================================================================== SUBROUTINE SLARF( SIDE, M, N, V, INCV, TAU, C, LDC, WORK ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. CHARACTER SIDE INTEGER INCV, LDC, M, N REAL TAU * .. * .. Array Arguments .. REAL C( LDC, * ), V( * ), WORK( * ) * .. * * ===================================================================== * * .. Parameters .. REAL ONE, ZERO PARAMETER ( ONE = 1.0E+0, ZERO = 0.0E+0 ) * .. * .. Local Scalars .. LOGICAL APPLYLEFT INTEGER I, LASTV, LASTC * .. * .. External Subroutines .. EXTERNAL SGEMV, SGER * .. * .. External Functions .. LOGICAL LSAME INTEGER ILASLR, ILASLC EXTERNAL LSAME, ILASLR, ILASLC * .. * .. Executable Statements .. * APPLYLEFT = LSAME( SIDE, 'L' ) LASTV = 0 LASTC = 0 IF( TAU.NE.ZERO ) THEN ! Set up variables for scanning V. LASTV begins pointing to the end ! of V. IF( APPLYLEFT ) THEN LASTV = M ELSE LASTV = N END IF IF( INCV.GT.0 ) THEN I = 1 + (LASTV-1) * INCV ELSE I = 1 END IF ! Look for the last non-zero row in V. DO WHILE( LASTV.GT.0 .AND. V( I ).EQ.ZERO ) LASTV = LASTV - 1 I = I - INCV END DO IF( APPLYLEFT ) THEN ! Scan for the last non-zero column in C(1:lastv,:). LASTC = ILASLC(LASTV, N, C, LDC) ELSE ! Scan for the last non-zero row in C(:,1:lastv). LASTC = ILASLR(M, LASTV, C, LDC) END IF END IF ! Note that lastc.eq.0 renders the BLAS operations null; no special ! case is needed at this level. IF( APPLYLEFT ) THEN * * Form H * C * IF( LASTV.GT.0 ) THEN * * w(1:lastc,1) := C(1:lastv,1:lastc)**T * v(1:lastv,1) * CALL SGEMV( 'Transpose', LASTV, LASTC, ONE, C, LDC, V, INCV, $ ZERO, WORK, 1 ) * * C(1:lastv,1:lastc) := C(...) - v(1:lastv,1) * w(1:lastc,1)**T * CALL SGER( LASTV, LASTC, -TAU, V, INCV, WORK, 1, C, LDC ) END IF ELSE * * Form C * H * IF( LASTV.GT.0 ) THEN * * w(1:lastc,1) := C(1:lastc,1:lastv) * v(1:lastv,1) * CALL SGEMV( 'No transpose', LASTC, LASTV, ONE, C, LDC, $ V, INCV, ZERO, WORK, 1 ) * * C(1:lastc,1:lastv) := C(...) - w(1:lastc,1) * v(1:lastv,1)**T * CALL SGER( LASTC, LASTV, -TAU, WORK, 1, V, INCV, C, LDC ) END IF END IF RETURN * * End of SLARF * END piqp-octave/src/eigen/lapack/lapack_common.h0000644000175100001770000000155514107270226020747 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2010-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_LAPACK_COMMON_H #define EIGEN_LAPACK_COMMON_H #include "../blas/common.h" #include "../Eigen/src/misc/lapack.h" #define EIGEN_LAPACK_FUNC(FUNC,ARGLIST) \ extern "C" { int EIGEN_BLAS_FUNC(FUNC) ARGLIST; } \ int EIGEN_BLAS_FUNC(FUNC) ARGLIST typedef Eigen::Map > PivotsType; #if ISCOMPLEX #define EIGEN_LAPACK_ARG_IF_COMPLEX(X) X, #else #define EIGEN_LAPACK_ARG_IF_COMPLEX(X) #endif #endif // EIGEN_LAPACK_COMMON_H piqp-octave/src/eigen/lapack/svd.cpp0000644000175100001770000001143314107270226017267 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. 
If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #include "lapack_common.h" #include // computes the singular values/vectors a general M-by-N matrix A using divide-and-conquer EIGEN_LAPACK_FUNC(gesdd,(char *jobz, int *m, int* n, Scalar* a, int *lda, RealScalar *s, Scalar *u, int *ldu, Scalar *vt, int *ldvt, Scalar* /*work*/, int* lwork, EIGEN_LAPACK_ARG_IF_COMPLEX(RealScalar */*rwork*/) int * /*iwork*/, int *info)) { // TODO exploit the work buffer bool query_size = *lwork==-1; int diag_size = (std::min)(*m,*n); *info = 0; if(*jobz!='A' && *jobz!='S' && *jobz!='O' && *jobz!='N') *info = -1; else if(*m<0) *info = -2; else if(*n<0) *info = -3; else if(*lda=*n && *ldvt<*n)) *info = -10; if(*info!=0) { int e = -*info; return xerbla_(SCALAR_SUFFIX_UP"GESDD ", &e, 6); } if(query_size) { *lwork = 0; return 0; } if(*n==0 || *m==0) return 0; PlainMatrixType mat(*m,*n); mat = matrix(a,*m,*n,*lda); int option = *jobz=='A' ? ComputeFullU|ComputeFullV : *jobz=='S' ? ComputeThinU|ComputeThinV : *jobz=='O' ? ComputeThinU|ComputeThinV : 0; BDCSVD svd(mat,option); make_vector(s,diag_size) = svd.singularValues().head(diag_size); if(*jobz=='A') { matrix(u,*m,*m,*ldu) = svd.matrixU(); matrix(vt,*n,*n,*ldvt) = svd.matrixV().adjoint(); } else if(*jobz=='S') { matrix(u,*m,diag_size,*ldu) = svd.matrixU(); matrix(vt,diag_size,*n,*ldvt) = svd.matrixV().adjoint(); } else if(*jobz=='O' && *m>=*n) { matrix(a,*m,*n,*lda) = svd.matrixU(); matrix(vt,*n,*n,*ldvt) = svd.matrixV().adjoint(); } else if(*jobz=='O') { matrix(u,*m,*m,*ldu) = svd.matrixU(); matrix(a,diag_size,*n,*lda) = svd.matrixV().adjoint(); } return 0; } // computes the singular values/vectors a general M-by-N matrix A using two sided jacobi algorithm EIGEN_LAPACK_FUNC(gesvd,(char *jobu, char *jobv, int *m, int* n, Scalar* a, int *lda, RealScalar *s, Scalar *u, int *ldu, Scalar *vt, int *ldvt, Scalar* /*work*/, int* lwork, EIGEN_LAPACK_ARG_IF_COMPLEX(RealScalar */*rwork*/) int *info)) { // TODO exploit the work buffer bool query_size = *lwork==-1; int diag_size = (std::min)(*m,*n); *info = 0; if( *jobu!='A' && *jobu!='S' && *jobu!='O' && *jobu!='N') *info = -1; else if((*jobv!='A' && *jobv!='S' && *jobv!='O' && *jobv!='N') || (*jobu=='O' && *jobv=='O')) *info = -2; else if(*m<0) *info = -3; else if(*n<0) *info = -4; else if(*lda svd(mat,option); make_vector(s,diag_size) = svd.singularValues().head(diag_size); { if(*jobu=='A') matrix(u,*m,*m,*ldu) = svd.matrixU(); else if(*jobu=='S') matrix(u,*m,diag_size,*ldu) = svd.matrixU(); else if(*jobu=='O') matrix(a,*m,diag_size,*lda) = svd.matrixU(); } { if(*jobv=='A') matrix(vt,*n,*n,*ldvt) = svd.matrixV().adjoint(); else if(*jobv=='S') matrix(vt,diag_size,*n,*ldvt) = svd.matrixV().adjoint(); else if(*jobv=='O') matrix(a,diag_size,*n,*lda) = svd.matrixV().adjoint(); } return 0; } piqp-octave/src/eigen/lapack/dlarf.f0000644000175100001770000001402714107270226017230 0ustar runnerdocker*> \brief \b DLARF * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download DLARF + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE DLARF( SIDE, M, N, V, INCV, TAU, C, LDC, WORK ) * * .. Scalar Arguments .. * CHARACTER SIDE * INTEGER INCV, LDC, M, N * DOUBLE PRECISION TAU * .. * .. Array Arguments .. * DOUBLE PRECISION C( LDC, * ), V( * ), WORK( * ) * .. 
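*
*  A minimal usage sketch (illustrative only; the array sizes and the
*  assumption that the reflector occupies all of V are hypothetical,
*  not taken from this file): generate an elementary reflector with
*  DLARFG and apply it from the left to a 3-by-2 matrix C, as the QR
*  factorization routines do.  The value beta returned in V( 1 ) would
*  normally be saved before it is overwritten with one:
*
*        DOUBLE PRECISION V( 3 ), C( 3, 2 ), WORK( 2 ), TAU
*        ...
*        CALL DLARFG( 3, V( 1 ), V( 2 ), 1, TAU )
*        V( 1 ) = 1.0D+0
*        CALL DLARF( 'Left', 3, 2, V, 1, TAU, C, 3, WORK )
*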
* * *> \par Purpose: * ============= *> *> \verbatim *> *> DLARF applies a real elementary reflector H to a real m by n matrix *> C, from either the left or the right. H is represented in the form *> *> H = I - tau * v * v**T *> *> where tau is a real scalar and v is a real vector. *> *> If tau = 0, then H is taken to be the unit matrix. *> \endverbatim * * Arguments: * ========== * *> \param[in] SIDE *> \verbatim *> SIDE is CHARACTER*1 *> = 'L': form H * C *> = 'R': form C * H *> \endverbatim *> *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix C. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix C. *> \endverbatim *> *> \param[in] V *> \verbatim *> V is DOUBLE PRECISION array, dimension *> (1 + (M-1)*abs(INCV)) if SIDE = 'L' *> or (1 + (N-1)*abs(INCV)) if SIDE = 'R' *> The vector v in the representation of H. V is not used if *> TAU = 0. *> \endverbatim *> *> \param[in] INCV *> \verbatim *> INCV is INTEGER *> The increment between elements of v. INCV <> 0. *> \endverbatim *> *> \param[in] TAU *> \verbatim *> TAU is DOUBLE PRECISION *> The value tau in the representation of H. *> \endverbatim *> *> \param[in,out] C *> \verbatim *> C is DOUBLE PRECISION array, dimension (LDC,N) *> On entry, the m by n matrix C. *> On exit, C is overwritten by the matrix H * C if SIDE = 'L', *> or C * H if SIDE = 'R'. *> \endverbatim *> *> \param[in] LDC *> \verbatim *> LDC is INTEGER *> The leading dimension of the array C. LDC >= max(1,M). *> \endverbatim *> *> \param[out] WORK *> \verbatim *> WORK is DOUBLE PRECISION array, dimension *> (N) if SIDE = 'L' *> or (M) if SIDE = 'R' *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup doubleOTHERauxiliary * * ===================================================================== SUBROUTINE DLARF( SIDE, M, N, V, INCV, TAU, C, LDC, WORK ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. CHARACTER SIDE INTEGER INCV, LDC, M, N DOUBLE PRECISION TAU * .. * .. Array Arguments .. DOUBLE PRECISION C( LDC, * ), V( * ), WORK( * ) * .. * * ===================================================================== * * .. Parameters .. DOUBLE PRECISION ONE, ZERO PARAMETER ( ONE = 1.0D+0, ZERO = 0.0D+0 ) * .. * .. Local Scalars .. LOGICAL APPLYLEFT INTEGER I, LASTV, LASTC * .. * .. External Subroutines .. EXTERNAL DGEMV, DGER * .. * .. External Functions .. LOGICAL LSAME INTEGER ILADLR, ILADLC EXTERNAL LSAME, ILADLR, ILADLC * .. * .. Executable Statements .. * APPLYLEFT = LSAME( SIDE, 'L' ) LASTV = 0 LASTC = 0 IF( TAU.NE.ZERO ) THEN ! Set up variables for scanning V. LASTV begins pointing to the end ! of V. IF( APPLYLEFT ) THEN LASTV = M ELSE LASTV = N END IF IF( INCV.GT.0 ) THEN I = 1 + (LASTV-1) * INCV ELSE I = 1 END IF ! Look for the last non-zero row in V. DO WHILE( LASTV.GT.0 .AND. V( I ).EQ.ZERO ) LASTV = LASTV - 1 I = I - INCV END DO IF( APPLYLEFT ) THEN ! Scan for the last non-zero column in C(1:lastv,:). LASTC = ILADLC(LASTV, N, C, LDC) ELSE ! Scan for the last non-zero row in C(:,1:lastv). LASTC = ILADLR(M, LASTV, C, LDC) END IF END IF ! Note that lastc.eq.0 renders the BLAS operations null; no special ! case is needed at this level. 
IF( APPLYLEFT ) THEN * * Form H * C * IF( LASTV.GT.0 ) THEN * * w(1:lastc,1) := C(1:lastv,1:lastc)**T * v(1:lastv,1) * CALL DGEMV( 'Transpose', LASTV, LASTC, ONE, C, LDC, V, INCV, $ ZERO, WORK, 1 ) * * C(1:lastv,1:lastc) := C(...) - v(1:lastv,1) * w(1:lastc,1)**T * CALL DGER( LASTV, LASTC, -TAU, V, INCV, WORK, 1, C, LDC ) END IF ELSE * * Form C * H * IF( LASTV.GT.0 ) THEN * * w(1:lastc,1) := C(1:lastc,1:lastv) * v(1:lastv,1) * CALL DGEMV( 'No transpose', LASTC, LASTV, ONE, C, LDC, $ V, INCV, ZERO, WORK, 1 ) * * C(1:lastc,1:lastv) := C(...) - w(1:lastc,1) * v(1:lastv,1)**T * CALL DGER( LASTC, LASTV, -TAU, WORK, 1, V, INCV, C, LDC ) END IF END IF RETURN * * End of DLARF * END piqp-octave/src/eigen/lapack/dlapy2.f0000644000175100001770000000472214107270226017334 0ustar runnerdocker*> \brief \b DLAPY2 * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download DLAPY2 + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * DOUBLE PRECISION FUNCTION DLAPY2( X, Y ) * * .. Scalar Arguments .. * DOUBLE PRECISION X, Y * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> DLAPY2 returns sqrt(x**2+y**2), taking care not to cause unnecessary *> overflow. *> \endverbatim * * Arguments: * ========== * *> \param[in] X *> \verbatim *> X is DOUBLE PRECISION *> \endverbatim *> *> \param[in] Y *> \verbatim *> Y is DOUBLE PRECISION *> X and Y specify the values x and y. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup auxOTHERauxiliary * * ===================================================================== DOUBLE PRECISION FUNCTION DLAPY2( X, Y ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. DOUBLE PRECISION X, Y * .. * * ===================================================================== * * .. Parameters .. DOUBLE PRECISION ZERO PARAMETER ( ZERO = 0.0D0 ) DOUBLE PRECISION ONE PARAMETER ( ONE = 1.0D0 ) * .. * .. Local Scalars .. DOUBLE PRECISION W, XABS, YABS, Z * .. * .. Intrinsic Functions .. INTRINSIC ABS, MAX, MIN, SQRT * .. * .. Executable Statements .. * XABS = ABS( X ) YABS = ABS( Y ) W = MAX( XABS, YABS ) Z = MIN( XABS, YABS ) IF( Z.EQ.ZERO ) THEN DLAPY2 = W ELSE DLAPY2 = W*SQRT( ONE+( Z / W )**2 ) END IF RETURN * * End of DLAPY2 * END piqp-octave/src/eigen/lapack/cladiv.f0000644000175100001770000000444414107270226017404 0ustar runnerdocker*> \brief \b CLADIV * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download CLADIV + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * COMPLEX FUNCTION CLADIV( X, Y ) * * .. Scalar Arguments .. * COMPLEX X, Y * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> CLADIV := X / Y, where X and Y are complex. The computation of X / Y *> will not overflow on an intermediary step unless the results *> overflows. *> \endverbatim * * Arguments: * ========== * *> \param[in] X *> \verbatim *> X is COMPLEX *> \endverbatim *> *> \param[in] Y *> \verbatim *> Y is COMPLEX *> The complex scalars X and Y. 
*> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup complexOTHERauxiliary * * ===================================================================== COMPLEX FUNCTION CLADIV( X, Y ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. COMPLEX X, Y * .. * * ===================================================================== * * .. Local Scalars .. REAL ZI, ZR * .. * .. External Subroutines .. EXTERNAL SLADIV * .. * .. Intrinsic Functions .. INTRINSIC AIMAG, CMPLX, REAL * .. * .. Executable Statements .. * CALL SLADIV( REAL( X ), AIMAG( X ), REAL( Y ), AIMAG( Y ), ZR, $ ZI ) CLADIV = CMPLX( ZR, ZI ) * RETURN * * End of CLADIV * END piqp-octave/src/eigen/lapack/slarfg.f0000644000175100001770000001145414107270226017417 0ustar runnerdocker*> \brief \b SLARFG * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download SLARFG + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE SLARFG( N, ALPHA, X, INCX, TAU ) * * .. Scalar Arguments .. * INTEGER INCX, N * REAL ALPHA, TAU * .. * .. Array Arguments .. * REAL X( * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> SLARFG generates a real elementary reflector H of order n, such *> that *> *> H * ( alpha ) = ( beta ), H**T * H = I. *> ( x ) ( 0 ) *> *> where alpha and beta are scalars, and x is an (n-1)-element real *> vector. H is represented in the form *> *> H = I - tau * ( 1 ) * ( 1 v**T ) , *> ( v ) *> *> where tau is a real scalar and v is a real (n-1)-element *> vector. *> *> If the elements of x are all zero, then tau = 0 and H is taken to be *> the unit matrix. *> *> Otherwise 1 <= tau <= 2. *> \endverbatim * * Arguments: * ========== * *> \param[in] N *> \verbatim *> N is INTEGER *> The order of the elementary reflector. *> \endverbatim *> *> \param[in,out] ALPHA *> \verbatim *> ALPHA is REAL *> On entry, the value alpha. *> On exit, it is overwritten with the value beta. *> \endverbatim *> *> \param[in,out] X *> \verbatim *> X is REAL array, dimension *> (1+(N-2)*abs(INCX)) *> On entry, the vector x. *> On exit, it is overwritten with the vector v. *> \endverbatim *> *> \param[in] INCX *> \verbatim *> INCX is INTEGER *> The increment between elements of X. INCX > 0. *> \endverbatim *> *> \param[out] TAU *> \verbatim *> TAU is REAL *> The value tau. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup realOTHERauxiliary * * ===================================================================== SUBROUTINE SLARFG( N, ALPHA, X, INCX, TAU ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. INTEGER INCX, N REAL ALPHA, TAU * .. * .. Array Arguments .. REAL X( * ) * .. * * ===================================================================== * * .. Parameters .. 
REAL ONE, ZERO PARAMETER ( ONE = 1.0E+0, ZERO = 0.0E+0 ) * .. * .. Local Scalars .. INTEGER J, KNT REAL BETA, RSAFMN, SAFMIN, XNORM * .. * .. External Functions .. REAL SLAMCH, SLAPY2, SNRM2 EXTERNAL SLAMCH, SLAPY2, SNRM2 * .. * .. Intrinsic Functions .. INTRINSIC ABS, SIGN * .. * .. External Subroutines .. EXTERNAL SSCAL * .. * .. Executable Statements .. * IF( N.LE.1 ) THEN TAU = ZERO RETURN END IF * XNORM = SNRM2( N-1, X, INCX ) * IF( XNORM.EQ.ZERO ) THEN * * H = I * TAU = ZERO ELSE * * general case * BETA = -SIGN( SLAPY2( ALPHA, XNORM ), ALPHA ) SAFMIN = SLAMCH( 'S' ) / SLAMCH( 'E' ) KNT = 0 IF( ABS( BETA ).LT.SAFMIN ) THEN * * XNORM, BETA may be inaccurate; scale X and recompute them * RSAFMN = ONE / SAFMIN 10 CONTINUE KNT = KNT + 1 CALL SSCAL( N-1, RSAFMN, X, INCX ) BETA = BETA*RSAFMN ALPHA = ALPHA*RSAFMN IF( ABS( BETA ).LT.SAFMIN ) $ GO TO 10 * * New BETA is at most 1, at least SAFMIN * XNORM = SNRM2( N-1, X, INCX ) BETA = -SIGN( SLAPY2( ALPHA, XNORM ), ALPHA ) END IF TAU = ( BETA-ALPHA ) / BETA CALL SSCAL( N-1, ONE / ( ALPHA-BETA ), X, INCX ) * * If ALPHA is subnormal, it may lose relative accuracy * DO 20 J = 1, KNT BETA = BETA*SAFMIN 20 CONTINUE ALPHA = BETA END IF * RETURN * * End of SLARFG * END piqp-octave/src/eigen/lapack/zlarf.f0000644000175100001770000001420614107270226017255 0ustar runnerdocker*> \brief \b ZLARF * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ZLARF + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE ZLARF( SIDE, M, N, V, INCV, TAU, C, LDC, WORK ) * * .. Scalar Arguments .. * CHARACTER SIDE * INTEGER INCV, LDC, M, N * COMPLEX*16 TAU * .. * .. Array Arguments .. * COMPLEX*16 C( LDC, * ), V( * ), WORK( * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> ZLARF applies a complex elementary reflector H to a complex M-by-N *> matrix C, from either the left or the right. H is represented in the *> form *> *> H = I - tau * v * v**H *> *> where tau is a complex scalar and v is a complex vector. *> *> If tau = 0, then H is taken to be the unit matrix. *> *> To apply H**H, supply conjg(tau) instead *> tau. *> \endverbatim * * Arguments: * ========== * *> \param[in] SIDE *> \verbatim *> SIDE is CHARACTER*1 *> = 'L': form H * C *> = 'R': form C * H *> \endverbatim *> *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix C. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix C. *> \endverbatim *> *> \param[in] V *> \verbatim *> V is COMPLEX*16 array, dimension *> (1 + (M-1)*abs(INCV)) if SIDE = 'L' *> or (1 + (N-1)*abs(INCV)) if SIDE = 'R' *> The vector v in the representation of H. V is not used if *> TAU = 0. *> \endverbatim *> *> \param[in] INCV *> \verbatim *> INCV is INTEGER *> The increment between elements of v. INCV <> 0. *> \endverbatim *> *> \param[in] TAU *> \verbatim *> TAU is COMPLEX*16 *> The value tau in the representation of H. *> \endverbatim *> *> \param[in,out] C *> \verbatim *> C is COMPLEX*16 array, dimension (LDC,N) *> On entry, the M-by-N matrix C. *> On exit, C is overwritten by the matrix H * C if SIDE = 'L', *> or C * H if SIDE = 'R'. *> \endverbatim *> *> \param[in] LDC *> \verbatim *> LDC is INTEGER *> The leading dimension of the array C. LDC >= max(1,M). 
*> \endverbatim *> *> \param[out] WORK *> \verbatim *> WORK is COMPLEX*16 array, dimension *> (N) if SIDE = 'L' *> or (M) if SIDE = 'R' *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup complex16OTHERauxiliary * * ===================================================================== SUBROUTINE ZLARF( SIDE, M, N, V, INCV, TAU, C, LDC, WORK ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. CHARACTER SIDE INTEGER INCV, LDC, M, N COMPLEX*16 TAU * .. * .. Array Arguments .. COMPLEX*16 C( LDC, * ), V( * ), WORK( * ) * .. * * ===================================================================== * * .. Parameters .. COMPLEX*16 ONE, ZERO PARAMETER ( ONE = ( 1.0D+0, 0.0D+0 ), $ ZERO = ( 0.0D+0, 0.0D+0 ) ) * .. * .. Local Scalars .. LOGICAL APPLYLEFT INTEGER I, LASTV, LASTC * .. * .. External Subroutines .. EXTERNAL ZGEMV, ZGERC * .. * .. External Functions .. LOGICAL LSAME INTEGER ILAZLR, ILAZLC EXTERNAL LSAME, ILAZLR, ILAZLC * .. * .. Executable Statements .. * APPLYLEFT = LSAME( SIDE, 'L' ) LASTV = 0 LASTC = 0 IF( TAU.NE.ZERO ) THEN * Set up variables for scanning V. LASTV begins pointing to the end * of V. IF( APPLYLEFT ) THEN LASTV = M ELSE LASTV = N END IF IF( INCV.GT.0 ) THEN I = 1 + (LASTV-1) * INCV ELSE I = 1 END IF * Look for the last non-zero row in V. DO WHILE( LASTV.GT.0 .AND. V( I ).EQ.ZERO ) LASTV = LASTV - 1 I = I - INCV END DO IF( APPLYLEFT ) THEN * Scan for the last non-zero column in C(1:lastv,:). LASTC = ILAZLC(LASTV, N, C, LDC) ELSE * Scan for the last non-zero row in C(:,1:lastv). LASTC = ILAZLR(M, LASTV, C, LDC) END IF END IF * Note that lastc.eq.0 renders the BLAS operations null; no special * case is needed at this level. IF( APPLYLEFT ) THEN * * Form H * C * IF( LASTV.GT.0 ) THEN * * w(1:lastc,1) := C(1:lastv,1:lastc)**H * v(1:lastv,1) * CALL ZGEMV( 'Conjugate transpose', LASTV, LASTC, ONE, $ C, LDC, V, INCV, ZERO, WORK, 1 ) * * C(1:lastv,1:lastc) := C(...) - v(1:lastv,1) * w(1:lastc,1)**H * CALL ZGERC( LASTV, LASTC, -TAU, V, INCV, WORK, 1, C, LDC ) END IF ELSE * * Form C * H * IF( LASTV.GT.0 ) THEN * * w(1:lastc,1) := C(1:lastc,1:lastv) * v(1:lastv,1) * CALL ZGEMV( 'No transpose', LASTC, LASTV, ONE, C, LDC, $ V, INCV, ZERO, WORK, 1 ) * * C(1:lastc,1:lastv) := C(...) - w(1:lastc,1) * v(1:lastv,1)**H * CALL ZGERC( LASTC, LASTV, -TAU, WORK, 1, V, INCV, C, LDC ) END IF END IF RETURN * * End of ZLARF * END piqp-octave/src/eigen/lapack/zlarfb.f0000644000175100001770000005571214107270226017426 0ustar runnerdocker*> \brief \b ZLARFB * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ZLARFB + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE ZLARFB( SIDE, TRANS, DIRECT, STOREV, M, N, K, V, LDV, * T, LDT, C, LDC, WORK, LDWORK ) * * .. Scalar Arguments .. * CHARACTER DIRECT, SIDE, STOREV, TRANS * INTEGER K, LDC, LDT, LDV, LDWORK, M, N * .. * .. Array Arguments .. * COMPLEX*16 C( LDC, * ), T( LDT, * ), V( LDV, * ), * $ WORK( LDWORK, * ) * .. 
* * *> \par Purpose: * ============= *> *> \verbatim *> *> ZLARFB applies a complex block reflector H or its transpose H**H to a *> complex M-by-N matrix C, from either the left or the right. *> \endverbatim * * Arguments: * ========== * *> \param[in] SIDE *> \verbatim *> SIDE is CHARACTER*1 *> = 'L': apply H or H**H from the Left *> = 'R': apply H or H**H from the Right *> \endverbatim *> *> \param[in] TRANS *> \verbatim *> TRANS is CHARACTER*1 *> = 'N': apply H (No transpose) *> = 'C': apply H**H (Conjugate transpose) *> \endverbatim *> *> \param[in] DIRECT *> \verbatim *> DIRECT is CHARACTER*1 *> Indicates how H is formed from a product of elementary *> reflectors *> = 'F': H = H(1) H(2) . . . H(k) (Forward) *> = 'B': H = H(k) . . . H(2) H(1) (Backward) *> \endverbatim *> *> \param[in] STOREV *> \verbatim *> STOREV is CHARACTER*1 *> Indicates how the vectors which define the elementary *> reflectors are stored: *> = 'C': Columnwise *> = 'R': Rowwise *> \endverbatim *> *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix C. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix C. *> \endverbatim *> *> \param[in] K *> \verbatim *> K is INTEGER *> The order of the matrix T (= the number of elementary *> reflectors whose product defines the block reflector). *> \endverbatim *> *> \param[in] V *> \verbatim *> V is COMPLEX*16 array, dimension *> (LDV,K) if STOREV = 'C' *> (LDV,M) if STOREV = 'R' and SIDE = 'L' *> (LDV,N) if STOREV = 'R' and SIDE = 'R' *> See Further Details. *> \endverbatim *> *> \param[in] LDV *> \verbatim *> LDV is INTEGER *> The leading dimension of the array V. *> If STOREV = 'C' and SIDE = 'L', LDV >= max(1,M); *> if STOREV = 'C' and SIDE = 'R', LDV >= max(1,N); *> if STOREV = 'R', LDV >= K. *> \endverbatim *> *> \param[in] T *> \verbatim *> T is COMPLEX*16 array, dimension (LDT,K) *> The triangular K-by-K matrix T in the representation of the *> block reflector. *> \endverbatim *> *> \param[in] LDT *> \verbatim *> LDT is INTEGER *> The leading dimension of the array T. LDT >= K. *> \endverbatim *> *> \param[in,out] C *> \verbatim *> C is COMPLEX*16 array, dimension (LDC,N) *> On entry, the M-by-N matrix C. *> On exit, C is overwritten by H*C or H**H*C or C*H or C*H**H. *> \endverbatim *> *> \param[in] LDC *> \verbatim *> LDC is INTEGER *> The leading dimension of the array C. LDC >= max(1,M). *> \endverbatim *> *> \param[out] WORK *> \verbatim *> WORK is COMPLEX*16 array, dimension (LDWORK,K) *> \endverbatim *> *> \param[in] LDWORK *> \verbatim *> LDWORK is INTEGER *> The leading dimension of the array WORK. *> If SIDE = 'L', LDWORK >= max(1,N); *> if SIDE = 'R', LDWORK >= max(1,M). *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup complex16OTHERauxiliary * *> \par Further Details: * ===================== *> *> \verbatim *> *> The shape of the matrix V and the storage of the vectors which define *> the H(i) is best illustrated by the following example with n = 5 and *> k = 3. The elements equal to 1 are not stored; the corresponding *> array elements are modified but restored on exit. The rest of the *> array is not used. 
*> *> DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': *> *> V = ( 1 ) V = ( 1 v1 v1 v1 v1 ) *> ( v1 1 ) ( 1 v2 v2 v2 ) *> ( v1 v2 1 ) ( 1 v3 v3 ) *> ( v1 v2 v3 ) *> ( v1 v2 v3 ) *> *> DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': *> *> V = ( v1 v2 v3 ) V = ( v1 v1 1 ) *> ( v1 v2 v3 ) ( v2 v2 v2 1 ) *> ( 1 v2 v3 ) ( v3 v3 v3 v3 1 ) *> ( 1 v3 ) *> ( 1 ) *> \endverbatim *> * ===================================================================== SUBROUTINE ZLARFB( SIDE, TRANS, DIRECT, STOREV, M, N, K, V, LDV, $ T, LDT, C, LDC, WORK, LDWORK ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. CHARACTER DIRECT, SIDE, STOREV, TRANS INTEGER K, LDC, LDT, LDV, LDWORK, M, N * .. * .. Array Arguments .. COMPLEX*16 C( LDC, * ), T( LDT, * ), V( LDV, * ), $ WORK( LDWORK, * ) * .. * * ===================================================================== * * .. Parameters .. COMPLEX*16 ONE PARAMETER ( ONE = ( 1.0D+0, 0.0D+0 ) ) * .. * .. Local Scalars .. CHARACTER TRANST INTEGER I, J, LASTV, LASTC * .. * .. External Functions .. LOGICAL LSAME INTEGER ILAZLR, ILAZLC EXTERNAL LSAME, ILAZLR, ILAZLC * .. * .. External Subroutines .. EXTERNAL ZCOPY, ZGEMM, ZLACGV, ZTRMM * .. * .. Intrinsic Functions .. INTRINSIC DCONJG * .. * .. Executable Statements .. * * Quick return if possible * IF( M.LE.0 .OR. N.LE.0 ) $ RETURN * IF( LSAME( TRANS, 'N' ) ) THEN TRANST = 'C' ELSE TRANST = 'N' END IF * IF( LSAME( STOREV, 'C' ) ) THEN * IF( LSAME( DIRECT, 'F' ) ) THEN * * Let V = ( V1 ) (first K rows) * ( V2 ) * where V1 is unit lower triangular. * IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**H * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILAZLR( M, K, V, LDV ) ) LASTC = ILAZLC( LASTV, N, C, LDC ) * * W := C**H * V = (C1**H * V1 + C2**H * V2) (stored in WORK) * * W := C1**H * DO 10 J = 1, K CALL ZCOPY( LASTC, C( J, 1 ), LDC, WORK( 1, J ), 1 ) CALL ZLACGV( LASTC, WORK( 1, J ), 1 ) 10 CONTINUE * * W := W * V1 * CALL ZTRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2**H *V2 * CALL ZGEMM( 'Conjugate transpose', 'No transpose', $ LASTC, K, LASTV-K, ONE, C( K+1, 1 ), LDC, $ V( K+1, 1 ), LDV, ONE, WORK, LDWORK ) END IF * * W := W * T**H or W * T * CALL ZTRMM( 'Right', 'Upper', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V * W**H * IF( M.GT.K ) THEN * * C2 := C2 - V2 * W**H * CALL ZGEMM( 'No transpose', 'Conjugate transpose', $ LASTV-K, LASTC, K, $ -ONE, V( K+1, 1 ), LDV, WORK, LDWORK, $ ONE, C( K+1, 1 ), LDC ) END IF * * W := W * V1**H * CALL ZTRMM( 'Right', 'Lower', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W**H * DO 30 J = 1, K DO 20 I = 1, LASTC C( J, I ) = C( J, I ) - DCONJG( WORK( I, J ) ) 20 CONTINUE 30 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**H where C = ( C1 C2 ) * LASTV = MAX( K, ILAZLR( N, K, V, LDV ) ) LASTC = ILAZLR( M, LASTV, C, LDC ) * * W := C * V = (C1*V1 + C2*V2) (stored in WORK) * * W := C1 * DO 40 J = 1, K CALL ZCOPY( LASTC, C( 1, J ), 1, WORK( 1, J ), 1 ) 40 CONTINUE * * W := W * V1 * CALL ZTRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2 * V2 * CALL ZGEMM( 'No transpose', 'No transpose', $ LASTC, K, LASTV-K, $ ONE, C( 1, K+1 ), LDC, V( 
K+1, 1 ), LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**H * CALL ZTRMM( 'Right', 'Upper', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V**H * IF( LASTV.GT.K ) THEN * * C2 := C2 - W * V2**H * CALL ZGEMM( 'No transpose', 'Conjugate transpose', $ LASTC, LASTV-K, K, $ -ONE, WORK, LDWORK, V( K+1, 1 ), LDV, $ ONE, C( 1, K+1 ), LDC ) END IF * * W := W * V1**H * CALL ZTRMM( 'Right', 'Lower', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W * DO 60 J = 1, K DO 50 I = 1, LASTC C( I, J ) = C( I, J ) - WORK( I, J ) 50 CONTINUE 60 CONTINUE END IF * ELSE * * Let V = ( V1 ) * ( V2 ) (last K rows) * where V2 is unit upper triangular. * IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**H * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILAZLR( M, K, V, LDV ) ) LASTC = ILAZLC( LASTV, N, C, LDC ) * * W := C**H * V = (C1**H * V1 + C2**H * V2) (stored in WORK) * * W := C2**H * DO 70 J = 1, K CALL ZCOPY( LASTC, C( LASTV-K+J, 1 ), LDC, $ WORK( 1, J ), 1 ) CALL ZLACGV( LASTC, WORK( 1, J ), 1 ) 70 CONTINUE * * W := W * V2 * CALL ZTRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1**H*V1 * CALL ZGEMM( 'Conjugate transpose', 'No transpose', $ LASTC, K, LASTV-K, $ ONE, C, LDC, V, LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T**H or W * T * CALL ZTRMM( 'Right', 'Lower', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V * W**H * IF( LASTV.GT.K ) THEN * * C1 := C1 - V1 * W**H * CALL ZGEMM( 'No transpose', 'Conjugate transpose', $ LASTV-K, LASTC, K, $ -ONE, V, LDV, WORK, LDWORK, $ ONE, C, LDC ) END IF * * W := W * V2**H * CALL ZTRMM( 'Right', 'Upper', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) * * C2 := C2 - W**H * DO 90 J = 1, K DO 80 I = 1, LASTC C( LASTV-K+J, I ) = C( LASTV-K+J, I ) - $ DCONJG( WORK( I, J ) ) 80 CONTINUE 90 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**H where C = ( C1 C2 ) * LASTV = MAX( K, ILAZLR( N, K, V, LDV ) ) LASTC = ILAZLR( M, LASTV, C, LDC ) * * W := C * V = (C1*V1 + C2*V2) (stored in WORK) * * W := C2 * DO 100 J = 1, K CALL ZCOPY( LASTC, C( 1, LASTV-K+J ), 1, $ WORK( 1, J ), 1 ) 100 CONTINUE * * W := W * V2 * CALL ZTRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1 * V1 * CALL ZGEMM( 'No transpose', 'No transpose', $ LASTC, K, LASTV-K, $ ONE, C, LDC, V, LDV, ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**H * CALL ZTRMM( 'Right', 'Lower', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V**H * IF( LASTV.GT.K ) THEN * * C1 := C1 - W * V1**H * CALL ZGEMM( 'No transpose', 'Conjugate transpose', $ LASTC, LASTV-K, K, -ONE, WORK, LDWORK, V, LDV, $ ONE, C, LDC ) END IF * * W := W * V2**H * CALL ZTRMM( 'Right', 'Upper', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) * * C2 := C2 - W * DO 120 J = 1, K DO 110 I = 1, LASTC C( I, LASTV-K+J ) = C( I, LASTV-K+J ) $ - WORK( I, J ) 110 CONTINUE 120 CONTINUE END IF END IF * ELSE IF( LSAME( STOREV, 'R' ) ) THEN * IF( LSAME( DIRECT, 'F' ) ) THEN * * Let V = ( V1 V2 ) (V1: first K columns) * where V1 is unit upper triangular. 
* IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**H * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILAZLC( K, M, V, LDV ) ) LASTC = ILAZLC( LASTV, N, C, LDC ) * * W := C**H * V**H = (C1**H * V1**H + C2**H * V2**H) (stored in WORK) * * W := C1**H * DO 130 J = 1, K CALL ZCOPY( LASTC, C( J, 1 ), LDC, WORK( 1, J ), 1 ) CALL ZLACGV( LASTC, WORK( 1, J ), 1 ) 130 CONTINUE * * W := W * V1**H * CALL ZTRMM( 'Right', 'Upper', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2**H*V2**H * CALL ZGEMM( 'Conjugate transpose', $ 'Conjugate transpose', LASTC, K, LASTV-K, $ ONE, C( K+1, 1 ), LDC, V( 1, K+1 ), LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T**H or W * T * CALL ZTRMM( 'Right', 'Upper', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V**H * W**H * IF( LASTV.GT.K ) THEN * * C2 := C2 - V2**H * W**H * CALL ZGEMM( 'Conjugate transpose', $ 'Conjugate transpose', LASTV-K, LASTC, K, $ -ONE, V( 1, K+1 ), LDV, WORK, LDWORK, $ ONE, C( K+1, 1 ), LDC ) END IF * * W := W * V1 * CALL ZTRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W**H * DO 150 J = 1, K DO 140 I = 1, LASTC C( J, I ) = C( J, I ) - DCONJG( WORK( I, J ) ) 140 CONTINUE 150 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**H where C = ( C1 C2 ) * LASTV = MAX( K, ILAZLC( K, N, V, LDV ) ) LASTC = ILAZLR( M, LASTV, C, LDC ) * * W := C * V**H = (C1*V1**H + C2*V2**H) (stored in WORK) * * W := C1 * DO 160 J = 1, K CALL ZCOPY( LASTC, C( 1, J ), 1, WORK( 1, J ), 1 ) 160 CONTINUE * * W := W * V1**H * CALL ZTRMM( 'Right', 'Upper', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2 * V2**H * CALL ZGEMM( 'No transpose', 'Conjugate transpose', $ LASTC, K, LASTV-K, ONE, C( 1, K+1 ), LDC, $ V( 1, K+1 ), LDV, ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**H * CALL ZTRMM( 'Right', 'Upper', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V * IF( LASTV.GT.K ) THEN * * C2 := C2 - W * V2 * CALL ZGEMM( 'No transpose', 'No transpose', $ LASTC, LASTV-K, K, $ -ONE, WORK, LDWORK, V( 1, K+1 ), LDV, $ ONE, C( 1, K+1 ), LDC ) END IF * * W := W * V1 * CALL ZTRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W * DO 180 J = 1, K DO 170 I = 1, LASTC C( I, J ) = C( I, J ) - WORK( I, J ) 170 CONTINUE 180 CONTINUE * END IF * ELSE * * Let V = ( V1 V2 ) (V2: last K columns) * where V2 is unit lower triangular. 
* IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**H * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILAZLC( K, M, V, LDV ) ) LASTC = ILAZLC( LASTV, N, C, LDC ) * * W := C**H * V**H = (C1**H * V1**H + C2**H * V2**H) (stored in WORK) * * W := C2**H * DO 190 J = 1, K CALL ZCOPY( LASTC, C( LASTV-K+J, 1 ), LDC, $ WORK( 1, J ), 1 ) CALL ZLACGV( LASTC, WORK( 1, J ), 1 ) 190 CONTINUE * * W := W * V2**H * CALL ZTRMM( 'Right', 'Lower', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1**H * V1**H * CALL ZGEMM( 'Conjugate transpose', $ 'Conjugate transpose', LASTC, K, LASTV-K, $ ONE, C, LDC, V, LDV, ONE, WORK, LDWORK ) END IF * * W := W * T**H or W * T * CALL ZTRMM( 'Right', 'Lower', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V**H * W**H * IF( LASTV.GT.K ) THEN * * C1 := C1 - V1**H * W**H * CALL ZGEMM( 'Conjugate transpose', $ 'Conjugate transpose', LASTV-K, LASTC, K, $ -ONE, V, LDV, WORK, LDWORK, ONE, C, LDC ) END IF * * W := W * V2 * CALL ZTRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) * * C2 := C2 - W**H * DO 210 J = 1, K DO 200 I = 1, LASTC C( LASTV-K+J, I ) = C( LASTV-K+J, I ) - $ DCONJG( WORK( I, J ) ) 200 CONTINUE 210 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**H where C = ( C1 C2 ) * LASTV = MAX( K, ILAZLC( K, N, V, LDV ) ) LASTC = ILAZLR( M, LASTV, C, LDC ) * * W := C * V**H = (C1*V1**H + C2*V2**H) (stored in WORK) * * W := C2 * DO 220 J = 1, K CALL ZCOPY( LASTC, C( 1, LASTV-K+J ), 1, $ WORK( 1, J ), 1 ) 220 CONTINUE * * W := W * V2**H * CALL ZTRMM( 'Right', 'Lower', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1 * V1**H * CALL ZGEMM( 'No transpose', 'Conjugate transpose', $ LASTC, K, LASTV-K, ONE, C, LDC, V, LDV, ONE, $ WORK, LDWORK ) END IF * * W := W * T or W * T**H * CALL ZTRMM( 'Right', 'Lower', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V * IF( LASTV.GT.K ) THEN * * C1 := C1 - W * V1 * CALL ZGEMM( 'No transpose', 'No transpose', $ LASTC, LASTV-K, K, -ONE, WORK, LDWORK, V, LDV, $ ONE, C, LDC ) END IF * * W := W * V2 * CALL ZTRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) * * C1 := C1 - W * DO 240 J = 1, K DO 230 I = 1, LASTC C( I, LASTV-K+J ) = C( I, LASTV-K+J ) $ - WORK( I, J ) 230 CONTINUE 240 CONTINUE * END IF * END IF END IF * RETURN * * End of ZLARFB * END piqp-octave/src/eigen/lapack/zlarfg.f0000644000175100001770000001235714107270226017431 0ustar runnerdocker*> \brief \b ZLARFG * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ZLARFG + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE ZLARFG( N, ALPHA, X, INCX, TAU ) * * .. Scalar Arguments .. * INTEGER INCX, N * COMPLEX*16 ALPHA, TAU * .. * .. Array Arguments .. * COMPLEX*16 X( * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> ZLARFG generates a complex elementary reflector H of order n, such *> that *> *> H**H * ( alpha ) = ( beta ), H**H * H = I. *> ( x ) ( 0 ) *> *> where alpha and beta are scalars, with beta real, and x is an *> (n-1)-element complex vector. 
H is represented in the form *> *> H = I - tau * ( 1 ) * ( 1 v**H ) , *> ( v ) *> *> where tau is a complex scalar and v is a complex (n-1)-element *> vector. Note that H is not hermitian. *> *> If the elements of x are all zero and alpha is real, then tau = 0 *> and H is taken to be the unit matrix. *> *> Otherwise 1 <= real(tau) <= 2 and abs(tau-1) <= 1 . *> \endverbatim * * Arguments: * ========== * *> \param[in] N *> \verbatim *> N is INTEGER *> The order of the elementary reflector. *> \endverbatim *> *> \param[in,out] ALPHA *> \verbatim *> ALPHA is COMPLEX*16 *> On entry, the value alpha. *> On exit, it is overwritten with the value beta. *> \endverbatim *> *> \param[in,out] X *> \verbatim *> X is COMPLEX*16 array, dimension *> (1+(N-2)*abs(INCX)) *> On entry, the vector x. *> On exit, it is overwritten with the vector v. *> \endverbatim *> *> \param[in] INCX *> \verbatim *> INCX is INTEGER *> The increment between elements of X. INCX > 0. *> \endverbatim *> *> \param[out] TAU *> \verbatim *> TAU is COMPLEX*16 *> The value tau. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup complex16OTHERauxiliary * * ===================================================================== SUBROUTINE ZLARFG( N, ALPHA, X, INCX, TAU ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. INTEGER INCX, N COMPLEX*16 ALPHA, TAU * .. * .. Array Arguments .. COMPLEX*16 X( * ) * .. * * ===================================================================== * * .. Parameters .. DOUBLE PRECISION ONE, ZERO PARAMETER ( ONE = 1.0D+0, ZERO = 0.0D+0 ) * .. * .. Local Scalars .. INTEGER J, KNT DOUBLE PRECISION ALPHI, ALPHR, BETA, RSAFMN, SAFMIN, XNORM * .. * .. External Functions .. DOUBLE PRECISION DLAMCH, DLAPY3, DZNRM2 COMPLEX*16 ZLADIV EXTERNAL DLAMCH, DLAPY3, DZNRM2, ZLADIV * .. * .. Intrinsic Functions .. INTRINSIC ABS, DBLE, DCMPLX, DIMAG, SIGN * .. * .. External Subroutines .. EXTERNAL ZDSCAL, ZSCAL * .. * .. Executable Statements .. * IF( N.LE.0 ) THEN TAU = ZERO RETURN END IF * XNORM = DZNRM2( N-1, X, INCX ) ALPHR = DBLE( ALPHA ) ALPHI = DIMAG( ALPHA ) * IF( XNORM.EQ.ZERO .AND. 
ALPHI.EQ.ZERO ) THEN * * H = I * TAU = ZERO ELSE * * general case * BETA = -SIGN( DLAPY3( ALPHR, ALPHI, XNORM ), ALPHR ) SAFMIN = DLAMCH( 'S' ) / DLAMCH( 'E' ) RSAFMN = ONE / SAFMIN * KNT = 0 IF( ABS( BETA ).LT.SAFMIN ) THEN * * XNORM, BETA may be inaccurate; scale X and recompute them * 10 CONTINUE KNT = KNT + 1 CALL ZDSCAL( N-1, RSAFMN, X, INCX ) BETA = BETA*RSAFMN ALPHI = ALPHI*RSAFMN ALPHR = ALPHR*RSAFMN IF( ABS( BETA ).LT.SAFMIN ) $ GO TO 10 * * New BETA is at most 1, at least SAFMIN * XNORM = DZNRM2( N-1, X, INCX ) ALPHA = DCMPLX( ALPHR, ALPHI ) BETA = -SIGN( DLAPY3( ALPHR, ALPHI, XNORM ), ALPHR ) END IF TAU = DCMPLX( ( BETA-ALPHR ) / BETA, -ALPHI / BETA ) ALPHA = ZLADIV( DCMPLX( ONE ), ALPHA-BETA ) CALL ZSCAL( N-1, ALPHA, X, INCX ) * * If ALPHA is subnormal, it may lose relative accuracy * DO 20 J = 1, KNT BETA = BETA*SAFMIN 20 CONTINUE ALPHA = BETA END IF * RETURN * * End of ZLARFG * END piqp-octave/src/eigen/lapack/zlacgv.f0000644000175100001770000000542714107270226017432 0ustar runnerdocker*> \brief \b ZLACGV * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download ZLACGV + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE ZLACGV( N, X, INCX ) * * .. Scalar Arguments .. * INTEGER INCX, N * .. * .. Array Arguments .. * COMPLEX*16 X( * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> ZLACGV conjugates a complex vector of length N. *> \endverbatim * * Arguments: * ========== * *> \param[in] N *> \verbatim *> N is INTEGER *> The length of the vector X. N >= 0. *> \endverbatim *> *> \param[in,out] X *> \verbatim *> X is COMPLEX*16 array, dimension *> (1+(N-1)*abs(INCX)) *> On entry, the vector of length N to be conjugated. *> On exit, X is overwritten with conjg(X). *> \endverbatim *> *> \param[in] INCX *> \verbatim *> INCX is INTEGER *> The spacing between successive elements of X. *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup complex16OTHERauxiliary * * ===================================================================== SUBROUTINE ZLACGV( N, X, INCX ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. INTEGER INCX, N * .. * .. Array Arguments .. COMPLEX*16 X( * ) * .. * * ===================================================================== * * .. Local Scalars .. INTEGER I, IOFF * .. * .. Intrinsic Functions .. INTRINSIC DCONJG * .. * .. Executable Statements .. 
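*     Two cases follow: for unit stride (INCX = 1) the vector is
*     conjugated by direct indexing; otherwise the starting offset
*     IOFF is adjusted for a negative increment so the scan stays in
*     bounds, and the index advances by INCX at each step.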
* IF( INCX.EQ.1 ) THEN DO 10 I = 1, N X( I ) = DCONJG( X( I ) ) 10 CONTINUE ELSE IOFF = 1 IF( INCX.LT.0 ) $ IOFF = 1 - ( N-1 )*INCX DO 20 I = 1, N X( IOFF ) = DCONJG( X( IOFF ) ) IOFF = IOFF + INCX 20 CONTINUE END IF RETURN * * End of ZLACGV * END piqp-octave/src/eigen/lapack/clarfb.f0000644000175100001770000005560014107270226017373 0ustar runnerdocker*> \brief \b CLARFB * * =========== DOCUMENTATION =========== * * Online html documentation available at * http://www.netlib.org/lapack/explore-html/ * *> \htmlonly *> Download CLARFB + dependencies *> *> [TGZ] *> *> [ZIP] *> *> [TXT] *> \endhtmlonly * * Definition: * =========== * * SUBROUTINE CLARFB( SIDE, TRANS, DIRECT, STOREV, M, N, K, V, LDV, * T, LDT, C, LDC, WORK, LDWORK ) * * .. Scalar Arguments .. * CHARACTER DIRECT, SIDE, STOREV, TRANS * INTEGER K, LDC, LDT, LDV, LDWORK, M, N * .. * .. Array Arguments .. * COMPLEX C( LDC, * ), T( LDT, * ), V( LDV, * ), * $ WORK( LDWORK, * ) * .. * * *> \par Purpose: * ============= *> *> \verbatim *> *> CLARFB applies a complex block reflector H or its transpose H**H to a *> complex M-by-N matrix C, from either the left or the right. *> \endverbatim * * Arguments: * ========== * *> \param[in] SIDE *> \verbatim *> SIDE is CHARACTER*1 *> = 'L': apply H or H**H from the Left *> = 'R': apply H or H**H from the Right *> \endverbatim *> *> \param[in] TRANS *> \verbatim *> TRANS is CHARACTER*1 *> = 'N': apply H (No transpose) *> = 'C': apply H**H (Conjugate transpose) *> \endverbatim *> *> \param[in] DIRECT *> \verbatim *> DIRECT is CHARACTER*1 *> Indicates how H is formed from a product of elementary *> reflectors *> = 'F': H = H(1) H(2) . . . H(k) (Forward) *> = 'B': H = H(k) . . . H(2) H(1) (Backward) *> \endverbatim *> *> \param[in] STOREV *> \verbatim *> STOREV is CHARACTER*1 *> Indicates how the vectors which define the elementary *> reflectors are stored: *> = 'C': Columnwise *> = 'R': Rowwise *> \endverbatim *> *> \param[in] M *> \verbatim *> M is INTEGER *> The number of rows of the matrix C. *> \endverbatim *> *> \param[in] N *> \verbatim *> N is INTEGER *> The number of columns of the matrix C. *> \endverbatim *> *> \param[in] K *> \verbatim *> K is INTEGER *> The order of the matrix T (= the number of elementary *> reflectors whose product defines the block reflector). *> \endverbatim *> *> \param[in] V *> \verbatim *> V is COMPLEX array, dimension *> (LDV,K) if STOREV = 'C' *> (LDV,M) if STOREV = 'R' and SIDE = 'L' *> (LDV,N) if STOREV = 'R' and SIDE = 'R' *> The matrix V. See Further Details. *> \endverbatim *> *> \param[in] LDV *> \verbatim *> LDV is INTEGER *> The leading dimension of the array V. *> If STOREV = 'C' and SIDE = 'L', LDV >= max(1,M); *> if STOREV = 'C' and SIDE = 'R', LDV >= max(1,N); *> if STOREV = 'R', LDV >= K. *> \endverbatim *> *> \param[in] T *> \verbatim *> T is COMPLEX array, dimension (LDT,K) *> The triangular K-by-K matrix T in the representation of the *> block reflector. *> \endverbatim *> *> \param[in] LDT *> \verbatim *> LDT is INTEGER *> The leading dimension of the array T. LDT >= K. *> \endverbatim *> *> \param[in,out] C *> \verbatim *> C is COMPLEX array, dimension (LDC,N) *> On entry, the M-by-N matrix C. *> On exit, C is overwritten by H*C or H**H*C or C*H or C*H**H. *> \endverbatim *> *> \param[in] LDC *> \verbatim *> LDC is INTEGER *> The leading dimension of the array C. LDC >= max(1,M). 
*> \endverbatim *> *> \param[out] WORK *> \verbatim *> WORK is COMPLEX array, dimension (LDWORK,K) *> \endverbatim *> *> \param[in] LDWORK *> \verbatim *> LDWORK is INTEGER *> The leading dimension of the array WORK. *> If SIDE = 'L', LDWORK >= max(1,N); *> if SIDE = 'R', LDWORK >= max(1,M). *> \endverbatim * * Authors: * ======== * *> \author Univ. of Tennessee *> \author Univ. of California Berkeley *> \author Univ. of Colorado Denver *> \author NAG Ltd. * *> \date November 2011 * *> \ingroup complexOTHERauxiliary * *> \par Further Details: * ===================== *> *> \verbatim *> *> The shape of the matrix V and the storage of the vectors which define *> the H(i) is best illustrated by the following example with n = 5 and *> k = 3. The elements equal to 1 are not stored; the corresponding *> array elements are modified but restored on exit. The rest of the *> array is not used. *> *> DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': *> *> V = ( 1 ) V = ( 1 v1 v1 v1 v1 ) *> ( v1 1 ) ( 1 v2 v2 v2 ) *> ( v1 v2 1 ) ( 1 v3 v3 ) *> ( v1 v2 v3 ) *> ( v1 v2 v3 ) *> *> DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': *> *> V = ( v1 v2 v3 ) V = ( v1 v1 1 ) *> ( v1 v2 v3 ) ( v2 v2 v2 1 ) *> ( 1 v2 v3 ) ( v3 v3 v3 v3 1 ) *> ( 1 v3 ) *> ( 1 ) *> \endverbatim *> * ===================================================================== SUBROUTINE CLARFB( SIDE, TRANS, DIRECT, STOREV, M, N, K, V, LDV, $ T, LDT, C, LDC, WORK, LDWORK ) * * -- LAPACK auxiliary routine (version 3.4.0) -- * -- LAPACK is a software package provided by Univ. of Tennessee, -- * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..-- * November 2011 * * .. Scalar Arguments .. CHARACTER DIRECT, SIDE, STOREV, TRANS INTEGER K, LDC, LDT, LDV, LDWORK, M, N * .. * .. Array Arguments .. COMPLEX C( LDC, * ), T( LDT, * ), V( LDV, * ), $ WORK( LDWORK, * ) * .. * * ===================================================================== * * .. Parameters .. COMPLEX ONE PARAMETER ( ONE = ( 1.0E+0, 0.0E+0 ) ) * .. * .. Local Scalars .. CHARACTER TRANST INTEGER I, J, LASTV, LASTC * .. * .. External Functions .. LOGICAL LSAME INTEGER ILACLR, ILACLC EXTERNAL LSAME, ILACLR, ILACLC * .. * .. External Subroutines .. EXTERNAL CCOPY, CGEMM, CLACGV, CTRMM * .. * .. Intrinsic Functions .. INTRINSIC CONJG * .. * .. Executable Statements .. * * Quick return if possible * IF( M.LE.0 .OR. N.LE.0 ) $ RETURN * IF( LSAME( TRANS, 'N' ) ) THEN TRANST = 'C' ELSE TRANST = 'N' END IF * IF( LSAME( STOREV, 'C' ) ) THEN * IF( LSAME( DIRECT, 'F' ) ) THEN * * Let V = ( V1 ) (first K rows) * ( V2 ) * where V1 is unit lower triangular. 
* IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**H * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILACLR( M, K, V, LDV ) ) LASTC = ILACLC( LASTV, N, C, LDC ) * * W := C**H * V = (C1**H * V1 + C2**H * V2) (stored in WORK) * * W := C1**H * DO 10 J = 1, K CALL CCOPY( LASTC, C( J, 1 ), LDC, WORK( 1, J ), 1 ) CALL CLACGV( LASTC, WORK( 1, J ), 1 ) 10 CONTINUE * * W := W * V1 * CALL CTRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2**H *V2 * CALL CGEMM( 'Conjugate transpose', 'No transpose', $ LASTC, K, LASTV-K, ONE, C( K+1, 1 ), LDC, $ V( K+1, 1 ), LDV, ONE, WORK, LDWORK ) END IF * * W := W * T**H or W * T * CALL CTRMM( 'Right', 'Upper', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V * W**H * IF( M.GT.K ) THEN * * C2 := C2 - V2 * W**H * CALL CGEMM( 'No transpose', 'Conjugate transpose', $ LASTV-K, LASTC, K, -ONE, V( K+1, 1 ), LDV, $ WORK, LDWORK, ONE, C( K+1, 1 ), LDC ) END IF * * W := W * V1**H * CALL CTRMM( 'Right', 'Lower', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W**H * DO 30 J = 1, K DO 20 I = 1, LASTC C( J, I ) = C( J, I ) - CONJG( WORK( I, J ) ) 20 CONTINUE 30 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**H where C = ( C1 C2 ) * LASTV = MAX( K, ILACLR( N, K, V, LDV ) ) LASTC = ILACLR( M, LASTV, C, LDC ) * * W := C * V = (C1*V1 + C2*V2) (stored in WORK) * * W := C1 * DO 40 J = 1, K CALL CCOPY( LASTC, C( 1, J ), 1, WORK( 1, J ), 1 ) 40 CONTINUE * * W := W * V1 * CALL CTRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2 * V2 * CALL CGEMM( 'No transpose', 'No transpose', $ LASTC, K, LASTV-K, $ ONE, C( 1, K+1 ), LDC, V( K+1, 1 ), LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**H * CALL CTRMM( 'Right', 'Upper', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V**H * IF( LASTV.GT.K ) THEN * * C2 := C2 - W * V2**H * CALL CGEMM( 'No transpose', 'Conjugate transpose', $ LASTC, LASTV-K, K, $ -ONE, WORK, LDWORK, V( K+1, 1 ), LDV, $ ONE, C( 1, K+1 ), LDC ) END IF * * W := W * V1**H * CALL CTRMM( 'Right', 'Lower', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W * DO 60 J = 1, K DO 50 I = 1, LASTC C( I, J ) = C( I, J ) - WORK( I, J ) 50 CONTINUE 60 CONTINUE END IF * ELSE * * Let V = ( V1 ) * ( V2 ) (last K rows) * where V2 is unit upper triangular. 
* IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**H * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILACLR( M, K, V, LDV ) ) LASTC = ILACLC( LASTV, N, C, LDC ) * * W := C**H * V = (C1**H * V1 + C2**H * V2) (stored in WORK) * * W := C2**H * DO 70 J = 1, K CALL CCOPY( LASTC, C( LASTV-K+J, 1 ), LDC, $ WORK( 1, J ), 1 ) CALL CLACGV( LASTC, WORK( 1, J ), 1 ) 70 CONTINUE * * W := W * V2 * CALL CTRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1**H*V1 * CALL CGEMM( 'Conjugate transpose', 'No transpose', $ LASTC, K, LASTV-K, ONE, C, LDC, V, LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T**H or W * T * CALL CTRMM( 'Right', 'Lower', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V * W**H * IF( LASTV.GT.K ) THEN * * C1 := C1 - V1 * W**H * CALL CGEMM( 'No transpose', 'Conjugate transpose', $ LASTV-K, LASTC, K, -ONE, V, LDV, WORK, LDWORK, $ ONE, C, LDC ) END IF * * W := W * V2**H * CALL CTRMM( 'Right', 'Upper', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) * * C2 := C2 - W**H * DO 90 J = 1, K DO 80 I = 1, LASTC C( LASTV-K+J, I ) = C( LASTV-K+J, I ) - $ CONJG( WORK( I, J ) ) 80 CONTINUE 90 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**H where C = ( C1 C2 ) * LASTV = MAX( K, ILACLR( N, K, V, LDV ) ) LASTC = ILACLR( M, LASTV, C, LDC ) * * W := C * V = (C1*V1 + C2*V2) (stored in WORK) * * W := C2 * DO 100 J = 1, K CALL CCOPY( LASTC, C( 1, LASTV-K+J ), 1, $ WORK( 1, J ), 1 ) 100 CONTINUE * * W := W * V2 * CALL CTRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1 * V1 * CALL CGEMM( 'No transpose', 'No transpose', $ LASTC, K, LASTV-K, $ ONE, C, LDC, V, LDV, ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**H * CALL CTRMM( 'Right', 'Lower', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V**H * IF( LASTV.GT.K ) THEN * * C1 := C1 - W * V1**H * CALL CGEMM( 'No transpose', 'Conjugate transpose', $ LASTC, LASTV-K, K, -ONE, WORK, LDWORK, V, LDV, $ ONE, C, LDC ) END IF * * W := W * V2**H * CALL CTRMM( 'Right', 'Upper', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V( LASTV-K+1, 1 ), LDV, $ WORK, LDWORK ) * * C2 := C2 - W * DO 120 J = 1, K DO 110 I = 1, LASTC C( I, LASTV-K+J ) = C( I, LASTV-K+J ) $ - WORK( I, J ) 110 CONTINUE 120 CONTINUE END IF END IF * ELSE IF( LSAME( STOREV, 'R' ) ) THEN * IF( LSAME( DIRECT, 'F' ) ) THEN * * Let V = ( V1 V2 ) (V1: first K columns) * where V1 is unit upper triangular. 
* IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**H * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILACLC( K, M, V, LDV ) ) LASTC = ILACLC( LASTV, N, C, LDC ) * * W := C**H * V**H = (C1**H * V1**H + C2**H * V2**H) (stored in WORK) * * W := C1**H * DO 130 J = 1, K CALL CCOPY( LASTC, C( J, 1 ), LDC, WORK( 1, J ), 1 ) CALL CLACGV( LASTC, WORK( 1, J ), 1 ) 130 CONTINUE * * W := W * V1**H * CALL CTRMM( 'Right', 'Upper', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2**H*V2**H * CALL CGEMM( 'Conjugate transpose', $ 'Conjugate transpose', LASTC, K, LASTV-K, $ ONE, C( K+1, 1 ), LDC, V( 1, K+1 ), LDV, $ ONE, WORK, LDWORK ) END IF * * W := W * T**H or W * T * CALL CTRMM( 'Right', 'Upper', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V**H * W**H * IF( LASTV.GT.K ) THEN * * C2 := C2 - V2**H * W**H * CALL CGEMM( 'Conjugate transpose', $ 'Conjugate transpose', LASTV-K, LASTC, K, $ -ONE, V( 1, K+1 ), LDV, WORK, LDWORK, $ ONE, C( K+1, 1 ), LDC ) END IF * * W := W * V1 * CALL CTRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W**H * DO 150 J = 1, K DO 140 I = 1, LASTC C( J, I ) = C( J, I ) - CONJG( WORK( I, J ) ) 140 CONTINUE 150 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**H where C = ( C1 C2 ) * LASTV = MAX( K, ILACLC( K, N, V, LDV ) ) LASTC = ILACLR( M, LASTV, C, LDC ) * * W := C * V**H = (C1*V1**H + C2*V2**H) (stored in WORK) * * W := C1 * DO 160 J = 1, K CALL CCOPY( LASTC, C( 1, J ), 1, WORK( 1, J ), 1 ) 160 CONTINUE * * W := W * V1**H * CALL CTRMM( 'Right', 'Upper', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V, LDV, WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C2 * V2**H * CALL CGEMM( 'No transpose', 'Conjugate transpose', $ LASTC, K, LASTV-K, ONE, C( 1, K+1 ), LDC, $ V( 1, K+1 ), LDV, ONE, WORK, LDWORK ) END IF * * W := W * T or W * T**H * CALL CTRMM( 'Right', 'Upper', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V * IF( LASTV.GT.K ) THEN * * C2 := C2 - W * V2 * CALL CGEMM( 'No transpose', 'No transpose', $ LASTC, LASTV-K, K, $ -ONE, WORK, LDWORK, V( 1, K+1 ), LDV, $ ONE, C( 1, K+1 ), LDC ) END IF * * W := W * V1 * CALL CTRMM( 'Right', 'Upper', 'No transpose', 'Unit', $ LASTC, K, ONE, V, LDV, WORK, LDWORK ) * * C1 := C1 - W * DO 180 J = 1, K DO 170 I = 1, LASTC C( I, J ) = C( I, J ) - WORK( I, J ) 170 CONTINUE 180 CONTINUE * END IF * ELSE * * Let V = ( V1 V2 ) (V2: last K columns) * where V2 is unit lower triangular. 
* IF( LSAME( SIDE, 'L' ) ) THEN * * Form H * C or H**H * C where C = ( C1 ) * ( C2 ) * LASTV = MAX( K, ILACLC( K, M, V, LDV ) ) LASTC = ILACLC( LASTV, N, C, LDC ) * * W := C**H * V**H = (C1**H * V1**H + C2**H * V2**H) (stored in WORK) * * W := C2**H * DO 190 J = 1, K CALL CCOPY( LASTC, C( LASTV-K+J, 1 ), LDC, $ WORK( 1, J ), 1 ) CALL CLACGV( LASTC, WORK( 1, J ), 1 ) 190 CONTINUE * * W := W * V2**H * CALL CTRMM( 'Right', 'Lower', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1**H * V1**H * CALL CGEMM( 'Conjugate transpose', $ 'Conjugate transpose', LASTC, K, LASTV-K, $ ONE, C, LDC, V, LDV, ONE, WORK, LDWORK ) END IF * * W := W * T**H or W * T * CALL CTRMM( 'Right', 'Lower', TRANST, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - V**H * W**H * IF( LASTV.GT.K ) THEN * * C1 := C1 - V1**H * W**H * CALL CGEMM( 'Conjugate transpose', $ 'Conjugate transpose', LASTV-K, LASTC, K, $ -ONE, V, LDV, WORK, LDWORK, ONE, C, LDC ) END IF * * W := W * V2 * CALL CTRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) * * C2 := C2 - W**H * DO 210 J = 1, K DO 200 I = 1, LASTC C( LASTV-K+J, I ) = C( LASTV-K+J, I ) - $ CONJG( WORK( I, J ) ) 200 CONTINUE 210 CONTINUE * ELSE IF( LSAME( SIDE, 'R' ) ) THEN * * Form C * H or C * H**H where C = ( C1 C2 ) * LASTV = MAX( K, ILACLC( K, N, V, LDV ) ) LASTC = ILACLR( M, LASTV, C, LDC ) * * W := C * V**H = (C1*V1**H + C2*V2**H) (stored in WORK) * * W := C2 * DO 220 J = 1, K CALL CCOPY( LASTC, C( 1, LASTV-K+J ), 1, $ WORK( 1, J ), 1 ) 220 CONTINUE * * W := W * V2**H * CALL CTRMM( 'Right', 'Lower', 'Conjugate transpose', $ 'Unit', LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) IF( LASTV.GT.K ) THEN * * W := W + C1 * V1**H * CALL CGEMM( 'No transpose', 'Conjugate transpose', $ LASTC, K, LASTV-K, ONE, C, LDC, V, LDV, ONE, $ WORK, LDWORK ) END IF * * W := W * T or W * T**H * CALL CTRMM( 'Right', 'Lower', TRANS, 'Non-unit', $ LASTC, K, ONE, T, LDT, WORK, LDWORK ) * * C := C - W * V * IF( LASTV.GT.K ) THEN * * C1 := C1 - W * V1 * CALL CGEMM( 'No transpose', 'No transpose', $ LASTC, LASTV-K, K, -ONE, WORK, LDWORK, V, LDV, $ ONE, C, LDC ) END IF * * W := W * V2 * CALL CTRMM( 'Right', 'Lower', 'No transpose', 'Unit', $ LASTC, K, ONE, V( 1, LASTV-K+1 ), LDV, $ WORK, LDWORK ) * * C1 := C1 - W * DO 240 J = 1, K DO 230 I = 1, LASTC C( I, LASTV-K+J ) = C( I, LASTV-K+J ) $ - WORK( I, J ) 230 CONTINUE 240 CONTINUE * END IF * END IF END IF * RETURN * * End of CLARFB * END piqp-octave/src/eigen/COPYING.README0000644000175100001770000000141314107270226016520 0ustar runnerdockerEigen is primarily MPL2 licensed. See COPYING.MPL2 and these links: http://www.mozilla.org/MPL/2.0/ http://www.mozilla.org/MPL/2.0/FAQ.html Some files contain third-party code under BSD or LGPL licenses, whence the other COPYING.* files here. All the LGPL code is either LGPL 2.1-only, or LGPL 2.1-or-later. For this reason, the COPYING.LGPL file contains the LGPL 2.1 text. If you want to guarantee that the Eigen code that you are #including is licensed under the MPL2 and possibly more permissive licenses (like BSD), #define this preprocessor symbol: EIGEN_MPL2_ONLY For example, with most compilers, you could add this to your project CXXFLAGS: -DEIGEN_MPL2_ONLY This will cause a compilation error to be generated if you #include any code that is LGPL licensed. 
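A minimal illustration of the EIGEN_MPL2_ONLY guard described above, expressed in source instead of CXXFLAGS (the small program below is illustrative and not part of Eigen or piqp-octave):

// Define EIGEN_MPL2_ONLY before the first Eigen include; any LGPL-licensed
// header included afterwards then produces a compilation error, while
// MPL2/BSD-licensed code such as Eigen/Dense compiles as usual.
#define EIGEN_MPL2_ONLY
#include <Eigen/Dense>

int main() {
    Eigen::Matrix2d m;
    m << 1.0, 2.0,
         3.0, 4.0;
    return m.determinant() < 0.0 ? 0 : 1;   // determinant is -2 here, so exit code 0
}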
piqp-octave/src/eigen/scripts/0000755000175100001770000000000014107270226016221 5ustar runnerdockerpiqp-octave/src/eigen/scripts/relicense.py0000644000175100001770000000450014107270226020543 0ustar runnerdocker# This file is part of Eigen, a lightweight C++ template library # for linear algebra. # # Copyright (C) 2012 Keir Mierle # # This Source Code Form is subject to the terms of the Mozilla # Public License v. 2.0. If a copy of the MPL was not distributed # with this file, You can obtain one at http://mozilla.org/MPL/2.0/. # # Author: mierle@gmail.com (Keir Mierle) # # Make the long-awaited conversion to MPL. lgpl3_header = ''' // Eigen is free software; you can redistribute it and/or // modify it under the terms of the GNU Lesser General Public // License as published by the Free Software Foundation; either // version 3 of the License, or (at your option) any later version. // // Alternatively, you can redistribute it and/or // modify it under the terms of the GNU General Public License as // published by the Free Software Foundation; either version 2 of // the License, or (at your option) any later version. // // Eigen is distributed in the hope that it will be useful, but WITHOUT ANY // WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS // FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License or the // GNU General Public License for more details. // // You should have received a copy of the GNU Lesser General Public // License and a copy of the GNU General Public License along with // Eigen. If not, see . ''' mpl2_header = """ // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. """ import os import sys exclusions = set(['relicense.py']) def update(text): if text.find(lgpl3_header) == -1: return text, False return text.replace(lgpl3_header, mpl2_header), True rootdir = sys.argv[1] for root, sub_folders, files in os.walk(rootdir): for basename in files: if basename in exclusions: print 'SKIPPED', filename continue filename = os.path.join(root, basename) fo = file(filename) text = fo.read() fo.close() text, updated = update(text) if updated: fo = file(filename, "w") fo.write(text) fo.close() print 'UPDATED', filename else: print ' ', filename piqp-octave/src/eigen/scripts/check.in0000755000175100001770000000123614107270226017633 0ustar runnerdocker#!/bin/bash # check : shorthand for make and ctest -R if [[ $# != 1 || $1 == *help ]] then echo "usage: $0 regexp" echo " Builds and runs tests matching the regexp." echo " The EIGEN_MAKE_ARGS environment variable allows to pass args to 'make'." echo " For example, to launch 5 concurrent builds, use EIGEN_MAKE_ARGS='-j5'" echo " The EIGEN_CTEST_ARGS environment variable allows to pass args to 'ctest'." echo " For example, with CTest 2.8, you can use EIGEN_CTEST_ARGS='-j5'." exit 0 fi if [ -n "${EIGEN_CTEST_ARGS:+x}" ] then ./buildtests.sh "$1" && ctest -R "$1" ${EIGEN_CTEST_ARGS} else ./buildtests.sh "$1" && ctest -R "$1" fi exit $? piqp-octave/src/eigen/scripts/debug.in0000755000175100001770000000005414107270226017641 0ustar runnerdocker#!/bin/sh cmake -DCMAKE_BUILD_TYPE=Debug . 
piqp-octave/src/eigen/scripts/eigen_gen_credits.cpp0000644000175100001770000001436014107270226022366 0ustar runnerdocker#include #include #include #include #include #include #include using namespace std; // this function takes a line that may contain a name and/or email address, // and returns just the name, while fixing the "bad cases". std::string contributor_name(const std::string& line) { string result; // let's first take care of the case of isolated email addresses, like // "user@localhost.localdomain" entries if(line.find("markb@localhost.localdomain") != string::npos) { return "Mark Borgerding"; } if(line.find("kayhman@contact.intra.cea.fr") != string::npos) { return "Guillaume Saupin"; } // from there on we assume that we have a entry of the form // either: // Bla bli Blurp // or: // Bla bli Blurp size_t position_of_email_address = line.find_first_of('<'); if(position_of_email_address != string::npos) { // there is an e-mail address in <...>. // Hauke once committed as "John Smith", fix that. if(line.find("hauke.heibel") != string::npos) result = "Hauke Heibel"; else { // just remove the e-mail address result = line.substr(0, position_of_email_address); } } else { // there is no e-mail address in <...>. if(line.find("convert-repo") != string::npos) result = ""; else result = line; } // remove trailing spaces size_t length = result.length(); while(length >= 1 && result[length-1] == ' ') result.erase(--length); return result; } // parses hg churn output to generate a contributors map. map contributors_map_from_churn_output(const char *filename) { map contributors_map; string line; ifstream churn_out; churn_out.open(filename, ios::in); while(!getline(churn_out,line).eof()) { // remove the histograms "******" that hg churn may draw at the end of some lines size_t first_star = line.find_first_of('*'); if(first_star != string::npos) line.erase(first_star); // remove trailing spaces size_t length = line.length(); while(length >= 1 && line[length-1] == ' ') line.erase(--length); // now the last space indicates where the number starts size_t last_space = line.find_last_of(' '); // get the number (of changesets or of modified lines for each contributor) int number; istringstream(line.substr(last_space+1)) >> number; // get the name of the contributor line.erase(last_space); string name = contributor_name(line); map::iterator it = contributors_map.find(name); // if new contributor, insert if(it == contributors_map.end()) contributors_map.insert(pair(name, number)); // if duplicate, just add the number else it->second += number; } churn_out.close(); return contributors_map; } // find the last name, i.e. the last word. // for "van den Schbling" types of last names, that's not a problem, that's actually what we want. 
string lastname(const string& name) { size_t last_space = name.find_last_of(' '); if(last_space >= name.length()-1) return name; else return name.substr(last_space+1); } struct contributor { string name; int changedlines; int changesets; string url; string misc; contributor() : changedlines(0), changesets(0) {} bool operator < (const contributor& other) { return lastname(name).compare(lastname(other.name)) < 0; } }; void add_online_info_into_contributors_list(list& contributors_list, const char *filename) { string line; ifstream online_info; online_info.open(filename, ios::in); while(!getline(online_info,line).eof()) { string hgname, realname, url, misc; size_t last_bar = line.find_last_of('|'); if(last_bar == string::npos) continue; if(last_bar < line.length()) misc = line.substr(last_bar+1); line.erase(last_bar); last_bar = line.find_last_of('|'); if(last_bar == string::npos) continue; if(last_bar < line.length()) url = line.substr(last_bar+1); line.erase(last_bar); last_bar = line.find_last_of('|'); if(last_bar == string::npos) continue; if(last_bar < line.length()) realname = line.substr(last_bar+1); line.erase(last_bar); hgname = line; // remove the example line if(hgname.find("MercurialName") != string::npos) continue; list::iterator it; for(it=contributors_list.begin(); it != contributors_list.end() && it->name != hgname; ++it) {} if(it == contributors_list.end()) { contributor c; c.name = realname; c.url = url; c.misc = misc; contributors_list.push_back(c); } else { it->name = realname; it->url = url; it->misc = misc; } } } int main() { // parse the hg churn output files map contributors_map_for_changedlines = contributors_map_from_churn_output("churn-changedlines.out"); //map contributors_map_for_changesets = contributors_map_from_churn_output("churn-changesets.out"); // merge into the contributors list list contributors_list; map::iterator it; for(it=contributors_map_for_changedlines.begin(); it != contributors_map_for_changedlines.end(); ++it) { contributor c; c.name = it->first; c.changedlines = it->second; c.changesets = 0; //contributors_map_for_changesets.find(it->first)->second; contributors_list.push_back(c); } add_online_info_into_contributors_list(contributors_list, "online-info.out"); contributors_list.sort(); cout << "{| cellpadding=\"5\"\n"; cout << "!\n"; cout << "! 
Lines changed\n"; cout << "!\n"; list::iterator itc; int i = 0; for(itc=contributors_list.begin(); itc != contributors_list.end(); ++itc) { if(itc->name.length() == 0) continue; if(i%2) cout << "|-\n"; else cout << "|- style=\"background:#FFFFD0\"\n"; if(itc->url.length()) cout << "| [" << itc->url << " " << itc->name << "]\n"; else cout << "| " << itc->name << "\n"; if(itc->changedlines) cout << "| " << itc->changedlines << "\n"; else cout << "| (no information)\n"; cout << "| " << itc->misc << "\n"; i++; } cout << "|}" << endl; } piqp-octave/src/eigen/scripts/eigen_gen_split_test_help.cmake0000644000175100001770000000050314107270226024423 0ustar runnerdocker#!cmake -P file(WRITE split_test_helper.h "") foreach(i RANGE 1 999) file(APPEND split_test_helper.h "#if defined(EIGEN_TEST_PART_${i}) || defined(EIGEN_TEST_PART_ALL)\n" "#define CALL_SUBTEST_${i}(FUNC) CALL_SUBTEST(FUNC)\n" "#else\n" "#define CALL_SUBTEST_${i}(FUNC)\n" "#endif\n\n" ) endforeach()piqp-octave/src/eigen/scripts/eigen_gen_docs0000644000175100001770000000134214107270226021074 0ustar runnerdocker#!/bin/sh # configuration # You should call this script with USER set as you want, else some default # will be used USER=${USER:-'orzel'} UPLOAD_DIR=dox-devel #ulimit -v 1024000 # step 1 : build rm build/doc/html -Rf mkdir build -p (cd build && cmake .. && make doc) || { echo "make failed"; exit 1; } #step 2 : upload # (the '/' at the end of path is very important, see rsync documentation) rsync -az --no-p --delete build/doc/html/ $USER@ssh.tuxfamily.org:eigen/eigen.tuxfamily.org-web/htdocs/$UPLOAD_DIR/ || { echo "upload failed"; exit 1; } #step 3 : fix the perm ssh $USER@ssh.tuxfamily.org "chmod -R g+w /home/eigen/eigen.tuxfamily.org-web/htdocs/$UPLOAD_DIR" || { echo "perm failed"; exit 1; } echo "Uploaded successfully" piqp-octave/src/eigen/scripts/buildtests.in0000755000175100001770000000110114107270226020727 0ustar runnerdocker#!/bin/bash if [[ $# != 1 || $1 == *help ]] then echo "usage: $0 regexp" echo " Builds tests matching the regexp." echo " The EIGEN_MAKE_ARGS environment variable allows to pass args to 'make'." echo " For example, to launch 5 concurrent builds, use EIGEN_MAKE_ARGS='-j5'" exit 0 fi TESTSLIST="@EIGEN_TESTS_LIST@" targets_to_make=$(echo "$TESTSLIST" | grep -E "$1" | xargs echo) if [ -n "${EIGEN_MAKE_ARGS:+x}" ] then @CMAKE_MAKE_PROGRAM@ $targets_to_make ${EIGEN_MAKE_ARGS} else @CMAKE_MAKE_PROGRAM@ $targets_to_make @EIGEN_TEST_BUILD_FLAGS@ fi exit $? 
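A hedged sketch of how the split_test_helper.h generated by eigen_gen_split_test_help.cmake above is typically consumed; the fallback CALL_SUBTEST macro and the test body are assumptions added for self-containment and are not files from this repository:

// Compile with -DEIGEN_TEST_PART_2 to keep only the second subtest;
// CALL_SUBTEST_n expands to nothing for parts that are not enabled.
#ifndef CALL_SUBTEST
#define CALL_SUBTEST(FUNC) do { FUNC; } while (0)   // assumed harness macro
#endif
#include "split_test_helper.h"

#include <iostream>

template <typename Scalar>
void run_case() { std::cout << "running one subtest\n"; }

int main() {
    CALL_SUBTEST_1(run_case<float>());
    CALL_SUBTEST_2(run_case<double>());
    return 0;
}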
piqp-octave/src/eigen/scripts/CMakeLists.txt0000644000175100001770000000051014107270226020755 0ustar runnerdockerget_property(EIGEN_TESTS_LIST GLOBAL PROPERTY EIGEN_TESTS_LIST) configure_file(buildtests.in ${CMAKE_BINARY_DIR}/buildtests.sh @ONLY) configure_file(check.in ${CMAKE_BINARY_DIR}/check.sh COPYONLY) configure_file(debug.in ${CMAKE_BINARY_DIR}/debug.sh COPYONLY) configure_file(release.in ${CMAKE_BINARY_DIR}/release.sh COPYONLY) piqp-octave/src/eigen/scripts/cdashtesting.cmake.in0000644000175100001770000000304114107270226022306 0ustar runnerdocker set(CTEST_SOURCE_DIRECTORY "@CMAKE_SOURCE_DIR@") set(CTEST_BINARY_DIRECTORY "@CMAKE_BINARY_DIR@") set(CTEST_CMAKE_GENERATOR "@CMAKE_GENERATOR@") set(CTEST_BUILD_NAME "@BUILDNAME@") set(CTEST_SITE "@SITE@") set(MODEL Experimental) if(${CTEST_SCRIPT_ARG} MATCHES Nightly) set(MODEL Nightly) elseif(${CTEST_SCRIPT_ARG} MATCHES Continuous) set(MODEL Continuous) endif() find_program(CTEST_GIT_COMMAND NAMES git) set(CTEST_UPDATE_COMMAND "${CTEST_GIT_COMMAND}") ctest_start(${MODEL} ${CTEST_SOURCE_DIRECTORY} ${CTEST_BINARY_DIRECTORY}) ctest_update(SOURCE "${CTEST_SOURCE_DIRECTORY}") ctest_submit(PARTS Update Notes) # to get CTEST_PROJECT_SUBPROJECTS definition: include("${CTEST_SOURCE_DIRECTORY}/CTestConfig.cmake") foreach(subproject ${CTEST_PROJECT_SUBPROJECTS}) message("") message("Process ${subproject}") set_property(GLOBAL PROPERTY SubProject ${subproject}) set_property(GLOBAL PROPERTY Label ${subproject}) ctest_configure(BUILD ${CTEST_BINARY_DIRECTORY} SOURCE ${CTEST_SOURCE_DIRECTORY} ) ctest_submit(PARTS Configure) set(CTEST_BUILD_TARGET "Build${subproject}") message("Build ${CTEST_BUILD_TARGET}") ctest_build(BUILD "${CTEST_BINARY_DIRECTORY}" APPEND) # builds target ${CTEST_BUILD_TARGET} ctest_submit(PARTS Build) ctest_test(BUILD "${CTEST_BINARY_DIRECTORY}" INCLUDE_LABEL "${subproject}" ) # runs only tests that have a LABELS property matching "${subproject}" ctest_coverage(BUILD "${CTEST_BINARY_DIRECTORY}" LABELS "${subproject}" ) ctest_submit(PARTS Test) endforeach() piqp-octave/src/eigen/scripts/release.in0000755000175100001770000000005614107270226020175 0ustar runnerdocker#!/bin/sh cmake -DCMAKE_BUILD_TYPE=Release . piqp-octave/src/eigen/scripts/eigen_monitor_perf.sh0000755000175100001770000000176114107270226022437 0ustar runnerdocker#!/bin/bash # This is a script example to automatically update and upload performance unit tests. # The following five variables must be adjusted to match your settings. 
USER='ggael' UPLOAD_DIR=perf_monitoring/ggaelmacbook26 EIGEN_SOURCE_PATH=$HOME/Eigen/eigen export PREFIX="haswell-fma" export CXX_FLAGS="-mfma -w" #### BENCH_PATH=$EIGEN_SOURCE_PATH/bench/perf_monitoring/$PREFIX PREVPATH=$(pwd) cd $EIGEN_SOURCE_PATH/bench/perf_monitoring && ./runall.sh "Haswell 2.6GHz, FMA, Apple's clang" "$@" cd $PREVPATH || exit 1 ALLFILES="$BENCH_PATH/*.png $BENCH_PATH/*.html $BENCH_PATH/index.html $BENCH_PATH/s1.js $BENCH_PATH/s2.js" # (the '/' at the end of path is very important, see rsync documentation) rsync -az --no-p --delete $ALLFILES $USER@ssh.tuxfamily.org:eigen/eigen.tuxfamily.org-web/htdocs/$UPLOAD_DIR/ || { echo "upload failed"; exit 1; } # fix the perm ssh $USER@ssh.tuxfamily.org "chmod -R g+w /home/eigen/eigen.tuxfamily.org-web/htdocs/perf_monitoring" || { echo "perm failed"; exit 1; } piqp-octave/src/eigen/doc/0000755000175100001770000000000014107270226015277 5ustar runnerdockerpiqp-octave/src/eigen/doc/TutorialLinearAlgebra.dox0000644000175100001770000002712414107270226022235 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TutorialLinearAlgebra Linear algebra and decompositions This page explains how to solve linear systems, compute various decompositions such as LU, QR, %SVD, eigendecompositions... After reading this page, don't miss our \link TopicLinearAlgebraDecompositions catalogue \endlink of dense matrix decompositions. \eigenAutoToc \section TutorialLinAlgBasicSolve Basic linear solving \b The \b problem: You have a system of equations, that you have written as a single matrix equation \f[ Ax \: = \: b \f] Where \a A and \a b are matrices (\a b could be a vector, as a special case). You want to find a solution \a x. \b The \b solution: You can choose between various decompositions, depending on the properties of your matrix \a A, and depending on whether you favor speed or accuracy. However, let's start with an example that works in all cases, and is a good compromise:
Example:Output:
\include TutorialLinAlgExSolveColPivHouseholderQR.cpp \verbinclude TutorialLinAlgExSolveColPivHouseholderQR.out
In this example, the colPivHouseholderQr() method returns an object of class ColPivHouseholderQR. Since here the matrix is of type Matrix3f, this line could have been replaced by: \code ColPivHouseholderQR<Matrix3f> dec(A); Vector3f x = dec.solve(b); \endcode Here, ColPivHouseholderQR is a QR decomposition with column pivoting. It's a good compromise for this tutorial, as it works for all matrices while being quite fast. Here is a table of some other decompositions that you can choose from, depending on your matrix, the problem you are trying to solve, and the trade-off you want to make:
Decomposition Method Requirements
on the matrix
Speed
(small-to-medium)
Speed
(large)
Accuracy
PartialPivLU partialPivLu() Invertible ++ ++ +
FullPivLU fullPivLu() None - - - +++
HouseholderQR householderQr() None ++ ++ +
ColPivHouseholderQR colPivHouseholderQr() None + - +++
FullPivHouseholderQR fullPivHouseholderQr() None - - - +++
CompleteOrthogonalDecomposition completeOrthogonalDecomposition() None + - +++
LLT llt() Positive definite +++ +++ +
LDLT ldlt() Positive or negative
semidefinite
+++ + ++
BDCSVD bdcSvd() None - - +++
JacobiSVD jacobiSvd() None - - - - +++
To get an overview of the true relative speed of the different decompositions, check this \link DenseDecompositionBenchmark benchmark \endlink. All of these decompositions offer a solve() method that works as in the above example. If you know more about the properties of your matrix, you can use the above table to select the best method. For example, a good choice for solving linear systems with a non-symmetric matrix of full rank is PartialPivLU. If you know that your matrix is also symmetric and positive definite, the above table says that a very good choice is the LLT or LDLT decomposition. Here's an example, also demonstrating that using a general matrix (not a vector) as right hand side is possible:
Example:Output:
\include TutorialLinAlgExSolveLDLT.cpp \verbinclude TutorialLinAlgExSolveLDLT.out
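In case the included snippet is not rendered in this extract, a minimal hand-written sketch of the same idea (matrix values are arbitrary and purely illustrative) could look like:
\code
Matrix2f A, b;
A << 2, -1,
     -1, 3;
b << 1, 2,
     3, 1;
Matrix2f x = A.ldlt().solve(b);   // solves A * x = b, one column of b at a time
\endcode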
For a \ref TopicLinearAlgebraDecompositions "much more complete table" comparing all decompositions supported by Eigen (notice that Eigen supports many other decompositions), see our special page on \ref TopicLinearAlgebraDecompositions "this topic". \section TutorialLinAlgLeastsquares Least squares solving The most general and accurate method to solve under- or over-determined linear systems in the least squares sense is the SVD decomposition. Eigen provides two implementations. The recommended one is the BDCSVD class, which scales well for large problems and automatically falls back to the JacobiSVD class for smaller problems. For both classes, the solve() method solves the linear system in the least-squares sense. Here is an example:
Example:Output:
\include TutorialLinAlgSVDSolve.cpp \verbinclude TutorialLinAlgSVDSolve.out
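If the included file is not visible here, a minimal sketch of such a least-squares solve (sizes and values are illustrative only) is:
\code
MatrixXf A = MatrixXf::Random(3, 2);
VectorXf b = VectorXf::Random(3);
// Least-squares solution of the over-determined system A * x = b:
VectorXf x = A.bdcSvd(ComputeThinU | ComputeThinV).solve(b);
\endcode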
An alternative to the SVD, which is usually faster and about as accurate, is CompleteOrthogonalDecomposition. Again, if you know more about the problem, the table above contains methods that are potentially faster. If your matrix is full rank, HouseholderQR is the method of choice. If your matrix is full rank and well conditioned, using the Cholesky decomposition (LLT) on the matrix of the normal equations can be faster still. Our page on \link LeastSquares least squares solving \endlink has more details. \section TutorialLinAlgSolutionExists Checking if a matrix is singular Only you know what error margin you want to allow for a solution to be considered valid. So Eigen lets you do this computation for yourself, if you want to, as in this example:
Example:Output:
\include TutorialLinAlgExComputeSolveError.cpp \verbinclude TutorialLinAlgExComputeSolveError.out
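For readers without the rendered snippet, the check essentially boils down to something like the following sketch (the solver choice is arbitrary):
\code
MatrixXd A = MatrixXd::Random(100, 100);
MatrixXd b = MatrixXd::Random(100, 50);
MatrixXd x = A.fullPivLu().solve(b);
double relative_error = (A * x - b).norm() / b.norm();   // norm() is the L2 norm
\endcode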
\section TutorialLinAlgEigensolving Computing eigenvalues and eigenvectors You need an eigendecomposition here; see the available decompositions on \ref TopicLinearAlgebraDecompositions "this page". Make sure to check if your matrix is self-adjoint, as is often the case in these problems. Here's an example using SelfAdjointEigenSolver; it could easily be adapted to general matrices using EigenSolver or ComplexEigenSolver. The computation of eigenvalues and eigenvectors does not necessarily converge, but such failure to converge is very rare. The call to info() is to check for this possibility.
Example:Output:
\include TutorialLinAlgSelfAdjointEigenSolver.cpp \verbinclude TutorialLinAlgSelfAdjointEigenSolver.out
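A minimal sketch of such a computation (the matrix values are illustrative; it assumes the usual includes and using-directives of the tutorial snippets):
\code
Matrix2f A;
A << 1, 2,
     2, 3;
SelfAdjointEigenSolver<Matrix2f> es(A);
if (es.info() != Success) { /* handle the (rare) failure to converge */ }
std::cout << "eigenvalues:\n"  << es.eigenvalues()  << std::endl;
std::cout << "eigenvectors:\n" << es.eigenvectors() << std::endl;
\endcode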
\section TutorialLinAlgInverse Computing inverse and determinant First of all, make sure that you really want this. While inverse and determinant are fundamental mathematical concepts, in \em numerical linear algebra they are not as useful as in pure mathematics. Inverse computations are often advantageously replaced by solve() operations, and the determinant is often \em not a good way of checking if a matrix is invertible. However, for \em very \em small matrices, the above may not be true, and inverse and determinant can be very useful. While certain decompositions, such as PartialPivLU and FullPivLU, offer inverse() and determinant() methods, you can also call inverse() and determinant() directly on a matrix. If your matrix is of a very small fixed size (at most 4x4), this allows Eigen to avoid performing an LU decomposition, and instead use formulas that are more efficient on such small matrices. Here is an example:
Example:Output:
\include TutorialLinAlgInverseDeterminant.cpp \verbinclude TutorialLinAlgInverseDeterminant.out
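If the included snippet is not shown in this extract, a small sketch with an arbitrary invertible matrix is:
\code
Matrix3f A;
A << 1, 2, 1,
     2, 1, 0,
     -1, 1, 2;
std::cout << "determinant: " << A.determinant() << std::endl;
std::cout << "inverse:\n"    << A.inverse()     << std::endl;
\endcode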
\section TutorialLinAlgSeparateComputation Separating the computation from the construction In the above examples, the decomposition was computed at the same time that the decomposition object was constructed. There are however situations where you might want to separate these two things, for example if you don't know, at the time of the construction, the matrix that you will want to decompose; or if you want to reuse an existing decomposition object. What makes this possible is that: \li all decompositions have a default constructor, \li all decompositions have a compute(matrix) method that does the computation, and that may be called again on an already-computed decomposition, reinitializing it. For example:
Example:Output:
\include TutorialLinAlgComputeTwice.cpp \verbinclude TutorialLinAlgComputeTwice.out
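A compact sketch of this pattern (A1, A2, b1, b2, x1 and x2 are assumed to be declared elsewhere):
\code
LLT<Matrix3f> llt;      // default constructor: nothing is computed yet
llt.compute(A1);
x1 = llt.solve(b1);
llt.compute(A2);        // re-use the same object to decompose another matrix
x2 = llt.solve(b2);
\endcode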
Finally, you can tell the decomposition constructor to preallocate storage for decomposing matrices of a given size, so that when you subsequently decompose such matrices, no dynamic memory allocation is performed (of course, if you are using fixed-size matrices, no dynamic memory allocation happens at all). This is done by just passing the size to the decomposition constructor, as in this example: \code HouseholderQR<MatrixXf> qr(50,50); MatrixXf A = MatrixXf::Random(50,50); qr.compute(A); // no dynamic memory allocation \endcode \section TutorialLinAlgRankRevealing Rank-revealing decompositions Certain decompositions are rank-revealing, i.e., they are able to compute the rank of a matrix. These are typically also the decompositions that behave best in the face of a non-full-rank matrix (which in the square case means a singular matrix). On \ref TopicLinearAlgebraDecompositions "this table" you can see for all our decompositions whether they are rank-revealing or not. Rank-revealing decompositions offer at least a rank() method. They can also offer convenience methods such as isInvertible(), and some also provide methods to compute the kernel (null-space) and image (column-space) of the matrix, as is the case with FullPivLU:
Example:Output:
\include TutorialLinAlgRankRevealing.cpp \verbinclude TutorialLinAlgRankRevealing.out
Of course, any rank computation depends on the choice of an arbitrary threshold, since practically no floating-point matrix is \em exactly rank-deficient. Eigen picks a sensible default threshold, which depends on the decomposition but is typically the diagonal size times machine epsilon. While this is the best default we could pick, only you know what is the right threshold for your application. You can set this by calling setThreshold() on your decomposition object before calling rank() or any other method that needs to use such a threshold. The decomposition itself, i.e. the compute() method, is independent of the threshold. You don't need to recompute the decomposition after you've changed the threshold.
Example:Output:
\include TutorialLinAlgSetThreshold.cpp \verbinclude TutorialLinAlgSetThreshold.out
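For readers without the rendered snippet, a minimal sketch of this workflow (the matrix is nearly rank-deficient on purpose; the commented ranks are what one would typically observe):
\code
Matrix2d A;
A << 2, 1,
     2, 0.9999999999;
FullPivLU<Matrix2d> lu(A);
std::cout << "rank with default threshold: " << lu.rank() << std::endl;   // typically 2
lu.setThreshold(1e-5);
std::cout << "rank with threshold 1e-5:    " << lu.rank() << std::endl;   // typically 1
\endcode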
*/ } piqp-octave/src/eigen/doc/Eigen_Silly_Professor_64x64.png0000644000175100001770000002024314107270226023126 0ustar runnerdocker[binary PNG image data (Eigen "Silly Professor" 64x64 logo) omitted]
piqp-octave/src/eigen/doc/TopicAssertions.dox0000644000175100001770000001224514107270226021150 0ustar runnerdockernamespace Eigen { /** \page TopicAssertions Assertions \eigenAutoToc \section PlainAssert Assertions The macro eigen_assert is defined to be \c eigen_plain_assert by default. We use eigen_plain_assert instead of \c assert to work around a known bug for GCC <= 4.3. Basically, eigen_plain_assert \a is \c assert. \subsection RedefineAssert Redefining assertions Both eigen_assert and eigen_plain_assert are defined in Macros.h. Defining eigen_assert indirectly gives you a chance to change its behavior. You can redefine this macro if you want to do something else such as throwing an exception, and fall back to its default behavior with eigen_plain_assert.
The code below tells Eigen to throw an std::runtime_error: \code #include <stdexcept> #undef eigen_assert #define eigen_assert(x) \ if (!(x)) { throw (std::runtime_error("Put your message here")); } \endcode \subsection DisableAssert Disabling assertions Assertions cost run time and can be turned off. You can suppress eigen_assert by defining \c EIGEN_NO_DEBUG \b before including Eigen headers. \c EIGEN_NO_DEBUG is undefined by default unless \c NDEBUG is defined. \section StaticAssert Static assertions Static assertions were not standardized until C++11. However, in the Eigen library, there are many conditions that can and should be detected at compile time. For instance, we use static assertions to prevent the code below from compiling. \code Matrix3d() + Matrix4d(); // adding matrices of different sizes Matrix4cd() * Vector3cd(); // invalid product known at compile time \endcode Static assertions are defined in StaticAssert.h. If there is native static_assert, we use it. Otherwise, we have implemented an assertion macro that can show a limited range of messages. One can easily come up with static assertions without messages, such as: \code #define STATIC_ASSERT(x) \ switch(0) { case 0: case x:; } \endcode However, the example above obviously cannot tell why the assertion failed. Therefore, we define a \c struct in namespace Eigen::internal to handle available messages. \code template<bool condition> struct static_assertion {}; template<> struct static_assertion<true> { enum { YOU_TRIED_CALLING_A_VECTOR_METHOD_ON_A_MATRIX, YOU_MIXED_VECTORS_OF_DIFFERENT_SIZES, // see StaticAssert.h for all enums. }; }; \endcode And then, we define EIGEN_STATIC_ASSERT(CONDITION,MSG) to access Eigen::internal::static_assertion<true>::MSG. If the condition evaluates to \c false, your compiler displays a lot of messages explaining there is no MSG in static_assertion<true>. Nevertheless, this is \a not what we are interested in. As you can see, all members of static_assertion<true> are ALL_CAPS_AND_THEY_ARE_SHOUTING. \warning When using this macro, MSG should be a member of static_assertion<true>, or the static assertion \b always fails. Currently, it can only be used in function scope. \subsection DerivedStaticAssert Derived static assertions There are other macros derived from EIGEN_STATIC_ASSERT to enhance readability. Their names are self-explanatory. - \b EIGEN_STATIC_ASSERT_FIXED_SIZE(TYPE) - passes if \a TYPE is fixed size. - \b EIGEN_STATIC_ASSERT_DYNAMIC_SIZE(TYPE) - passes if \a TYPE is dynamic size. - \b EIGEN_STATIC_ASSERT_LVALUE(Derived) - fails if \a Derived is read-only. - \b EIGEN_STATIC_ASSERT_ARRAYXPR(Derived) - passes if \a Derived is an array expression. - EIGEN_STATIC_ASSERT_SAME_XPR_KIND(Derived1, Derived2) - fails if the two expressions are an array one and a matrix one. Because Eigen handles both fixed-size and dynamic-size expressions, some conditions cannot be clearly determined at compile time. We classify them into strict assertions and permissive assertions. \subsubsection StrictAssertions Strict assertions These assertions fail if the condition may not be met. For example, MatrixXd may not be a vector, so it fails EIGEN_STATIC_ASSERT_VECTOR_ONLY. - \b EIGEN_STATIC_ASSERT_VECTOR_ONLY(TYPE) - passes if \a TYPE must be a vector type. - EIGEN_STATIC_ASSERT_VECTOR_SPECIFIC_SIZE(TYPE, SIZE) - passes if \a TYPE must be a vector of the given size. - EIGEN_STATIC_ASSERT_MATRIX_SPECIFIC_SIZE(TYPE, ROWS, COLS) - passes if \a TYPE must be a matrix with given rows and columns.
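As an illustration of the strict assertions above in function scope, here is a small hedged sketch (the helper function is made up for this example):
\code
#include <Eigen/Core>

template<typename VectorType>
void normalize_in_place(VectorType& v)
{
  EIGEN_STATIC_ASSERT_VECTOR_ONLY(VectorType);   // rejects matrix types at compile time
  v /= v.norm();
}
\endcode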
\subsubsection PermissiveAssertions Permissive assertions These assertions fail if the condition \b cannot be met. For example, MatrixXd and Matrix4d may have the same size, so they pass EIGEN_STATIC_ASSERT_SAME_MATRIX_SIZE. - \b EIGEN_STATIC_ASSERT_SAME_VECTOR_SIZE(TYPE0,TYPE1) - fails if the two vector expression types must have different sizes. - \b EIGEN_STATIC_ASSERT_SAME_MATRIX_SIZE(TYPE0,TYPE1) - fails if the two matrix expression types must have different sizes. - \b EIGEN_STATIC_ASSERT_SIZE_1x1(TYPE) - fails if \a TYPE cannot be an 1x1 expression. See StaticAssert.h for details such as what messages they throw. \subsection DisableStaticAssert Disabling static assertions If \c EIGEN_NO_STATIC_ASSERT is defined, static assertions turn into eigen_assert's, working like: \code #define EIGEN_STATIC_ASSERT(CONDITION,MSG) eigen_assert((CONDITION) && #MSG); \endcode This saves compile time but consumes more run time. \c EIGEN_NO_STATIC_ASSERT is undefined by default. */ } piqp-octave/src/eigen/doc/TutorialGeometry.dox0000644000175100001770000002300314107270226021330 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TutorialGeometry Space transformations In this page, we will introduce the many possibilities offered by the \ref Geometry_Module "geometry module" to deal with 2D and 3D rotations and projective or affine transformations. \eigenAutoToc Eigen's Geometry module provides two different kinds of geometric transformations: - Abstract transformations, such as rotations (represented by \ref AngleAxis "angle and axis" or by a \ref Quaternion "quaternion"), \ref Translation "translations", \ref Scaling "scalings". These transformations are NOT represented as matrices, but you can nevertheless mix them with matrices and vectors in expressions, and convert them to matrices if you wish. - Projective or affine transformation matrices: see the Transform class. These are really matrices. \note If you are working with OpenGL 4x4 matrices then Affine3f and Affine3d are what you want. Since Eigen defaults to column-major storage, you can directly use the Transform::data() method to pass your transformation matrix to OpenGL. You can construct a Transform from an abstract transformation, like this: \code Transform t(AngleAxis(angle,axis)); \endcode or like this: \code Transform t; t = AngleAxis(angle,axis); \endcode But note that unfortunately, because of how C++ works, you can \b not do this: \code Transform t = AngleAxis(angle,axis); \endcode \b Explanation: In the C++ language, this would require Transform to have a non-explicit conversion constructor from AngleAxis, but we really don't want to allow implicit casting here. \section TutorialGeoElementaryTransformations Transformation types
Transformation typeTypical initialization code
\ref Rotation2D "2D rotation" from an angle\code Rotation2D rot2(angle_in_radian);\endcode
3D rotation as an \ref AngleAxis "angle + axis"\code AngleAxis<float> aa(angle_in_radian, Vector3f(ax,ay,az));\endcode The axis vector must be normalized.
3D rotation as a \ref Quaternion "quaternion"\code Quaternion<float> q; q = AngleAxis<float>(angle_in_radian, axis);\endcode
N-D Scaling\code Scaling(sx, sy) Scaling(sx, sy, sz) Scaling(s) Scaling(vecN)\endcode
N-D Translation\code Translation<float,2>(tx, ty) Translation<float,3>(tx, ty, tz) Translation<float,N>(s) Translation<float,N>(vecN)\endcode
N-D \ref TutorialGeoTransform "Affine transformation"\code Transform<float,N,Affine> t = concatenation_of_any_transformations; Transform<float,3,Affine> t = Translation3f(p) * AngleAxisf(a,axis) * Scaling(s);\endcode
N-D Linear transformations \n (pure rotations, \n scaling, etc.)\code Matrix<float,N> t = concatenation_of_rotations_and_scalings; Matrix<float,2> t = Rotation2Df(a) * Scaling(s); Matrix<float,3> t = AngleAxisf(a,axis) * Scaling(s);\endcode
Notes on rotations\n To transform more than a single vector the preferred representations are rotation matrices, while for other usages Quaternion is the representation of choice as they are compact, fast and stable. Finally Rotation2D and AngleAxis are mainly convenient types to create other rotation objects. Notes on Translation and Scaling\n Like AngleAxis, these classes were designed to simplify the creation/initialization of linear (Matrix) and affine (Transform) transformations. Nevertheless, unlike AngleAxis which is inefficient to use, these classes might still be interesting to write generic and efficient algorithms taking as input any kind of transformations. Any of the above transformation types can be converted to any other types of the same nature, or to a more generic type. Here are some additional examples:
\code Rotation2Df r; r = Matrix2f(..); // assumes a pure rotation matrix AngleAxisf aa; aa = Quaternionf(..); AngleAxisf aa; aa = Matrix3f(..); // assumes a pure rotation matrix Matrix2f m; m = Rotation2Df(..); Matrix3f m; m = Quaternionf(..); Matrix3f m; m = Scaling(..); Affine3f m; m = AngleAxis3f(..); Affine3f m; m = Scaling(..); Affine3f m; m = Translation3f(..); Affine3f m; m = Matrix3f(..); \endcode
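A small sketch combining several of the types above into a single transformation and applying it to a point (the numeric values are arbitrary):
\code
// Compose translation * rotation * scaling, then apply it to a point:
Affine3f t = Translation3f(1.0f, 2.0f, 3.0f)
           * AngleAxisf(0.5f, Vector3f::UnitZ())
           * Scaling(2.0f);
Vector3f p(1.0f, 0.0f, 0.0f);
Vector3f q = t * p;   // scales p, then rotates it around Z, then translates it
\endcode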
top\section TutorialGeoCommontransformationAPI Common API across transformation types To some extent, Eigen's \ref Geometry_Module "geometry module" allows you to write generic algorithms working on any kind of transformation representations:
Concatenation of two transformations\code gen1 * gen2;\endcode
Apply the transformation to a vector\code vec2 = gen1 * vec1;\endcode
Get the inverse of the transformation\code gen2 = gen1.inverse();\endcode
Spherical interpolation \n (Rotation2D and Quaternion only)\code rot3 = rot1.slerp(alpha,rot2);\endcode
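For instance, a sketch of the interpolation entry applied to quaternions (the angles are chosen arbitrarily):
\code
Quaternionf q1(AngleAxisf(0.0f, Vector3f::UnitZ()));
Quaternionf q2(AngleAxisf(1.0f, Vector3f::UnitZ()));
Quaternionf qm = q1.slerp(0.5f, q2);     // rotation "half way" between q1 and q2
Vector3f v = qm * Vector3f::UnitX();     // apply the interpolated rotation to a vector
\endcode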
top\section TutorialGeoTransform Affine transformations Generic affine transformations are represented by the Transform class which internally is a (Dim+1)^2 matrix. In Eigen we have chosen to not distinguish between points and vectors such that all points are actually represented by displacement vectors from the origin ( \f$ \mathbf{p} \equiv \mathbf{p}-0 \f$ ). With that in mind, points and vectors are treated differently only when the transformation is applied.
Apply the transformation to a \b point \code VectorNf p1, p2; p2 = t * p1;\endcode
Apply the transformation to a \b vector \code VectorNf vec1, vec2; vec2 = t.linear() * vec1;\endcode
Apply a \em general transformation \n to a \b normal \b vector \n \code VectorNf n1, n2; MatrixNf normalMatrix = t.linear().inverse().transpose(); n2 = (normalMatrix * n1).normalized();\endcode
(See subject 5.27 of this faq for the explanations)
Apply a transformation with \em pure \em rotation \n to a \b normal \b vector (no scaling, no shear)\code n2 = t.linear() * n1;\endcode
OpenGL compatibility \b 3D \code glLoadMatrixf(t.data());\endcode
OpenGL compatibility \b 2D \code Affine3f aux(Affine3f::Identity()); aux.linear().topLeftCorner<2,2>() = t.linear(); aux.translation().start<2>() = t.translation(); glLoadMatrixf(aux.data());\endcode
\b Component \b accessors
full read-write access to the internal matrix\code t.matrix() = matN1xN1; // N1 means N+1 matN1xN1 = t.matrix(); \endcode
coefficient accessors\code t(i,j) = scalar; <=> t.matrix()(i,j) = scalar; scalar = t(i,j); <=> scalar = t.matrix()(i,j); \endcode
translation part\code t.translation() = vecN; vecN = t.translation(); \endcode
linear part\code t.linear() = matNxN; matNxN = t.linear(); \endcode
extract the rotation matrix\code matNxN = t.rotation(); \endcode
\b Transformation \b creation \n While transformation objects can be created and updated by concatenating elementary transformations, the Transform class also features a procedural API:
procedural APIequivalent natural API
Translation\code t.translate(Vector_(tx,ty,..)); t.pretranslate(Vector_(tx,ty,..)); \endcode\code t *= Translation_(tx,ty,..); t = Translation_(tx,ty,..) * t; \endcode
\b Rotation \n In 2D and for the procedural API, any_rotation can also \n be an angle in radian\code t.rotate(any_rotation); t.prerotate(any_rotation); \endcode\code t *= any_rotation; t = any_rotation * t; \endcode
Scaling\code t.scale(Vector_(sx,sy,..)); t.scale(s); t.prescale(Vector_(sx,sy,..)); t.prescale(s); \endcode\code t *= Scaling(sx,sy,..); t *= Scaling(s); t = Scaling(sx,sy,..) * t; t = Scaling(s) * t; \endcode
Shear transformation \n ( \b 2D \b only ! )\code t.shear(sx,sy); t.preshear(sx,sy); \endcode
Note that in both APIs, many transformations can be concatenated in a single expression, as shown in the two following equivalent examples:
\code t.pretranslate(..).rotate(..).translate(..).scale(..); \endcode
\code t = Translation_(..) * t * RotationType(..) * Translation_(..) * Scaling(..); \endcode
top\section TutorialGeoEulerAngles Euler angles
Euler angles might be convenient to create rotation objects. On the other hand, since there exist 24 different conventions, they are pretty confusing to use. This example shows how to create a rotation matrix according to the 2-1-2 convention.\code Matrix3f m; m = AngleAxisf(angle1, Vector3f::UnitZ()) * AngleAxisf(angle2, Vector3f::UnitY()) * AngleAxisf(angle3, Vector3f::UnitZ()); \endcode
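Conversely, the angles of a given rotation matrix can be recovered with MatrixBase::eulerAngles(); a one-line sketch matching the axes used above:
\code
Vector3f angles = m.eulerAngles(2, 1, 2);   // recover the Z-Y-Z angles of the matrix m above
\endcode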
*/ } piqp-octave/src/eigen/doc/QuickReference.dox0000644000175100001770000007222414107270226020715 0ustar runnerdockernamespace Eigen { /** \eigenManualPage QuickRefPage Quick reference guide \eigenAutoToc
top \section QuickRef_Headers Modules and Header files The Eigen library is divided in a Core module and several additional modules. Each module has a corresponding header file which has to be included in order to use the module. The \c %Dense and \c Eigen header files are provided to conveniently gain access to several modules at once.
ModuleHeader fileContents
\link Core_Module Core \endlink\code#include <Eigen/Core>\endcodeMatrix and Array classes, basic linear algebra (including triangular and selfadjoint products), array manipulation
\link Geometry_Module Geometry \endlink\code#include <Eigen/Geometry>\endcodeTransform, Translation, Scaling, Rotation2D and 3D rotations (Quaternion, AngleAxis)
\link LU_Module LU \endlink\code#include <Eigen/LU>\endcodeInverse, determinant, LU decompositions with solver (FullPivLU, PartialPivLU)
\link Cholesky_Module Cholesky \endlink\code#include <Eigen/Cholesky>\endcodeLLT and LDLT Cholesky factorization with solver
\link Householder_Module Householder \endlink\code#include <Eigen/Householder>\endcodeHouseholder transformations; this module is used by several linear algebra modules
\link SVD_Module SVD \endlink\code#include <Eigen/SVD>\endcodeSVD decompositions with least-squares solver (JacobiSVD, BDCSVD)
\link QR_Module QR \endlink\code#include <Eigen/QR>\endcodeQR decomposition with solver (HouseholderQR, ColPivHouseholderQR, FullPivHouseholderQR)
\link Eigenvalues_Module Eigenvalues \endlink\code#include <Eigen/Eigenvalues>\endcodeEigenvalue, eigenvector decompositions (EigenSolver, SelfAdjointEigenSolver, ComplexEigenSolver)
\link Sparse_Module Sparse \endlink\code#include <Eigen/Sparse>\endcode%Sparse matrix storage and related basic linear algebra (SparseMatrix, SparseVector) \n (see \ref SparseQuickRefPage for details on sparse modules)
\code#include <Eigen/Dense>\endcodeIncludes Core, Geometry, LU, Cholesky, SVD, QR, and Eigenvalues header files
\code#include <Eigen/Eigen>\endcodeIncludes %Dense and %Sparse header files (the whole Eigen library)
top \section QuickRef_Types Array, matrix and vector types \b Recall: Eigen provides two kinds of dense objects: mathematical matrices and vectors which are both represented by the template class Matrix, and general 1D and 2D arrays represented by the template class Array: \code typedef Matrix<Scalar, RowsAtCompileTime, ColsAtCompileTime, Options> MyMatrixType; typedef Array<Scalar, RowsAtCompileTime, ColsAtCompileTime, Options> MyArrayType; \endcode \li \c Scalar is the scalar type of the coefficients (e.g., \c float, \c double, \c bool, \c int, etc.). \li \c RowsAtCompileTime and \c ColsAtCompileTime are the number of rows and columns of the matrix as known at compile-time or \c Dynamic. \li \c Options can be \c ColMajor or \c RowMajor, default is \c ColMajor. (see class Matrix for more options) All combinations are allowed: you can have a matrix with a fixed number of rows and a dynamic number of columns, etc. The following are all valid: \code Matrix<double, 6, Dynamic> // Dynamic number of columns (heap allocation) Matrix<double, Dynamic, 2> // Dynamic number of rows (heap allocation) Matrix<double, Dynamic, Dynamic, RowMajor> // Fully dynamic, row major (heap allocation) Matrix<double, 13, 3> // Fully fixed (usually allocated on stack) \endcode In most cases, you can simply use one of the convenience typedefs for \ref matrixtypedefs "matrices" and \ref arraytypedefs "arrays". Some examples:
MatricesArrays
\code Matrix<float,Dynamic,Dynamic> <=> MatrixXf Matrix<double,Dynamic,1> <=> VectorXd Matrix<int,1,Dynamic> <=> RowVectorXi Matrix<float,3,3> <=> Matrix3f Matrix<float,4,1> <=> Vector4f \endcode\code Array<float,Dynamic,Dynamic> <=> ArrayXXf Array<double,Dynamic,1> <=> ArrayXd Array<int,1,Dynamic> <=> RowArrayXi Array<float,3,3> <=> Array33f Array<float,4,1> <=> Array4f \endcode
Conversion between the matrix and array worlds: \code Array44f a1, a2; Matrix4f m1, m2; m1 = a1 * a2; // coeffwise product, implicit conversion from array to matrix. a1 = m1 * m2; // matrix product, implicit conversion from matrix to array. a2 = a1 + m1.array(); // mixing array and matrix is forbidden m2 = a1.matrix() + m1; // and explicit conversion is required. ArrayWrapper<Matrix4f> m1a(m1); // m1a is an alias for m1.array(), they share the same coefficients MatrixWrapper<Array44f> a1m(a1); \endcode In the rest of this document we will use the following symbols to emphasize the features which are specific to a given kind of object: \li \matrixworld linear algebra matrix and vector only \li \arrayworld array objects only \subsection QuickRef_Basics Basic matrix manipulation
1D objects2D objectsNotes
Constructors \code Vector4d v4; Vector2f v1(x, y); Array3i v2(x, y, z); Vector4d v3(x, y, z, w); VectorXf v5; // empty object ArrayXf v6(size); \endcode\code Matrix4f m1; MatrixXf m5; // empty object MatrixXf m6(nb_rows, nb_columns); \endcode By default, the coefficients \n are left uninitialized
Comma initializer \code Vector3f v1; v1 << x, y, z; ArrayXf v2(4); v2 << 1, 2, 3, 4; \endcode\code Matrix3f m1; m1 << 1, 2, 3, 4, 5, 6, 7, 8, 9; \endcode
Comma initializer (bis) \include Tutorial_commainit_02.cpp output: \verbinclude Tutorial_commainit_02.out
Runtime info \code vector.size(); vector.innerStride(); vector.data(); \endcode\code matrix.rows(); matrix.cols(); matrix.innerSize(); matrix.outerSize(); matrix.innerStride(); matrix.outerStride(); matrix.data(); \endcodeInner/Outer* are storage order dependent
Compile-time info \code ObjectType::Scalar ObjectType::RowsAtCompileTime ObjectType::RealScalar ObjectType::ColsAtCompileTime ObjectType::Index ObjectType::SizeAtCompileTime \endcode
Resizing \code vector.resize(size); vector.resizeLike(other_vector); vector.conservativeResize(size); \endcode\code matrix.resize(nb_rows, nb_cols); matrix.resize(Eigen::NoChange, nb_cols); matrix.resize(nb_rows, Eigen::NoChange); matrix.resizeLike(other_matrix); matrix.conservativeResize(nb_rows, nb_cols); \endcodeno-op if the new sizes match,
otherwise data are lost

resizing with data preservation
Coeff access with \n range checking \code vector(i) vector.x() vector[i] vector.y() vector.z() vector.w() \endcode\code matrix(i,j) \endcodeRange checking is disabled if \n NDEBUG or EIGEN_NO_DEBUG is defined
Coeff access without \n range checking \code vector.coeff(i) vector.coeffRef(i) \endcode\code matrix.coeff(i,j) matrix.coeffRef(i,j) \endcode
Assignment/copy \code object = expression; object_of_float = expression_of_double.cast(); \endcodethe destination is automatically resized (if possible)
\subsection QuickRef_PredefMat Predefined Matrices
Fixed-size matrix or vector Dynamic-size matrix Dynamic-size vector
\code typedef {Matrix3f|Array33f} FixedXD; FixedXD x; x = FixedXD::Zero(); x = FixedXD::Ones(); x = FixedXD::Constant(value); x = FixedXD::Random(); x = FixedXD::LinSpaced(size, low, high); x.setZero(); x.setOnes(); x.setConstant(value); x.setRandom(); x.setLinSpaced(size, low, high); \endcode \code typedef {MatrixXf|ArrayXXf} Dynamic2D; Dynamic2D x; x = Dynamic2D::Zero(rows, cols); x = Dynamic2D::Ones(rows, cols); x = Dynamic2D::Constant(rows, cols, value); x = Dynamic2D::Random(rows, cols); N/A x.setZero(rows, cols); x.setOnes(rows, cols); x.setConstant(rows, cols, value); x.setRandom(rows, cols); N/A \endcode \code typedef {VectorXf|ArrayXf} Dynamic1D; Dynamic1D x; x = Dynamic1D::Zero(size); x = Dynamic1D::Ones(size); x = Dynamic1D::Constant(size, value); x = Dynamic1D::Random(size); x = Dynamic1D::LinSpaced(size, low, high); x.setZero(size); x.setOnes(size); x.setConstant(size, value); x.setRandom(size); x.setLinSpaced(size, low, high); \endcode
Identity and \link MatrixBase::Unit basis vectors \endlink \matrixworld
\code x = FixedXD::Identity(); x.setIdentity(); Vector3f::UnitX() // 1 0 0 Vector3f::UnitY() // 0 1 0 Vector3f::UnitZ() // 0 0 1 Vector4f::Unit(i) x.setUnit(i); \endcode \code x = Dynamic2D::Identity(rows, cols); x.setIdentity(rows, cols); N/A \endcode \code N/A VectorXf::Unit(size,i) x.setUnit(size,i); VectorXf::Unit(4,1) == Vector4f(0,1,0,0) == Vector4f::UnitY() \endcode
Note that it is allowed to call any of the \c set* functions to a dynamic-sized vector or matrix without passing new sizes. For instance: \code MatrixXi M(3,3); M.setIdentity(); \endcode \subsection QuickRef_Map Mapping external arrays
Contiguous \n memory \code float data[] = {1,2,3,4}; Map<Vector3f> v1(data); // uses v1 as a Vector3f object Map<ArrayXf> v2(data,3); // uses v2 as an ArrayXf object Map<Array22f> m1(data); // uses m1 as an Array22f object Map<MatrixXf> m2(data,2,2); // uses m2 as a MatrixXf object \endcode
Typical usage \n of strides \code float data[] = {1,2,3,4,5,6,7,8,9}; Map<VectorXf,0,InnerStride<2> > v1(data,3); // = [1,3,5] Map<VectorXf,0,InnerStride<> > v2(data,3,InnerStride<>(3)); // = [1,4,7] Map<MatrixXf,0,OuterStride<3> > m2(data,2,3); // both lines |1,4,7| Map<MatrixXf,0,OuterStride<> > m1(data,2,3,OuterStride<>(3)); // are equal to: |2,5,8| \endcode
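Since a Map is only a view, writing through it modifies the external array; a small sketch (the buffer contents are illustrative):
\code
float buffer[6] = {0, 0, 0, 0, 0, 0};
Map<MatrixXf> M(buffer, 2, 3);
M.setOnes();        // the six floats in 'buffer' are now 1
M(1, 2) = 7;        // writes buffer[5] (column-major storage)
\endcode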
top \section QuickRef_ArithmeticOperators Arithmetic Operators
add \n subtract\code mat3 = mat1 + mat2; mat3 += mat1; mat3 = mat1 - mat2; mat3 -= mat1;\endcode
scalar product\code mat3 = mat1 * s1; mat3 *= s1; mat3 = s1 * mat1; mat3 = mat1 / s1; mat3 /= s1;\endcode
matrix/vector \n products \matrixworld\code col2 = mat1 * col1; row2 = row1 * mat1; row1 *= mat1; mat3 = mat1 * mat2; mat3 *= mat1; \endcode
transposition \n adjoint \matrixworld\code mat1 = mat2.transpose(); mat1.transposeInPlace(); mat1 = mat2.adjoint(); mat1.adjointInPlace(); \endcode
\link MatrixBase::dot dot \endlink product \n inner product \matrixworld\code scalar = vec1.dot(vec2); scalar = col1.adjoint() * col2; scalar = (col1.adjoint() * col2).value();\endcode
outer product \matrixworld\code mat = col1 * col2.transpose();\endcode
\link MatrixBase::norm() norm \endlink \n \link MatrixBase::normalized() normalization \endlink \matrixworld\code scalar = vec1.norm(); scalar = vec1.squaredNorm() vec2 = vec1.normalized(); vec1.normalize(); // inplace \endcode
\link MatrixBase::cross() cross product \endlink \matrixworld\code #include vec3 = vec1.cross(vec2);\endcode
top \section QuickRef_Coeffwise Coefficient-wise \& Array operators In addition to the aforementioned operators, Eigen supports numerous coefficient-wise operator and functions. Most of them unambiguously makes sense in array-world\arrayworld. The following operators are readily available for arrays, or available through .array() for vectors and matrices:
Arithmetic operators\code array1 * array2 array1 / array2 array1 *= array2 array1 /= array2 array1 + scalar array1 - scalar array1 += scalar array1 -= scalar \endcode
Comparisons\code array1 < array2 array1 > array2 array1 < scalar array1 > scalar array1 <= array2 array1 >= array2 array1 <= scalar array1 >= scalar array1 == array2 array1 != array2 array1 == scalar array1 != scalar array1.min(array2) array1.max(array2) array1.min(scalar) array1.max(scalar) \endcode
Trigo, power, and \n misc functions \n and the STL-like variants\code array1.abs2() array1.abs() abs(array1) array1.sqrt() sqrt(array1) array1.log() log(array1) array1.log10() log10(array1) array1.exp() exp(array1) array1.pow(array2) pow(array1,array2) array1.pow(scalar) pow(array1,scalar) pow(scalar,array2) array1.square() array1.cube() array1.inverse() array1.sin() sin(array1) array1.cos() cos(array1) array1.tan() tan(array1) array1.asin() asin(array1) array1.acos() acos(array1) array1.atan() atan(array1) array1.sinh() sinh(array1) array1.cosh() cosh(array1) array1.tanh() tanh(array1) array1.arg() arg(array1) array1.floor() floor(array1) array1.ceil() ceil(array1) array1.round() round(aray1) array1.isFinite() isfinite(array1) array1.isInf() isinf(array1) array1.isNaN() isnan(array1) \endcode
The following coefficient-wise operators are available for all kind of expressions (matrices, vectors, and arrays), and for both real or complex scalar types:
Eigen's APISTL-like APIs\arrayworld Comments
\code mat1.real() mat1.imag() mat1.conjugate() \endcode \code real(array1) imag(array1) conj(array1) \endcode \code // read-write, no-op for real expressions // read-only for real, read-write for complexes // no-op for real expressions \endcode
Some coefficient-wise operators are readily available for matrices and vectors through the following cwise* methods:
Matrix API \matrixworldVia Array conversions
\code mat1.cwiseMin(mat2) mat1.cwiseMin(scalar) mat1.cwiseMax(mat2) mat1.cwiseMax(scalar) mat1.cwiseAbs2() mat1.cwiseAbs() mat1.cwiseSqrt() mat1.cwiseInverse() mat1.cwiseProduct(mat2) mat1.cwiseQuotient(mat2) mat1.cwiseEqual(mat2) mat1.cwiseEqual(scalar) mat1.cwiseNotEqual(mat2) \endcode \code mat1.array().min(mat2.array()) mat1.array().min(scalar) mat1.array().max(mat2.array()) mat1.array().max(scalar) mat1.array().abs2() mat1.array().abs() mat1.array().sqrt() mat1.array().inverse() mat1.array() * mat2.array() mat1.array() / mat2.array() mat1.array() == mat2.array() mat1.array() == scalar mat1.array() != mat2.array() \endcode
The main difference between the two APIs is that the one based on cwise* methods returns an expression in the matrix world, while the second one (based on .array()) returns an array expression. Recall that .array() has no cost; it only changes the available API and the interpretation of the data. It is also very simple to apply any user-defined function \c foo using DenseBase::unaryExpr together with std::ptr_fun (c++03, deprecated or removed in newer C++ versions), std::ref (c++11), or lambdas (c++11): \code mat1.unaryExpr(std::ptr_fun(foo)); mat1.unaryExpr(std::ref(foo)); mat1.unaryExpr([](double x) { return foo(x); }); \endcode Please note that it's not possible to pass a raw function pointer to \c unaryExpr, so please wrap it as shown above. top \section QuickRef_Reductions Reductions Eigen provides several reduction methods such as: \link DenseBase::minCoeff() minCoeff() \endlink, \link DenseBase::maxCoeff() maxCoeff() \endlink, \link DenseBase::sum() sum() \endlink, \link DenseBase::prod() prod() \endlink, \link MatrixBase::trace() trace() \endlink \matrixworld, \link MatrixBase::norm() norm() \endlink \matrixworld, \link MatrixBase::squaredNorm() squaredNorm() \endlink \matrixworld, \link DenseBase::all() all() \endlink, and \link DenseBase::any() any() \endlink. All reduction operations can be done matrix-wise, \link DenseBase::colwise() column-wise \endlink or \link DenseBase::rowwise() row-wise \endlink. Usage example:
\code 5 3 1 mat = 2 7 8 9 4 6 \endcode \code mat.minCoeff(); \endcode\code 1 \endcode
\code mat.colwise().minCoeff(); \endcode\code 2 3 1 \endcode
\code mat.rowwise().minCoeff(); \endcode\code 1 2 4 \endcode
Special versions of \link DenseBase::minCoeff(IndexType*,IndexType*) const minCoeff \endlink and \link DenseBase::maxCoeff(IndexType*,IndexType*) const maxCoeff \endlink: \code int i, j; s = vector.minCoeff(&i); // s == vector[i] s = matrix.maxCoeff(&i, &j); // s == matrix(i,j) \endcode Typical use cases of all() and any(): \code if((array1 > 0).all()) ... // if all coefficients of array1 are greater than 0 ... if((array1 < array2).any()) ... // if there exists a pair i,j such that array1(i,j) < array2(i,j) ... \endcode top\section QuickRef_Blocks Sub-matrices
PLEASE HELP US IMPROVING THIS SECTION. %Eigen 3.4 supports a much improved API for sub-matrices, including, slicing and indexing from arrays: \ref TutorialSlicingIndexing
Read-write access to a \link DenseBase::col(Index) column \endlink or a \link DenseBase::row(Index) row \endlink of a matrix (or array): \code mat1.row(i) = mat2.col(j); mat1.col(j1).swap(mat1.col(j2)); \endcode Read-write access to sub-vectors:
Default versions Optimized versions when the size \n is known at compile time
\code vec1.head(n)\endcode\code vec1.head<n>()\endcodethe first \c n coeffs
\code vec1.tail(n)\endcode\code vec1.tail<n>()\endcodethe last \c n coeffs
\code vec1.segment(pos,n)\endcode\code vec1.segment<n>(pos)\endcode the \c n coeffs in the \n range [\c pos : \c pos + \c n - 1]
Read-write access to sub-matrices:
\code mat1.block(i,j,rows,cols)\endcode \link DenseBase::block(Index,Index,Index,Index) (more) \endlink \code mat1.block<rows,cols>(i,j)\endcode \link DenseBase::block(Index,Index) (more) \endlink the \c rows x \c cols sub-matrix \n starting from position (\c i,\c j)
\code mat1.topLeftCorner(rows,cols) mat1.topRightCorner(rows,cols) mat1.bottomLeftCorner(rows,cols) mat1.bottomRightCorner(rows,cols)\endcode \code mat1.topLeftCorner<rows,cols>() mat1.topRightCorner<rows,cols>() mat1.bottomLeftCorner<rows,cols>() mat1.bottomRightCorner<rows,cols>()\endcode the \c rows x \c cols sub-matrix \n taken in one of the four corners
\code mat1.topRows(rows) mat1.bottomRows(rows) mat1.leftCols(cols) mat1.rightCols(cols)\endcode \code mat1.topRows<rows>() mat1.bottomRows<rows>() mat1.leftCols<cols>() mat1.rightCols<cols>()\endcode specialized versions of block() \n when the block fits two corners
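A short sketch mixing the run-time and compile-time variants above (the sizes are arbitrary):
\code
MatrixXf mat = MatrixXf::Random(5, 5);
VectorXf v = mat.col(0).head(3);           // run-time sizes
Matrix2f c = mat.block<2, 2>(1, 1);        // compile-time sizes
mat.bottomRightCorner(2, 2).setZero();     // blocks are writable l-values
\endcode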
top\section QuickRef_Misc Miscellaneous operations
PLEASE HELP US IMPROVING THIS SECTION. %Eigen 3.4 supports a new API for reshaping: \ref TutorialReshape
\subsection QuickRef_Reverse Reverse Vectors, rows, and/or columns of a matrix can be reversed (see DenseBase::reverse(), DenseBase::reverseInPlace(), VectorwiseOp::reverse()). \code vec.reverse() mat.colwise().reverse() mat.rowwise().reverse() vec.reverseInPlace() \endcode \subsection QuickRef_Replicate Replicate Vectors, matrices, rows, and/or columns can be replicated in any direction (see DenseBase::replicate(), VectorwiseOp::replicate()) \code vec.replicate(times) vec.replicate<Times> mat.replicate(vertical_times, horizontal_times) mat.replicate<VerticalTimes, HorizontalTimes>() mat.colwise().replicate(vertical_times, horizontal_times) mat.colwise().replicate<VerticalTimes, HorizontalTimes>() mat.rowwise().replicate(vertical_times, horizontal_times) mat.rowwise().replicate<VerticalTimes, HorizontalTimes>() \endcode top\section QuickRef_DiagTriSymm Diagonal, Triangular, and Self-adjoint matrices (matrix world \matrixworld) \subsection QuickRef_Diagonal Diagonal matrices
OperationCode
view a vector \link MatrixBase::asDiagonal() as a diagonal matrix \endlink \n \code mat1 = vec1.asDiagonal();\endcode
Declare a diagonal matrix\code DiagonalMatrix<Scalar,SizeAtCompileTime> diag1(size); diag1.diagonal() = vector;\endcode
Access the \link MatrixBase::diagonal() diagonal \endlink and \link MatrixBase::diagonal(Index) super/sub diagonals \endlink of a matrix as a vector (read/write) \code vec1 = mat1.diagonal(); mat1.diagonal() = vec1; // main diagonal vec1 = mat1.diagonal(+n); mat1.diagonal(+n) = vec1; // n-th super diagonal vec1 = mat1.diagonal(-n); mat1.diagonal(-n) = vec1; // n-th sub diagonal vec1 = mat1.diagonal<1>(); mat1.diagonal<1>() = vec1; // first super diagonal vec1 = mat1.diagonal<-2>(); mat1.diagonal<-2>() = vec1; // second sub diagonal \endcode
Optimized products and inverse \code mat3 = scalar * diag1 * mat1; mat3 += scalar * mat1 * vec1.asDiagonal(); mat3 = vec1.asDiagonal().inverse() * mat1 mat3 = mat1 * diag1.inverse() \endcode
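A small sketch of these optimized diagonal products (the values are arbitrary):
\code
VectorXd d(3);
d << 1, 2, 4;
MatrixXd M = MatrixXd::Random(3, 3);
MatrixXd P = d.asDiagonal() * M;                 // scales the rows of M
MatrixXd Q = M * d.asDiagonal().inverse();       // divides the columns of M
\endcode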
\subsection QuickRef_TriangularView Triangular views TriangularView gives a view on a triangular part of a dense matrix and allows to perform optimized operations on it. The opposite triangular part is never referenced and can be used to store other information. \note The .triangularView() template member function requires the \c template keyword if it is used on an object of a type that depends on a template parameter; see \ref TopicTemplateKeyword for details.
OperationCode
Reference to a triangular with optional \n unit or null diagonal (read/write): \code m.triangularView<Xxx>() \endcode \n \c Xxx = ::Upper, ::Lower, ::StrictlyUpper, ::StrictlyLower, ::UnitUpper, ::UnitLower
Writing to a specific triangular part:\n (only the referenced triangular part is evaluated) \code m1.triangularView<Eigen::Lower>() = m2 + m3 \endcode
Conversion to a dense matrix setting the opposite triangular part to zero: \code m2 = m1.triangularView<Eigen::UnitUpper>()\endcode
Products: \code m3 += s1 * m1.adjoint().triangularView<Eigen::UnitUpper>() * m2 m3 -= s1 * m2.conjugate() * m1.adjoint().triangularView<Eigen::Lower>() \endcode
Solving linear equations:\n \f$ M_2 := L_1^{-1} M_2 \f$ \n \f$ M_3 := {L_1^*}^{-1} M_3 \f$ \n \f$ M_4 := M_4 U_1^{-1} \f$ \n \code L1.triangularView<Eigen::UnitLower>().solveInPlace(M2) L1.triangularView<Eigen::Lower>().adjoint().solveInPlace(M3) U1.triangularView<Eigen::Upper>().solveInPlace(M4)\endcode
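For example, a sketch of a triangular solve with a vector right-hand side (the matrix content is arbitrary; only its lower triangle is read):
\code
MatrixXd A = MatrixXd::Random(3, 3);
VectorXd b = VectorXd::Random(3);
// Treat the lower triangle of A as a matrix L and solve L * x = b by substitution:
VectorXd x = A.triangularView<Eigen::Lower>().solve(b);
\endcode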
\subsection QuickRef_SelfadjointMatrix Symmetric/selfadjoint views Just as for triangular matrices, you can reference any triangular part of a square matrix to see it as a selfadjoint matrix and perform special and optimized operations. Again the opposite triangular part is never referenced and can be used to store other information. \note The .selfadjointView() template member function requires the \c template keyword if it is used on an object of a type that depends on a template parameter; see \ref TopicTemplateKeyword for details.
OperationCode
Conversion to a dense matrix: \code m2 = m.selfadjointView<Eigen::Lower>();\endcode
Product with another general matrix or vector: \code m3 = s1 * m1.conjugate().selfadjointView<Eigen::Upper>() * m3; m3 -= s1 * m3.adjoint() * m1.selfadjointView<Eigen::Lower>();\endcode
Rank 1 and rank K update: \n \f$ upper(M_1) \mathrel{{+}{=}} s_1 M_2 M_2^* \f$ \n \f$ lower(M_1) \mathbin{{-}{=}} M_2^* M_2 \f$ \n \code M1.selfadjointView<Eigen::Upper>().rankUpdate(M2,s1); M1.selfadjointView<Eigen::Lower>().rankUpdate(M2.adjoint(),-1); \endcode
Rank 2 update: (\f$ M \mathrel{{+}{=}} s u v^* + s v u^* \f$) \code M.selfadjointView<Eigen::Upper>().rankUpdate(u,v,s); \endcode
Solving linear equations:\n(\f$ M_2 := M_1^{-1} M_2 \f$) \code // via a standard Cholesky factorization m2 = m1.selfadjointView<Eigen::Upper>().llt().solve(m2); // via a Cholesky factorization with pivoting m2 = m1.selfadjointView<Eigen::Lower>().ldlt().solve(m2); \endcode
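A small end-to-end sketch of the Cholesky-based solve above (the matrix is made positive definite on purpose):
\code
MatrixXd A = MatrixXd::Random(4, 4);
MatrixXd S = A * A.transpose() + MatrixXd::Identity(4, 4);   // symmetric positive definite
VectorXd b = VectorXd::Random(4);
VectorXd x = S.selfadjointView<Eigen::Lower>().llt().solve(b);
\endcode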
*/ /*
\link MatrixBase::asDiagonal() make a diagonal matrix \endlink \n from a vector \code mat1 = vec1.asDiagonal();\endcode
Declare a diagonal matrix\code DiagonalMatrix<Scalar,SizeAtCompileTime> diag1(size); diag1.diagonal() = vector;\endcode
Access \link MatrixBase::diagonal() the diagonal and super/sub diagonals of a matrix \endlink as a vector (read/write) \code vec1 = mat1.diagonal(); mat1.diagonal() = vec1; // main diagonal vec1 = mat1.diagonal(+n); mat1.diagonal(+n) = vec1; // n-th super diagonal vec1 = mat1.diagonal(-n); mat1.diagonal(-n) = vec1; // n-th sub diagonal vec1 = mat1.diagonal<1>(); mat1.diagonal<1>() = vec1; // first super diagonal vec1 = mat1.diagonal<-2>(); mat1.diagonal<-2>() = vec1; // second sub diagonal \endcode
View on a triangular part of a matrix (read/write) \code mat2 = mat1.triangularView<Xxx>(); // Xxx = Upper, Lower, StrictlyUpper, StrictlyLower, UnitUpper, UnitLower mat1.triangularView<Upper>() = mat2 + mat3; // only the upper part is evaluated and referenced \endcode
View a triangular part as a symmetric/self-adjoint matrix (read/write) \code mat2 = mat1.selfadjointView<Xxx>(); // Xxx = Upper or Lower mat1.selfadjointView<Upper>() = mat2 + mat2.adjoint(); // evaluated and written to the upper triangular part only \endcode
Optimized products: \code mat3 += scalar * vec1.asDiagonal() * mat1 mat3 += scalar * mat1 * vec1.asDiagonal() mat3.noalias() += scalar * mat1.triangularView() * mat2 mat3.noalias() += scalar * mat2 * mat1.triangularView() mat3.noalias() += scalar * mat1.selfadjointView() * mat2 mat3.noalias() += scalar * mat2 * mat1.selfadjointView() mat1.selfadjointView().rankUpdate(mat2); mat1.selfadjointView().rankUpdate(mat2.adjoint(), scalar); \endcode Inverse products: (all are optimized) \code mat3 = vec1.asDiagonal().inverse() * mat1 mat3 = mat1 * diag1.inverse() mat1.triangularView().solveInPlace(mat2) mat1.triangularView().solveInPlace(mat2) mat2 = mat1.selfadjointView().llt().solve(mat2) \endcode */ } piqp-octave/src/eigen/doc/CustomizingEigen_Plugins.dox0000644000175100001770000000711214107270226023000 0ustar runnerdockernamespace Eigen { /** \page TopicCustomizing_Plugins Extending MatrixBase (and other classes) In this section we will see how to add custom methods to MatrixBase. Since all expressions and matrix types inherit MatrixBase, adding a method to MatrixBase make it immediately available to all expressions ! A typical use case is, for instance, to make Eigen compatible with another API. You certainly know that in C++ it is not possible to add methods to an existing class. So how that's possible ? Here the trick is to include in the declaration of MatrixBase a file defined by the preprocessor token \c EIGEN_MATRIXBASE_PLUGIN: \code class MatrixBase { // ... #ifdef EIGEN_MATRIXBASE_PLUGIN #include EIGEN_MATRIXBASE_PLUGIN #endif }; \endcode Therefore to extend MatrixBase with your own methods you just have to create a file with your method declaration and define EIGEN_MATRIXBASE_PLUGIN before you include any Eigen's header file. You can extend many of the other classes used in Eigen by defining similarly named preprocessor symbols. For instance, define \c EIGEN_ARRAYBASE_PLUGIN if you want to extend the ArrayBase class. A full list of classes that can be extended in this way and the corresponding preprocessor symbols can be found on our page \ref TopicPreprocessorDirectives. 
Here is an example of an extension file for adding methods to MatrixBase: \n \b MatrixBaseAddons.h \code inline Scalar at(uint i, uint j) const { return this->operator()(i,j); } inline Scalar& at(uint i, uint j) { return this->operator()(i,j); } inline Scalar at(uint i) const { return this->operator[](i); } inline Scalar& at(uint i) { return this->operator[](i); } inline RealScalar squaredLength() const { return squaredNorm(); } inline RealScalar length() const { return norm(); } inline RealScalar invLength(void) const { return fast_inv_sqrt(squaredNorm()); } template inline Scalar squaredDistanceTo(const MatrixBase& other) const { return (derived() - other.derived()).squaredNorm(); } template inline RealScalar distanceTo(const MatrixBase& other) const { return internal::sqrt(derived().squaredDistanceTo(other)); } inline void scaleTo(RealScalar l) { RealScalar vl = norm(); if (vl>1e-9) derived() *= (l/vl); } inline Transpose transposed() {return this->transpose();} inline const Transpose transposed() const {return this->transpose();} inline uint minComponentId(void) const { int i; this->minCoeff(&i); return i; } inline uint maxComponentId(void) const { int i; this->maxCoeff(&i); return i; } template void makeFloor(const MatrixBase& other) { derived() = derived().cwiseMin(other.derived()); } template void makeCeil(const MatrixBase& other) { derived() = derived().cwiseMax(other.derived()); } const CwiseBinaryOp, const Derived, const ConstantReturnType> operator+(const Scalar& scalar) const { return CwiseBinaryOp, const Derived, const ConstantReturnType>(derived(), Constant(rows(),cols(),scalar)); } friend const CwiseBinaryOp, const ConstantReturnType, Derived> operator+(const Scalar& scalar, const MatrixBase& mat) { return CwiseBinaryOp, const ConstantReturnType, Derived>(Constant(rows(),cols(),scalar), mat.derived()); } \endcode Then one can the following declaration in the config.h or whatever prerequisites header file of his project: \code #define EIGEN_MATRIXBASE_PLUGIN "MatrixBaseAddons.h" \endcode */ } piqp-octave/src/eigen/doc/StructHavingEigenMembers.dox0000644000175100001770000001556214107270226022730 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TopicStructHavingEigenMembers Structures Having Eigen Members \eigenAutoToc \section StructHavingEigenMembers_summary Executive Summary If you define a structure having members of \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types", you must ensure that calling operator new on it allocates properly aligned buffers. If you're compiling in \cpp17 mode only with a sufficiently recent compiler (e.g., GCC>=7, clang>=5, MSVC>=19.12), then everything is taken care by the compiler and you can stop reading. Otherwise, you have to overload its `operator new` so that it generates properly aligned pointers (e.g., 32-bytes-aligned for Vector4d and AVX). Fortunately, %Eigen provides you with a macro `EIGEN_MAKE_ALIGNED_OPERATOR_NEW` that does that for you. \section StructHavingEigenMembers_what What kind of code needs to be changed? The kind of code that needs to be changed is this: \code class Foo { ... Eigen::Vector2d v; ... }; ... Foo *foo = new Foo; \endcode In other words: you have a class that has as a member a \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen object", and then you dynamically create an object of that class. \section StructHavingEigenMembers_how How should such code be modified? 
Very easy, you just need to put a `EIGEN_MAKE_ALIGNED_OPERATOR_NEW` macro in a public part of your class, like this: \code class Foo { ... Eigen::Vector4d v; ... public: EIGEN_MAKE_ALIGNED_OPERATOR_NEW }; ... Foo *foo = new Foo; \endcode This macro makes `new Foo` always return an aligned pointer. In \cpp17, this macro is empty. If this approach is too intrusive, see also the \ref StructHavingEigenMembers_othersolutions "other solutions". \section StructHavingEigenMembers_why Why is this needed? OK let's say that your code looks like this: \code class Foo { ... Eigen::Vector4d v; ... }; ... Foo *foo = new Foo; \endcode A Eigen::Vector4d consists of 4 doubles, which is 256 bits. This is exactly the size of an AVX register, which makes it possible to use AVX for all sorts of operations on this vector. But AVX instructions (at least the ones that %Eigen uses, which are the fast ones) require 256-bit alignment. Otherwise you get a segmentation fault. For this reason, %Eigen takes care by itself to require 256-bit alignment for Eigen::Vector4d, by doing two things: \li %Eigen requires 256-bit alignment for the Eigen::Vector4d's array (of 4 doubles). With \cpp11 this is done with the alignas keyword, or compiler's extensions for c++98/03. \li %Eigen overloads the `operator new` of Eigen::Vector4d so it will always return 256-bit aligned pointers. (removed in \cpp17) Thus, normally, you don't have to worry about anything, %Eigen handles alignment of operator new for you... ... except in one case. When you have a `class Foo` like above, and you dynamically allocate a new `Foo` as above, then, since `Foo` doesn't have aligned `operator new`, the returned pointer foo is not necessarily 256-bit aligned. The alignment attribute of the member `v` is then relative to the start of the class `Foo`. If the `foo` pointer wasn't aligned, then `foo->v` won't be aligned either! The solution is to let `class Foo` have an aligned `operator new`, as we showed in the previous section. This explanation also holds for SSE/NEON/MSA/Altivec/VSX targets, which require 16-bytes alignment, and AVX512 which requires 64-bytes alignment for fixed-size objects multiple of 64 bytes (e.g., Eigen::Matrix4d). \section StructHavingEigenMembers_movetotop Should I then put all the members of Eigen types at the beginning of my class? That's not required. Since %Eigen takes care of declaring adequate alignment, all members that need it are automatically aligned relatively to the class. So code like this works fine: \code class Foo { double x; Eigen::Vector4d v; public: EIGEN_MAKE_ALIGNED_OPERATOR_NEW }; \endcode That said, as usual, it is recommended to sort the members so that alignment does not waste memory. In the above example, with AVX, the compiler will have to reserve 24 empty bytes between `x` and `v`. \section StructHavingEigenMembers_dynamicsize What about dynamic-size matrices and vectors? Dynamic-size matrices and vectors, such as Eigen::VectorXd, allocate dynamically their own array of coefficients, so they take care of requiring absolute alignment automatically. So they don't cause this issue. The issue discussed here is only with \ref TopicFixedSizeVectorizable "fixed-size vectorizable matrices and vectors". \section StructHavingEigenMembers_bugineigen So is this a bug in Eigen? No, it's not our bug. It's more like an inherent problem of the c++ language specification that has been solved in c++17 through the feature known as dynamic memory allocation for over-aligned data. 
\section StructHavingEigenMembers_conditional What if I want to do this conditionally (depending on template parameters) ? For this situation, we offer the macro `EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)`. It will generate aligned operators like `EIGEN_MAKE_ALIGNED_OPERATOR_NEW` if `NeedsToAlign` is true. It will generate operators with the default alignment if `NeedsToAlign` is false. In \cpp17, this macro is empty. Example: \code template class Foo { typedef Eigen::Matrix Vector; enum { NeedsToAlign = (sizeof(Vector)%16)==0 }; ... Vector v; ... public: EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign) }; ... Foo<4> *foo4 = new Foo<4>; // foo4 is guaranteed to be 128bit-aligned Foo<3> *foo3 = new Foo<3>; // foo3 has only the system default alignment guarantee \endcode \section StructHavingEigenMembers_othersolutions Other solutions In case putting the `EIGEN_MAKE_ALIGNED_OPERATOR_NEW` macro everywhere is too intrusive, there exists at least two other solutions. \subsection othersolutions1 Disabling alignment The first is to disable alignment requirement for the fixed size members: \code class Foo { ... Eigen::Matrix v; ... }; \endcode This `v` is fully compatible with aligned Eigen::Vector4d. This has only for effect to make load/stores to `v` more expensive (usually slightly, but that's hardware dependent). \subsection othersolutions2 Private structure The second consist in storing the fixed-size objects into a private struct which will be dynamically allocated at the construction time of the main object: \code struct Foo_d { EIGEN_MAKE_ALIGNED_OPERATOR_NEW Vector4d v; ... }; struct Foo { Foo() { init_d(); } ~Foo() { delete d; } void bar() { // use d->v instead of v ... } private: void init_d() { d = new Foo_d; } Foo_d* d; }; \endcode The clear advantage here is that the class `Foo` remains unchanged regarding alignment issues. The drawback is that an additional heap allocation will be required whatsoever. */ } piqp-octave/src/eigen/doc/TopicAliasing.dox0000644000175100001770000002403514107270226020545 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TopicAliasing Aliasing In %Eigen, aliasing refers to assignment statement in which the same matrix (or array or vector) appears on the left and on the right of the assignment operators. Statements like mat = 2 * mat; or mat = mat.transpose(); exhibit aliasing. The aliasing in the first example is harmless, but the aliasing in the second example leads to unexpected results. This page explains what aliasing is, when it is harmful, and what to do about it. \eigenAutoToc \section TopicAliasingExamples Examples Here is a simple example exhibiting aliasing:
<table>
<tr><th>Example</th><th>Output</th></tr>
<tr>
<td>\include TopicAliasing_block.cpp</td>
<td>\verbinclude TopicAliasing_block.out</td>
</tr>
</table>
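For readers without the example files at hand, here is a minimal sketch of the same situation (not the verbatim included file; the values are chosen to match the discussion below):
\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::MatrixXi mat(3,3);
  mat << 1, 2, 3,
         4, 5, 6,
         7, 8, 9;
  // The block written on the left overlaps the block read on the right.
  mat.bottomRightCorner(2,2) = mat.topLeftCorner(2,2);
  std::cout << mat << "\n";   // mat(2,2) ends up as 1 rather than 5
  return 0;
}
\endcode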
The output is not what one would expect. The problem is the assignment \code mat.bottomRightCorner(2,2) = mat.topLeftCorner(2,2); \endcode This assignment exhibits aliasing: the coefficient \c mat(1,1) appears both in the block mat.bottomRightCorner(2,2) on the left-hand side of the assignment and the block mat.topLeftCorner(2,2) on the right-hand side. After the assignment, the (2,2) entry in the bottom right corner should have the value of \c mat(1,1) before the assignment, which is 5. However, the output shows that \c mat(2,2) is actually 1. The problem is that %Eigen uses lazy evaluation (see \ref TopicEigenExpressionTemplates) for mat.topLeftCorner(2,2). The result is similar to \code mat(1,1) = mat(0,0); mat(1,2) = mat(0,1); mat(2,1) = mat(1,0); mat(2,2) = mat(1,1); \endcode Thus, \c mat(2,2) is assigned the \e new value of \c mat(1,1) instead of the old value. The next section explains how to solve this problem by calling \link DenseBase::eval() eval()\endlink. Aliasing occurs more naturally when trying to shrink a matrix. For example, the expressions vec = vec.head(n) and mat = mat.block(i,j,r,c) exhibit aliasing. In general, aliasing cannot be detected at compile time: if \c mat in the first example were a bit bigger, then the blocks would not overlap, and there would be no aliasing problem. However, %Eigen does detect some instances of aliasing, albeit at run time. The following example exhibiting aliasing was mentioned in \ref TutorialMatrixArithmetic :
<table>
<tr><th>Example</th><th>Output</th></tr>
<tr>
<td>\include tut_arithmetic_transpose_aliasing.cpp</td>
<td>\verbinclude tut_arithmetic_transpose_aliasing.out</td>
</tr>
</table>
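Again as a hedged sketch (not the verbatim tutorial file), the problematic statement looks like this:
\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Matrix2i a;
  a << 1, 2,
       3, 4;
  // Aliasing: `a` appears on both sides of the assignment. In a default (debug) build
  // Eigen's run-time assertion aborts here with a message like the one quoted below.
  a = a.transpose();
  std::cout << a << "\n";
  return 0;
}
\endcode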
Again, the output shows the aliasing issue. However, by default %Eigen uses a run-time assertion to detect this and exits with a message like \verbatim void Eigen::DenseBase::checkTransposeAliasing(const OtherDerived&) const [with OtherDerived = Eigen::Transpose >, Derived = Eigen::Matrix]: Assertion `(!internal::check_transpose_aliasing_selector::IsTransposed,OtherDerived>::run(internal::extract_data(derived()), other)) && "aliasing detected during transposition, use transposeInPlace() or evaluate the rhs into a temporary using .eval()"' failed. \endverbatim The user can turn %Eigen's run-time assertions like the one to detect this aliasing problem off by defining the EIGEN_NO_DEBUG macro, and the above program was compiled with this macro turned off in order to illustrate the aliasing problem. See \ref TopicAssertions for more information about %Eigen's run-time assertions. \section TopicAliasingSolution Resolving aliasing issues If you understand the cause of the aliasing issue, then it is obvious what must happen to solve it: %Eigen has to evaluate the right-hand side fully into a temporary matrix/array and then assign it to the left-hand side. The function \link DenseBase::eval() eval() \endlink does precisely that. For example, here is the corrected version of the first example above:
<table>
<tr><th>Example</th><th>Output</th></tr>
<tr>
<td>\include TopicAliasing_block_correct.cpp</td>
<td>\verbinclude TopicAliasing_block_correct.out</td>
</tr>
</table>
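A minimal sketch of the corrected code (same illustrative values as before):
\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::MatrixXi mat(3,3);
  mat << 1, 2, 3,
         4, 5, 6,
         7, 8, 9;
  // eval() forces the right-hand side into a temporary before the assignment,
  // so the overlap between the two blocks is harmless.
  mat.bottomRightCorner(2,2) = mat.topLeftCorner(2,2).eval();
  std::cout << mat << "\n";   // mat(2,2) is now 5, as expected
  return 0;
}
\endcode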
Now, \c mat(2,2) equals 5 after the assignment, as it should be. The same solution also works for the second example, with the transpose: simply replace the line a = a.transpose(); with a = a.transpose().eval();. However, in this common case there is a better solution. %Eigen provides the special-purpose function \link DenseBase::transposeInPlace() transposeInPlace() \endlink which replaces a matrix by its transpose. This is shown below:
<table>
<tr><th>Example</th><th>Output</th></tr>
<tr>
<td>\include tut_arithmetic_transpose_inplace.cpp</td>
<td>\verbinclude tut_arithmetic_transpose_inplace.out</td>
</tr>
</table>
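Sketched usage (matrix size and values are ours):
\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::MatrixXf a(2,3);
  a << 1, 2, 3,
       4, 5, 6;
  a.transposeInPlace();   // a is now 3x2; the aliasing pitfall of a = a.transpose() is avoided
  std::cout << a << "\n";
  return 0;
}
\endcode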
If an xxxInPlace() function is available, then it is best to use it, because it indicates more clearly what you are doing. This may also allow %Eigen to optimize more aggressively. These are some of the xxxInPlace() functions provided:
<table>
<tr><th>Original function</th><th>In-place function</th></tr>
<tr><td>MatrixBase::adjoint()</td><td>MatrixBase::adjointInPlace()</td></tr>
<tr><td>DenseBase::reverse()</td><td>DenseBase::reverseInPlace()</td></tr>
<tr><td>LDLT::solve()</td><td>LDLT::solveInPlace()</td></tr>
<tr><td>LLT::solve()</td><td>LLT::solveInPlace()</td></tr>
<tr><td>TriangularView::solve()</td><td>TriangularView::solveInPlace()</td></tr>
<tr><td>DenseBase::transpose()</td><td>DenseBase::transposeInPlace()</td></tr>
</table>
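As a small sketch of two of these in-place variants (the values are ours):
\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::MatrixXf m(2,2);
  m << 1, 2,
       3, 4;
  m.reverseInPlace();   // same result as m = m.reverse().eval(), but the intent is explicit
  m.adjointInPlace();   // for a real-valued matrix this is simply the in-place transpose
  std::cout << m << "\n";
  return 0;
}
\endcode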
In the special case where a matrix or vector is shrunk using an expression like vec = vec.head(n), you can use \link PlainObjectBase::conservativeResize() conservativeResize() \endlink. \section TopicAliasingCwise Aliasing and component-wise operations As explained above, it may be dangerous if the same matrix or array occurs on both the left-hand side and the right-hand side of an assignment operator, and it is then often necessary to evaluate the right-hand side explicitly. However, applying component-wise operations (such as matrix addition, scalar multiplication and array multiplication) is safe. The following example has only component-wise operations. Thus, there is no need for \link DenseBase::eval() eval() \endlink even though the same matrix appears on both sides of the assignments.
<table>
<tr><th>Example</th><th>Output</th></tr>
<tr>
<td>\include TopicAliasing_cwise.cpp</td>
<td>\verbinclude TopicAliasing_cwise.out</td>
</tr>
</table>
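A hedged sketch of the kind of component-wise code meant here (values are ours):
\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::MatrixXf mat(2,2);
  mat << 1, 2,
         4, 7;
  // Each statement below is purely coefficient-wise: entry (i,j) of the result depends
  // only on entry (i,j) of mat, so the aliasing is harmless and no eval() is needed.
  mat = 2 * mat;
  mat = mat - Eigen::MatrixXf::Identity(2,2);
  mat = mat.cwiseAbs2();
  std::cout << mat << "\n";
  return 0;
}
\endcode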
In general, an assignment is safe if the (i,j) entry of the expression on the right-hand side depends only on the (i,j) entry of the matrix or array on the left-hand side and not on any other entries. In that case it is not necessary to evaluate the right-hand side explicitly.

\section TopicAliasingMatrixMult Aliasing and matrix multiplication Matrix multiplication is the only operation in %Eigen that assumes aliasing by default, under the condition that the destination matrix is not resized. Thus, if \c matA is a \b square matrix, then the statement matA = matA * matA; is safe. All other operations in %Eigen assume that there are no aliasing problems, either because the result is assigned to a different matrix or because it is a component-wise operation.
<table>
<tr><th>Example</th><th>Output</th></tr>
<tr>
<td>\include TopicAliasing_mult1.cpp</td>
<td>\verbinclude TopicAliasing_mult1.out</td>
</tr>
</table>
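Sketch of the safe, aliasing-aware product (values are ours):
\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::MatrixXf matA(2,2);
  matA << 2, 0,
          0, 2;
  // The product is evaluated into a temporary and only then copied into matA,
  // so this statement is safe even though matA appears on both sides.
  matA = matA * matA;
  std::cout << matA << "\n";
  return 0;
}
\endcode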
However, this comes at a price. When executing the expression matA = matA * matA, %Eigen evaluates the product in a temporary matrix which is assigned to \c matA after the computation. This is fine. But %Eigen does the same when the product is assigned to a different matrix (e.g., matB = matA * matA). In that case, it is more efficient to evaluate the product directly into \c matB instead of evaluating it first into a temporary matrix and copying that matrix to \c matB. The user can indicate with the \link MatrixBase::noalias() noalias()\endlink function that there is no aliasing, as follows: matB.noalias() = matA * matA. This allows %Eigen to evaluate the matrix product matA * matA directly into \c matB.
<table>
<tr><th>Example</th><th>Output</th></tr>
<tr>
<td>\include TopicAliasing_mult2.cpp</td>
<td>\verbinclude TopicAliasing_mult2.out</td>
</tr>
</table>
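Sketched usage of noalias() when the destination really is a different matrix:
\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::MatrixXf matA(2,2), matB(2,2);
  matA << 2, 0,
          0, 2;
  // No aliasing here: the result goes into matB, so Eigen may skip the temporary
  // and write the product directly into the destination.
  matB.noalias() = matA * matA;
  std::cout << matB << "\n";
  return 0;
}
\endcode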
Of course, you should not use \c noalias() when there is in fact aliasing taking place. If you do, then you may get wrong results:
<table>
<tr><th>Example</th><th>Output</th></tr>
<tr>
<td>\include TopicAliasing_mult3.cpp</td>
<td>\verbinclude TopicAliasing_mult3.out</td>
</tr>
</table>
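Sketch of the misuse being warned about (values are ours):
\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::MatrixXf matA(2,2);
  matA << 2, 0,
          0, 2;
  // Incorrect: noalias() promises there is no aliasing, yet matA appears on both sides,
  // so the product is written into matA while matA is still being read.
  matA.noalias() = matA * matA;
  std::cout << matA << "\n";   // the printed result is wrong
  return 0;
}
\endcode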
Moreover, starting in Eigen 3.3, aliasing is \b not assumed if the destination matrix is resized and the product is not directly assigned to the destination. Therefore, the following example is also wrong:
<table>
<tr><th>Example</th><th>Output</th></tr>
<tr>
<td>\include TopicAliasing_mult4.cpp</td>
<td>\verbinclude TopicAliasing_mult4.out</td>
</tr>
</table>
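Sketch of this pitfall (sizes and values are ours): the destination is resized and the product is buried inside a larger expression, so no temporary is introduced.
\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::MatrixXf A(2,2), B(3,2);
  A << 2, 0,
       0, 2;
  B << 1, 2,
       3, 4,
       5, 6;
  // B*A is 3x2, so A is resized; since Eigen 3.3 aliasing is not assumed here,
  // and A is overwritten while it is still being read -- wrong result.
  A = (B * A).cwiseAbs();
  std::cout << A << "\n";
  return 0;
}
\endcode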
As for any aliasing issue, you can resolve it by explicitly evaluating the expression prior to assignment:
<table>
<tr><th>Example</th><th>Output</th></tr>
<tr>
<td>\include TopicAliasing_mult5.cpp</td>
<td>\verbinclude TopicAliasing_mult5.out</td>
</tr>
</table>
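And the corresponding fix, evaluating the expression explicitly first (same illustrative values):
\code
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::MatrixXf A(2,2), B(3,2);
  A << 2, 0,
       0, 2;
  B << 1, 2,
       3, 4,
       5, 6;
  // eval() creates the 3x2 result in a temporary before A is touched.
  A = (B * A).cwiseAbs().eval();
  std::cout << A << "\n";
  return 0;
}
\endcode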
\section TopicAliasingSummary Summary Aliasing occurs when the same matrix or array coefficients appear both on the left- and the right-hand side of an assignment operator. - Aliasing is harmless with coefficient-wise computations; this includes scalar multiplication and matrix or array addition. - When you multiply two matrices, %Eigen assumes that aliasing occurs. If you know that there is no aliasing, then you can use \link MatrixBase::noalias() noalias()\endlink. - In all other situations, %Eigen assumes that there is no aliasing issue and thus gives the wrong result if aliasing does in fact occur. To prevent this, you have to use \link DenseBase::eval() eval() \endlink or one of the xxxInPlace() functions. */ } piqp-octave/src/eigen/doc/TopicMultithreading.dox0000644000175100001770000000724514107270226022002 0ustar runnerdockernamespace Eigen { /** \page TopicMultiThreading Eigen and multi-threading \section TopicMultiThreading_MakingEigenMT Make Eigen run in parallel Some %Eigen's algorithms can exploit the multiple cores present in your hardware. To this end, it is enough to enable OpenMP on your compiler, for instance: - GCC: \c -fopenmp - ICC: \c -openmp - MSVC: check the respective option in the build properties. You can control the number of threads that will be used using either the OpenMP API or %Eigen's API using the following priority: \code OMP_NUM_THREADS=n ./my_program omp_set_num_threads(n); Eigen::setNbThreads(n); \endcode Unless `setNbThreads` has been called, %Eigen uses the number of threads specified by OpenMP. You can restore this behavior by calling `setNbThreads(0);`. You can query the number of threads that will be used with: \code n = Eigen::nbThreads( ); \endcode You can disable %Eigen's multi threading at compile time by defining the \link TopicPreprocessorDirectivesPerformance EIGEN_DONT_PARALLELIZE \endlink preprocessor token. Currently, the following algorithms can make use of multi-threading: - general dense matrix - matrix products - PartialPivLU - row-major-sparse * dense vector/matrix products - ConjugateGradient with \c Lower|Upper as the \c UpLo template parameter. - BiCGSTAB with a row-major sparse matrix format. - LeastSquaresConjugateGradient \warning On most OS it is very important to limit the number of threads to the number of physical cores, otherwise significant slowdowns are expected, especially for operations involving dense matrices. Indeed, the principle of hyper-threading is to run multiple threads (in most cases 2) on a single core in an interleaved manner. However, %Eigen's matrix-matrix product kernel is fully optimized and already exploits nearly 100% of the CPU capacity. Consequently, there is no room for running multiple such threads on a single core, and the performance would drops significantly because of cache pollution and other sources of overheads. At this stage of reading you're probably wondering why %Eigen does not limit itself to the number of physical cores? This is simply because OpenMP does not allow to know the number of physical cores, and thus %Eigen will launch as many threads as cores reported by OpenMP. \section TopicMultiThreading_UsingEigenWithMT Using Eigen in a multi-threaded application In the case your own application is multithreaded, and multiple threads make calls to %Eigen, then you have to initialize %Eigen by calling the following routine \b before creating the threads: \code #include int main(int argc, char** argv) { Eigen::initParallel(); ... 
} \endcode \note With %Eigen 3.3, and a fully C++11 compliant compiler (i.e., thread-safe static local variable initialization), then calling \c initParallel() is optional. \warning Note that all functions generating random matrices are \b not re-entrant nor thread-safe. Those include DenseBase::Random(), and DenseBase::setRandom() despite a call to `Eigen::initParallel()`. This is because these functions are based on `std::rand` which is not re-entrant. For thread-safe random generator, we recommend the use of c++11 random generators (\link DenseBase::NullaryExpr(Index, const CustomNullaryOp&) example \endlink) or `boost::random`. In the case your application is parallelized with OpenMP, you might want to disable %Eigen's own parallelization as detailed in the previous section. \warning Using OpenMP with custom scalar types that might throw exceptions can lead to unexpected behaviour in the event of throwing. */ } piqp-octave/src/eigen/doc/AsciiQuickReference.txt0000644000175100001770000002563514107270226021717 0ustar runnerdocker// A simple quickref for Eigen. Add anything that's missing. // Main author: Keir Mierle #include Matrix A; // Fixed rows and cols. Same as Matrix3d. Matrix B; // Fixed rows, dynamic cols. Matrix C; // Full dynamic. Same as MatrixXd. Matrix E; // Row major; default is column-major. Matrix3f P, Q, R; // 3x3 float matrix. Vector3f x, y, z; // 3x1 float matrix. RowVector3f a, b, c; // 1x3 float matrix. VectorXd v; // Dynamic column vector of doubles double s; // Basic usage // Eigen // Matlab // comments x.size() // length(x) // vector size C.rows() // size(C,1) // number of rows C.cols() // size(C,2) // number of columns x(i) // x(i+1) // Matlab is 1-based C(i,j) // C(i+1,j+1) // A.resize(4, 4); // Runtime error if assertions are on. B.resize(4, 9); // Runtime error if assertions are on. A.resize(3, 3); // Ok; size didn't change. B.resize(3, 9); // Ok; only dynamic cols changed. A << 1, 2, 3, // Initialize A. The elements can also be 4, 5, 6, // matrices, which are stacked along cols 7, 8, 9; // and then the rows are stacked. B << A, A, A; // B is three horizontally stacked A's. A.fill(10); // Fill A with all 10's. // Eigen // Matlab MatrixXd::Identity(rows,cols) // eye(rows,cols) C.setIdentity(rows,cols) // C = eye(rows,cols) MatrixXd::Zero(rows,cols) // zeros(rows,cols) C.setZero(rows,cols) // C = zeros(rows,cols) MatrixXd::Ones(rows,cols) // ones(rows,cols) C.setOnes(rows,cols) // C = ones(rows,cols) MatrixXd::Random(rows,cols) // rand(rows,cols)*2-1 // MatrixXd::Random returns uniform random numbers in (-1, 1). C.setRandom(rows,cols) // C = rand(rows,cols)*2-1 VectorXd::LinSpaced(size,low,high) // linspace(low,high,size)' v.setLinSpaced(size,low,high) // v = linspace(low,high,size)' VectorXi::LinSpaced(((hi-low)/step)+1, // low:step:hi low,low+step*(size-1)) // // Matrix slicing and blocks. All expressions listed here are read/write. // Templated size versions are faster. Note that Matlab is 1-based (a size N // vector is x(1)...x(N)). 
/******************************************************************************/ /* PLEASE HELP US IMPROVING THIS SECTION */ /* Eigen 3.4 supports a much improved API for sub-matrices, including, */ /* slicing and indexing from arrays: */ /* http://eigen.tuxfamily.org/dox-devel/group__TutorialSlicingIndexing.html */ /******************************************************************************/ // Eigen // Matlab x.head(n) // x(1:n) x.head() // x(1:n) x.tail(n) // x(end - n + 1: end) x.tail() // x(end - n + 1: end) x.segment(i, n) // x(i+1 : i+n) x.segment(i) // x(i+1 : i+n) P.block(i, j, rows, cols) // P(i+1 : i+rows, j+1 : j+cols) P.block(i, j) // P(i+1 : i+rows, j+1 : j+cols) P.row(i) // P(i+1, :) P.col(j) // P(:, j+1) P.leftCols() // P(:, 1:cols) P.leftCols(cols) // P(:, 1:cols) P.middleCols(j) // P(:, j+1:j+cols) P.middleCols(j, cols) // P(:, j+1:j+cols) P.rightCols() // P(:, end-cols+1:end) P.rightCols(cols) // P(:, end-cols+1:end) P.topRows() // P(1:rows, :) P.topRows(rows) // P(1:rows, :) P.middleRows(i) // P(i+1:i+rows, :) P.middleRows(i, rows) // P(i+1:i+rows, :) P.bottomRows() // P(end-rows+1:end, :) P.bottomRows(rows) // P(end-rows+1:end, :) P.topLeftCorner(rows, cols) // P(1:rows, 1:cols) P.topRightCorner(rows, cols) // P(1:rows, end-cols+1:end) P.bottomLeftCorner(rows, cols) // P(end-rows+1:end, 1:cols) P.bottomRightCorner(rows, cols) // P(end-rows+1:end, end-cols+1:end) P.topLeftCorner() // P(1:rows, 1:cols) P.topRightCorner() // P(1:rows, end-cols+1:end) P.bottomLeftCorner() // P(end-rows+1:end, 1:cols) P.bottomRightCorner() // P(end-rows+1:end, end-cols+1:end) // Of particular note is Eigen's swap function which is highly optimized. // Eigen // Matlab R.row(i) = P.col(j); // R(i, :) = P(:, j) R.col(j1).swap(mat1.col(j2)); // R(:, [j1 j2]) = R(:, [j2, j1]) // Views, transpose, etc; /******************************************************************************/ /* PLEASE HELP US IMPROVING THIS SECTION */ /* Eigen 3.4 supports a new API for reshaping: */ /* http://eigen.tuxfamily.org/dox-devel/group__TutorialReshape.html */ /******************************************************************************/ // Eigen // Matlab R.adjoint() // R' R.transpose() // R.' or conj(R') // Read-write R.diagonal() // diag(R) // Read-write x.asDiagonal() // diag(x) R.transpose().colwise().reverse() // rot90(R) // Read-write R.rowwise().reverse() // fliplr(R) R.colwise().reverse() // flipud(R) R.replicate(i,j) // repmat(P,i,j) // All the same as Matlab, but matlab doesn't have *= style operators. // Matrix-vector. Matrix-matrix. Matrix-scalar. 
y = M*x; R = P*Q; R = P*s; a = b*M; R = P - Q; R = s*P; a *= M; R = P + Q; R = P/s; R *= Q; R = s*P; R += Q; R *= s; R -= Q; R /= s; // Vectorized operations on each element independently // Eigen // Matlab R = P.cwiseProduct(Q); // R = P .* Q R = P.array() * s.array(); // R = P .* s R = P.cwiseQuotient(Q); // R = P ./ Q R = P.array() / Q.array(); // R = P ./ Q R = P.array() + s.array(); // R = P + s R = P.array() - s.array(); // R = P - s R.array() += s; // R = R + s R.array() -= s; // R = R - s R.array() < Q.array(); // R < Q R.array() <= Q.array(); // R <= Q R.cwiseInverse(); // 1 ./ P R.array().inverse(); // 1 ./ P R.array().sin() // sin(P) R.array().cos() // cos(P) R.array().pow(s) // P .^ s R.array().square() // P .^ 2 R.array().cube() // P .^ 3 R.cwiseSqrt() // sqrt(P) R.array().sqrt() // sqrt(P) R.array().exp() // exp(P) R.array().log() // log(P) R.cwiseMax(P) // max(R, P) R.array().max(P.array()) // max(R, P) R.cwiseMin(P) // min(R, P) R.array().min(P.array()) // min(R, P) R.cwiseAbs() // abs(P) R.array().abs() // abs(P) R.cwiseAbs2() // abs(P.^2) R.array().abs2() // abs(P.^2) (R.array() < s).select(P,Q ); // (R < s ? P : Q) R = (Q.array()==0).select(P,R) // R(Q==0) = P(Q==0) R = P.unaryExpr(ptr_fun(func)) // R = arrayfun(func, P) // with: scalar func(const scalar &x); // Reductions. int r, c; // Eigen // Matlab R.minCoeff() // min(R(:)) R.maxCoeff() // max(R(:)) s = R.minCoeff(&r, &c) // [s, i] = min(R(:)); [r, c] = ind2sub(size(R), i); s = R.maxCoeff(&r, &c) // [s, i] = max(R(:)); [r, c] = ind2sub(size(R), i); R.sum() // sum(R(:)) R.colwise().sum() // sum(R) R.rowwise().sum() // sum(R, 2) or sum(R')' R.prod() // prod(R(:)) R.colwise().prod() // prod(R) R.rowwise().prod() // prod(R, 2) or prod(R')' R.trace() // trace(R) R.all() // all(R(:)) R.colwise().all() // all(R) R.rowwise().all() // all(R, 2) R.any() // any(R(:)) R.colwise().any() // any(R) R.rowwise().any() // any(R, 2) // Dot products, norms, etc. // Eigen // Matlab x.norm() // norm(x). Note that norm(R) doesn't work in Eigen. x.squaredNorm() // dot(x, x) Note the equivalence is not true for complex x.dot(y) // dot(x, y) x.cross(y) // cross(x, y) Requires #include //// Type conversion // Eigen // Matlab A.cast(); // double(A) A.cast(); // single(A) A.cast(); // int32(A) A.real(); // real(A) A.imag(); // imag(A) // if the original type equals destination type, no work is done // Note that for most operations Eigen requires all operands to have the same type: MatrixXf F = MatrixXf::Zero(3,3); A += F; // illegal in Eigen. In Matlab A = A+F is allowed A += F.cast(); // F converted to double and then added (generally, conversion happens on-the-fly) // Eigen can map existing memory into Eigen matrices. float array[3]; Vector3f::Map(array).fill(10); // create a temporary Map over array and sets entries to 10 int data[4] = {1, 2, 3, 4}; Matrix2i mat2x2(data); // copies data into mat2x2 Matrix2i::Map(data) = 2*mat2x2; // overwrite elements of data with 2*mat2x2 MatrixXi::Map(data, 2, 2) += mat2x2; // adds mat2x2 to elements of data (alternative syntax if size is not know at compile time) // Solve Ax = b. Result stored in x. Matlab: x = A \ b. x = A.ldlt().solve(b)); // A sym. p.s.d. #include x = A.llt() .solve(b)); // A sym. p.d. #include x = A.lu() .solve(b)); // Stable and fast. #include x = A.qr() .solve(b)); // No pivoting. #include x = A.svd() .solve(b)); // Stable, slowest. 
#include // .ldlt() -> .matrixL() and .matrixD() // .llt() -> .matrixL() // .lu() -> .matrixL() and .matrixU() // .qr() -> .matrixQ() and .matrixR() // .svd() -> .matrixU(), .singularValues(), and .matrixV() // Eigenvalue problems // Eigen // Matlab A.eigenvalues(); // eig(A); EigenSolver eig(A); // [vec val] = eig(A) eig.eigenvalues(); // diag(val) eig.eigenvectors(); // vec // For self-adjoint matrices use SelfAdjointEigenSolver<> piqp-octave/src/eigen/doc/TutorialSparse_example_details.dox0000644000175100001770000000013114107270226024207 0ustar runnerdocker/** \page TutorialSparse_example_details \include Tutorial_sparse_example_details.cpp */ piqp-octave/src/eigen/doc/CustomizingEigen_InheritingMatrix.dox0000644000175100001770000000247114107270226024647 0ustar runnerdockernamespace Eigen { /** \page TopicCustomizing_InheritingMatrix Inheriting from Matrix Before inheriting from Matrix, be really, I mean REALLY, sure that using EIGEN_MATRIX_PLUGIN is not what you really want (see previous section). If you just need to add few members to Matrix, this is the way to go. An example of when you actually need to inherit Matrix, is when you have several layers of heritage such as MyVerySpecificVector1, MyVerySpecificVector2 -> MyVector1 -> Matrix and MyVerySpecificVector3, MyVerySpecificVector4 -> MyVector2 -> Matrix. In order for your object to work within the %Eigen framework, you need to define a few members in your inherited class. Here is a minimalistic example: \include CustomizingEigen_Inheritance.cpp Output: \verbinclude CustomizingEigen_Inheritance.out This is the kind of error you can get if you don't provide those methods \verbatim error: no match for ‘operator=’ in ‘v = Eigen::operator*( const Eigen::MatrixBase >::Scalar&, const Eigen::MatrixBase >::StorageBaseType&) (((const Eigen::MatrixBase >::StorageBaseType&) ((const Eigen::MatrixBase >::StorageBaseType*)(& v))))’ \endverbatim */ } piqp-octave/src/eigen/doc/eigendoxy_footer.html.in0000644000175100001770000000125014107270226022141 0ustar runnerdocker piqp-octave/src/eigen/doc/special_examples/0000755000175100001770000000000014107270226020615 5ustar runnerdockerpiqp-octave/src/eigen/doc/special_examples/random_cpp11.cpp0000644000175100001770000000052014107270226023602 0ustar runnerdocker#include #include #include using namespace Eigen; int main() { std::default_random_engine generator; std::poisson_distribution distribution(4.1); auto poisson = [&] () {return distribution(generator);}; RowVectorXi v = RowVectorXi::NullaryExpr(10, poisson ); std::cout << v << "\n"; } piqp-octave/src/eigen/doc/special_examples/Tutorial_sparse_example.cpp0000644000175100001770000000224014107270226026212 0ustar runnerdocker#include #include #include typedef Eigen::SparseMatrix SpMat; // declares a column-major sparse matrix type of double typedef Eigen::Triplet T; void buildProblem(std::vector& coefficients, Eigen::VectorXd& b, int n); void saveAsBitmap(const Eigen::VectorXd& x, int n, const char* filename); int main(int argc, char** argv) { if(argc!=2) { std::cerr << "Error: expected one and only one argument.\n"; return -1; } int n = 300; // size of the image int m = n*n; // number of unknowns (=number of pixels) // Assembly: std::vector coefficients; // list of non-zeros coefficients Eigen::VectorXd b(m); // the right hand side-vector resulting from the constraints buildProblem(coefficients, b, n); SpMat A(m,m); A.setFromTriplets(coefficients.begin(), coefficients.end()); // Solving: Eigen::SimplicialCholesky chol(A); // performs a Cholesky 
factorization of A Eigen::VectorXd x = chol.solve(b); // use the factorization to solve for the given right hand side // Export the result to a file: saveAsBitmap(x, n, argv[1]); return 0; } piqp-octave/src/eigen/doc/special_examples/CMakeLists.txt0000644000175100001770000000213214107270226023353 0ustar runnerdockerif(NOT EIGEN_TEST_NOQT) find_package(Qt4) if(QT4_FOUND) include(${QT_USE_FILE}) endif() endif() if(QT4_FOUND) add_executable(Tutorial_sparse_example Tutorial_sparse_example.cpp Tutorial_sparse_example_details.cpp) target_link_libraries(Tutorial_sparse_example ${EIGEN_STANDARD_LIBRARIES_TO_LINK_TO} ${QT_QTCORE_LIBRARY} ${QT_QTGUI_LIBRARY}) add_custom_command( TARGET Tutorial_sparse_example POST_BUILD COMMAND ${CMAKE_COMMAND} -E make_directory ${CMAKE_CURRENT_BINARY_DIR}/../html/ COMMAND Tutorial_sparse_example ARGS ${CMAKE_CURRENT_BINARY_DIR}/../html/Tutorial_sparse_example.jpeg ) add_dependencies(all_examples Tutorial_sparse_example) endif() if(EIGEN_COMPILER_SUPPORT_CPP11) add_executable(random_cpp11 random_cpp11.cpp) target_link_libraries(random_cpp11 ${EIGEN_STANDARD_LIBRARIES_TO_LINK_TO}) add_dependencies(all_examples random_cpp11) ei_add_target_property(random_cpp11 COMPILE_FLAGS "-std=c++11") add_custom_command( TARGET random_cpp11 POST_BUILD COMMAND random_cpp11 ARGS >${CMAKE_CURRENT_BINARY_DIR}/random_cpp11.out ) endif() piqp-octave/src/eigen/doc/special_examples/Tutorial_sparse_example_details.cpp0000644000175100001770000000305014107270226027717 0ustar runnerdocker#include #include #include typedef Eigen::SparseMatrix SpMat; // declares a column-major sparse matrix type of double typedef Eigen::Triplet T; void insertCoefficient(int id, int i, int j, double w, std::vector& coeffs, Eigen::VectorXd& b, const Eigen::VectorXd& boundary) { int n = int(boundary.size()); int id1 = i+j*n; if(i==-1 || i==n) b(id) -= w * boundary(j); // constrained coefficient else if(j==-1 || j==n) b(id) -= w * boundary(i); // constrained coefficient else coeffs.push_back(T(id,id1,w)); // unknown coefficient } void buildProblem(std::vector& coefficients, Eigen::VectorXd& b, int n) { b.setZero(); Eigen::ArrayXd boundary = Eigen::ArrayXd::LinSpaced(n, 0,M_PI).sin().pow(2); for(int j=0; j bits = (x*255).cast(); QImage img(bits.data(), n,n,QImage::Format_Indexed8); img.setColorCount(256); for(int i=0;i<256;i++) img.setColor(i,qRgb(i,i,i)); img.save(filename); } piqp-octave/src/eigen/doc/WrongStackAlignment.dox0000644000175100001770000000555414107270226021745 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TopicWrongStackAlignment Compiler making a wrong assumption on stack alignment

It appears that this was a GCC bug that has been fixed in GCC 4.5. If you hit this issue, please upgrade to GCC 4.5 and report to us, so we can update this page.

This is an issue that, so far, we met only with GCC on Windows: for instance, MinGW and TDM-GCC. By default, in a function like this, \code void foo() { Eigen::Quaternionf q; //... } \endcode GCC assumes that the stack is already 16-byte-aligned so that the object \a q will be created at a 16-byte-aligned location. For this reason, it doesn't take any special care to explicitly align the object \a q, as Eigen requires. The problem is that, in some particular cases, this assumption can be wrong on Windows, where the stack is only guaranteed to have 4-byte alignment. Indeed, even though GCC takes care of aligning the stack in the main function and does its best to keep it aligned, when a function is called from another thread or from a binary compiled with another compiler, the stack alignment can be corrupted. This results in the object 'q' being created at an unaligned location, making your program crash with the \ref TopicUnalignedArrayAssert "assertion on unaligned arrays". So far we found the three following solutions. \section sec_sol1 Local solution A local solution is to mark such a function with this attribute: \code __attribute__((force_align_arg_pointer)) void foo() { Eigen::Quaternionf q; //... } \endcode Read this GCC documentation to understand what this does. Of course this should only be done on GCC on Windows, so for portability you'll have to encapsulate this in a macro which you leave empty on other platforms. The advantage of this solution is that you can finely select which function might have a corrupted stack alignment. Of course on the downside this has to be done for every such function, so you may prefer one of the following two global solutions. \section sec_sol2 Global solutions A global solution is to edit your project so that when compiling with GCC on Windows, you pass this option to GCC: \code -mincoming-stack-boundary=2 \endcode Explanation: this tells GCC that the stack is only required to be aligned to 2^2=4 bytes, so that GCC now knows that it really must take extra care to honor the 16 byte alignment of \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types" when needed. Another global solution is to pass this option to gcc: \code -mstackrealign \endcode which has the same effect than adding the \c force_align_arg_pointer attribute to all functions. These global solutions are easy to use, but note that they may slowdown your program because they lead to extra prologue/epilogue instructions for every function. */ } piqp-octave/src/eigen/doc/FixedSizeVectorizable.dox0000644000175100001770000000356214107270226022265 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TopicFixedSizeVectorizable Fixed-size vectorizable %Eigen objects The goal of this page is to explain what we mean by "fixed-size vectorizable". \section FixedSizeVectorizable_summary Executive Summary An Eigen object is called "fixed-size vectorizable" if it has fixed size and that size is a multiple of 16 bytes. Examples include: \li Eigen::Vector2d \li Eigen::Vector4d \li Eigen::Vector4f \li Eigen::Matrix2d \li Eigen::Matrix2f \li Eigen::Matrix4d \li Eigen::Matrix4f \li Eigen::Affine3d \li Eigen::Affine3f \li Eigen::Quaterniond \li Eigen::Quaternionf \section FixedSizeVectorizable_explanation Explanation First, "fixed-size" should be clear: an %Eigen object has fixed size if its number of rows and its number of columns are fixed at compile-time. So for example \ref Matrix3f has fixed size, but \ref MatrixXf doesn't (the opposite of fixed-size is dynamic-size). 
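For illustration only (variable names are ours), the distinction looks like this:
\code
#include <Eigen/Dense>

int main() {
  Eigen::Matrix3f a;        // fixed-size: 3x3 floats, dimensions known at compile time
  Eigen::Vector4d b;        // fixed-size and vectorizable: 4 doubles = 32 bytes, a multiple of 16
  Eigen::MatrixXf c(3,3);   // dynamic-size: dimensions chosen at run time, coefficients on the heap
  (void)a; (void)b; (void)c;
  return 0;
}
\endcode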
The array of coefficients of a fixed-size %Eigen object is a plain "static array", it is not dynamically allocated. For example, the data behind a \ref Matrix4f is just a "float array[16]". Fixed-size objects are typically very small, which means that we want to handle them with zero runtime overhead -- both in terms of memory usage and of speed. Now, vectorization works with 128-bit packets (e.g., SSE, AltiVec, NEON), 256-bit packets (e.g., AVX), or 512-bit packets (e.g., AVX512). Moreover, for performance reasons, these packets are most efficiently read and written if they have the same alignment as the packet size, that is 16 bytes, 32 bytes, and 64 bytes respectively. So it turns out that the best way that fixed-size %Eigen objects can be vectorized, is if their size is a multiple of 16 bytes (or more). %Eigen will then request 16-byte alignment (or more) for these objects, and henceforth rely on these objects being aligned to achieve maximal efficiency. */ } piqp-octave/src/eigen/doc/InsideEigenExample.dox0000644000175100001770000007357114107270226021527 0ustar runnerdockernamespace Eigen { /** \page TopicInsideEigenExample What happens inside Eigen, on a simple example \eigenAutoToc
Consider the following example program: \code #include int main() { int size = 50; // VectorXf is a vector of floats, with dynamic size. Eigen::VectorXf u(size), v(size), w(size); u = v + w; } \endcode The goal of this page is to understand how Eigen compiles it, assuming that SSE2 vectorization is enabled (GCC option -msse2). \section WhyInteresting Why it's interesting Maybe you think, that the above example program is so simple, that compiling it shouldn't involve anything interesting. So before starting, let us explain what is nontrivial in compiling it correctly -- that is, producing optimized code -- so that the complexity of Eigen, that we'll explain here, is really useful. Look at the line of code \code u = v + w; // (*) \endcode The first important thing about compiling it, is that the arrays should be traversed only once, like \code for(int i = 0; i < size; i++) u[i] = v[i] + w[i]; \endcode The problem is that if we make a naive C++ library where the VectorXf class has an operator+ returning a VectorXf, then the line of code (*) will amount to: \code VectorXf tmp = v + w; VectorXf u = tmp; \endcode Obviously, the introduction of the temporary \a tmp here is useless. It has a very bad effect on performance, first because the creation of \a tmp requires a dynamic memory allocation in this context, and second as there are now two for loops: \code for(int i = 0; i < size; i++) tmp[i] = v[i] + w[i]; for(int i = 0; i < size; i++) u[i] = tmp[i]; \endcode Traversing the arrays twice instead of once is terrible for performance, as it means that we do many redundant memory accesses. The second important thing about compiling the above program, is to make correct use of SSE2 instructions. Notice that Eigen also supports AltiVec and that all the discussion that we make here applies also to AltiVec. SSE2, like AltiVec, is a set of instructions allowing to perform computations on packets of 128 bits at once. Since a float is 32 bits, this means that SSE2 instructions can handle 4 floats at once. This means that, if correctly used, they can make our computation go up to 4x faster. However, in the above program, we have chosen size=50, so our vectors consist of 50 float's, and 50 is not a multiple of 4. This means that we cannot hope to do all of that computation using SSE2 instructions. The second best thing, to which we should aim, is to handle the 48 first coefficients with SSE2 instructions, since 48 is the biggest multiple of 4 below 50, and then handle separately, without SSE2, the 49th and 50th coefficients. Something like this: \code for(int i = 0; i < 4*(size/4); i+=4) u.packet(i) = v.packet(i) + w.packet(i); for(int i = 4*(size/4); i < size; i++) u[i] = v[i] + w[i]; \endcode So let us look line by line at our example program, and let's follow Eigen as it compiles it. \section ConstructingVectors Constructing vectors Let's analyze the first line: \code Eigen::VectorXf u(size), v(size), w(size); \endcode First of all, VectorXf is the following typedef: \code typedef Matrix VectorXf; \endcode The class template Matrix is declared in src/Core/util/ForwardDeclarations.h with 6 template parameters, but the last 3 are automatically determined by the first 3. So you don't need to worry about them for now. Here, Matrix\ means a matrix of floats, with a dynamic number of rows and 1 column. The Matrix class inherits a base class, MatrixBase. Don't worry about it, for now it suffices to say that MatrixBase is what unifies matrices/vectors and all the expressions types -- more on that below. 
When we do \code Eigen::VectorXf u(size); \endcode the constructor that is called is Matrix::Matrix(int), in src/Core/Matrix.h. Besides some assertions, all it does is to construct the \a m_storage member, which is of type DenseStorage\. You may wonder, isn't it overengineering to have the storage in a separate class? The reason is that the Matrix class template covers all kinds of matrices and vector: both fixed-size and dynamic-size. The storage method is not the same in these two cases. For fixed-size, the matrix coefficients are stored as a plain member array. For dynamic-size, the coefficients will be stored as a pointer to a dynamically-allocated array. Because of this, we need to abstract storage away from the Matrix class. That's DenseStorage. Let's look at this constructor, in src/Core/DenseStorage.h. You can see that there are many partial template specializations of DenseStorages here, treating separately the cases where dimensions are Dynamic or fixed at compile-time. The partial specialization that we are looking at is: \code template class DenseStorage \endcode Here, the constructor called is DenseStorage::DenseStorage(int size, int rows, int columns) with size=50, rows=50, columns=1. Here is this constructor: \code inline DenseStorage(int size, int rows, int) : m_data(internal::aligned_new(size)), m_rows(rows) {} \endcode Here, the \a m_data member is the actual array of coefficients of the matrix. As you see, it is dynamically allocated. Rather than calling new[] or malloc(), as you can see, we have our own internal::aligned_new defined in src/Core/util/Memory.h. What it does is that if vectorization is enabled, then it uses a platform-specific call to allocate a 128-bit-aligned array, as that is very useful for vectorization with both SSE2 and AltiVec. If vectorization is disabled, it amounts to the standard new[]. As you can see, the constructor also sets the \a m_rows member to \a size. Notice that there is no \a m_columns member: indeed, in this partial specialization of DenseStorage, we know the number of columns at compile-time, since the _Cols template parameter is different from Dynamic. Namely, in our case, _Cols is 1, which is to say that our vector is just a matrix with 1 column. Hence, there is no need to store the number of columns as a runtime variable. When you call VectorXf::data() to get the pointer to the array of coefficients, it returns DenseStorage::data() which returns the \a m_data member. When you call VectorXf::size() to get the size of the vector, this is actually a method in the base class MatrixBase. It determines that the vector is a column-vector, since ColsAtCompileTime==1 (this comes from the template parameters in the typedef VectorXf). It deduces that the size is the number of rows, so it returns VectorXf::rows(), which returns DenseStorage::rows(), which returns the \a m_rows member, which was set to \a size by the constructor. \section ConstructionOfSumXpr Construction of the sum expression Now that our vectors are constructed, let's move on to the next line: \code u = v + w; \endcode The executive summary is that operator+ returns a "sum of vectors" expression, but doesn't actually perform the computation. It is the operator=, whose call occurs thereafter, that does the computation. Let us now see what Eigen does when it sees this: \code v + w \endcode Here, v and w are of type VectorXf, which is a typedef for a specialization of Matrix (as we explained above), which is a subclass of MatrixBase. 
So what is being called is \code MatrixBase::operator+(const MatrixBase&) \endcode The return type of this operator is \code CwiseBinaryOp, VectorXf, VectorXf> \endcode The CwiseBinaryOp class is our first encounter with an expression template. As we said, the operator+ doesn't by itself perform any computation, it just returns an abstract "sum of vectors" expression. Since there are also "difference of vectors" and "coefficient-wise product of vectors" expressions, we unify them all as "coefficient-wise binary operations", which we abbreviate as "CwiseBinaryOp". "Coefficient-wise" means that the operations is performed coefficient by coefficient. "binary" means that there are two operands -- we are adding two vectors with one another. Now you might ask, what if we did something like \code v + w + u; \endcode The first v + w would return a CwiseBinaryOp as above, so in order for this to compile, we'd need to define an operator+ also in the class CwiseBinaryOp... at this point it starts looking like a nightmare: are we going to have to define all operators in each of the expression classes (as you guessed, CwiseBinaryOp is only one of many) ? This looks like a dead end! The solution is that CwiseBinaryOp itself, as well as Matrix and all the other expression types, is a subclass of MatrixBase. So it is enough to define once and for all the operators in class MatrixBase. Since MatrixBase is the common base class of different subclasses, the aspects that depend on the subclass must be abstracted from MatrixBase. This is called polymorphism. The classical approach to polymorphism in C++ is by means of virtual functions. This is dynamic polymorphism. Here we don't want dynamic polymorphism because the whole design of Eigen is based around the assumption that all the complexity, all the abstraction, gets resolved at compile-time. This is crucial: if the abstraction can't get resolved at compile-time, Eigen's compile-time optimization mechanisms become useless, not to mention that if that abstraction has to be resolved at runtime it'll incur an overhead by itself. Here, what we want is to have a single class MatrixBase as the base of many subclasses, in such a way that each MatrixBase object (be it a matrix, or vector, or any kind of expression) knows at compile-time (as opposed to run-time) of which particular subclass it is an object (i.e. whether it is a matrix, or an expression, and what kind of expression). The solution is the Curiously Recurring Template Pattern. Let's do the break now. Hopefully you can read this wikipedia page during the break if needed, but it won't be allowed during the exam. In short, MatrixBase takes a template parameter \a Derived. Whenever we define a subclass Subclass, we actually make Subclass inherit MatrixBase\. The point is that different subclasses inherit different MatrixBase types. Thanks to this, whenever we have an object of a subclass, and we call on it some MatrixBase method, we still remember even from inside the MatrixBase method which particular subclass we're talking about. This means that we can put almost all the methods and operators in the base class MatrixBase, and have only the bare minimum in the subclasses. If you look at the subclasses in Eigen, like for instance the CwiseBinaryOp class, they have very few methods. There are coeff() and sometimes coeffRef() methods for access to the coefficients, there are rows() and cols() methods returning the number of rows and columns, but there isn't much more than that. 
All the meat is in MatrixBase, so it only needs to be coded once for all kinds of expressions, matrices, and vectors. So let's end this digression and come back to the piece of code from our example program that we were currently analyzing, \code v + w \endcode Now that MatrixBase is a good friend, let's write fully the prototype of the operator+ that gets called here (this code is from src/Core/MatrixBase.h): \code template class MatrixBase { // ... template const CwiseBinaryOp::Scalar>, Derived, OtherDerived> operator+(const MatrixBase &other) const; // ... }; \endcode Here of course, \a Derived and \a OtherDerived are VectorXf. As we said, CwiseBinaryOp is also used for other operations such as substration, so it takes another template parameter determining the operation that will be applied to coefficients. This template parameter is a functor, that is, a class in which we have an operator() so it behaves like a function. Here, the functor used is internal::scalar_sum_op. It is defined in src/Core/Functors.h. Let us now explain the internal::traits here. The internal::scalar_sum_op class takes one template parameter: the type of the numbers to handle. Here of course we want to pass the scalar type (a.k.a. numeric type) of VectorXf, which is \c float. How do we determine which is the scalar type of \a Derived ? Throughout Eigen, all matrix and expression types define a typedef \a Scalar which gives its scalar type. For example, VectorXf::Scalar is a typedef for \c float. So here, if life was easy, we could find the numeric type of \a Derived as just \code typename Derived::Scalar \endcode Unfortunately, we can't do that here, as the compiler would complain that the type Derived hasn't yet been defined. So we use a workaround: in src/Core/util/ForwardDeclarations.h, we declared (not defined!) all our subclasses, like Matrix, and we also declared the following class template: \code template struct internal::traits; \endcode In src/Core/Matrix.h, right \em before the definition of class Matrix, we define a partial specialization of internal::traits for T=Matrix\. In this specialization of internal::traits, we define the Scalar typedef. So when we actually define Matrix, it is legal to refer to "typename internal::traits\::Scalar". Anyway, we have declared our operator+. In our case, where \a Derived and \a OtherDerived are VectorXf, the above declaration amounts to: \code class MatrixBase { // ... const CwiseBinaryOp, VectorXf, VectorXf> operator+(const MatrixBase &other) const; // ... }; \endcode Let's now jump to src/Core/CwiseBinaryOp.h to see how it is defined. As you can see there, all it does is to return a CwiseBinaryOp object, and this object is just storing references to the left-hand-side and right-hand-side expressions -- here, these are the vectors \a v and \a w. Well, the CwiseBinaryOp object is also storing an instance of the (empty) functor class, but you shouldn't worry about it as that is a minor implementation detail. Thus, the operator+ hasn't performed any actual computation. To summarize, the operation \a v + \a w just returned an object of type CwiseBinaryOp which did nothing else than just storing references to \a v and \a w. \section Assignment The assignment
PLEASE HELP US IMPROVING THIS SECTION. This page reflects how %Eigen worked until 3.2, but since %Eigen 3.3 the assignment is more sophisticated as it involves an Assignment expression, and the creation of so called evaluator which are responsible for the evaluation of each kind of expressions.
At this point, the expression \a v + \a w has finished evaluating, so, in the process of compiling the line of code \code u = v + w; \endcode we now enter the operator=. What operator= is being called here? The vector u is an object of class VectorXf, i.e. Matrix. In src/Core/Matrix.h, inside the definition of class Matrix, we see this: \code template inline Matrix& operator=(const MatrixBase& other) { eigen_assert(m_storage.data()!=0 && "you cannot use operator= with a non initialized matrix (instead use set()"); return Base::operator=(other.derived()); } \endcode Here, Base is a typedef for MatrixBase\. So, what is being called is the operator= of MatrixBase. Let's see its prototype in src/Core/MatrixBase.h: \code template Derived& operator=(const MatrixBase& other); \endcode Here, \a Derived is VectorXf (since u is a VectorXf) and \a OtherDerived is CwiseBinaryOp. More specifically, as explained in the previous section, \a OtherDerived is: \code CwiseBinaryOp, VectorXf, VectorXf> \endcode So the full prototype of the operator= being called is: \code VectorXf& MatrixBase::operator=(const MatrixBase, VectorXf, VectorXf> > & other); \endcode This operator= literally reads "copying a sum of two VectorXf's into another VectorXf". Let's now look at the implementation of this operator=. It resides in the file src/Core/Assign.h. What we can see there is: \code template template inline Derived& MatrixBase ::operator=(const MatrixBase& other) { return internal::assign_selector::run(derived(), other.derived()); } \endcode OK so our next task is to understand internal::assign_selector :) Here is its declaration (all that is still in the same file src/Core/Assign.h) \code template struct internal::assign_selector; \endcode So internal::assign_selector takes 4 template parameters, but the 2 last ones are automatically determined by the 2 first ones. EvalBeforeAssigning is here to enforce the EvalBeforeAssigningBit. As explained here, certain expressions have this flag which makes them automatically evaluate into temporaries before assigning them to another expression. This is the case of the Product expression, in order to avoid strange aliasing effects when doing "m = m * m;" However, of course here our CwiseBinaryOp expression doesn't have the EvalBeforeAssigningBit: we said since the beginning that we didn't want a temporary to be introduced here. So if you go to src/Core/CwiseBinaryOp.h, you'll see that the Flags in internal::traits\ don't include the EvalBeforeAssigningBit. The Flags member of CwiseBinaryOp is then imported from the internal::traits by the EIGEN_GENERIC_PUBLIC_INTERFACE macro. Anyway, here the template parameter EvalBeforeAssigning has the value \c false. NeedToTranspose is here for the case where the user wants to copy a row-vector into a column-vector. We allow this as a special exception to the general rule that in assignments we require the dimesions to match. Anyway, here both the left-hand and right-hand sides are column vectors, in the sense that ColsAtCompileTime is equal to 1. So NeedToTranspose is \c false too. 
So, here we are in the partial specialization: \code internal::assign_selector \endcode Here's how it is defined: \code template struct internal::assign_selector { static Derived& run(Derived& dst, const OtherDerived& other) { return dst.lazyAssign(other.derived()); } }; \endcode OK so now our next job is to understand how lazyAssign works :) \code template template inline Derived& MatrixBase ::lazyAssign(const MatrixBase& other) { EIGEN_STATIC_ASSERT_SAME_MATRIX_SIZE(Derived,OtherDerived) eigen_assert(rows() == other.rows() && cols() == other.cols()); internal::assign_impl::run(derived(),other.derived()); return derived(); } \endcode What do we see here? Some assertions, and then the only interesting line is: \code internal::assign_impl::run(derived(),other.derived()); \endcode OK so now we want to know what is inside internal::assign_impl. Here is its declaration: \code template::Vectorization, int Unrolling = internal::assign_traits::Unrolling> struct internal::assign_impl; \endcode Again, internal::assign_selector takes 4 template parameters, but the 2 last ones are automatically determined by the 2 first ones. These two parameters \a Vectorization and \a Unrolling are determined by a helper class internal::assign_traits. Its job is to determine which vectorization strategy to use (that is \a Vectorization) and which unrolling strategy to use (that is \a Unrolling). We'll not enter into the details of how these strategies are chosen (this is in the implementation of internal::assign_traits at the top of the same file). Let's just say that here \a Vectorization has the value \a LinearVectorization, and \a Unrolling has the value \a NoUnrolling (the latter is obvious since our vectors have dynamic size so there's no way to unroll the loop at compile-time). So the partial specialization of internal::assign_impl that we're looking at is: \code internal::assign_impl \endcode Here is how it's defined: \code template struct internal::assign_impl { static void run(Derived1 &dst, const Derived2 &src) { const int size = dst.size(); const int packetSize = internal::packet_traits::size; const int alignedStart = internal::assign_traits::DstIsAligned ? 0 : internal::first_aligned(&dst.coeffRef(0), size); const int alignedEnd = alignedStart + ((size-alignedStart)/packetSize)*packetSize; for(int index = 0; index < alignedStart; index++) dst.copyCoeff(index, src); for(int index = alignedStart; index < alignedEnd; index += packetSize) { dst.template copyPacket::SrcAlignment>(index, src); } for(int index = alignedEnd; index < size; index++) dst.copyCoeff(index, src); } }; \endcode Here's how it works. \a LinearVectorization means that the left-hand and right-hand side expression can be accessed linearly i.e. you can refer to their coefficients by one integer \a index, as opposed to having to refer to its coefficients by two integers \a row, \a column. As we said at the beginning, vectorization works with blocks of 4 floats. Here, \a PacketSize is 4. There are two potential problems that we need to deal with: \li first, vectorization works much better if the packets are 128-bit-aligned. This is especially important for write access. So when writing to the coefficients of \a dst, we want to group these coefficients by packets of 4 such that each of these packets is 128-bit-aligned. In general, this requires to skip a few coefficients at the beginning of \a dst. This is the purpose of \a alignedStart. We then copy these first few coefficients one by one, not by packets. 
However, in our case, the \a dst expression is a VectorXf and remember that in the construction of the vectors we allocated aligned arrays. Thanks to \a DstIsAligned, Eigen remembers that without having to do any runtime check, so \a alignedStart is zero and this part is avoided altogether. \li second, the number of coefficients to copy is not in general a multiple of \a packetSize. Here, there are 50 coefficients to copy and \a packetSize is 4. So we'll have to copy the last 2 coefficients one by one, not by packets. Here, \a alignedEnd is 48. Now come the actual loops. First, the vectorized part: the 48 first coefficients out of 50 will be copied by packets of 4: \code for(int index = alignedStart; index < alignedEnd; index += packetSize) { dst.template copyPacket::SrcAlignment>(index, src); } \endcode What is copyPacket? It is defined in src/Core/Coeffs.h: \code template template inline void MatrixBase::copyPacket(int index, const MatrixBase& other) { eigen_internal_assert(index >= 0 && index < size()); derived().template writePacket(index, other.derived().template packet(index)); } \endcode OK, what are writePacket() and packet() here? First, writePacket() here is a method on the left-hand side VectorXf. So we go to src/Core/Matrix.h to look at its definition: \code template inline void writePacket(int index, const PacketScalar& x) { internal::pstoret(m_storage.data() + index, x); } \endcode Here, \a StoreMode is \a #Aligned, indicating that we are doing a 128-bit-aligned write access, \a PacketScalar is a type representing a "SSE packet of 4 floats" and internal::pstoret is a function writing such a packet in memory. Their definitions are architecture-specific, we find them in src/Core/arch/SSE/PacketMath.h: The line in src/Core/arch/SSE/PacketMath.h that determines the PacketScalar type (via a typedef in Matrix.h) is: \code template<> struct internal::packet_traits { typedef __m128 type; enum {size=4}; }; \endcode Here, __m128 is a SSE-specific type. Notice that the enum \a size here is what was used to define \a packetSize above. And here is the implementation of internal::pstoret: \code template<> inline void internal::pstore(float* to, const __m128& from) { _mm_store_ps(to, from); } \endcode Here, __mm_store_ps is a SSE-specific intrinsic function, representing a single SSE instruction. The difference between internal::pstore and internal::pstoret is that internal::pstoret is a dispatcher handling both the aligned and unaligned cases, you find its definition in src/Core/GenericPacketMath.h: \code template inline void internal::pstoret(Scalar* to, const Packet& from) { if(LoadMode == Aligned) internal::pstore(to, from); else internal::pstoreu(to, from); } \endcode OK, that explains how writePacket() works. Now let's look into the packet() call. Remember that we are analyzing this line of code inside copyPacket(): \code derived().template writePacket(index, other.derived().template packet(index)); \endcode Here, \a other is our sum expression \a v + \a w. The .derived() is just casting from MatrixBase to the subclass which here is CwiseBinaryOp. So let's go to src/Core/CwiseBinaryOp.h: \code class CwiseBinaryOp { // ... template inline PacketScalar packet(int index) const { return m_functor.packetOp(m_lhs.template packet(index), m_rhs.template packet(index)); } }; \endcode Here, \a m_lhs is the vector \a v, and \a m_rhs is the vector \a w. So the packet() function here is Matrix::packet(). The template parameter \a LoadMode is \a #Aligned. 
So we're looking at
\code
class Matrix
{
  // ...
  template<int LoadMode>
  inline PacketScalar packet(int index) const
  {
    return internal::ploadt<Scalar, LoadMode>(m_storage.data() + index);
  }
};
\endcode
We let you look up the definition of internal::ploadt in GenericPacketMath.h and the internal::pload in src/Core/arch/SSE/PacketMath.h. It is very similar to the above for internal::pstore. Let's go back to CwiseBinaryOp::packet(). Once the packets from the vectors \a v and \a w have been returned, what does this function do? It calls m_functor.packetOp() on them. What is m_functor? Here we must remember what particular template specialization of CwiseBinaryOp we're dealing with:
\code
CwiseBinaryOp<internal::scalar_sum_op<float>, VectorXf, VectorXf>
\endcode
So m_functor is an object of the empty class internal::scalar_sum_op<float>. As we mentioned above, don't worry about why we constructed an object of this empty class at all -- it's an implementation detail, the point is that some other functors need to store member data. Anyway, internal::scalar_sum_op is defined in src/Core/Functors.h:
\code
template<typename Scalar> struct internal::scalar_sum_op EIGEN_EMPTY_STRUCT {
  inline const Scalar operator() (const Scalar& a, const Scalar& b) const { return a + b; }
  template<typename PacketScalar>
  inline const PacketScalar packetOp(const PacketScalar& a, const PacketScalar& b) const
  { return internal::padd(a,b); }
};
\endcode
As you can see, all that packetOp() does is to call internal::padd on the two packets. Here is the definition of internal::padd from src/Core/arch/SSE/PacketMath.h:
\code
template<> inline __m128 internal::padd(const __m128& a, const __m128& b) { return _mm_add_ps(a,b); }
\endcode
Here, _mm_add_ps is a SSE-specific intrinsic function, representing a single SSE instruction. To summarize, the loop
\code
for(int index = alignedStart; index < alignedEnd; index += packetSize)
{
  dst.template copyPacket<Derived2, Aligned, internal::assign_traits<Derived1,Derived2>::SrcAlignment>(index, src);
}
\endcode
has been compiled to the following code: for \a index going from 0 to 11 ( = 48/4 - 1), read the i-th packet (of 4 floats) from the vector v and the i-th packet from the vector w using two _mm_load_ps SSE instructions, then add them together using a _mm_add_ps instruction, then store the result using a _mm_store_ps instruction. There remains the second loop handling the last few (here, the last 2) coefficients:
\code
for(int index = alignedEnd; index < size; index++)
  dst.copyCoeff(index, src);
\endcode
It works just like the one we just explained; it is just simpler because there is no SSE vectorization involved here. copyPacket() becomes copyCoeff(), packet() becomes coeff(), writePacket() becomes coeffRef(). If you followed us this far, you can probably understand this part by yourself. We see that all the C++ abstraction of Eigen goes away during compilation and that we indeed are precisely controlling which assembly instructions we emit. Such is the beauty of C++! Since we have such precise control over the emitted assembly instructions, but such complex logic to choose the right instructions, we can say that Eigen really behaves like an optimizing compiler. If you prefer, you could say that Eigen behaves like a script for the compiler. In a sense, C++ template metaprogramming is scripting the compiler -- and it's been shown that this scripting language is Turing-complete. See Wikipedia. 
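To make the above summary concrete, here is a minimal, hand-written sketch (this is not Eigen code; the function name \c add_vectors and the assumption that all three arrays are 16-byte aligned, so that \a alignedStart is 0, are ours) of the machine-level loop that the vectorized assignment boils down to for 50 floats:
\code
#include <xmmintrin.h>

void add_vectors(float* u, const float* v, const float* w, int size)
{
  const int packetSize = 4;                              // 4 floats per SSE packet
  const int alignedEnd = (size/packetSize)*packetSize;   // 48 when size == 50
  for(int index = 0; index < alignedEnd; index += packetSize)
  {
    __m128 a = _mm_load_ps(v + index);                   // like internal::pload
    __m128 b = _mm_load_ps(w + index);                   // like internal::pload
    _mm_store_ps(u + index, _mm_add_ps(a, b));           // like internal::padd + internal::pstore
  }
  for(int index = alignedEnd; index < size; index++)     // the last 2 coefficients, one by one
    u[index] = v[index] + w[index];
}
\endcode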
*/ } piqp-octave/src/eigen/doc/LeastSquares.dox0000644000175100001770000000611614107270226020433 0ustar runnerdockernamespace Eigen { /** \eigenManualPage LeastSquares Solving linear least squares systems This page describes how to solve linear least squares systems using %Eigen. An overdetermined system of equations, say \a Ax = \a b, generally has no solution. In this case, it makes sense to search for the vector \a x which is closest to being a solution, in the sense that the difference \a Ax - \a b is as small as possible. This \a x is called the least squares solution (if the Euclidean norm is used). The three methods discussed on this page are the SVD decomposition, the QR decomposition and normal equations. Of these, the SVD decomposition is generally the most accurate but the slowest, the normal equations method is the fastest but least accurate, and the QR decomposition is in between. \eigenAutoToc \section LeastSquaresSVD Using the SVD decomposition The \link BDCSVD::solve() solve() \endlink method in the BDCSVD class can be directly used to solve linear least squares systems. It is not enough to compute only the singular values (the default for this class); you also need the singular vectors, but the thin SVD decomposition suffices for computing least squares solutions:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include TutorialLinAlgSVDSolve.cpp
</td><td>
\verbinclude TutorialLinAlgSVDSolve.out
</td></tr></table>
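In case the included snippet is not at hand, the essential call boils down to the following sketch (a minimal example; \c A and \c b are placeholders for your matrix and right-hand side):
\code
MatrixXf A = MatrixXf::Random(3, 2);
VectorXf b = VectorXf::Random(3);
VectorXf x = A.bdcSvd(ComputeThinU | ComputeThinV).solve(b);
\endcode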
The example above comes from the page \link TutorialLinearAlgebra Linear algebra and decompositions \endlink. If you just need to solve the least squares problem, but are not interested in the SVD per se, a faster alternative method is CompleteOrthogonalDecomposition. \section LeastSquaresQR Using the QR decomposition The solve() method in QR decomposition classes also computes the least squares solution. There are three QR decomposition classes: HouseholderQR (no pivoting, fast but unstable if your matrix is not full rank), ColPivHouseholderQR (column pivoting, thus a bit slower but more stable) and FullPivHouseholderQR (full pivoting, so slowest and slightly more stable than ColPivHouseholderQR). Here is an example with column pivoting:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include LeastSquaresQR.cpp
</td><td>
\verbinclude LeastSquaresQR.out
</td></tr></table>
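For reference, the corresponding call is a one-liner (a minimal sketch; \c A and \c b are placeholders):
\code
MatrixXf A = MatrixXf::Random(3, 2);
VectorXf b = VectorXf::Random(3);
VectorXf x = A.colPivHouseholderQr().solve(b);
\endcode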
\section LeastSquaresNormalEquations Using normal equations Finding the least squares solution of \a Ax = \a b is equivalent to solving the normal equation \f$ A^T A x = A^T b \f$. This leads to the following code:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include LeastSquaresNormalEquations.cpp
</td><td>
\verbinclude LeastSquaresNormalEquations.out
</td></tr></table>
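For reference, the normal-equations approach is also a one-liner (a minimal sketch; \c A and \c b are placeholders; an LDLT factorization of \f$ A^T A \f$ is used):
\code
MatrixXf A = MatrixXf::Random(3, 2);
VectorXf b = VectorXf::Random(3);
VectorXf x = (A.transpose() * A).ldlt().solve(A.transpose() * b);
\endcode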
This method is usually the fastest, especially when \a A is "tall and skinny". However, if the matrix \a A is even mildly ill-conditioned, this is not a good method, because the condition number of \f$ A^T A \f$ is the square of the condition number of \a A. This means that you lose roughly twice as many digits of accuracy using the normal equation, compared to the more stable methods mentioned above. */ }piqp-octave/src/eigen/doc/Pitfalls.dox0000644000175100001770000001555214107270226017601 0ustar runnerdockernamespace Eigen { /** \page TopicPitfalls Common pitfalls \section TopicPitfalls_template_keyword Compilation error with template methods See this \link TopicTemplateKeyword page \endlink. \section TopicPitfalls_aliasing Aliasing Don't miss this \link TopicAliasing page \endlink on aliasing, especially if you got wrong results in statements where the destination appears on the right-hand side of the expression. \section TopicPitfalls_alignment_issue Alignment Issues (runtime assertion) %Eigen does explicit vectorization, and while that is appreciated by many users, it also leads to some issues in special situations where data alignment is compromised. Indeed, prior to C++17, C++ does not have quite good enough support for explicit data alignment. In that case your program hits an assertion failure (that is, a "controlled crash") with a message that tells you to consult this page: \code http://eigen.tuxfamily.org/dox/group__TopicUnalignedArrayAssert.html \endcode Have a look at \link TopicUnalignedArrayAssert it \endlink and see for yourself if that's something that you can cope with. It contains detailed information about how to deal with each known cause for that issue. Now what if you don't care about vectorization and so don't want to be annoyed with these alignment issues? Then read \link getrid how to get rid of them \endlink. \section TopicPitfalls_auto_keyword C++11 and the auto keyword In short: do not use the auto keyword with %Eigen's expressions, unless you are 100% sure about what you are doing. In particular, do not use the auto keyword as a replacement for a \c Matrix<> type. Here is an example: \code MatrixXd A, B; auto C = A*B; for(...) { ... w = C * v; ...} \endcode In this example, the type of C is not a \c MatrixXd but an abstract expression representing a matrix product and storing references to \c A and \c B. Therefore, the product \c A*B will be carried out multiple times, once per iteration of the for loop. Moreover, if the coefficients of `A` or `B` change during the iteration, then `C` will evaluate to different values, as in the following example: \code MatrixXd A = ..., B = ...; auto C = A*B; MatrixXd R1 = C; A = ...; MatrixXd R2 = C; \endcode for which we end up with `R1` ≠ `R2`. Here is another example leading to a segfault: \code auto C = ((A+B).eval()).transpose(); // do something with C \endcode The problem is that \c eval() returns a temporary object (in this case a \c MatrixXd) which is then referenced by the \c Transpose<> expression. However, this temporary is deleted right after the first line, and then the \c C expression references a dead object. 
One possible fix consists in applying \c eval() on the whole expression: \code auto C = (A+B).transpose().eval(); \endcode The same issue might occur when sub-expressions are automatically evaluated by %Eigen as in the following example: \code VectorXd u, v; auto C = u + (A*v).normalized(); // do something with C \endcode Here the \c normalized() method has to evaluate the expensive product \c A*v to avoid evaluating it twice. Again, one possible fix is to call \c .eval() on the whole expression: \code auto C = (u + (A*v).normalized()).eval(); \endcode In this case, \c C will be a regular \c VectorXd object. Note that DenseBase::eval() is smart enough to avoid copies when the underlying expression is already a plain \c Matrix<>. \section TopicPitfalls_header_issues Header Issues (failure to compile) With all libraries, one must check the documentation for which header to include. The same is true with %Eigen, but slightly worse: with %Eigen, a method in a class may require an additional \c \#include over what the class itself requires! For example, if you want to use the \c cross() method on a vector (it computes a cross-product) then you need to: \code #include <Eigen/Geometry> \endcode We try to always document this, but do tell us if we forgot an occurrence. \section TopicPitfalls_ternary_operator Ternary operator In short: avoid the use of the ternary operator (COND ? THEN : ELSE) with %Eigen's expressions for the \c THEN and \c ELSE statements. To see why, let's consider the following example: \code Vector3f A; A << 1, 2, 3; Vector3f B = ((1 < 0) ? (A.reverse()) : A); \endcode This example will return B = 3, 2, 1. Do you see why? The reason is that in C++ the type of the \c ELSE statement is inferred from the type of the \c THEN expression such that both match. Since \c THEN is a \c Reverse<Vector3f>, the \c ELSE statement A is converted to a \c Reverse<Vector3f>, and the compiler thus generates: \code Vector3f B = ((1 < 0) ? (A.reverse()) : Reverse<Vector3f>(A)); \endcode In this very particular case, a workaround would be to call A.reverse().eval() for the \c THEN statement, but the safest and fastest is really to avoid this ternary operator with %Eigen's expressions and use an if/else construct. \section TopicPitfalls_pass_by_value Pass-by-value If you don't know why passing-by-value is wrong with %Eigen, read this \link TopicPassingByValue page \endlink first. While you may be extremely careful to make sure that all of your code that explicitly uses %Eigen types is pass-by-reference, you have to watch out for templates which define the argument types at compile time. If a template has a function that takes arguments pass-by-value, and the relevant template parameter ends up being an %Eigen type, then you will of course have the same alignment problems that you would in an explicitly defined function passing %Eigen types by value. Using %Eigen types with other third party libraries or even the STL can present the same problem. boost::bind for example uses pass-by-value to store arguments in the returned functor. This will of course be a problem. There are at least two ways around this: - If the value you are passing is guaranteed to be around for the life of the functor, you can use boost::ref() to wrap the value as you pass it to boost::bind. Generally this is not a solution for values on the stack as if the functor ever gets passed to a lower or independent scope, the object may be gone by the time it's attempted to be used. 
- The other option is to make your functions take a reference counted pointer like boost::shared_ptr as the argument. This avoids needing to worry about managing the lifetime of the object being passed. \section TopicPitfalls_matrix_bool Matrices with boolean coefficients The current behaviour of using \c Matrix with boolean coefficients is inconsistent and likely to change in future versions of Eigen, so please use it carefully! A simple example for such an inconsistency is
\code
template<int Size>
void foo() {
  Eigen::Matrix<bool, Size, Size> A, B, C;
  A.setOnes();
  B.setOnes();

  C = A * B - A * B;
  std::cout << C << "\n";
}
\endcode
since calling \c foo<3>() prints the zero matrix while calling \c foo<10>() prints the identity matrix. */ } piqp-octave/src/eigen/doc/TopicScalarTypes.dox0000644000175100001770000000022114107270226021241 0ustar runnerdockernamespace Eigen { /** \page TopicScalarTypes Scalar types TODO: write this dox page! Is linked from the tutorial on the Matrix class. */ } piqp-octave/src/eigen/doc/CustomizingEigen_NullaryExpr.dox0000644000175100001770000000711214107270226023644 0ustar runnerdockernamespace Eigen { /** \page TopicCustomizing_NullaryExpr Matrix manipulation via nullary-expressions The main purpose of the class CwiseNullaryOp is to define \em procedural matrices such as constant or random matrices as returned by the Ones(), Zero(), Constant(), Identity() and Random() methods. Nevertheless, with some imagination it is possible to accomplish very sophisticated matrix manipulation with minimal effort such that \ref TopicNewExpressionType "implementing new expression" is rarely needed. \section NullaryExpr_Circulant Example 1: circulant matrix To explore these possibilities let us start with the \em circulant example of the \ref TopicNewExpressionType "implementing new expression" topic. Let us recall that a circulant matrix is a matrix where each column is the same as the column to the left, except that it is cyclically shifted downwards. For example, here is a 4-by-4 circulant matrix: \f[ \begin{bmatrix} 1 & 8 & 4 & 2 \\ 2 & 1 & 8 & 4 \\ 4 & 2 & 1 & 8 \\ 8 & 4 & 2 & 1 \end{bmatrix} \f] A circulant matrix is uniquely determined by its first column. We wish to write a function \c makeCirculant which, given the first column, returns an expression representing the circulant matrix. For this exercise, the return type of \c makeCirculant will be a CwiseNullaryOp that we need to instantiate with: 1 - a proper \c circulant_functor storing the input vector and implementing the adequate coefficient accessor \c operator()(i,j) 2 - a template instantiation of class Matrix conveying compile-time information such as the scalar type, sizes, and preferred storage layout. Calling \c ArgType the type of the input vector, we can construct the equivalent squared Matrix type as follows: \snippet make_circulant2.cpp square This little helper structure will help us to implement our \c makeCirculant function as follows: \snippet make_circulant2.cpp makeCirculant As usual, our function takes as argument a \c MatrixBase (see this \ref TopicFunctionTakingEigenTypes "page" for more details). Then, the CwiseNullaryOp object is constructed through the DenseBase::NullaryExpr static method with the adequate runtime sizes. 
Then, we need to implement our \c circulant_functor, which is a straightforward exercise: \snippet make_circulant2.cpp circulant_func We are now all set to try our new feature: \snippet make_circulant2.cpp main If all the fragments are combined, the following output is produced, showing that the program works as expected: \include make_circulant2.out This implementation of \c makeCirculant is much simpler than \ref TopicNewExpressionType "defining a new expression" from scratch. \section NullaryExpr_Indexing Example 2: indexing rows and columns The goal here is to mimic MatLab's ability to index a matrix through two vectors of indices referencing the rows and columns to be picked respectively, like this: \snippet nullary_indexing.out main1 To this end, let us first write a nullary-functor storing references to the input matrix and to the two arrays of indices, and implementing the required \c operator()(i,j): \snippet nullary_indexing.cpp functor Then, let's create an \c indexing(A,rows,cols) function creating the nullary expression: \snippet nullary_indexing.cpp function Finally, here is an example of how this function can be used: \snippet nullary_indexing.cpp main1 This straightforward implementation is already quite powerful as the row or column index arrays can also be expressions to perform offsetting, modulo, striding, reverse, etc. \snippet nullary_indexing.cpp main2 and the output is: \snippet nullary_indexing.out main2 */ } piqp-octave/src/eigen/doc/Doxyfile.in0000644000175100001770000025002314107270226017414 0ustar runnerdocker# Doxyfile 1.8.1.1 # This file describes the settings to be used by the documentation system # doxygen (www.doxygen.org) for a project. # # All text after a hash (#) is considered a comment and will be ignored. # The format is: # TAG = value [value, ...] # For lists items can also be appended using: # TAG += value [value, ...] # Values that contain spaces should be placed between quotes (" "). #--------------------------------------------------------------------------- # Project related configuration options #--------------------------------------------------------------------------- # This tag specifies the encoding used for all characters in the config file # that follow. The default is UTF-8 which is also the encoding used for all # text before the first occurrence of this tag. Doxygen uses libiconv (or the # iconv built into libc) for the transcoding. See # http://www.gnu.org/software/libiconv for the list of possible encodings. DOXYFILE_ENCODING = UTF-8 # The PROJECT_NAME tag is a single word (or sequence of words) that should # identify the project. Note that if you do not use Doxywizard you need # to put quotes around the project name if it contains spaces. PROJECT_NAME = ${EIGEN_DOXY_PROJECT_NAME} # The PROJECT_NUMBER tag can be used to enter a project or revision number. # This could be handy for archiving the generated documentation or # if some version control system is used. # EIGEN_VERSION is set in the root CMakeLists.txt PROJECT_NUMBER = "${EIGEN_VERSION}" # Using the PROJECT_BRIEF tag one can provide an optional one line description # for a project that appears at the top of each page and should give viewer # a quick idea about the purpose of the project. Keep the description short. PROJECT_BRIEF = # With the PROJECT_LOGO tag one can specify an logo or icon that is # included in the documentation. The maximum height of the logo should not # exceed 55 pixels and the maximum width should not exceed 200 pixels. 
# Doxygen will copy the logo to the output directory. PROJECT_LOGO = "${Eigen_SOURCE_DIR}/doc/Eigen_Silly_Professor_64x64.png" # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) # base path where the generated documentation will be put. # If a relative path is entered, it will be relative to the location # where doxygen was started. If left blank the current directory will be used. OUTPUT_DIRECTORY = "${Eigen_BINARY_DIR}/doc${EIGEN_DOXY_OUTPUT_DIRECTORY_SUFFIX}" # If the CREATE_SUBDIRS tag is set to YES, then doxygen will create # 4096 sub-directories (in 2 levels) under the output directory of each output # format and will distribute the generated files over these directories. # Enabling this option can be useful when feeding doxygen a huge amount of # source files, where putting all generated files in the same directory would # otherwise cause performance problems for the file system. CREATE_SUBDIRS = NO # The OUTPUT_LANGUAGE tag is used to specify the language in which all # documentation generated by doxygen is written. Doxygen will use this # information to generate all constant output in the proper language. # The default language is English, other supported languages are: # Afrikaans, Arabic, Brazilian, Catalan, Chinese, Chinese-Traditional, # Croatian, Czech, Danish, Dutch, Esperanto, Farsi, Finnish, French, German, # Greek, Hungarian, Italian, Japanese, Japanese-en (Japanese with English # messages), Korean, Korean-en, Lithuanian, Norwegian, Macedonian, Persian, # Polish, Portuguese, Romanian, Russian, Serbian, Serbian-Cyrillic, Slovak, # Slovene, Spanish, Swedish, Ukrainian, and Vietnamese. OUTPUT_LANGUAGE = English # If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will # include brief member descriptions after the members that are listed in # the file and class documentation (similar to JavaDoc). # Set to NO to disable this. BRIEF_MEMBER_DESC = YES # If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend # the brief description of a member or function before the detailed description. # Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the # brief descriptions will be completely suppressed. REPEAT_BRIEF = YES # This tag implements a quasi-intelligent brief description abbreviator # that is used to form the text in various listings. Each string # in this list, if found as the leading text of the brief description, will be # stripped from the text and the result after processing the whole list, is # used as the annotated text. Otherwise, the brief description is used as-is. # If left blank, the following values are used ("$name" is automatically # replaced with the name of the entity): "The $name class" "The $name widget" # "The $name file" "is" "provides" "specifies" "contains" # "represents" "a" "an" "the" ABBREVIATE_BRIEF = "The $name class" \ "The $name widget" \ "The $name file" \ is \ provides \ specifies \ contains \ represents \ a \ an \ the # If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then # Doxygen will generate a detailed section even if there is only a brief # description. ALWAYS_DETAILED_SEC = NO # If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all # inherited members of a class in the documentation of that class as if those # members were ordinary class members. Constructors, destructors and assignment # operators of the base classes will not be shown. 
INLINE_INHERITED_MEMB = NO # If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full # path before files name in the file list and in the header files. If set # to NO the shortest path that makes the file name unique will be used. FULL_PATH_NAMES = NO # If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag # can be used to strip a user-defined part of the path. Stripping is # only done if one of the specified strings matches the left-hand part of # the path. The tag can be used to show relative paths in the file list. # If left blank the directory from which doxygen is run is used as the # path to strip. STRIP_FROM_PATH = # The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of # the path mentioned in the documentation of a class, which tells # the reader which header file to include in order to use a class. # If left blank only the name of the header file containing the class # definition is used. Otherwise one should specify the include paths that # are normally passed to the compiler using the -I flag. STRIP_FROM_INC_PATH = # If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter # (but less readable) file names. This can be useful if your file system # doesn't support long names like on DOS, Mac, or CD-ROM. SHORT_NAMES = NO # If the JAVADOC_AUTOBRIEF tag is set to YES then Doxygen # will interpret the first line (until the first dot) of a JavaDoc-style # comment as the brief description. If set to NO, the JavaDoc # comments will behave just like regular Qt-style comments # (thus requiring an explicit @brief command for a brief description.) JAVADOC_AUTOBRIEF = NO # If the QT_AUTOBRIEF tag is set to YES then Doxygen will # interpret the first line (until the first dot) of a Qt-style # comment as the brief description. If set to NO, the comments # will behave just like regular Qt-style comments (thus requiring # an explicit \brief command for a brief description.) QT_AUTOBRIEF = NO # The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make Doxygen # treat a multi-line C++ special comment block (i.e. a block of //! or /// # comments) as a brief description. This used to be the default behaviour. # The new default is to treat a multi-line C++ comment block as a detailed # description. Set this tag to YES if you prefer the old behaviour instead. MULTILINE_CPP_IS_BRIEF = NO # If the INHERIT_DOCS tag is set to YES (the default) then an undocumented # member inherits the documentation from any documented member that it # re-implements. INHERIT_DOCS = YES # If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce # a new page for each member. If set to NO, the documentation of a member will # be part of the file/class/namespace that contains it. SEPARATE_MEMBER_PAGES = NO # The TAB_SIZE tag can be used to set the number of spaces in a tab. # Doxygen uses this value to replace tabs by spaces in code fragments. TAB_SIZE = 8 # This tag can be used to specify a number of aliases that acts # as commands in the documentation. An alias has the form "name=value". # For example adding "sideeffect=\par Side Effects:\n" will allow you to # put the command \sideeffect (or @sideeffect) in the documentation, which # will result in a user-defined paragraph with heading "Side Effects:". # You can put \n's in the value part of an alias to insert newlines. ALIASES = "only_for_vectors=This is only for vectors (either row-vectors or column-vectors), i.e. 
matrices which are known at compile-time to have either one row or one column." \ "not_reentrant=\warning This function is not re-entrant." \ "array_module=This is defined in the %Array module. \code #include \endcode" \ "cholesky_module=This is defined in the %Cholesky module. \code #include \endcode" \ "eigenvalues_module=This is defined in the %Eigenvalues module. \code #include \endcode" \ "geometry_module=This is defined in the %Geometry module. \code #include \endcode" \ "householder_module=This is defined in the %Householder module. \code #include \endcode" \ "jacobi_module=This is defined in the %Jacobi module. \code #include \endcode" \ "lu_module=This is defined in the %LU module. \code #include \endcode" \ "qr_module=This is defined in the %QR module. \code #include \endcode" \ "svd_module=This is defined in the %SVD module. \code #include \endcode" \ "specialfunctions_module=This is defined in the \b unsupported SpecialFunctions module. \code #include \endcode" \ "label=\bug" \ "matrixworld=*" \ "arrayworld=*" \ "note_about_arbitrary_choice_of_solution=If there exists more than one solution, this method will arbitrarily choose one." \ "note_about_using_kernel_to_study_multiple_solutions=If you need a complete analysis of the space of solutions, take the one solution obtained by this method and add to it elements of the kernel, as determined by kernel()." \ "note_about_checking_solutions=This method just tries to find as good a solution as possible. If you want to check whether a solution exists or if it is accurate, just call this function to get a result and then compute the error of this result, or use MatrixBase::isApprox() directly, for instance like this: \code bool a_solution_exists = (A*result).isApprox(b, precision); \endcode This method avoids dividing by zero, so that the non-existence of a solution doesn't by itself mean that you'll get \c inf or \c nan values." \ "note_try_to_help_rvo=This function returns the result by value. In order to make that efficient, it is implemented as just a return statement using a special constructor, hopefully allowing the compiler to perform a RVO (return value optimization)." \ "nonstableyet=\warning This is not considered to be part of the stable public API yet. Changes may happen in future releases. See \ref Experimental \"Experimental parts of Eigen\"" \ "implsparsesolverconcept=This class follows the \link TutorialSparseSolverConcept sparse solver concept \endlink." \ "blank= " \ "cpp11=[c++11]" \ "cpp14=[c++14]" \ "cpp17=[c++17]" \ "newin{1}=New in %Eigen \1." ALIASES += "eigenAutoToc= " ALIASES += "eigenManualPage=\defgroup" # This tag can be used to specify a number of word-keyword mappings (TCL only). # A mapping has the form "name=value". For example adding # "class=itcl::class" will allow you to use the command class in the # itcl::class meaning. TCL_SUBST = # Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C # sources only. Doxygen will then generate output that is more tailored for C. # For instance, some of the names that are used will be different. The list # of all members will be omitted, etc. OPTIMIZE_OUTPUT_FOR_C = NO # Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java # sources only. Doxygen will then generate output that is more tailored for # Java. For instance, namespaces will be presented as packages, qualified # scopes will look different, etc. OPTIMIZE_OUTPUT_JAVA = NO # Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran # sources only. 
Doxygen will then generate output that is more tailored for # Fortran. OPTIMIZE_FOR_FORTRAN = NO # Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL # sources. Doxygen will then generate output that is tailored for # VHDL. OPTIMIZE_OUTPUT_VHDL = NO # Doxygen selects the parser to use depending on the extension of the files it # parses. With this tag you can assign which parser to use for a given extension. # Doxygen has a built-in mapping, but you can override or extend it using this # tag. The format is ext=language, where ext is a file extension, and language # is one of the parsers supported by doxygen: IDL, Java, Javascript, CSharp, C, # C++, D, PHP, Objective-C, Python, Fortran, VHDL, C, C++. For instance to make # doxygen treat .inc files as Fortran files (default is PHP), and .f files as C # (default is Fortran), use: inc=Fortran f=C. Note that for custom extensions # you also need to set FILE_PATTERNS otherwise the files are not read by doxygen. EXTENSION_MAPPING = .h=C++ no_extension=C++ # If MARKDOWN_SUPPORT is enabled (the default) then doxygen pre-processes all # comments according to the Markdown format, which allows for more readable # documentation. See http://daringfireball.net/projects/markdown/ for details. # The output of markdown processing is further processed by doxygen, so you # can mix doxygen, HTML, and XML commands with Markdown formatting. # Disable only in case of backward compatibilities issues. MARKDOWN_SUPPORT = YES # If you use STL classes (i.e. std::string, std::vector, etc.) but do not want # to include (a tag file for) the STL sources as input, then you should # set this tag to YES in order to let doxygen match functions declarations and # definitions whose arguments contain STL classes (e.g. func(std::string); v.s. # func(std::string) {}). This also makes the inheritance and collaboration # diagrams that involve STL classes more complete and accurate. BUILTIN_STL_SUPPORT = NO # If you use Microsoft's C++/CLI language, you should set this option to YES to # enable parsing support. CPP_CLI_SUPPORT = NO # Set the SIP_SUPPORT tag to YES if your project consists of sip sources only. # Doxygen will parse them like normal C++ but will assume all classes use public # instead of private inheritance when no explicit protection keyword is present. SIP_SUPPORT = NO # For Microsoft's IDL there are propget and propput attributes to indicate getter # and setter methods for a property. Setting this option to YES (the default) # will make doxygen replace the get and set methods by a property in the # documentation. This will only work if the methods are indeed getting or # setting a simple type. If this is not the case, or you want to show the # methods anyway, you should set this option to NO. IDL_PROPERTY_SUPPORT = YES # If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC # tag is set to YES, then doxygen will reuse the documentation of the first # member in the group (if any) for the other members of the group. By default # all members of a group must be documented explicitly. DISTRIBUTE_GROUP_DOC = YES # Set the SUBGROUPING tag to YES (the default) to allow class member groups of # the same type (for instance a group of public functions) to be put as a # subgroup of that type (e.g. under the Public Functions section). Set it to # NO to prevent subgrouping. Alternatively, this can be done per class using # the \nosubgrouping command. 
SUBGROUPING = YES # When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and # unions are shown inside the group in which they are included (e.g. using # @ingroup) instead of on a separate page (for HTML and Man pages) or # section (for LaTeX and RTF). INLINE_GROUPED_CLASSES = NO # When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and # unions with only public data fields will be shown inline in the documentation # of the scope in which they are defined (i.e. file, namespace, or group # documentation), provided this scope is documented. If set to NO (the default), # structs, classes, and unions are shown on a separate page (for HTML and Man # pages) or section (for LaTeX and RTF). INLINE_SIMPLE_STRUCTS = NO # When TYPEDEF_HIDES_STRUCT is enabled, a typedef of a struct, union, or enum # is documented as struct, union, or enum with the name of the typedef. So # typedef struct TypeS {} TypeT, will appear in the documentation as a struct # with name TypeT. When disabled the typedef will appear as a member of a file, # namespace, or class. And the struct will be named TypeS. This can typically # be useful for C code in case the coding convention dictates that all compound # types are typedef'ed and only the typedef is referenced, never the tag name. TYPEDEF_HIDES_STRUCT = NO # The SYMBOL_CACHE_SIZE determines the size of the internal cache use to # determine which symbols to keep in memory and which to flush to disk. # When the cache is full, less often used symbols will be written to disk. # For small to medium size projects (<1000 input files) the default value is # probably good enough. For larger projects a too small cache size can cause # doxygen to be busy swapping symbols to and from disk most of the time # causing a significant performance penalty. # If the system has enough physical memory increasing the cache will improve the # performance by keeping more symbols in memory. Note that the value works on # a logarithmic scale so increasing the size by one will roughly double the # memory usage. The cache size is given by this formula: # 2^(16+SYMBOL_CACHE_SIZE). The valid range is 0..9, the default is 0, # corresponding to a cache size of 2^16 = 65536 symbols. # SYMBOL_CACHE_SIZE = 0 # Similar to the SYMBOL_CACHE_SIZE the size of the symbol lookup cache can be # set using LOOKUP_CACHE_SIZE. This cache is used to resolve symbols given # their name and scope. Since this can be an expensive process and often the # same symbol appear multiple times in the code, doxygen keeps a cache of # pre-resolved symbols. If the cache is too small doxygen will become slower. # If the cache is too large, memory is wasted. The cache size is given by this # formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range is 0..9, the default is 0, # corresponding to a cache size of 2^16 = 65536 symbols. LOOKUP_CACHE_SIZE = 0 #--------------------------------------------------------------------------- # Build related configuration options #--------------------------------------------------------------------------- # If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in # documentation are documented, even if no documentation was available. # Private class members and static file members will be hidden unless # the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES EXTRACT_ALL = NO # If the EXTRACT_PRIVATE tag is set to YES all private members of a class # will be included in the documentation. 
EXTRACT_PRIVATE = NO # If the EXTRACT_PACKAGE tag is set to YES all members with package or internal scope will be included in the documentation. EXTRACT_PACKAGE = NO # If the EXTRACT_STATIC tag is set to YES all static members of a file # will be included in the documentation. EXTRACT_STATIC = YES # If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) # defined locally in source files will be included in the documentation. # If set to NO only classes defined in header files are included. EXTRACT_LOCAL_CLASSES = NO # This flag is only useful for Objective-C code. When set to YES local # methods, which are defined in the implementation section but not in # the interface are included in the documentation. # If set to NO (the default) only methods in the interface are included. EXTRACT_LOCAL_METHODS = NO # If this flag is set to YES, the members of anonymous namespaces will be # extracted and appear in the documentation as a namespace called # 'anonymous_namespace{file}', where file will be replaced with the base # name of the file that contains the anonymous namespace. By default # anonymous namespaces are hidden. EXTRACT_ANON_NSPACES = NO # If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all # undocumented members of documented classes, files or namespaces. # If set to NO (the default) these members will be included in the # various overviews, but no documentation section is generated. # This option has no effect if EXTRACT_ALL is enabled. HIDE_UNDOC_MEMBERS = YES # If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all # undocumented classes that are normally visible in the class hierarchy. # If set to NO (the default) these classes will be included in the various # overviews. This option has no effect if EXTRACT_ALL is enabled. HIDE_UNDOC_CLASSES = YES # If the HIDE_FRIEND_COMPOUNDS tag is set to YES, Doxygen will hide all # friend (class|struct|union) declarations. # If set to NO (the default) these declarations will be included in the # documentation. HIDE_FRIEND_COMPOUNDS = YES # If the HIDE_IN_BODY_DOCS tag is set to YES, Doxygen will hide any # documentation blocks found inside the body of a function. # If set to NO (the default) these blocks will be appended to the # function's detailed documentation block. HIDE_IN_BODY_DOCS = NO # The INTERNAL_DOCS tag determines if documentation # that is typed after a \internal command is included. If the tag is set # to NO (the default) then the documentation will be excluded. # Set it to YES to include the internal documentation. INTERNAL_DOCS = ${EIGEN_DOXY_INTERNAL} # If the CASE_SENSE_NAMES tag is set to NO then Doxygen will only generate # file names in lower-case letters. If set to YES upper-case letters are also # allowed. This is useful if you have classes or files whose names only differ # in case and if your file system supports case sensitive file names. Windows # and Mac users are advised to set this option to NO. CASE_SENSE_NAMES = YES # If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen # will show members with their full class and namespace scopes in the # documentation. If set to YES the scope will be hidden. HIDE_SCOPE_NAMES = NO # If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen # will put a list of the files that are included by a file in the documentation # of that file. 
SHOW_INCLUDE_FILES = ${EIGEN_DOXY_INTERNAL} # If the FORCE_LOCAL_INCLUDES tag is set to YES then Doxygen # will list include files with double quotes in the documentation # rather than with sharp brackets. FORCE_LOCAL_INCLUDES = NO # If the INLINE_INFO tag is set to YES (the default) then a tag [inline] # is inserted in the documentation for inline members. INLINE_INFO = YES # If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen # will sort the (detailed) documentation of file and class members # alphabetically by member name. If set to NO the members will appear in # declaration order. SORT_MEMBER_DOCS = YES # If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the # brief documentation of file, namespace and class members alphabetically # by member name. If set to NO (the default) the members will appear in # declaration order. SORT_BRIEF_DOCS = YES # If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen # will sort the (brief and detailed) documentation of class members so that # constructors and destructors are listed first. If set to NO (the default) # the constructors will appear in the respective orders defined by # SORT_MEMBER_DOCS and SORT_BRIEF_DOCS. # This tag will be ignored for brief docs if SORT_BRIEF_DOCS is set to NO # and ignored for detailed docs if SORT_MEMBER_DOCS is set to NO. SORT_MEMBERS_CTORS_1ST = NO # If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the # hierarchy of group names into alphabetical order. If set to NO (the default) # the group names will appear in their defined order. SORT_GROUP_NAMES = NO # If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be # sorted by fully-qualified names, including namespaces. If set to # NO (the default), the class list will be sorted only by class name, # not including the namespace part. # Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES. # Note: This option applies only to the class list, not to the # alphabetical list. SORT_BY_SCOPE_NAME = NO # If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to # do proper type resolution of all parameters of a function it will reject a # match between the prototype and the implementation of a member function even # if there is only one candidate or it is obvious which candidate to choose # by doing a simple string match. By disabling STRICT_PROTO_MATCHING doxygen # will still accept a match between prototype and implementation in such cases. STRICT_PROTO_MATCHING = NO # The GENERATE_TODOLIST tag can be used to enable (YES) or # disable (NO) the todo list. This list is created by putting \todo # commands in the documentation. GENERATE_TODOLIST = ${EIGEN_DOXY_INTERNAL} # The GENERATE_TESTLIST tag can be used to enable (YES) or # disable (NO) the test list. This list is created by putting \test # commands in the documentation. GENERATE_TESTLIST = NO # The GENERATE_BUGLIST tag can be used to enable (YES) or # disable (NO) the bug list. This list is created by putting \bug # commands in the documentation. GENERATE_BUGLIST = ${EIGEN_DOXY_INTERNAL} # The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or # disable (NO) the deprecated list. This list is created by putting # \deprecated commands in the documentation. GENERATE_DEPRECATEDLIST= YES # The ENABLED_SECTIONS tag can be used to enable conditional # documentation sections, marked by \if sectionname ... \endif. 
ENABLED_SECTIONS = # The MAX_INITIALIZER_LINES tag determines the maximum number of lines # the initial value of a variable or macro consists of for it to appear in # the documentation. If the initializer consists of more lines than specified # here it will be hidden. Use a value of 0 to hide initializers completely. # The appearance of the initializer of individual variables and macros in the # documentation can be controlled using \showinitializer or \hideinitializer # command in the documentation regardless of this setting. MAX_INITIALIZER_LINES = 0 # Set the SHOW_USED_FILES tag to NO to disable the list of files generated # at the bottom of the documentation of classes and structs. If set to YES the # list will mention the files that were used to generate the documentation. SHOW_USED_FILES = YES # Set the SHOW_FILES tag to NO to disable the generation of the Files page. # This will remove the Files entry from the Quick Index and from the # Folder Tree View (if specified). The default is YES. SHOW_FILES = YES # Set the SHOW_NAMESPACES tag to NO to disable the generation of the # Namespaces page. # This will remove the Namespaces entry from the Quick Index # and from the Folder Tree View (if specified). The default is YES. SHOW_NAMESPACES = NO # The FILE_VERSION_FILTER tag can be used to specify a program or script that # doxygen should invoke to get the current version for each file (typically from # the version control system). Doxygen will invoke the program by executing (via # popen()) the command , where is the value of # the FILE_VERSION_FILTER tag, and is the name of an input file # provided by doxygen. Whatever the program writes to standard output # is used as the file version. See the manual for examples. FILE_VERSION_FILTER = # The LAYOUT_FILE tag can be used to specify a layout file which will be parsed # by doxygen. The layout file controls the global structure of the generated # output files in an output format independent way. To create the layout file # that represents doxygen's defaults, run doxygen with the -l option. # You can optionally specify a file name after the option, if omitted # DoxygenLayout.xml will be used as the name of the layout file. LAYOUT_FILE = "${Eigen_BINARY_DIR}/doc${EIGEN_DOXY_OUTPUT_DIRECTORY_SUFFIX}/eigendoxy_layout.xml" # The CITE_BIB_FILES tag can be used to specify one or more bib files # containing the references data. This must be a list of .bib files. The # .bib extension is automatically appended if omitted. Using this command # requires the bibtex tool to be installed. See also # http://en.wikipedia.org/wiki/BibTeX for more info. For LaTeX the style # of the bibliography can be controlled using LATEX_BIB_STYLE. To use this # feature you need bibtex and perl available in the search path. CITE_BIB_FILES = #--------------------------------------------------------------------------- # configuration options related to warning and progress messages #--------------------------------------------------------------------------- # The QUIET tag can be used to turn on/off the messages that are generated # by doxygen. Possible values are YES and NO. If left blank NO is used. QUIET = NO # The WARNINGS tag can be used to turn on/off the warning messages that are # generated by doxygen. Possible values are YES and NO. If left blank # NO is used. WARNINGS = YES # If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings # for undocumented members. If EXTRACT_ALL is set to YES then this flag will # automatically be disabled. 
WARN_IF_UNDOCUMENTED = NO # If WARN_IF_DOC_ERROR is set to YES, doxygen will generate warnings for # potential errors in the documentation, such as not documenting some # parameters in a documented function, or documenting parameters that # don't exist or using markup commands wrongly. WARN_IF_DOC_ERROR = YES # The WARN_NO_PARAMDOC option can be enabled to get warnings for # functions that are documented, but have no documentation for their parameters # or return value. If set to NO (the default) doxygen will only warn about # wrong or incomplete parameter documentation, but not about the absence of # documentation. WARN_NO_PARAMDOC = NO # The WARN_FORMAT tag determines the format of the warning messages that # doxygen can produce. The string should contain the $file, $line, and $text # tags, which will be replaced by the file and line number from which the # warning originated and the warning text. Optionally the format may contain # $version, which will be replaced by the version of the file (if it could # be obtained via FILE_VERSION_FILTER) WARN_FORMAT = "$file:$line: $text" # The WARN_LOGFILE tag can be used to specify a file to which warning # and error messages should be written. If left blank the output is written # to stderr. WARN_LOGFILE = #--------------------------------------------------------------------------- # configuration options related to the input files #--------------------------------------------------------------------------- # The INPUT tag can be used to specify the files and/or directories that contain # documented source files. You may enter file names like "myfile.cpp" or # directories like "/usr/src/myproject". Separate the files or directories # with spaces. INPUT = ${EIGEN_DOXY_INPUT} # This tag can be used to specify the character encoding of the source files # that doxygen parses. Internally doxygen uses the UTF-8 encoding, which is # also the default input encoding. Doxygen uses libiconv (or the iconv built # into libc) for the transcoding. See http://www.gnu.org/software/libiconv for # the list of possible encodings. INPUT_ENCODING = UTF-8 # If the value of the INPUT tag contains directories, you can use the # FILE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp # and *.h) to filter out the source-files in the directories. If left # blank the following patterns are tested: # *.c *.cc *.cxx *.cpp *.c++ *.d *.java *.ii *.ixx *.ipp *.i++ *.inl *.h *.hh # *.hxx *.hpp *.h++ *.idl *.odl *.cs *.php *.php3 *.inc *.m *.mm *.dox *.py # *.f90 *.f *.for *.vhd *.vhdl FILE_PATTERNS = * # The RECURSIVE tag can be used to turn specify whether or not subdirectories # should be searched for input files as well. Possible values are YES and NO. # If left blank NO is used. RECURSIVE = YES # The EXCLUDE tag can be used to specify files and/or directories that should be # excluded from the INPUT source files. This way you can easily exclude a # subdirectory from a directory tree whose root is specified with the INPUT tag. # Note that relative paths are relative to the directory from which doxygen is # run. 
EXCLUDE = "${Eigen_SOURCE_DIR}/Eigen/src/Core/products" \ "${Eigen_SOURCE_DIR}/Eigen/Eigen2Support" \ "${Eigen_SOURCE_DIR}/Eigen/src/Eigen2Support" \ "${Eigen_SOURCE_DIR}/doc/examples" \ "${Eigen_SOURCE_DIR}/doc/special_examples" \ "${Eigen_SOURCE_DIR}/doc/snippets" \ "${Eigen_SOURCE_DIR}/unsupported/doc/examples" \ "${Eigen_SOURCE_DIR}/unsupported/doc/snippets" # Forward declarations of class templates cause the title of the main page for # the class template to not contain the template signature. This only happens # when the \class command is used to document the class. Possibly caused # by https://github.com/doxygen/doxygen/issues/7698. Confirmed fixed by # doxygen release 1.8.19. EXCLUDE += "${Eigen_SOURCE_DIR}/Eigen/src/Core/util/ForwardDeclarations.h" # The EXCLUDE_SYMLINKS tag can be used to select whether or not files or # directories that are symbolic links (a Unix file system feature) are excluded # from the input. EXCLUDE_SYMLINKS = NO # If the value of the INPUT tag contains directories, you can use the # EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude # certain files from those directories. Note that the wildcards are matched # against the file with absolute path, so to exclude all test directories # for example use the pattern */test/* EXCLUDE_PATTERNS = CMake* \ *.txt \ *.sh \ *.orig \ *.diff \ diff \ *~ \ *. \ *.sln \ *.sdf \ *.tmp \ *.vcxproj \ *.filters \ *.user \ *.suo # The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names # (namespaces, classes, functions, etc.) that should be excluded from the # output. The symbol name can be a fully qualified name, a word, or if the # wildcard * is used, a substring. Examples: ANamespace, AClass, # AClass::ANamespace, ANamespace::*Test EXCLUDE_SYMBOLS = internal::* \ Flagged* \ *InnerIterator* \ DenseStorage<* \ # The EXAMPLE_PATH tag can be used to specify one or more files or # directories that contain example code fragments that are included (see # the \include command). EXAMPLE_PATH = "${Eigen_SOURCE_DIR}/doc/snippets" \ "${Eigen_BINARY_DIR}/doc/snippets" \ "${Eigen_SOURCE_DIR}/doc/examples" \ "${Eigen_BINARY_DIR}/doc/examples" \ "${Eigen_SOURCE_DIR}/doc/special_examples" \ "${Eigen_BINARY_DIR}/doc/special_examples" \ "${Eigen_SOURCE_DIR}/unsupported/doc/snippets" \ "${Eigen_BINARY_DIR}/unsupported/doc/snippets" \ "${Eigen_SOURCE_DIR}/unsupported/doc/examples" \ "${Eigen_BINARY_DIR}/unsupported/doc/examples" # If the value of the EXAMPLE_PATH tag contains directories, you can use the # EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp # and *.h) to filter out the source-files in the directories. If left # blank all files are included. EXAMPLE_PATTERNS = * # If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be # searched for input files to be used with the \include or \dontinclude # commands irrespective of the value of the RECURSIVE tag. # Possible values are YES and NO. If left blank NO is used. EXAMPLE_RECURSIVE = NO # The IMAGE_PATH tag can be used to specify one or more files or # directories that contain image that are included in the documentation (see # the \image command). IMAGE_PATH = ${Eigen_BINARY_DIR}/doc/html # The INPUT_FILTER tag can be used to specify a program that doxygen should # invoke to filter for each input file. Doxygen will invoke the filter program # by executing (via popen()) the command , where # is the value of the INPUT_FILTER tag, and is the name of an # input file. 
Doxygen will then use the output that the filter program writes # to standard output. # If FILTER_PATTERNS is specified, this tag will be # ignored. INPUT_FILTER = # The FILTER_PATTERNS tag can be used to specify filters on a per file pattern # basis. # Doxygen will compare the file name with each pattern and apply the # filter if there is a match. # The filters are a list of the form: # pattern=filter (like *.cpp=my_cpp_filter). See INPUT_FILTER for further # info on how filters are used. If FILTER_PATTERNS is empty or if # non of the patterns match the file name, INPUT_FILTER is applied. FILTER_PATTERNS = # If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using # INPUT_FILTER) will be used to filter the input files when producing source # files to browse (i.e. when SOURCE_BROWSER is set to YES). FILTER_SOURCE_FILES = NO # The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file # pattern. A pattern will override the setting for FILTER_PATTERN (if any) # and it is also possible to disable source filtering for a specific pattern # using *.ext= (so without naming a filter). This option only has effect when # FILTER_SOURCE_FILES is enabled. FILTER_SOURCE_PATTERNS = #--------------------------------------------------------------------------- # configuration options related to source browsing #--------------------------------------------------------------------------- # If the SOURCE_BROWSER tag is set to YES then a list of source files will # be generated. Documented entities will be cross-referenced with these sources. # Note: To get rid of all source code in the generated output, make sure also # VERBATIM_HEADERS is set to NO. SOURCE_BROWSER = NO # Setting the INLINE_SOURCES tag to YES will include the body # of functions and classes directly in the documentation. INLINE_SOURCES = NO # Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct # doxygen to hide any special comment blocks from generated source code # fragments. Normal C, C++ and Fortran comments will always remain visible. STRIP_CODE_COMMENTS = YES # If the REFERENCED_BY_RELATION tag is set to YES # then for each documented function all documented # functions referencing it will be listed. REFERENCED_BY_RELATION = NO # If the REFERENCES_RELATION tag is set to YES # then for each documented function all documented entities # called/used by that function will be listed. REFERENCES_RELATION = NO # If the REFERENCES_LINK_SOURCE tag is set to YES (the default) # and SOURCE_BROWSER tag is set to YES, then the hyperlinks from # functions in REFERENCES_RELATION and REFERENCED_BY_RELATION lists will # link to the source code. # Otherwise they will link to the documentation. REFERENCES_LINK_SOURCE = YES # If the USE_HTAGS tag is set to YES then the references to source code # will point to the HTML generated by the htags(1) tool instead of doxygen # built-in source browser. The htags tool is part of GNU's global source # tagging system (see http://www.gnu.org/software/global/global.html). You # will need version 4.8.6 or higher. USE_HTAGS = NO # If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen # will generate a verbatim copy of the header file for each class for # which an include is specified. Set to NO to disable this. 
VERBATIM_HEADERS = YES #--------------------------------------------------------------------------- # configuration options related to the alphabetical class index #--------------------------------------------------------------------------- # If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index # of all compounds will be generated. Enable this if the project # contains a lot of classes, structs, unions or interfaces. ALPHABETICAL_INDEX = NO # If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then # the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns # in which this list will be split (can be a number in the range [1..20]) COLS_IN_ALPHA_INDEX = 5 # In case all classes in a project start with a common prefix, all # classes will be put under the same header in the alphabetical index. # The IGNORE_PREFIX tag can be used to specify one or more prefixes that # should be ignored while generating the index headers. IGNORE_PREFIX = #--------------------------------------------------------------------------- # configuration options related to the HTML output #--------------------------------------------------------------------------- # If the GENERATE_HTML tag is set to YES (the default) Doxygen will # generate HTML output. GENERATE_HTML = YES # The HTML_OUTPUT tag is used to specify where the HTML docs will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `html' will be used as the default path. HTML_OUTPUT = "${Eigen_BINARY_DIR}/doc/html${EIGEN_DOXY_OUTPUT_DIRECTORY_SUFFIX}" # The HTML_FILE_EXTENSION tag can be used to specify the file extension for # each generated HTML page (for example: .htm,.php,.asp). If it is left blank # doxygen will generate files with .html extension. HTML_FILE_EXTENSION = .html # The HTML_HEADER tag can be used to specify a personal HTML header for # each generated HTML page. If it is left blank doxygen will generate a # standard header. Note that when using a custom header you are responsible # for the proper inclusion of any scripts and style sheets that doxygen # needs, which is dependent on the configuration options used. # It is advised to generate a default header using "doxygen -w html # header.html footer.html stylesheet.css YourConfigFile" and then modify # that header. Note that the header is subject to change so you typically # have to redo this when upgrading to a newer version of doxygen or when # changing the value of configuration settings such as GENERATE_TREEVIEW! HTML_HEADER = "${Eigen_BINARY_DIR}/doc/eigendoxy_header.html" # The HTML_FOOTER tag can be used to specify a personal HTML footer for # each generated HTML page. If it is left blank doxygen will generate a # standard footer. HTML_FOOTER = "${Eigen_BINARY_DIR}/doc/eigendoxy_footer.html" # The HTML_STYLESHEET tag can be used to specify a user-defined cascading # style sheet that is used by each HTML page. It can be used to # fine-tune the look of the HTML output. If the tag is left blank doxygen # will generate a default style sheet. Note that doxygen will try to copy # the style sheet file to the HTML output directory, so don't put your own # style sheet in the HTML output directory as well, or it will be erased! HTML_STYLESHEET = # The HTML_EXTRA_FILES tag can be used to specify one or more extra images or # other source files which should be copied to the HTML output directory. Note # that these files will be copied to the base HTML output directory. 
Use the # $relpath$ marker in the HTML_HEADER and/or HTML_FOOTER files to load these # files. In the HTML_STYLESHEET file, use the file name only. Also note that # the files will be copied as-is; there are no commands or markers available. HTML_EXTRA_FILES = "${Eigen_SOURCE_DIR}/doc/eigendoxy.css" # The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. # Doxygen will adjust the colors in the style sheet and background images # according to this color. Hue is specified as an angle on a colorwheel, # see http://en.wikipedia.org/wiki/Hue for more information. # For instance the value 0 represents red, 60 is yellow, 120 is green, # 180 is cyan, 240 is blue, 300 purple, and 360 is red again. # The allowed range is 0 to 359. # The default is 220. HTML_COLORSTYLE_HUE = ${EIGEN_DOXY_HTML_COLORSTYLE_HUE} # The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of # the colors in the HTML output. For a value of 0 the output will use # grayscales only. A value of 255 will produce the most vivid colors. HTML_COLORSTYLE_SAT = 100 # The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to # the luminance component of the colors in the HTML output. Values below # 100 gradually make the output lighter, whereas values above 100 make # the output darker. The value divided by 100 is the actual gamma applied, # so 80 represents a gamma of 0.8, The value 220 represents a gamma of 2.2, # and 100 does not change the gamma. HTML_COLORSTYLE_GAMMA = 80 # If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML # page will contain the date and time when the page was generated. Setting # this to NO can help when comparing the output of multiple runs. HTML_TIMESTAMP = YES # If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML # documentation will contain sections that can be hidden and shown after the # page has loaded. HTML_DYNAMIC_SECTIONS = YES # With HTML_INDEX_NUM_ENTRIES one can control the preferred number of # entries shown in the various tree structured indices initially; the user # can expand and collapse entries dynamically later on. Doxygen will expand # the tree to such a level that at most the specified number of entries are # visible (unless a fully collapsed tree already exceeds this amount). # So setting the number of entries 1 will produce a full collapsed tree by # default. 0 is a special value representing an infinite number of entries # and will result in a full expanded tree by default. HTML_INDEX_NUM_ENTRIES = 100 # If the GENERATE_DOCSET tag is set to YES, additional index files # will be generated that can be used as input for Apple's Xcode 3 # integrated development environment, introduced with OSX 10.5 (Leopard). # To create a documentation set, doxygen will generate a Makefile in the # HTML output directory. Running make will produce the docset in that # directory and running "make install" will install the docset in # ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find # it at startup. # See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html # for more information. GENERATE_DOCSET = NO # When GENERATE_DOCSET tag is set to YES, this tag determines the name of the # feed. A documentation feed provides an umbrella under which multiple # documentation sets from a single provider (such as a company or product suite) # can be grouped. 
DOCSET_FEEDNAME = "Doxygen generated docs" # When GENERATE_DOCSET tag is set to YES, this tag specifies a string that # should uniquely identify the documentation set bundle. This should be a # reverse domain-name style string, e.g. com.mycompany.MyDocSet. Doxygen # will append .docset to the name. DOCSET_BUNDLE_ID = org.doxygen.Project # When GENERATE_PUBLISHER_ID tag specifies a string that should uniquely identify # the documentation publisher. This should be a reverse domain-name style # string, e.g. com.mycompany.MyDocSet.documentation. DOCSET_PUBLISHER_ID = org.doxygen.Publisher # The GENERATE_PUBLISHER_NAME tag identifies the documentation publisher. DOCSET_PUBLISHER_NAME = Publisher # If the GENERATE_HTMLHELP tag is set to YES, additional index files # will be generated that can be used as input for tools like the # Microsoft HTML help workshop to generate a compiled HTML help file (.chm) # of the generated HTML documentation. GENERATE_HTMLHELP = NO # If the GENERATE_HTMLHELP tag is set to YES, the CHM_FILE tag can # be used to specify the file name of the resulting .chm file. You # can add a path in front of the file if the result should not be # written to the html output directory. CHM_FILE = # If the GENERATE_HTMLHELP tag is set to YES, the HHC_LOCATION tag can # be used to specify the location (absolute path including file name) of # the HTML help compiler (hhc.exe). If non-empty doxygen will try to run # the HTML help compiler on the generated index.hhp. HHC_LOCATION = # If the GENERATE_HTMLHELP tag is set to YES, the GENERATE_CHI flag # controls if a separate .chi index file is generated (YES) or that # it should be included in the master .chm file (NO). GENERATE_CHI = NO # If the GENERATE_HTMLHELP tag is set to YES, the CHM_INDEX_ENCODING # is used to encode HtmlHelp index (hhk), content (hhc) and project file # content. CHM_INDEX_ENCODING = # If the GENERATE_HTMLHELP tag is set to YES, the BINARY_TOC flag # controls whether a binary table of contents is generated (YES) or a # normal table of contents (NO) in the .chm file. BINARY_TOC = NO # The TOC_EXPAND flag can be set to YES to add extra items for group members # to the contents of the HTML help documentation and to the tree view. TOC_EXPAND = NO # If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and # QHP_VIRTUAL_FOLDER are set, an additional index file will be generated # that can be used as input for Qt's qhelpgenerator to generate a # Qt Compressed Help (.qch) of the generated HTML documentation. GENERATE_QHP = NO # If the QHG_LOCATION tag is specified, the QCH_FILE tag can # be used to specify the file name of the resulting .qch file. # The path specified is relative to the HTML output folder. QCH_FILE = # The QHP_NAMESPACE tag specifies the namespace to use when generating # Qt Help Project output. For more information please see # http://doc.trolltech.com/qthelpproject.html#namespace QHP_NAMESPACE = org.doxygen.Project # The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating # Qt Help Project output. For more information please see # http://doc.trolltech.com/qthelpproject.html#virtual-folders QHP_VIRTUAL_FOLDER = doc # If QHP_CUST_FILTER_NAME is set, it specifies the name of a custom filter to # add. For more information please see # http://doc.trolltech.com/qthelpproject.html#custom-filters QHP_CUST_FILTER_NAME = # The QHP_CUST_FILT_ATTRS tag specifies the list of the attributes of the # custom filter to add. For more information please see # # Qt Help Project / Custom Filters. 
QHP_CUST_FILTER_ATTRS = # The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this # project's # filter section matches. # # Qt Help Project / Filter Attributes. QHP_SECT_FILTER_ATTRS = # If the GENERATE_QHP tag is set to YES, the QHG_LOCATION tag can # be used to specify the location of Qt's qhelpgenerator. # If non-empty doxygen will try to run qhelpgenerator on the generated # .qhp file. QHG_LOCATION = # If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files # will be generated, which together with the HTML files, form an Eclipse help # plugin. To install this plugin and make it available under the help contents # menu in Eclipse, the contents of the directory containing the HTML and XML # files needs to be copied into the plugins directory of eclipse. The name of # the directory within the plugins directory should be the same as # the ECLIPSE_DOC_ID value. After copying Eclipse needs to be restarted before # the help appears. GENERATE_ECLIPSEHELP = NO # A unique identifier for the eclipse help plugin. When installing the plugin # the directory name containing the HTML and XML files should also have # this name. ECLIPSE_DOC_ID = org.doxygen.Project # The DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) # at top of each HTML page. The value NO (the default) enables the index and # the value YES disables it. Since the tabs have the same information as the # navigation tree you can set this option to NO if you already set # GENERATE_TREEVIEW to YES. DISABLE_INDEX = YES # The GENERATE_TREEVIEW tag is used to specify whether a tree-like index # structure should be generated to display hierarchical information. # If the tag value is set to YES, a side panel will be generated # containing a tree-like index structure (just like the one that # is generated for HTML Help). For this to work a browser that supports # JavaScript, DHTML, CSS and frames is required (i.e. any modern browser). # Windows users are probably better off using the HTML help feature. # Since the tree basically has the same information as the tab index you # could consider to set DISABLE_INDEX to NO when enabling this option. GENERATE_TREEVIEW = YES # The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values # (range [0,1..20]) that doxygen will group on one line in the generated HTML # documentation. Note that a value of 0 will completely suppress the enum # values from appearing in the overview section. ENUM_VALUES_PER_LINE = 1 # If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be # used to set the initial width (in pixels) of the frame in which the tree # is shown. TREEVIEW_WIDTH = 250 # When the EXT_LINKS_IN_WINDOW option is set to YES doxygen will open # links to external symbols imported via tag files in a separate window. EXT_LINKS_IN_WINDOW = NO # Use this tag to change the font size of Latex formulas included # as images in the HTML documentation. The default is 10. Note that # when you change the font size after a successful doxygen run you need # to manually remove any form_*.png images from the HTML output directory # to force them to be regenerated. FORMULA_FONTSIZE = 12 # Use the FORMULA_TRANPARENT tag to determine whether or not the images # generated for formulas are transparent PNGs. Transparent PNGs are # not supported properly for IE 6.0, but are supported on all modern browsers. # Note that when changing this option you need to delete any form_*.png files # in the HTML output before the changes have effect. 
FORMULA_TRANSPARENT = YES # Enable the USE_MATHJAX option to render LaTeX formulas using MathJax # (see http://www.mathjax.org) which uses client side Javascript for the # rendering instead of using prerendered bitmaps. Use this if you do not # have LaTeX installed or if you want to formulas look prettier in the HTML # output. When enabled you may also need to install MathJax separately and # configure the path to it using the MATHJAX_RELPATH option. USE_MATHJAX = @EIGEN_DOXY_USE_MATHJAX@ # When MathJax is enabled you need to specify the location relative to the # HTML output directory using the MATHJAX_RELPATH option. The destination # directory should contain the MathJax.js script. For instance, if the mathjax # directory is located at the same level as the HTML output directory, then # MATHJAX_RELPATH should be ../mathjax. The default value points to # the MathJax Content Delivery Network so you can quickly see the result without # installing MathJax. # However, it is strongly recommended to install a local # copy of MathJax from http://www.mathjax.org before deployment. MATHJAX_RELPATH = https://cdn.mathjax.org/mathjax/latest # The MATHJAX_EXTENSIONS tag can be used to specify one or MathJax extension # names that should be enabled during MathJax rendering. MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols # When the SEARCHENGINE tag is enabled doxygen will generate a search box # for the HTML output. The underlying search engine uses javascript # and DHTML and should work on any modern browser. Note that when using # HTML help (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets # (GENERATE_DOCSET) there is already a search function so this one should # typically be disabled. For large projects the javascript based search engine # can be slow, then enabling SERVER_BASED_SEARCH may provide a better solution. SEARCHENGINE = YES # When the SERVER_BASED_SEARCH tag is enabled the search engine will be # implemented using a PHP enabled web server instead of at the web client # using Javascript. Doxygen will generate the search PHP script and index # file to put on the web server. The advantage of the server # based approach is that it scales better to large projects and allows # full text search. The disadvantages are that it is more difficult to setup # and does not have live searching capabilities. SERVER_BASED_SEARCH = NO #--------------------------------------------------------------------------- # configuration options related to the LaTeX output #--------------------------------------------------------------------------- # If the GENERATE_LATEX tag is set to YES (the default) Doxygen will # generate Latex output. GENERATE_LATEX = NO # The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `latex' will be used as the default path. LATEX_OUTPUT = latex # The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be # invoked. If left blank `latex' will be used as the default command name. # Note that when enabling USE_PDFLATEX this option is only used for # generating bitmaps for formulas in the HTML output, but not in the # Makefile that is written to the output directory. LATEX_CMD_NAME = latex # The MAKEINDEX_CMD_NAME tag can be used to specify the command name to # generate index for LaTeX. If left blank `makeindex' will be used as the # default command name. 
MAKEINDEX_CMD_NAME = makeindex # If the COMPACT_LATEX tag is set to YES Doxygen generates more compact # LaTeX documents. This may be useful for small projects and may help to # save some trees in general. COMPACT_LATEX = NO # The PAPER_TYPE tag can be used to set the paper type that is used # by the printer. Possible values are: a4, letter, legal and # executive. If left blank a4wide will be used. PAPER_TYPE = a4wide # The EXTRA_PACKAGES tag can be to specify one or more names of LaTeX # packages that should be included in the LaTeX output. EXTRA_PACKAGES = amssymb \ amsmath # The LATEX_HEADER tag can be used to specify a personal LaTeX header for # the generated latex document. The header should contain everything until # the first chapter. If it is left blank doxygen will generate a # standard header. Notice: only use this tag if you know what you are doing! LATEX_HEADER = # The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for # the generated latex document. The footer should contain everything after # the last chapter. If it is left blank doxygen will generate a # standard footer. Notice: only use this tag if you know what you are doing! LATEX_FOOTER = # If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated # is prepared for conversion to pdf (using ps2pdf). The pdf file will # contain links (just like the HTML output) instead of page references # This makes the output suitable for online browsing using a pdf viewer. PDF_HYPERLINKS = NO # If the USE_PDFLATEX tag is set to YES, pdflatex will be used instead of # plain latex in the generated Makefile. Set this option to YES to get a # higher quality PDF documentation. USE_PDFLATEX = NO # If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode. # command to the generated LaTeX files. This will instruct LaTeX to keep # running if errors occur, instead of asking the user for help. # This option is also used when generating formulas in HTML. LATEX_BATCHMODE = NO # If LATEX_HIDE_INDICES is set to YES then doxygen will not # include the index chapters (such as File Index, Compound Index, etc.) # in the output. LATEX_HIDE_INDICES = NO # If LATEX_SOURCE_CODE is set to YES then doxygen will include # source code with syntax highlighting in the LaTeX output. # Note that which sources are shown also depends on other settings # such as SOURCE_BROWSER. LATEX_SOURCE_CODE = NO # The LATEX_BIB_STYLE tag can be used to specify the style to use for the # bibliography, e.g. plainnat, or ieeetr. The default style is "plain". See # http://en.wikipedia.org/wiki/BibTeX for more info. LATEX_BIB_STYLE = plain #--------------------------------------------------------------------------- # configuration options related to the RTF output #--------------------------------------------------------------------------- # If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output # The RTF output is optimized for Word 97 and may not look very pretty with # other RTF readers or editors. GENERATE_RTF = NO # The RTF_OUTPUT tag is used to specify where the RTF docs will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `rtf' will be used as the default path. RTF_OUTPUT = rtf # If the COMPACT_RTF tag is set to YES Doxygen generates more compact # RTF documents. This may be useful for small projects and may help to # save some trees in general. 
COMPACT_RTF = NO # If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated # will contain hyperlink fields. The RTF file will # contain links (just like the HTML output) instead of page references. # This makes the output suitable for online browsing using WORD or other # programs which support those fields. # Note: wordpad (write) and others do not support links. RTF_HYPERLINKS = NO # Load style sheet definitions from file. Syntax is similar to doxygen's # config file, i.e. a series of assignments. You only have to provide # replacements, missing definitions are set to their default value. RTF_STYLESHEET_FILE = # Set optional variables used in the generation of an rtf document. # Syntax is similar to doxygen's config file. RTF_EXTENSIONS_FILE = #--------------------------------------------------------------------------- # configuration options related to the man page output #--------------------------------------------------------------------------- # If the GENERATE_MAN tag is set to YES (the default) Doxygen will # generate man pages GENERATE_MAN = NO # The MAN_OUTPUT tag is used to specify where the man pages will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `man' will be used as the default path. MAN_OUTPUT = man # The MAN_EXTENSION tag determines the extension that is added to # the generated man pages (default is the subroutine's section .3) MAN_EXTENSION = .3 # If the MAN_LINKS tag is set to YES and Doxygen generates man output, # then it will generate one additional man file for each entity # documented in the real man page(s). These additional files # only source the real man page, but without them the man command # would be unable to find the correct page. The default is NO. MAN_LINKS = NO #--------------------------------------------------------------------------- # configuration options related to the XML output #--------------------------------------------------------------------------- # If the GENERATE_XML tag is set to YES Doxygen will # generate an XML file that captures the structure of # the code including all documentation. GENERATE_XML = NO # The XML_OUTPUT tag is used to specify where the XML pages will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `xml' will be used as the default path. XML_OUTPUT = xml # The XML_SCHEMA tag can be used to specify an XML schema, # which can be used by a validating XML parser to check the # syntax of the XML files. # XML_SCHEMA = # The XML_DTD tag can be used to specify an XML DTD, # which can be used by a validating XML parser to check the # syntax of the XML files. # XML_DTD = # If the XML_PROGRAMLISTING tag is set to YES Doxygen will # dump the program listings (including syntax highlighting # and cross-referencing information) to the XML output. Note that # enabling this will significantly increase the size of the XML output. XML_PROGRAMLISTING = YES #--------------------------------------------------------------------------- # configuration options for the AutoGen Definitions output #--------------------------------------------------------------------------- # If the GENERATE_AUTOGEN_DEF tag is set to YES Doxygen will # generate an AutoGen Definitions (see autogen.sf.net) file # that captures the structure of the code including all # documentation. Note that this feature is still experimental # and incomplete at the moment. 
GENERATE_AUTOGEN_DEF = NO #--------------------------------------------------------------------------- # configuration options related to the Perl module output #--------------------------------------------------------------------------- # If the GENERATE_PERLMOD tag is set to YES Doxygen will # generate a Perl module file that captures the structure of # the code including all documentation. Note that this # feature is still experimental and incomplete at the # moment. GENERATE_PERLMOD = NO # If the PERLMOD_LATEX tag is set to YES Doxygen will generate # the necessary Makefile rules, Perl scripts and LaTeX code to be able # to generate PDF and DVI output from the Perl module output. PERLMOD_LATEX = NO # If the PERLMOD_PRETTY tag is set to YES the Perl module output will be # nicely formatted so it can be parsed by a human reader. # This is useful # if you want to understand what is going on. # On the other hand, if this # tag is set to NO the size of the Perl module output will be much smaller # and Perl will parse it just the same. PERLMOD_PRETTY = YES # The names of the make variables in the generated doxyrules.make file # are prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. # This is useful so different doxyrules.make files included by the same # Makefile don't overwrite each other's variables. PERLMOD_MAKEVAR_PREFIX = #--------------------------------------------------------------------------- # Configuration options related to the preprocessor #--------------------------------------------------------------------------- # If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will # evaluate all C-preprocessor directives found in the sources and include # files. ENABLE_PREPROCESSING = YES # If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro # names in the source code. If set to NO (the default) only conditional # compilation will be performed. Macro expansion can be done in a controlled # way by setting EXPAND_ONLY_PREDEF to YES. MACRO_EXPANSION = YES # If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES # then the macro expansion is limited to the macros specified with the # PREDEFINED and EXPAND_AS_DEFINED tags. EXPAND_ONLY_PREDEF = YES # If the SEARCH_INCLUDES tag is set to YES (the default) the includes files # pointed to by INCLUDE_PATH will be searched when a #include is found. SEARCH_INCLUDES = YES # The INCLUDE_PATH tag can be used to specify one or more directories that # contain include files that are not input files but should be processed by # the preprocessor. INCLUDE_PATH = "${Eigen_SOURCE_DIR}/Eigen/src/plugins" # You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard # patterns (like *.h and *.hpp) to filter out the header-files in the # directories. If left blank, the patterns specified with FILE_PATTERNS will # be used. INCLUDE_FILE_PATTERNS = # The PREDEFINED tag can be used to specify one or more macro names that # are defined before the preprocessor is started (similar to the -D option of # gcc). The argument of the tag is a list of macros of the form: name # or name=definition (no spaces). If the definition and the = are # omitted =1 is assumed. To prevent a macro definition from being # undefined via #undef or recursively expanded use the := operator # instead of the = operator. 
PREDEFINED = EIGEN_EMPTY_STRUCT \ EIGEN_PARSED_BY_DOXYGEN \ EIGEN_VECTORIZE \ EIGEN_QT_SUPPORT \ EIGEN_STRONG_INLINE=inline \ EIGEN_DEVICE_FUNC= \ EIGEN_HAS_CXX11=1 \ EIGEN_HAS_CXX11_MATH=1 \ "EIGEN_MAKE_CWISE_BINARY_OP(METHOD,FUNCTOR)=template const CwiseBinaryOp, const Derived, const OtherDerived> METHOD(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const;" \ "EIGEN_CWISE_PRODUCT_RETURN_TYPE(LHS,RHS)=CwiseBinaryOp, const LHS, const RHS>"\ "EIGEN_CAT2(a,b)= a ## b"\ "EIGEN_CAT(a,b)=EIGEN_CAT2(a,b)"\ "EIGEN_CWISE_BINARY_RETURN_TYPE(LHS,RHS,OPNAME)=CwiseBinaryOp, const LHS, const RHS>"\ "EIGEN_ALIGN_TO_BOUNDARY(x)="\ DOXCOMMA=, # If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then # this tag can be used to specify a list of macro names that should be expanded. # The macro definition that is found in the sources will be used. # Use the PREDEFINED tag if you want to use a different macro definition that # overrules the definition found in the source code. EXPAND_AS_DEFINED = EIGEN_MAKE_TYPEDEFS \ EIGEN_MAKE_FIXED_TYPEDEFS \ EIGEN_MAKE_TYPEDEFS_ALL_SIZES \ EIGEN_MAKE_ARRAY_TYPEDEFS \ EIGEN_MAKE_ARRAY_FIXED_TYPEDEFS \ EIGEN_MAKE_ARRAY_TYPEDEFS_ALL_SIZES \ EIGEN_CWISE_UNOP_RETURN_TYPE \ EIGEN_CWISE_BINOP_RETURN_TYPE \ EIGEN_CURRENT_STORAGE_BASE_CLASS \ EIGEN_MATHFUNC_IMPL \ _EIGEN_GENERIC_PUBLIC_INTERFACE \ EIGEN_ARRAY_DECLARE_GLOBAL_UNARY \ EIGEN_EMPTY \ EIGEN_EULER_ANGLES_TYPEDEFS \ EIGEN_EULER_ANGLES_SINGLE_TYPEDEF \ EIGEN_EULER_SYSTEM_TYPEDEF \ EIGEN_AUTODIFF_DECLARE_GLOBAL_UNARY \ EIGEN_MATRIX_FUNCTION \ EIGEN_MATRIX_FUNCTION_1 \ EIGEN_DOC_UNARY_ADDONS \ EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL \ EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF # If the SKIP_FUNCTION_MACROS tag is set to YES (the default) then # doxygen's preprocessor will remove all references to function-like macros # that are alone on a line, have an all uppercase name, and do not end with a # semicolon, because these will confuse the parser if not removed. SKIP_FUNCTION_MACROS = YES #--------------------------------------------------------------------------- # Configuration::additions related to external references #--------------------------------------------------------------------------- # The TAGFILES option can be used to specify one or more tagfiles. For each # tag file the location of the external documentation should be added. The # format of a tag file without this location is as follows: # # TAGFILES = file1 file2 ... # Adding location for the tag files is done as follows: # # TAGFILES = file1=loc1 "file2 = loc2" ... # where "loc1" and "loc2" can be relative or absolute paths # or URLs. Note that each tag file must have a unique name (where the name does # NOT include the path). If a tag file is not located in the directory in which # doxygen is run, you must also specify the path to the tagfile here. TAGFILES = ${EIGEN_DOXY_TAGFILES} # "${Eigen_BINARY_DIR}/doc/eigen-unsupported.doxytags =unsupported" # When a file name is specified after GENERATE_TAGFILE, doxygen will create # a tag file that is based on the input files it reads. GENERATE_TAGFILE = "${Eigen_BINARY_DIR}/doc/${EIGEN_DOXY_PROJECT_NAME}.doxytags" # If the ALLEXTERNALS tag is set to YES all external classes will be listed # in the class index. If set to NO only the inherited external classes # will be listed. ALLEXTERNALS = NO # If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed # in the modules index. If set to NO, only the current project's groups will # be listed. 
EXTERNAL_GROUPS = NO # The PERL_PATH should be the absolute path and name of the perl script # interpreter (i.e. the result of `which perl'). PERL_PATH = /usr/bin/perl #--------------------------------------------------------------------------- # Configuration options related to the dot tool #--------------------------------------------------------------------------- # If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will # generate a inheritance diagram (in HTML, RTF and LaTeX) for classes with base # or super classes. Setting the tag to NO turns the diagrams off. Note that # this option also works with HAVE_DOT disabled, but it is recommended to # install and use dot, since it yields more powerful graphs. CLASS_DIAGRAMS = YES # You can define message sequence charts within doxygen comments using the \msc # command. Doxygen will then run the mscgen tool (see # http://www.mcternan.me.uk/mscgen/) to produce the chart and insert it in the # documentation. The MSCGEN_PATH tag allows you to specify the directory where # the mscgen tool resides. If left empty the tool is assumed to be found in the # default search path. MSCGEN_PATH = # If set to YES, the inheritance and collaboration graphs will hide # inheritance and usage relations if the target is undocumented # or is not a class. HIDE_UNDOC_RELATIONS = NO # If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is # available from the path. This tool is part of Graphviz, a graph visualization # toolkit from AT&T and Lucent Bell Labs. The other options in this section # have no effect if this option is set to NO (the default) HAVE_DOT = YES # The DOT_NUM_THREADS specifies the number of dot invocations doxygen is # allowed to run in parallel. When set to 0 (the default) doxygen will # base this on the number of processors available in the system. You can set it # explicitly to a value larger than 0 to get control over the balance # between CPU load and processing speed. DOT_NUM_THREADS = 0 # By default doxygen will use the Helvetica font for all dot files that # doxygen generates. When you want a differently looking font you can specify # the font name using DOT_FONTNAME. You need to make sure dot is able to find # the font, which can be done by putting it in a standard location or by setting # the DOTFONTPATH environment variable or by setting DOT_FONTPATH to the # directory containing the font. DOT_FONTNAME = # The DOT_FONTSIZE tag can be used to set the size of the font of dot graphs. # The default size is 10pt. DOT_FONTSIZE = 10 # By default doxygen will tell dot to use the Helvetica font. # If you specify a different font using DOT_FONTNAME you can use DOT_FONTPATH to # set the path where dot can find it. DOT_FONTPATH = # If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen # will generate a graph for each documented class showing the direct and # indirect inheritance relations. Setting this tag to YES will force the # CLASS_DIAGRAMS tag to NO. CLASS_GRAPH = YES # If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen # will generate a graph for each documented class showing the direct and # indirect implementation dependencies (inheritance, containment, and # class references variables) of the class with other documented classes. 
COLLABORATION_GRAPH = NO # If the GROUP_GRAPHS and HAVE_DOT tags are set to YES then doxygen # will generate a graph for groups, showing the direct groups dependencies GROUP_GRAPHS = NO # If the UML_LOOK tag is set to YES doxygen will generate inheritance and # collaboration diagrams in a style similar to the OMG's Unified Modeling # Language. UML_LOOK = YES # If the UML_LOOK tag is enabled, the fields and methods are shown inside # the class node. If there are many fields or methods and many nodes the # graph may become too big to be useful. The UML_LIMIT_NUM_FIELDS # threshold limits the number of items for each type to make the size more # manageable. Set this to 0 for no limit. Note that the threshold may be # exceeded by 50% before the limit is enforced. UML_LIMIT_NUM_FIELDS = 10 # If set to YES, the inheritance and collaboration graphs will show the # relations between templates and their instances. TEMPLATE_RELATIONS = NO # If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDE_GRAPH, and HAVE_DOT # tags are set to YES then doxygen will generate a graph for each documented # file showing the direct and indirect include dependencies of the file with # other documented files. INCLUDE_GRAPH = NO # If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDED_BY_GRAPH, and # HAVE_DOT tags are set to YES then doxygen will generate a graph for each # documented header file showing the documented files that directly or # indirectly include this file. INCLUDED_BY_GRAPH = NO # If the CALL_GRAPH and HAVE_DOT options are set to YES then # doxygen will generate a call dependency graph for every global function # or class method. Note that enabling this option will significantly increase # the time of a run. So in most cases it will be better to enable call graphs # for selected functions only using the \callgraph command. CALL_GRAPH = NO # If the CALLER_GRAPH and HAVE_DOT tags are set to YES then # doxygen will generate a caller dependency graph for every global function # or class method. Note that enabling this option will significantly increase # the time of a run. So in most cases it will be better to enable caller # graphs for selected functions only using the \callergraph command. CALLER_GRAPH = NO # If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen # will generate a graphical hierarchy of all classes instead of a textual one. GRAPHICAL_HIERARCHY = NO # If the DIRECTORY_GRAPH and HAVE_DOT tags are set to YES # then doxygen will show the dependencies a directory has on other directories # in a graphical way. The dependency relations are determined by the #include # relations between the files in the directories. DIRECTORY_GRAPH = NO # The DOT_IMAGE_FORMAT tag can be used to set the image format of the images # generated by dot. Possible values are svg, png, jpg, or gif. # If left blank png will be used. If you choose svg you need to set # HTML_FILE_EXTENSION to xhtml in order to make the SVG files # visible in IE 9+ (other browsers do not have this requirement). DOT_IMAGE_FORMAT = png # If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to # enable generation of interactive SVG images that allow zooming and panning. # Note that this requires a modern browser other than Internet Explorer. # Tested and working are Firefox, Chrome, Safari, and Opera. For IE 9+ you # need to set HTML_FILE_EXTENSION to xhtml in order to make the SVG files # visible. Older versions of IE do not have SVG support. 
INTERACTIVE_SVG = NO # The tag DOT_PATH can be used to specify the path where the dot tool can be # found. If left blank, it is assumed the dot tool can be found in the path. DOT_PATH = # The DOTFILE_DIRS tag can be used to specify one or more directories that # contain dot files that are included in the documentation (see the # \dotfile command). DOTFILE_DIRS = # The MSCFILE_DIRS tag can be used to specify one or more directories that # contain msc files that are included in the documentation (see the # \mscfile command). MSCFILE_DIRS = # The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of # nodes that will be shown in the graph. If the number of nodes in a graph # becomes larger than this value, doxygen will truncate the graph, which is # visualized by representing a node as a red box. Note that doxygen if the # number of direct children of the root node in a graph is already larger than # DOT_GRAPH_MAX_NODES then the graph will not be shown at all. Also note # that the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH. DOT_GRAPH_MAX_NODES = 50 # The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the # graphs generated by dot. A depth value of 3 means that only nodes reachable # from the root by following a path via at most 3 edges will be shown. Nodes # that lay further from the root node will be omitted. Note that setting this # option to 1 or 2 may greatly reduce the computation time needed for large # code bases. Also note that the size of a graph can be further restricted by # DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction. MAX_DOT_GRAPH_DEPTH = 0 # Set the DOT_TRANSPARENT tag to YES to generate images with a transparent # background. This is disabled by default, because dot on Windows does not # seem to support this out of the box. Warning: Depending on the platform used, # enabling this option may lead to badly anti-aliased labels on the edges of # a graph (i.e. they become hard to read). DOT_TRANSPARENT = NO # Set the DOT_MULTI_TARGETS tag to YES allow dot to generate multiple output # files in one run (i.e. multiple -o and -T options on the command line). This # makes dot run faster, but since only newer versions of dot (>1.8.10) # support this, this feature is disabled by default. DOT_MULTI_TARGETS = NO # If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will # generate a legend page explaining the meaning of the various boxes and # arrows in the dot generated graphs. GENERATE_LEGEND = YES # If the DOT_CLEANUP tag is set to YES (the default) Doxygen will # remove the intermediate dot files that are used to generate # the various graphs. DOT_CLEANUP = YES piqp-octave/src/eigen/doc/TutorialArrayClass.dox0000644000175100001770000002051314107270226021604 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TutorialArrayClass The Array class and coefficient-wise operations This page aims to provide an overview and explanations on how to use Eigen's Array class. \eigenAutoToc \section TutorialArrayClassIntro What is the Array class? The Array class provides general-purpose arrays, as opposed to the Matrix class which is intended for linear algebra. Furthermore, the Array class provides an easy way to perform coefficient-wise operations, which might not have a linear algebraic meaning, such as adding a constant to every coefficient in the array or multiplying two arrays coefficient-wise. 
\section TutorialArrayClassTypes Array types Array is a class template taking the same template parameters as Matrix. As with Matrix, the first three template parameters are mandatory: \code Array<typename Scalar, int RowsAtCompileTime, int ColsAtCompileTime> \endcode The last three template parameters are optional. Since this is exactly the same as for Matrix, we won't explain it again here and just refer to \ref TutorialMatrixClass. Eigen also provides typedefs for some common cases, in a way that is similar to the Matrix typedefs but with some slight differences, as the word "array" is used for both 1-dimensional and 2-dimensional arrays. We adopt the convention that typedefs of the form ArrayNt stand for 1-dimensional arrays, where N and t are the size and the scalar type, as in the Matrix typedefs explained on \ref TutorialMatrixClass "this page". For 2-dimensional arrays, we use typedefs of the form ArrayNNt. Some examples are shown in the following table:
Type Typedef
\code Array<float,Dynamic,1> \endcode \code ArrayXf \endcode
\code Array<float,3,1> \endcode \code Array3f \endcode
\code Array<double,Dynamic,Dynamic> \endcode \code ArrayXXd \endcode
\code Array<double,3,3> \endcode \code Array33d \endcode
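For readers who want to see these typedefs in action, here is a minimal sketch (the variable names are purely illustrative and assume that \c Eigen/Dense is included and \c using \c namespace \c Eigen is in effect):
\code
ArrayXf  a(5);    // Array<float,Dynamic,1>: dynamic-size 1-dimensional array of float
Array3f  b;       // Array<float,3,1>: fixed-size 1-dimensional array of 3 floats
ArrayXXd c(3,4);  // Array<double,Dynamic,Dynamic>: dynamic-size 2-dimensional array of double
Array33d d;       // Array<double,3,3>: fixed-size 3x3 array of double
\endcode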
\section TutorialArrayClassAccess Accessing values inside an Array The parenthesis operator is overloaded to provide write and read access to the coefficients of an array, just as with matrices. Furthermore, the \c << operator can be used to initialize arrays (via the comma initializer) or to print them.
Example:Output:
\include Tutorial_ArrayClass_accessors.cpp \verbinclude Tutorial_ArrayClass_accessors.out
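If the example files are not at hand, the same idea can be sketched inline (assuming the usual example boilerplate: \c Eigen/Dense, \c iostream, and \c using \c namespace \c Eigen and \c std):
\code
ArrayXXf m(2,2);
m(0,0) = 1.0; m(0,1) = 2.0;             // parenthesis operator for write access
m(1,0) = 3.0; m(1,1) = m(0,1) + m(1,0);
cout << m << endl;                      // operator<< prints the array

m << 1.0, 2.0,                          // the comma initializer fills the array
     3.0, 4.0;
\endcode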
For more information about the comma initializer, see \ref TutorialAdvancedInitialization. \section TutorialArrayClassAddSub Addition and subtraction Adding and subtracting two arrays is the same as for matrices. The operation is valid if both arrays have the same size, and the addition or subtraction is done coefficient-wise. Arrays also support expressions of the form array + scalar which add a scalar to each coefficient in the array. This provides a functionality that is not directly available for Matrix objects.
Example:Output:
\include Tutorial_ArrayClass_addition.cpp \verbinclude Tutorial_ArrayClass_addition.out
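A minimal sketch of both forms (illustrative values; same boilerplate assumptions as above):
\code
ArrayXXf a(2,2), b(2,2);
a << 1, 2,
     3, 4;
b << 5, 6,
     7, 8;
ArrayXXf s = a + b;  // coefficient-wise sum; a and b must have the same size
ArrayXXf t = a - 2;  // subtracts 2 from every coefficient of a
\endcode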
\section TutorialArrayClassMult Array multiplication First of all, you can of course multiply an array by a scalar; this works in the same way as for matrices. Arrays are fundamentally different from matrices when you multiply two of them together: matrices interpret multiplication as the matrix product, whereas arrays interpret it as the coefficient-wise product. Thus, two arrays can be multiplied if and only if they have the same dimensions.
Example:Output:
\include Tutorial_ArrayClass_mult.cpp \verbinclude Tutorial_ArrayClass_mult.out
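For instance (illustrative names; same boilerplate assumptions as above):
\code
ArrayXXf a(2,2), b(2,2);
a << 1, 2,
     3, 4;
b << 5, 6,
     7, 8;
ArrayXXf p = a * b;  // coefficient-wise product: p(i,j) == a(i,j) * b(i,j)
ArrayXXf q = 2 * a;  // multiplication by a scalar works as for matrices
\endcode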
\section TutorialArrayClassCwiseOther Other coefficient-wise operations The Array class defines other coefficient-wise operations besides the addition, subtraction and multiplication operators described above. For example, the \link ArrayBase::abs() .abs() \endlink method takes the absolute value of each coefficient, while \link ArrayBase::sqrt() .sqrt() \endlink computes the square root of the coefficients. If you have two arrays of the same size, you can call \link ArrayBase::min(const Eigen::ArrayBase&) const .min(.) \endlink to construct the array whose coefficients are the minimum of the corresponding coefficients of the two given arrays. These operations are illustrated in the following example.
Example:Output:
\include Tutorial_ArrayClass_cwise_other.cpp \verbinclude Tutorial_ArrayClass_cwise_other.out
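A compact sketch of these operations (illustrative values; same boilerplate assumptions as above):
\code
ArrayXf a = ArrayXf::Random(5) * 2;      // random coefficients in [-2, 2]
ArrayXf b = ArrayXf::Constant(5, 0.5);
cout << a.abs()        << endl;          // absolute value of each coefficient
cout << a.abs().sqrt() << endl;          // square root of each coefficient
cout << a.min(b)       << endl;          // coefficient-wise minimum of a and b
\endcode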
More coefficient-wise operations can be found in the \ref QuickRefPage. \section TutorialArrayClassConvert Converting between array and matrix expressions When should you use objects of the Matrix class and when should you use objects of the Array class? You cannot apply Matrix operations on arrays, or Array operations on matrices. Thus, if you need to do linear algebraic operations such as matrix multiplication, then you should use matrices; if you need to do coefficient-wise operations, then you should use arrays. However, sometimes it is not that simple, but you need to use both Matrix and Array operations. In that case, you need to convert a matrix to an array or reversely. This gives access to all operations regardless of the choice of declaring objects as arrays or as matrices. \link MatrixBase Matrix expressions \endlink have an \link MatrixBase::array() .array() \endlink method that 'converts' them into \link ArrayBase array expressions\endlink, so that coefficient-wise operations can be applied easily. Conversely, \link ArrayBase array expressions \endlink have a \link ArrayBase::matrix() .matrix() \endlink method. As with all Eigen expression abstractions, this doesn't have any runtime cost (provided that you let your compiler optimize). Both \link MatrixBase::array() .array() \endlink and \link ArrayBase::matrix() .matrix() \endlink can be used as rvalues and as lvalues. Mixing matrices and arrays in an expression is forbidden with Eigen. For instance, you cannot add a matrix and array directly; the operands of a \c + operator should either both be matrices or both be arrays. However, it is easy to convert from one to the other with \link MatrixBase::array() .array() \endlink and \link ArrayBase::matrix() .matrix()\endlink. The exception to this rule is the assignment operator: it is allowed to assign a matrix expression to an array variable, or to assign an array expression to a matrix variable. The following example shows how to use array operations on a Matrix object by employing the \link MatrixBase::array() .array() \endlink method. For example, the statement result = m.array() * n.array() takes two matrices \c m and \c n, converts them both to an array, uses * to multiply them coefficient-wise and assigns the result to the matrix variable \c result (this is legal because Eigen allows assigning array expressions to matrix variables). As a matter of fact, this usage case is so common that Eigen provides a \link MatrixBase::cwiseProduct const .cwiseProduct(.) \endlink method for matrices to compute the coefficient-wise product. This is also shown in the example program.
Example:Output:
\include Tutorial_ArrayClass_interop_matrix.cpp \verbinclude Tutorial_ArrayClass_interop_matrix.out
Similarly, if \c array1 and \c array2 are arrays, then the expression array1.matrix() * array2.matrix() computes their matrix product. Here is a more advanced example. The expression (m.array() + 4).matrix() * m adds 4 to every coefficient in the matrix \c m and then computes the matrix product of the result with \c m. Similarly, the expression (m.array() * n.array()).matrix() * m computes the coefficient-wise product of the matrices \c m and \c n and then the matrix product of the result with \c m.
Example:Output:
\include Tutorial_ArrayClass_interop.cpp \verbinclude Tutorial_ArrayClass_interop.out
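To summarize the conversions described above in one sketch (illustrative names; same boilerplate assumptions as above):
\code
MatrixXf m(2,2), n(2,2);
m << 1, 2,
     3, 4;
n << 5, 6,
     7, 8;
MatrixXf r1 = m * n;                         // matrix product
MatrixXf r2 = m.array() * n.array();         // coefficient-wise product, assigned back to a matrix
MatrixXf r3 = m.cwiseProduct(n);             // same result without explicit conversions
MatrixXf r4 = (m.array() + 4).matrix() * m;  // add 4 to every coefficient, then matrix product
\endcode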
*/ } piqp-octave/src/eigen/doc/TutorialMapClass.dox0000644000175100001770000000762414107270226021253 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TutorialMapClass Interfacing with raw buffers: the Map class This page explains how to work with "raw" C/C++ arrays. This can be useful in a variety of contexts, particularly when "importing" vectors and matrices from other libraries into %Eigen. \eigenAutoToc \section TutorialMapIntroduction Introduction Occasionally you may have a pre-defined array of numbers that you want to use within %Eigen as a vector or matrix. While one option is to make a copy of the data, most commonly you probably want to re-use this memory as an %Eigen type. Fortunately, this is very easy with the Map class. \section TutorialMapTypes Map types and declaring Map variables A Map object has a type defined by its %Eigen equivalent: \code Map<Matrix<typename Scalar, int RowsAtCompileTime, int ColsAtCompileTime> > \endcode Note that, in this default case, a Map requires just a single template parameter. To construct a Map variable, you need two other pieces of information: a pointer to the region of memory defining the array of coefficients, and the desired shape of the matrix or vector. For example, to define a matrix of \c float with sizes determined at run time, you might do the following: \code Map<MatrixXf> mf(pf,rows,columns); \endcode where \c pf is a \c float \c * pointing to the array of memory. A fixed-size read-only vector of integers might be declared as \code Map<const Vector4i> mi(pi); \endcode where \c pi is an \c int \c *. In this case the size does not have to be passed to the constructor, because it is already specified by the Matrix/Array type. Note that Map does not have a default constructor; you \em must pass a pointer to initialize the object. However, you can work around this requirement (see \ref TutorialMapPlacementNew). Map is flexible enough to accommodate a variety of different data representations. There are two other (optional) template parameters: \code Map<typename MatrixType, int MapOptions, typename StrideType> \endcode \li \c MapOptions specifies whether the pointer is \c #Aligned, or \c #Unaligned. The default is \c #Unaligned. \li \c StrideType allows you to specify a custom layout for the memory array, using the Stride class. One example would be to specify that the data array is organized in row-major format:
Example:Output:
\include Tutorial_Map_rowmajor.cpp \verbinclude Tutorial_Map_rowmajor.out
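The stride mechanism can be sketched as follows (this loosely mirrors the row-major example referenced above; the buffer contents are illustrative):
\code
int array[8];
for (int i = 0; i < 8; ++i) array[i] = i;
Map<Matrix<int,2,4> >                         a(array);  // default: interpret the buffer as column-major
Map<Matrix<int,2,4,RowMajor> >                b(array);  // row-major matrix type
Map<Matrix<int,2,4>, Unaligned, Stride<1,4> > c(array);  // column-major type reading row-major data via Stride
\endcode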
However, Stride is even more flexible than this; for details, see the documentation for the Map and Stride classes. \section TutorialMapUsing Using Map variables You can use a Map object just like any other %Eigen type:
Example:Output:
\include Tutorial_Map_using.cpp \verbinclude Tutorial_Map_using.out
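For instance, a mapped buffer can be used directly in expressions (illustrative sketch; same boilerplate assumptions as above):
\code
float data[] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
Map<VectorXf> v(data, 9);     // view the buffer as a vector of 9 floats
Map<MatrixXf> M(data, 3, 3);  // view the same buffer as a 3x3 matrix
float    s = v.squaredNorm(); // reductions work as usual
VectorXf y = M * M.col(0);    // so do matrix products; the result is a plain VectorXf
\endcode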
All %Eigen functions are written to accept Map objects just like other %Eigen types. However, when writing your own functions taking %Eigen types, this does \em not happen automatically: a Map type is not identical to its Dense equivalent. See \ref TopicFunctionTakingEigenTypes for details. \section TutorialMapPlacementNew Changing the mapped array It is possible to change the array of a Map object after declaration, using the C++ "placement new" syntax:
Example:Output:
\include Map_placement_new.cpp \verbinclude Map_placement_new.out
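As a self-contained sketch of the pattern discussed below (the buffer and sizes are illustrative; this mirrors the referenced example):
\code
int data[] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
Map<RowVectorXi> v(data, 4);           // v currently views 1 2 3 4
cout << "before: " << v << endl;
new (&v) Map<RowVectorXi>(data+4, 5);  // re-seat v onto 5 6 7 8 9; no memory is allocated
cout << "after:  " << v << endl;
\endcode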
Despite appearances, this does not invoke the memory allocator, because the syntax specifies the location for storing the result. This syntax makes it possible to declare a Map object without first knowing the mapped array's location in memory: \code Map A(NULL); // don't try to use this matrix yet! VectorXf b(n_matrices); for (int i = 0; i < n_matrices; i++) { new (&A) Map(get_matrix_pointer(i)); b(i) = A.trace(); } \endcode */ } piqp-octave/src/eigen/doc/NewExpressionType.dox0000644000175100001770000001275214107270226021475 0ustar runnerdockernamespace Eigen { /** \page TopicNewExpressionType Adding a new expression type \warning Disclaimer: this page is tailored to very advanced users who are not afraid of dealing with some %Eigen's internal aspects. In most cases, a custom expression can be avoided by either using custom \ref MatrixBase::unaryExpr "unary" or \ref MatrixBase::binaryExpr "binary" functors, while extremely complex matrix manipulations can be achieved by a nullary functors as described in the \ref TopicCustomizing_NullaryExpr "previous page". This page describes with the help of an example how to implement a new light-weight expression type in %Eigen. This consists of three parts: the expression type itself, a traits class containing compile-time information about the expression, and the evaluator class which is used to evaluate the expression to a matrix. \b TO \b DO: Write a page explaining the design, with details on vectorization etc., and refer to that page here. \eigenAutoToc \section TopicSetting The setting A circulant matrix is a matrix where each column is the same as the column to the left, except that it is cyclically shifted downwards. For example, here is a 4-by-4 circulant matrix: \f[ \begin{bmatrix} 1 & 8 & 4 & 2 \\ 2 & 1 & 8 & 4 \\ 4 & 2 & 1 & 8 \\ 8 & 4 & 2 & 1 \end{bmatrix} \f] A circulant matrix is uniquely determined by its first column. We wish to write a function \c makeCirculant which, given the first column, returns an expression representing the circulant matrix. For simplicity, we restrict the \c makeCirculant function to dense matrices. It may make sense to also allow arrays, or sparse matrices, but we will not do so here. We also do not want to support vectorization. \section TopicPreamble Getting started We will present the file implementing the \c makeCirculant function part by part. We start by including the appropriate header files and forward declaring the expression class, which we will call \c Circulant. The \c makeCirculant function will return an object of this type. The class \c Circulant is in fact a class template; the template argument \c ArgType refers to the type of the vector passed to the \c makeCirculant function. \include make_circulant.cpp.preamble \section TopicTraits The traits class For every expression class \c X, there should be a traits class \c Traits in the \c Eigen::internal namespace containing information about \c X known as compile time. As explained in \ref TopicSetting, we designed the \c Circulant expression class to refer to dense matrices. The entries of the circulant matrix have the same type as the entries of the vector passed to the \c makeCirculant function. The type used to index the entries is also the same. Again for simplicity, we will only return column-major matrices. Finally, the circulant matrix is a square matrix (number of rows equals number of columns), and the number of rows equals the number of rows of the column vector passed to the \c makeCirculant function. 
If this is a dynamic-size vector, then the size of the circulant matrix is not known at compile-time. This leads to the following code: \include make_circulant.cpp.traits \section TopicExpression The expression class The next step is to define the expression class itself. In our case, we want to inherit from \c MatrixBase in order to expose the interface for dense matrices. In the constructor, we check that we are passed a column vector (see \ref TopicAssertions) and we store the vector from which we are going to build the circulant matrix in the member variable \c m_arg. Finally, the expression class should compute the size of the corresponding circulant matrix. As explained above, this is a square matrix with as many columns as the vector used to construct the matrix. \b TO \b DO: What about the \c Nested typedef? It seems to be necessary; is this only temporary? \include make_circulant.cpp.expression \section TopicEvaluator The evaluator The last big fragment implements the evaluator for the \c Circulant expression. The evaluator computes the entries of the circulant matrix; this is done in the \c .coeff() member function. The entries are computed by finding the corresponding entry of the vector from which the circulant matrix is constructed. Getting this entry may actually be non-trivial when the circulant matrix is constructed from a vector which is given by a complicated expression, so we use the evaluator which corresponds to the vector. The \c CoeffReadCost constant records the cost of computing an entry of the circulant matrix; we ignore the index computation and say that this is the same as the cost of computing an entry of the vector from which the circulant matrix is constructed. In the constructor, we save the evaluator for the column vector which defined the circulant matrix. We also save the size of that vector; remember that we can query an expression object to find the size but not the evaluator. \include make_circulant.cpp.evaluator \section TopicEntry The entry point After all this, the \c makeCirculant function is very simple. It simply creates an expression object and returns it. \include make_circulant.cpp.entry \section TopicMain A simple main function for testing Finally, a short \c main function that shows how the \c makeCirculant function can be called. 
\include make_circulant.cpp.main If all the fragments are combined, the following output is produced, showing that the program works as expected: \include make_circulant.out */ } piqp-octave/src/eigen/doc/eigendoxy_tabs.css0000644000175100001770000000210714107270226021015 0ustar runnerdocker.tabs, .tabs2, .tabs3 { background-image: url('tab_b.png'); width: 100%; z-index: 101; font-size: 13px; } .tabs2 { font-size: 10px; } .tabs3 { font-size: 9px; } .tablist { margin: 0; padding: 0; display: table; } .tablist li { float: left; display: table-cell; background-image: url('tab_b.png'); line-height: 36px; list-style: none; } .tablist a { display: block; padding: 0 20px; font-weight: bold; background-image:url('tab_s.png'); background-repeat:no-repeat; background-position:right; color: #283A5D; text-shadow: 0px 1px 1px rgba(255, 255, 255, 0.9); text-decoration: none; outline: none; } .tabs3 .tablist a { padding: 0 10px; } .tablist a:hover { background-image: url('tab_h.png'); background-repeat:repeat-x; color: #fff; text-shadow: 0px 1px 1px rgba(0, 0, 0, 1.0); text-decoration: none; } .tablist li.current a { background-image: url('tab_a.png'); background-repeat:repeat-x; color: #fff; text-shadow: 0px 1px 1px rgba(0, 0, 0, 1.0); } piqp-octave/src/eigen/doc/TutorialSparse.dox0000644000175100001770000005000314107270226020772 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TutorialSparse Sparse matrix manipulations \eigenAutoToc Manipulating and solving sparse problems involves various modules which are summarized below:
Module Header file Contents
\link SparseCore_Module SparseCore \endlink \code #include <Eigen/SparseCore> \endcode SparseMatrix and SparseVector classes, matrix assembly, basic sparse linear algebra (including sparse triangular solvers)
\link SparseCholesky_Module SparseCholesky \endlink \code #include <Eigen/SparseCholesky> \endcode Direct sparse LLT and LDLT Cholesky factorization to solve sparse self-adjoint positive definite problems
\link SparseLU_Module SparseLU \endlink \code #include <Eigen/SparseLU> \endcode %Sparse LU factorization to solve general square sparse systems
\link SparseQR_Module SparseQR \endlink \code #include <Eigen/SparseQR> \endcode %Sparse QR factorization for solving sparse linear least-squares problems
\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink \code #include <Eigen/IterativeLinearSolvers> \endcode Iterative solvers to solve large general linear square problems (including self-adjoint positive definite problems)
\link Sparse_Module Sparse \endlink \code #include <Eigen/Sparse> \endcode Includes all the above modules
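For example, a translation unit that only needs the basic sparse containers can include the corresponding module (a minimal sketch; the umbrella header is also shown):
\code
#include <Eigen/SparseCore>  // SparseMatrix, SparseVector, basic sparse linear algebra
// #include <Eigen/Sparse>   // alternatively, pull in all sparse modules at once

Eigen::SparseMatrix<double> A(1000, 1000);  // column-major by default
Eigen::SparseVector<double> v(1000);
\endcode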
\section TutorialSparseIntro Sparse matrix format In many applications (e.g., finite element methods) it is common to deal with very large matrices where only a few coefficients are different from zero. In such cases, memory consumption can be reduced and performance increased by using a specialized representation storing only the nonzero coefficients. Such a matrix is called a sparse matrix. \b The \b %SparseMatrix \b class The class SparseMatrix is the main sparse matrix representation of Eigen's sparse module; it offers high performance and low memory usage. It implements a more versatile variant of the widely-used Compressed Column (or Row) Storage scheme. It consists of four compact arrays: - \c Values: stores the coefficient values of the non-zeros. - \c InnerIndices: stores the row (resp. column) indices of the non-zeros. - \c OuterStarts: stores for each column (resp. row) the index of the first non-zero in the previous two arrays. - \c InnerNNZs: stores the number of non-zeros of each column (resp. row). The word \c inner refers to an \em inner \em vector that is a column for a column-major matrix, or a row for a row-major matrix. The word \c outer refers to the other direction. This storage scheme is better explained on an example. The following matrix
0   3   0   0   0
22  0   0   0   17
7   5   0   1   0
0   0   0   0   0
0   0   14  0   8
and one of its possible sparse, \b column \b major representations:
Values:        22  7   _   3   5   14  _   _   1   _   17  8
InnerIndices:  1   2   _   0   2   4   _   _   2   _   1   4
OuterStarts:   0   3   5   8   10  \em 12
InnerNNZs:     2   2   1   1   2
Currently the elements of a given inner vector are guaranteed to always be sorted by increasing inner indices. The \c "_" indicates available free space to quickly insert new elements. Assuming no reallocation is needed, the insertion of a random element is therefore in O(nnz_j) where nnz_j is the number of nonzeros of the respective inner vector. On the other hand, inserting elements with increasing inner indices in a given inner vector is much more efficient since it only requires increasing the respective \c InnerNNZs entry, which is an O(1) operation. The case where no empty space is available is a special case, and is referred to as the \em compressed mode. It corresponds to the widely used Compressed Column (or Row) Storage schemes (CCS or CRS). Any SparseMatrix can be turned into this form by calling the SparseMatrix::makeCompressed() function. In this case, one can remark that the \c InnerNNZs array is redundant with \c OuterStarts because we have the equality: \c InnerNNZs[j] = \c OuterStarts[j+1]-\c OuterStarts[j]. Therefore, in practice a call to SparseMatrix::makeCompressed() frees this buffer. It is worth noting that most of our wrappers to external libraries require compressed matrices as inputs. The results of %Eigen's operations always produce \b compressed sparse matrices. On the other hand, the insertion of a new element into a SparseMatrix converts the latter to the \b uncompressed mode. Here is the previous matrix represented in compressed mode:
Values:        22  7   3   5   14  1   17  8
InnerIndices:  1   2   0   2   4   2   1   4
OuterStarts:   0   2   4   5   6   \em 8
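The effect of makeCompressed() can be observed directly; the following is a minimal sketch (not part of the tutorial sources):
\code
#include <Eigen/SparseCore>
#include <iostream>

int main()
{
  Eigen::SparseMatrix<double> mat(5,5);    // column-major by default
  mat.insert(1,0) = 22;                    // individual insertions leave the matrix uncompressed
  mat.insert(2,0) = 7;
  mat.insert(0,1) = 3;
  std::cout << mat.isCompressed() << "\n"; // 0: free space and the InnerNNZs buffer are kept
  mat.makeCompressed();                    // squeeze out the free space and drop InnerNNZs
  std::cout << mat.isCompressed() << "\n"; // 1: plain compressed (CCS) storage
}
\endcode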
A SparseVector is a special case of a SparseMatrix where only the \c Values and \c InnerIndices arrays are stored. There is no notion of compressed/uncompressed mode for a SparseVector. \section TutorialSparseExample First example Before describing each individual class, let's start with the following typical example: solving the Laplace equation \f$ \Delta u = 0 \f$ on a regular 2D grid using a finite difference scheme and Dirichlet boundary conditions. Such a problem can be mathematically expressed as a linear problem of the form \f$ Ax=b \f$ where \f$ x \f$ is the vector of \c m unknowns (in our case, the values of the pixels), \f$ b \f$ is the right hand side vector resulting from the boundary conditions, and \f$ A \f$ is an \f$ m \times m \f$ matrix containing only a few non-zero elements resulting from the discretization of the Laplacian operator.
\include Tutorial_sparse_example.cpp \image html Tutorial_sparse_example.jpeg
In this example, we start by defining a column-major sparse matrix type of double \c SparseMatrix<double>, and a triplet list of the same scalar type \c Triplet<double>. A triplet is a simple object representing a non-zero entry as the triplet: \c row index, \c column index, \c value. In the main function, we declare a list \c coefficients of triplets (as a std vector) and the right hand side vector \f$ b \f$ which are filled by the \a buildProblem function. The raw and flat list of non-zero entries is then converted to a true SparseMatrix object \c A. Note that the elements of the list do not have to be sorted, and possible duplicate entries will be summed up. The last step consists of effectively solving the assembled problem. Since the resulting matrix \c A is symmetric by construction, we can perform a direct Cholesky factorization via the SimplicialLDLT class which behaves like its LDLT counterpart for dense objects. The resulting vector \c x contains the pixel values as a 1D array which is saved to a jpeg file shown on the right of the code above. Describing the \a buildProblem and \a save functions is out of the scope of this tutorial. They are given \ref TutorialSparse_example_details "here" for the curious and for reproducibility purposes. \section TutorialSparseSparseMatrix The SparseMatrix class \b %Matrix \b and \b vector \b properties \n The SparseMatrix and SparseVector classes take three template arguments:
 * the scalar type (e.g., double)
 * the storage order (ColMajor or RowMajor, the default is ColMajor)
 * the inner index type (default is \c int).
As for dense Matrix objects, constructors take the size of the object. Here are some examples:
\code
SparseMatrix<std::complex<float> > mat(1000,2000);   // declares a 1000x2000 column-major compressed sparse matrix of complex<float>
SparseMatrix<double,RowMajor> mat(1000,2000);        // declares a 1000x2000 row-major compressed sparse matrix of double
SparseVector<std::complex<float> > vec(1000);        // declares a column sparse vector of complex<float> of size 1000
SparseVector<double,RowMajor> vec(1000);             // declares a row sparse vector of double of size 1000
\endcode
In the rest of the tutorial, \c mat and \c vec represent any sparse-matrix and sparse-vector objects, respectively. The dimensions of a matrix can be queried using the following functions:
Standard \n dimensions\code mat.rows() mat.cols()\endcode \code vec.size() \endcode
Sizes along the \n inner/outer dimensions\code mat.innerSize() mat.outerSize()\endcode
Number of non \n zero coefficients\code mat.nonZeros() \endcode \code vec.nonZeros() \endcode
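As an illustration of these accessors, here is a small self-contained sketch (with made-up sizes, not taken from the tutorial sources):
\code
#include <Eigen/SparseCore>
#include <iostream>

int main()
{
  Eigen::SparseMatrix<double> mat(5,8);                     // column-major by default
  mat.insert(2,3) = 1.5;
  std::cout << mat.rows() << " x " << mat.cols() << "\n";   // 5 x 8
  std::cout << "inner size: " << mat.innerSize() << "\n";   // 5 (length of a column)
  std::cout << "outer size: " << mat.outerSize() << "\n";   // 8 (number of columns)
  std::cout << "non-zeros:  " << mat.nonZeros()  << "\n";   // 1
}
\endcode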
\b Iterating \b over \b the \b nonzero \b coefficients \n Random access to the elements of a sparse object can be done through the \c coeffRef(i,j) function. However, this function involves a quite expensive binary search. In most cases, one only wants to iterate over the non-zero elements. This is achieved by a standard loop over the outer dimension, and then by iterating over the non-zeros of the current inner vector via an InnerIterator. Thus, the non-zero entries have to be visited in the same order as the storage order. Here is an example:
\code
SparseMatrix<double> mat(rows,cols);
for (int k=0; k<mat.outerSize(); ++k)
  for (SparseMatrix<double>::InnerIterator it(mat,k); it; ++it)
  {
    it.value();
    it.row();   // row index
    it.col();   // col index (here it is equal to k)
    it.index(); // inner index, here it is equal to it.row()
  }
\endcode
\code
SparseVector<double> vec(size);
for (SparseVector<double>::InnerIterator it(vec); it; ++it)
{
  it.value(); // == vec[ it.index() ]
  it.index();
}
\endcode
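As a small usage sketch of this iteration pattern (an assumed helper function, not part of the tutorial), the stored coefficients can be accumulated as follows:
\code
#include <Eigen/SparseCore>

// Sum of all stored coefficients, visiting the non-zeros in storage order.
// For a real-valued matrix this is equivalent to mat.sum().
double sumNonZeros(const Eigen::SparseMatrix<double>& mat)
{
  double sum = 0.0;
  for (int k = 0; k < mat.outerSize(); ++k)
    for (Eigen::SparseMatrix<double>::InnerIterator it(mat,k); it; ++it)
      sum += it.value();
  return sum;
}
\endcode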
For a writable expression, the referenced value can be modified using the valueRef() function. If the type of the sparse matrix or vector depends on a template parameter, then the \c typename keyword is required to indicate that \c InnerIterator denotes a type; see \ref TopicTemplateKeyword for details. \section TutorialSparseFilling Filling a sparse matrix Because of the special storage scheme of a SparseMatrix, special care has to be taken when adding new nonzero entries. For instance, the cost of a single purely random insertion into a SparseMatrix is \c O(nnz), where \c nnz is the current number of non-zero coefficients. The simplest way to create a sparse matrix while guaranteeing good performance is thus to first build a list of so-called \em triplets, and then convert it to a SparseMatrix. Here is a typical usage example:
\code
typedef Eigen::Triplet<double> T;
std::vector<T> tripletList;
tripletList.reserve(estimation_of_entries);
for(...)
{
  // ...
  tripletList.push_back(T(i,j,v_ij));
}
SparseMatrixType mat(rows,cols);
mat.setFromTriplets(tripletList.begin(), tripletList.end());
// mat is ready to go!
\endcode
The \c std::vector of triplets might contain the elements in arbitrary order, and might even contain duplicated elements that will be summed up by setFromTriplets(). See the SparseMatrix::setFromTriplets() function and class Triplet for more details. In some cases, however, slightly higher performance, and lower memory consumption can be reached by directly inserting the non-zeros into the destination matrix. A typical scenario of this approach is illustrated below:
\code
1: SparseMatrix<double> mat(rows,cols);         // default is column major
2: mat.reserve(VectorXi::Constant(cols,6));
3: for each i,j such that v_ij != 0
4:   mat.insert(i,j) = v_ij;                    // alternative: mat.coeffRef(i,j) += v_ij;
5: mat.makeCompressed();                        // optional
\endcode
- The key ingredient here is line 2, where we reserve room for 6 non-zeros per column. In many cases, the number of non-zeros per column or row can easily be known in advance. If it varies significantly for each inner vector, then it is possible to specify a reserve size for each inner vector by providing a vector object with an operator[](int j) returning the reserve size of the \c j-th inner vector (e.g., via a VectorXi or std::vector<int>). If only a rough estimate of the number of nonzeros per inner-vector can be obtained, it is highly recommended to overestimate it rather than the opposite. If this line is omitted, then the first insertion of a new element will reserve room for 2 elements per inner vector.
- Line 4 performs a sorted insertion. In this example, the ideal case is when the \c j-th column is not full and contains non-zeros whose inner-indices are smaller than \c i. In this case, this operation boils down to a trivial O(1) operation.
- When calling insert(i,j), the element \c (i,j) must not already exist; otherwise use the coeffRef(i,j) method, which allows, e.g., accumulating values. This method first performs a binary search and finally calls insert(i,j) if the element does not already exist. It is more flexible than insert() but also more costly.
- Line 5 suppresses the remaining empty space and transforms the matrix into a compressed column storage.
\section TutorialSparseFeatureSet Supported operators and functions Because of their special storage format, sparse matrices cannot offer the same level of flexibility as dense matrices.
In Eigen's sparse module we chose to expose only the subset of the dense matrix API which can be efficiently implemented. In the following, \em sm denotes a sparse matrix, \em sv a sparse vector, \em dm a dense matrix, and \em dv a dense vector. \subsection TutorialSparse_BasicOps Basic operations %Sparse expressions support most of the unary and binary coefficient wise operations:
\code
sm1.real()   sm1.imag()   -sm1                    0.5*sm1
sm1+sm2      sm1-sm2      sm1.cwiseProduct(sm2)
\endcode
However, a strong restriction is that the storage orders must match. For instance, in the following example:
\code
sm4 = sm1 + sm2 + sm3;
\endcode
sm1, sm2, and sm3 must all be row-major or all column-major. On the other hand, there is no restriction on the target matrix sm4. For instance, this means that for computing \f$ A^T + A \f$, the matrix \f$ A^T \f$ must be evaluated into a temporary matrix of compatible storage order:
\code
SparseMatrix<double> A, B;
B = SparseMatrix<double>(A.transpose()) + A;
\endcode
Binary coefficient wise operators can also mix sparse and dense expressions:
\code
sm2 = sm1.cwiseProduct(dm1);
dm2 = sm1 + dm1;
dm2 = dm1 - sm1;
\endcode
Performance-wise, adding/subtracting sparse and dense matrices is better performed in two steps. For instance, instead of doing dm2 = sm1 + dm1, better write:
\code
dm2 = dm1;
dm2 += sm1;
\endcode
This version has the advantage of fully exploiting the higher performance of dense storage (no indirection, SIMD, etc.), and of paying the cost of slow sparse evaluation on the few non-zeros of the sparse matrix only. %Sparse expressions also support transposition:
\code
sm1 = sm2.transpose();
sm1 = sm2.adjoint();
\endcode
However, there is no transposeInPlace() method. \subsection TutorialSparse_Products Matrix products %Eigen supports various kinds of sparse matrix products, which are summarized below:
- \b sparse-dense:
\code
dv2 = sm1 * dv1;
dm2 = dm1 * sm1.adjoint();
dm2 = 2. * sm1 * dm1;
\endcode
- \b symmetric \b sparse-dense. The product of a sparse symmetric matrix with a dense matrix (or vector) can also be optimized by specifying the symmetry with selfadjointView():
\code
dm2 = sm1.selfadjointView<>() * dm1;        // if all coefficients of A are stored
dm2 = A.selfadjointView<Upper>() * dm1;     // if only the upper part of A is stored
dm2 = A.selfadjointView<Lower>() * dm1;     // if only the lower part of A is stored
\endcode
- \b sparse-sparse. For sparse-sparse products, two different algorithms are available. The default one is conservative and preserves the explicit zeros that might appear:
\code
sm3 = sm1 * sm2;
sm3 = 4 * sm1.adjoint() * sm2;
\endcode
The second algorithm prunes on the fly the explicit zeros, or the values smaller than a given threshold. It is enabled and controlled through the pruned() functions:
\code
sm3 = (sm1 * sm2).pruned();             // removes numerical zeros
sm3 = (sm1 * sm2).pruned(ref);          // removes elements much smaller than ref
sm3 = (sm1 * sm2).pruned(ref,epsilon);  // removes elements smaller than ref*epsilon
\endcode
- \b permutations. Finally, permutations can be applied to sparse matrices too:
\code
PermutationMatrix<Dynamic,Dynamic> P = ...;
sm2 = P * sm1;
sm2 = sm1 * P.inverse();
sm2 = sm1.transpose() * P;
\endcode
\subsection TutorialSparse_SubMatrices Block operations Regarding read-access, sparse matrices expose the same API as dense matrices to access sub-matrices such as blocks, columns, and rows. See \ref TutorialBlockOperations for a detailed introduction.
However, for performance reasons, writing to a sub-sparse-matrix is much more limited, and currently only contiguous sets of columns (resp. rows) of a column-major (resp. row-major) SparseMatrix are writable. Moreover, this information has to be known at compile-time, leaving out methods such as block(...) and corner*(...). The available API for write-access to a SparseMatrix is summarized below:
\code
SparseMatrix<double,ColMajor> sm1;
sm1.col(j) = ...;
sm1.leftCols(ncols) = ...;
sm1.middleCols(j,ncols) = ...;
sm1.rightCols(ncols) = ...;

SparseMatrix<double,RowMajor> sm2;
sm2.row(i) = ...;
sm2.topRows(nrows) = ...;
sm2.middleRows(i,nrows) = ...;
sm2.bottomRows(nrows) = ...;
\endcode
In addition, sparse matrices expose the SparseMatrixBase::innerVector() and SparseMatrixBase::innerVectors() methods, which are aliases to the col/middleCols methods for a column-major storage, and to the row/middleRows methods for a row-major storage. \subsection TutorialSparse_TriangularSelfadjoint Triangular and selfadjoint views Just as with dense matrices, the triangularView() function can be used to address a triangular part of the matrix, and perform triangular solves with a dense right hand side:
\code
dm2 = sm1.triangularView<Lower>(dm1);
dv2 = sm1.transpose().triangularView<Upper>(dv1);
\endcode
The selfadjointView() function permits various operations:
- optimized sparse-dense matrix products:
\code
dm2 = sm1.selfadjointView<>() * dm1;        // if all coefficients of A are stored
dm2 = A.selfadjointView<Upper>() * dm1;     // if only the upper part of A is stored
dm2 = A.selfadjointView<Lower>() * dm1;     // if only the lower part of A is stored
\endcode
- copy of triangular parts:
\code
sm2 = sm1.selfadjointView<Upper>();                          // makes a full selfadjoint matrix from the upper triangular part
sm2.selfadjointView<Lower>() = sm1.selfadjointView<Upper>(); // copies the upper triangular part to the lower triangular part
\endcode
- application of symmetric permutations:
\code
PermutationMatrix<Dynamic,Dynamic> P = ...;
sm2 = A.selfadjointView<Upper>().twistedBy(P);                          // compute P S P' from the upper triangular part of A, and make it a full matrix
sm2.selfadjointView<Lower>() = A.selfadjointView<Lower>().twistedBy(P); // compute P S P' from the lower triangular part of A, and then only compute the lower part
\endcode
Please refer to the \link SparseQuickRefPage Quick Reference \endlink guide for the list of supported operations. The list of linear solvers available is \link TopicSparseSystems here. \endlink */ } piqp-octave/src/eigen/doc/FunctionsTakingEigenTypes.dox0000644000175100001770000003222214107270226023117 0ustar runnerdockernamespace Eigen { /** \page TopicFunctionTakingEigenTypes Writing Functions Taking %Eigen Types as Parameters %Eigen's use of expression templates results in potentially every expression being of a different type. If you pass such an expression to a function taking a parameter of type Matrix, your expression will implicitly be evaluated into a temporary Matrix, which will then be passed to the function. This means that you lose the benefit of expression templates. Concretely, this has two drawbacks: \li The evaluation into a temporary may be useless and inefficient; \li This only allows the function to read from the expression, not to write to it. Fortunately, all this myriad of expression types have in common that they all inherit a few common, templated base classes. By letting your function take templated parameters of these base types, you can let them play nicely with %Eigen's expression templates. 
\eigenAutoToc \section TopicFirstExamples Some First Examples This section will provide simple examples for different types of objects %Eigen is offering. Before starting with the actual examples, we need to recapitulate which base objects we can work with (see also \ref TopicClassHierarchy). \li MatrixBase: The common base class for all dense matrix expressions (as opposed to array expressions, as opposed to sparse and special matrix classes). Use it in functions that are meant to work only on dense matrices. \li ArrayBase: The common base class for all dense array expressions (as opposed to matrix expressions, etc). Use it in functions that are meant to work only on arrays. \li DenseBase: The common base class for all dense matrix expression, that is, the base class for both \c MatrixBase and \c ArrayBase. It can be used in functions that are meant to work on both matrices and arrays. \li EigenBase: The base class unifying all types of objects that can be evaluated into dense matrices or arrays, for example special matrix classes such as diagonal matrices, permutation matrices, etc. It can be used in functions that are meant to work on any such general type. %EigenBase Example

Prints the dimensions of the most generic object present in %Eigen. It could be any matrix expressions, any dense or sparse matrix and any array.
Example:Output:
\include function_taking_eigenbase.cpp \verbinclude function_taking_eigenbase.out
%DenseBase Example

Prints a sub-block of the dense expression. Accepts any dense matrix or array expression, but no sparse objects and no special matrix classes such as DiagonalMatrix.
\code
template <typename Derived>
void print_block(const DenseBase<Derived>& b, int x, int y, int r, int c)
{
  std::cout << "block: " << b.block(x,y,r,c) << std::endl;
}
\endcode
%ArrayBase Example

Prints the maximum coefficient of the array or array-expression.
\code
template <typename Derived>
void print_max_coeff(const ArrayBase<Derived>& a)
{
  std::cout << "max: " << a.maxCoeff() << std::endl;
}
\endcode
%MatrixBase Example

Prints the inverse condition number of the given matrix or matrix-expression.
\code
template <typename Derived>
void print_inv_cond(const MatrixBase<Derived>& a)
{
  const typename JacobiSVD<typename Derived::PlainObject>::SingularValuesType&
    sing_vals = a.jacobiSvd().singularValues();
  std::cout << "inv cond: " << sing_vals(sing_vals.size()-1) / sing_vals(0) << std::endl;
}
\endcode
Multiple templated arguments example

Calculates the squared Euclidean distance between two points.
\code
template <typename DerivedA, typename DerivedB>
typename DerivedA::Scalar squaredist(const MatrixBase<DerivedA>& p1, const MatrixBase<DerivedB>& p2)
{
  return (p1-p2).squaredNorm();
}
\endcode
Notice that we used two template parameters, one per argument. This permits the function to handle inputs of different types, e.g.,
\code
squaredist(v1,2*v2)
\endcode
where the first argument \c v1 is a vector and the second argument \c 2*v2 is an expression.
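The snippets above can be exercised with a small driver. The following main() is a hypothetical illustration (not part of the documentation sources); it assumes the print_block, print_max_coeff, print_inv_cond and squaredist templates defined above are in scope:
\code
#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;

int main()
{
  MatrixXf m = MatrixXf::Random(4,4);
  VectorXf v1 = VectorXf::Random(4), v2 = VectorXf::Random(4);
  print_block(m, 0, 0, 2, 2);            // plain matrix argument
  print_block(m.topRows(2), 0, 0, 1, 2); // expression argument, no temporary Matrix is created
  print_max_coeff(m.array());            // array expression
  print_inv_cond(m);                     // matrix expression
  std::cout << squaredist(v1, 2*v2) << std::endl;
}
\endcode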

These examples are just intended to give the reader a first impression of how functions can be written which take a plain and constant Matrix or Array argument. They are also intended to give the reader an idea about the most common base classes being the optimal candidates for functions. In the next section we will look in more detail at an example and the different ways it can be implemented, while discussing each implementation's problems and advantages. For the discussion below, Matrix and Array as well as MatrixBase and ArrayBase can be exchanged and all arguments still hold. \section TopicUsingRefClass How to write generic, but non-templated function? In all the previous examples, the functions had to be template functions. This approach allows to write very generic code, but it is often desirable to write non templated functions and still keep some level of genericity to avoid stupid copies of the arguments. The typical example is to write functions accepting both a MatrixXf or a block of a MatrixXf. This is exactly the purpose of the Ref class. Here is a simple example:
Example:Output:
\include function_taking_ref.cpp \verbinclude function_taking_ref.out
In the first two calls to inv_cond, no copy occurs because the memory layout of the arguments matches the memory layout accepted by Ref. However, in the last call, we have a generic expression that will be automatically evaluated into a temporary MatrixXf by the Ref<> object. A Ref object can also be writable. Here is an example of a function computing the covariance matrix of two input matrices where each row is an observation:
\code
void cov(const Ref<const MatrixXf> x, const Ref<const MatrixXf> y, Ref<MatrixXf> C)
{
  const float num_observations = static_cast<float>(x.rows());
  const RowVectorXf x_mean = x.colwise().sum() / num_observations;
  const RowVectorXf y_mean = y.colwise().sum() / num_observations;
  C = (x.rowwise() - x_mean).transpose() * (y.rowwise() - y_mean) / num_observations;
}
\endcode
and here are two examples calling cov without any copy:
\code
MatrixXf m1, m2, m3;
cov(m1, m2, m3);
cov(m1.leftCols<3>(), m2.leftCols<3>(), m3.topLeftCorner<3,3>());
\endcode
The Ref<> class has two other optional template arguments that allow controlling the kind of memory layout that can be accepted without any copy. See the class Ref documentation for the details. \section TopicPlainFunctionsWorking In which cases do functions taking plain Matrix or Array arguments work? Without using template functions, and without the Ref class, a naive implementation of the previous cov function might look like this
\code
MatrixXf cov(const MatrixXf& x, const MatrixXf& y)
{
  const float num_observations = static_cast<float>(x.rows());
  const RowVectorXf x_mean = x.colwise().sum() / num_observations;
  const RowVectorXf y_mean = y.colwise().sum() / num_observations;
  return (x.rowwise() - x_mean).transpose() * (y.rowwise() - y_mean) / num_observations;
}
\endcode
and contrary to what one might think at first, this implementation is fine unless you require a generic implementation that works with double matrices too, or unless you care about the extra temporary objects. Why is that the case? Where are temporaries involved? How can code as given below compile?
\code
MatrixXf x,y,z;
MatrixXf C = cov(x,y+z);
\endcode
In this special case, the example is fine and will be working because both parameters are declared as \e const references. The compiler creates a temporary and evaluates the expression y+z into this temporary. Once the function is processed, the temporary is released and the result is assigned to C. \b Note: Functions taking \e const references to Matrix (or Array) can process expressions at the cost of temporaries. \section TopicPlainFunctionsFailing In which cases do functions taking a plain Matrix or Array argument fail? Here, we consider a slightly modified version of the function given above. This time, we do not want to return the result but pass an additional non-const parameter which allows us to store the result. A first naive implementation might look as follows.
\code
// Note: This code is flawed!
void cov(const MatrixXf& x, const MatrixXf& y, MatrixXf& C)
{
  const float num_observations = static_cast<float>(x.rows());
  const RowVectorXf x_mean = x.colwise().sum() / num_observations;
  const RowVectorXf y_mean = y.colwise().sum() / num_observations;
  C = (x.rowwise() - x_mean).transpose() * (y.rowwise() - y_mean) / num_observations;
}
\endcode
When trying to execute the following code
\code
MatrixXf C = MatrixXf::Zero(3,6);
cov(x,y, C.block(0,0,3,3));
\endcode
the compiler will fail, because it is not possible to convert the expression returned by \c MatrixXf::block() into a non-const \c MatrixXf&. 
This is the case because the compiler wants to protect you from writing your result to a temporary object. In this special case this protection is not intended -- we want to write to a temporary object. So how can we overcome this problem? The solution which is preferred at the moment is based on a little \em hack. One needs to pass a const reference to the matrix and internally the constness needs to be cast away. The correct implementation for C++98 compliant compilers would be
\code
template <typename Derived, typename OtherDerived>
void cov(const MatrixBase<Derived>& x, const MatrixBase<Derived>& y, MatrixBase<OtherDerived> const & C)
{
  typedef typename Derived::Scalar Scalar;
  typedef typename internal::plain_row_type<Derived>::type RowVectorType;

  const Scalar num_observations = static_cast<Scalar>(x.rows());

  const RowVectorType x_mean = x.colwise().sum() / num_observations;
  const RowVectorType y_mean = y.colwise().sum() / num_observations;

  const_cast< MatrixBase<OtherDerived>& >(C) =
    (x.rowwise() - x_mean).transpose() * (y.rowwise() - y_mean) / num_observations;
}
\endcode
The implementation above now not only works with temporary expressions, but it also allows using the function with matrices of arbitrary floating point scalar types. \b Note: The const cast hack will only work with templated functions. It will not work with the MatrixXf implementation because it is not possible to cast a Block expression to a Matrix reference! \section TopicResizingInGenericImplementations How to resize matrices in generic implementations? One might think we are done now, right? This is not completely true because in order for our covariance function to be generically applicable, we want the following code to work
\code
MatrixXf x = MatrixXf::Random(100,3);
MatrixXf y = MatrixXf::Random(100,3);
MatrixXf C;
cov(x, y, C);
\endcode
This is no longer the case when we are using an implementation taking MatrixBase as a parameter. In general, %Eigen supports automatic resizing but it is not possible to do so on expressions. Why should resizing of a matrix Block be allowed? It is a reference to a sub-matrix and we definitely don't want to resize that. So how can we incorporate resizing if we cannot resize on MatrixBase? The solution is to resize the derived object as in this implementation.
\code
template <typename Derived, typename OtherDerived>
void cov(const MatrixBase<Derived>& x, const MatrixBase<Derived>& y, MatrixBase<OtherDerived> const & C_)
{
  typedef typename Derived::Scalar Scalar;
  typedef typename internal::plain_row_type<Derived>::type RowVectorType;

  const Scalar num_observations = static_cast<Scalar>(x.rows());

  const RowVectorType x_mean = x.colwise().sum() / num_observations;
  const RowVectorType y_mean = y.colwise().sum() / num_observations;

  MatrixBase<OtherDerived>& C = const_cast< MatrixBase<OtherDerived>& >(C_);

  C.derived().resize(x.cols(),x.cols()); // resize the derived object
  C = (x.rowwise() - x_mean).transpose() * (y.rowwise() - y_mean) / num_observations;
}
\endcode
This implementation is now working for parameters being expressions and for parameters being matrices and having the wrong size. Resizing the expressions does not do any harm in this case unless they actually require resizing. That means, passing an expression with the wrong dimensions will result in a run-time error (in debug mode only) while passing expressions of the correct size will just work fine. \b Note: In the above discussion the terms Matrix and Array and MatrixBase and ArrayBase can be exchanged and all arguments still hold. 
\section TopicSummary Summary - To summarize, the implementation of functions taking non-writable (const referenced) objects is not a big issue and does not lead to problematic situations in terms of compiling and running your program. However, a naive implementation is likely to introduce unnecessary temporary objects in your code. In order to avoid evaluating parameters into temporaries, pass them as (const) references to MatrixBase or ArrayBase (so templatize your function). - Functions taking writable (non-const) parameters must take const references and cast away constness within the function body. - Functions that take as parameters MatrixBase (or ArrayBase) objects, and potentially need to resize them (in the case where they are resizable), must call resize() on the derived class, as returned by derived(). */ } piqp-octave/src/eigen/doc/UsingNVCC.dox0000644000175100001770000000347714107270226017565 0ustar runnerdocker namespace Eigen { /** \page TopicCUDA Using Eigen in CUDA kernels Staring from CUDA 5.5 and Eigen 3.3, it is possible to use Eigen's matrices, vectors, and arrays for fixed size within CUDA kernels. This is especially useful when working on numerous but small problems. By default, when Eigen's headers are included within a .cu file compiled by nvcc most Eigen's functions and methods are prefixed by the \c __device__ \c __host__ keywords making them callable from both host and device code. This support can be disabled by defining \c EIGEN_NO_CUDA before including any Eigen's header. This might be useful to disable some warnings when a .cu file makes use of Eigen on the host side only. However, in both cases, host's SIMD vectorization has to be disabled in .cu files. It is thus \b strongly \b recommended to properly move all costly host computation from your .cu files to regular .cpp files. Known issues: - \c nvcc with MS Visual Studio does not work (patch welcome) - \c nvcc 5.5 with gcc-4.7 (or greater) has issues with the standard \c \ header file. To workaround this, you can add the following before including any other files: \code // workaround issue between gcc >= 4.7 and cuda 5.5 #if (defined __GNUC__) && (__GNUC__>4 || __GNUC_MINOR__>=7) #undef _GLIBCXX_ATOMIC_BUILTINS #undef _GLIBCXX_USE_INT128 #endif \endcode - On 64bits system Eigen uses \c long \c int as the default type for indexes and sizes. On CUDA device, it would make sense to default to 32 bits \c int. However, to keep host and CUDA code compatible, this cannot be done automatically by %Eigen, and the user is thus required to define \c EIGEN_DEFAULT_DENSE_INDEX_TYPE to \c int throughout his code (or only for CUDA code if there is no interaction between host and CUDA code through %Eigen's object). */ } piqp-octave/src/eigen/doc/TopicLinearAlgebraDecompositions.dox0000644000175100001770000002155414107270226024431 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TopicLinearAlgebraDecompositions Catalogue of dense decompositions This page presents a catalogue of the dense matrix decompositions offered by Eigen. For an introduction on linear solvers and decompositions, check this \link TutorialLinearAlgebra page \endlink. To get an overview of the true relative speed of the different decompositions, check this \link DenseDecompositionBenchmark benchmark \endlink. \section TopicLinAlgBigTable Catalogue of decompositions offered by Eigen
Generic information, not Eigen-specific Eigen-specific
Decomposition Requirements on the matrix Speed Algorithm reliability and accuracy Rank-revealing Allows to compute (besides linear solving) Linear solver provided by Eigen Maturity of Eigen's implementation Optimizations
PartialPivLU Invertible Fast Depends on condition number - - Yes Excellent Blocking, Implicit MT
FullPivLU - Slow Proven Yes - Yes Excellent -
HouseholderQR - Fast Depends on condition number - Orthogonalization Yes Excellent Blocking
ColPivHouseholderQR - Fast Good Yes Orthogonalization Yes Excellent -
FullPivHouseholderQR - Slow Proven Yes Orthogonalization Yes Average -
CompleteOrthogonalDecomposition - Fast Good Yes Orthogonalization Yes Excellent -
LLT Positive definite Very fast Depends on condition number - - Yes Excellent Blocking
LDLT Positive or negative semidefinite1 Very fast Good - - Yes Excellent Soon: blocking
\n Singular values and eigenvalues decompositions
BDCSVD (divide \& conquer) - One of the fastest SVD algorithms Excellent Yes Singular values/vectors, least squares Yes (and does least squares) Excellent Blocked bidiagonalization
JacobiSVD (two-sided) - Slow (but fast for small matrices) Proven3 Yes Singular values/vectors, least squares Yes (and does least squares) Excellent R-SVD
SelfAdjointEigenSolver Self-adjoint Fast-average2 Good Yes Eigenvalues/vectors - Excellent Closed forms for 2x2 and 3x3
ComplexEigenSolver Square Slow-very slow2 Depends on condition number Yes Eigenvalues/vectors - Average -
EigenSolver Square and real Average-slow2 Depends on condition number Yes Eigenvalues/vectors - Average -
GeneralizedSelfAdjointEigenSolver Square Fast-average2 Depends on condition number - Generalized eigenvalues/vectors - Good -
\n Helper decompositions
RealSchur Square and real Average-slow2 Depends on condition number Yes - - Average -
ComplexSchur Square Slow-very slow2 Depends on condition number Yes - - Average -
Tridiagonalization Self-adjoint Fast Good - - - Good Soon: blocking
HessenbergDecomposition Square Average Good - - - Good Soon: blocking
\b Notes:
  • \b 1: There exist two variants of the LDLT algorithm. Eigen's one produces a pure diagonal D matrix, and therefore it cannot handle indefinite matrices, unlike Lapack's one which produces a block diagonal D matrix.
  • \b 2: Eigenvalues, SVD and Schur decompositions rely on iterative algorithms. Their convergence speed depends on how well the eigenvalues are separated.
  • \b 3: Our JacobiSVD is two-sided, making for proven and optimal precision for square matrices. For non-square matrices, we have to use a QR preconditioner first. The default choice, ColPivHouseholderQR, is already very reliable, but if you want it to be proven, use FullPivHouseholderQR instead.
\section TopicLinAlgTerminology Terminology
Selfadjoint
For a real matrix, selfadjoint is a synonym for symmetric. For a complex matrix, selfadjoint is a synonym for \em hermitian. More generally, a matrix \f$ A \f$ is selfadjoint if and only if it is equal to its adjoint \f$ A^* \f$. The adjoint is also called the \em conjugate \em transpose.
Positive/negative definite
A selfadjoint matrix \f$ A \f$ is positive definite if \f$ v^* A v > 0 \f$ for any non zero vector \f$ v \f$. In the same vein, it is negative definite if \f$ v^* A v < 0 \f$ for any non zero vector \f$ v \f$
Positive/negative semidefinite
A selfadjoint matrix \f$ A \f$ is positive semi-definite if \f$ v^* A v \ge 0 \f$ for any non zero vector \f$ v \f$. In the same vein, it is negative semi-definite if \f$ v^* A v \le 0 \f$ for any non zero vector \f$ v \f$
Blocking
Means the algorithm can work per block, whence guaranteeing a good scaling of the performance for large matrices.
Implicit Multi Threading (MT)
Means the algorithm can take advantage of multicore processors via OpenMP. "Implicit" means the algorithm itself is not parallelized, but that it relies on parallelized matrix-matrix product routines.
Explicit Multi Threading (MT)
Means the algorithm is explicitly parallelized to take advantage of multicore processors via OpenMP.
Meta-unroller
Means the algorithm is automatically and explicitly unrolled for very small fixed size matrices.
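As a concrete, hedged illustration of the catalogue above (the matrix and right-hand side below are arbitrary random data, not from the documentation sources), the same linear system can be solved with two of the listed decompositions and the results compared:
\code
#include <Eigen/Dense>
#include <iostream>

int main()
{
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(4,4);
  Eigen::VectorXd b = Eigen::VectorXd::Random(4);
  // PartialPivLU: fast, but requires an invertible matrix.
  Eigen::VectorXd x1 = A.partialPivLu().solve(b);
  // ColPivHouseholderQR: slower, but rank-revealing and more robust.
  Eigen::VectorXd x2 = A.colPivHouseholderQr().solve(b);
  std::cout << "difference between the two solutions: " << (x1 - x2).norm() << std::endl;
}
\endcode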
*/ } piqp-octave/src/eigen/doc/TutorialReductionsVisitorsBroadcasting.dox0000644000175100001770000002734614107270226025756 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TutorialReductionsVisitorsBroadcasting Reductions, visitors and broadcasting This page explains Eigen's reductions, visitors and broadcasting and how they are used with \link MatrixBase matrices \endlink and \link ArrayBase arrays \endlink. \eigenAutoToc \section TutorialReductionsVisitorsBroadcastingReductions Reductions In Eigen, a reduction is a function taking a matrix or array, and returning a single scalar value. One of the most used reductions is \link DenseBase::sum() .sum() \endlink, returning the sum of all the coefficients inside a given matrix or array.
Example:Output:
\include tut_arithmetic_redux_basic.cpp \verbinclude tut_arithmetic_redux_basic.out
The \em trace of a matrix, as returned by the function \c trace(), is the sum of the diagonal coefficients and can equivalently be computed as a.diagonal().sum(). \subsection TutorialReductionsVisitorsBroadcastingReductionsNorm Norm computations The (Euclidean a.k.a. \f$\ell^2\f$) squared norm of a vector can be obtained via \link MatrixBase::squaredNorm() squaredNorm() \endlink. It is equal to the dot product of the vector by itself, and equivalently to the sum of squared absolute values of its coefficients. Eigen also provides the \link MatrixBase::norm() norm() \endlink method, which returns the square root of \link MatrixBase::squaredNorm() squaredNorm() \endlink. These operations can also operate on matrices; in that case, an n-by-p matrix is seen as a vector of size (n*p), so for example the \link MatrixBase::norm() norm() \endlink method returns the "Frobenius" or "Hilbert-Schmidt" norm. We refrain from speaking of the \f$\ell^2\f$ norm of a matrix because that can mean different things. If you want other coefficient-wise \f$\ell^p\f$ norms, use the \link MatrixBase::lpNorm lpNorm<p>() \endlink method. The template parameter \a p can take the special value \a Infinity if you want the \f$\ell^\infty\f$ norm, which is the maximum of the absolute values of the coefficients. The following example demonstrates these methods.
Example:Output:
\include Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.cpp \verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.out
\b Operator \b norm: The 1-norm and \f$\infty\f$-norm matrix operator norms can easily be computed as follows:
Example:Output:
\include Tutorial_ReductionsVisitorsBroadcasting_reductions_operatornorm.cpp \verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_operatornorm.out
See below for more explanations on the syntax of these expressions. \subsection TutorialReductionsVisitorsBroadcastingReductionsBool Boolean reductions The following reductions operate on boolean values: - \link DenseBase::all() all() \endlink returns \b true if all of the coefficients in a given Matrix or Array evaluate to \b true . - \link DenseBase::any() any() \endlink returns \b true if at least one of the coefficients in a given Matrix or Array evaluates to \b true . - \link DenseBase::count() count() \endlink returns the number of coefficients in a given Matrix or Array that evaluate to \b true. These are typically used in conjunction with the coefficient-wise comparison and equality operators provided by Array. For instance, array > 0 is an %Array of the same size as \c array , with \b true at those positions where the corresponding coefficient of \c array is positive. Thus, (array > 0).all() tests whether all coefficients of \c array are positive. This can be seen in the following example:
Example:Output:
\include Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.cpp \verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.out
\subsection TutorialReductionsVisitorsBroadcastingReductionsUserdefined User defined reductions TODO In the meantime you can have a look at the DenseBase::redux() function. \section TutorialReductionsVisitorsBroadcastingVisitors Visitors Visitors are useful when one wants to obtain the location of a coefficient inside a Matrix or Array. The simplest examples are \link MatrixBase::maxCoeff() maxCoeff(&x,&y) \endlink and \link MatrixBase::minCoeff() minCoeff(&x,&y)\endlink, which can be used to find the location of the greatest or smallest coefficient in a Matrix or Array. The arguments passed to a visitor are pointers to the variables where the row and column position are to be stored. These variables should be of type \link Eigen::Index Index \endlink, as shown below:
Example:Output:
\include Tutorial_ReductionsVisitorsBroadcasting_visitors.cpp \verbinclude Tutorial_ReductionsVisitorsBroadcasting_visitors.out
Both functions also return the value of the minimum or maximum coefficient. \section TutorialReductionsVisitorsBroadcastingPartialReductions Partial reductions Partial reductions are reductions that can operate column- or row-wise on a Matrix or Array, applying the reduction operation on each column or row and returning a column or row vector with the corresponding values. Partial reductions are applied with \link DenseBase::colwise() colwise() \endlink or \link DenseBase::rowwise() rowwise() \endlink. A simple example is obtaining the maximum of the elements in each column in a given matrix, storing the result in a row vector:
Example:Output:
\include Tutorial_ReductionsVisitorsBroadcasting_colwise.cpp \verbinclude Tutorial_ReductionsVisitorsBroadcasting_colwise.out
The same operation can be performed row-wise:
Example:Output:
\include Tutorial_ReductionsVisitorsBroadcasting_rowwise.cpp \verbinclude Tutorial_ReductionsVisitorsBroadcasting_rowwise.out
Note that column-wise operations return a row vector, while row-wise operations return a column vector. \subsection TutorialReductionsVisitorsBroadcastingPartialReductionsCombined Combining partial reductions with other operations It is also possible to use the result of a partial reduction to do further processing. Here is another example that finds the column whose sum of elements is the maximum within a matrix. With column-wise partial reductions this can be coded as:
Example:Output:
\include Tutorial_ReductionsVisitorsBroadcasting_maxnorm.cpp \verbinclude Tutorial_ReductionsVisitorsBroadcasting_maxnorm.out
The previous example applies the \link DenseBase::sum() sum() \endlink reduction on each column through the \link DenseBase::colwise() colwise() \endlink visitor, obtaining a new matrix whose size is 1x4. Therefore, if \f[ \mbox{m} = \begin{bmatrix} 1 & 2 & 6 & 9 \\ 3 & 1 & 7 & 2 \end{bmatrix} \f] then \f[ \mbox{m.colwise().sum()} = \begin{bmatrix} 4 & 3 & 13 & 11 \end{bmatrix} \f] The \link DenseBase::maxCoeff() maxCoeff() \endlink reduction is finally applied to obtain the column index where the maximum sum is found, which is the column index 2 (third column) in this case. \section TutorialReductionsVisitorsBroadcastingBroadcasting Broadcasting The concept behind broadcasting is similar to partial reductions, with the difference that broadcasting constructs an expression where a vector (column or row) is interpreted as a matrix by replicating it in one direction. A simple example is to add a certain column vector to each column in a matrix. This can be accomplished with:
Example:Output:
\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.cpp \verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.out
We can interpret the instruction mat.colwise() += v in two equivalent ways. It adds the vector \c v to every column of the matrix. Alternatively, it can be interpreted as repeating the vector \c v four times to form a four-by-two matrix which is then added to \c mat: \f[ \begin{bmatrix} 1 & 2 & 6 & 9 \\ 3 & 1 & 7 & 2 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 6 & 9 \\ 4 & 2 & 8 & 3 \end{bmatrix}. \f] The operators -=, + and - can also be used column-wise and row-wise. On arrays, we can also use the operators *=, /=, * and / to perform coefficient-wise multiplication and division column-wise or row-wise. These operators are not available on matrices because it is not clear what they would do. If you want multiply column 0 of a matrix \c mat with \c v(0), column 1 with \c v(1), and so on, then use mat = mat * v.asDiagonal(). It is important to point out that the vector to be added column-wise or row-wise must be of type Vector, and cannot be a Matrix. If this is not met then you will get compile-time error. This also means that broadcasting operations can only be applied with an object of type Vector, when operating with Matrix. The same applies for the Array class, where the equivalent for VectorXf is ArrayXf. As always, you should not mix arrays and matrices in the same expression. To perform the same operation row-wise we can do:
Example:Output:
\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.cpp \verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.out
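The array case mentioned above (using *= and /= column- or row-wise) can be illustrated with the following hedged sketch; the values are made up and not taken from the tutorial sources:
\code
#include <Eigen/Dense>
#include <iostream>

int main()
{
  Eigen::ArrayXXf a(2,4);
  a << 1, 2, 6, 9,
       3, 1, 7, 2;
  Eigen::ArrayXf s(2);
  s << 2, 10;
  a.colwise() *= s;   // scales every column coefficient-wise by s (row i is multiplied by s(i))
  std::cout << a << std::endl;
  // The matrix equivalent of this column-wise scaling is: mat = s.matrix().asDiagonal() * mat;
}
\endcode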
\subsection TutorialReductionsVisitorsBroadcastingBroadcastingCombined Combining broadcasting with other operations Broadcasting can also be combined with other operations, such as Matrix or Array operations, reductions and partial reductions. Now that broadcasting, reductions and partial reductions have been introduced, we can dive into a more advanced example that finds the nearest neighbour of a vector v within the columns of matrix m. The Euclidean distance will be used in this example, computing the squared Euclidean distance with the partial reduction named \link MatrixBase::squaredNorm() squaredNorm() \endlink:
Example:Output:
\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.cpp \verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.out
The line that does the job is \code (m.colwise() - v).colwise().squaredNorm().minCoeff(&index); \endcode We will go step by step to understand what is happening: - m.colwise() - v is a broadcasting operation, subtracting v from each column in m. The result of this operation is a new matrix whose size is the same as matrix m: \f[ \mbox{m.colwise() - v} = \begin{bmatrix} -1 & 21 & 4 & 7 \\ 0 & 8 & 4 & -1 \end{bmatrix} \f] - (m.colwise() - v).colwise().squaredNorm() is a partial reduction, computing the squared norm column-wise. The result of this operation is a row vector where each coefficient is the squared Euclidean distance between each column in m and v: \f[ \mbox{(m.colwise() - v).colwise().squaredNorm()} = \begin{bmatrix} 1 & 505 & 32 & 50 \end{bmatrix} \f] - Finally, minCoeff(&index) is used to obtain the index of the column in m that is closest to v in terms of Euclidean distance. */ } piqp-octave/src/eigen/doc/TutorialAdvancedInitialization.dox0000644000175100001770000001536414107270226024165 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TutorialAdvancedInitialization Advanced initialization This page discusses several advanced methods for initializing matrices. It gives more details on the comma-initializer, which was introduced before. It also explains how to get special matrices such as the identity matrix and the zero matrix. \eigenAutoToc \section TutorialAdvancedInitializationCommaInitializer The comma initializer Eigen offers a comma initializer syntax which allows the user to easily set all the coefficients of a matrix, vector or array. Simply list the coefficients, starting at the top-left corner and moving from left to right and from the top to the bottom. The size of the object needs to be specified beforehand. If you list too few or too many coefficients, Eigen will complain.
Example:Output:
\include Tutorial_commainit_01.cpp \verbinclude Tutorial_commainit_01.out
Moreover, the elements of the initialization list may themselves be vectors or matrices. A common use is to join vectors or matrices together. For example, here is how to join two row vectors together. Remember that you have to set the size before you can use the comma initializer.
Example:Output:
\include Tutorial_AdvancedInitialization_Join.cpp \verbinclude Tutorial_AdvancedInitialization_Join.out
We can use the same technique to initialize matrices with a block structure.
Example:Output:
\include Tutorial_AdvancedInitialization_Block.cpp \verbinclude Tutorial_AdvancedInitialization_Block.out
The comma initializer can also be used to fill block expressions such as m.row(i). Here is a more complicated way to get the same result as in the first example above:
Example:Output:
\include Tutorial_commainit_01b.cpp \verbinclude Tutorial_commainit_01b.out
\section TutorialAdvancedInitializationSpecialMatrices Special matrices and arrays The Matrix and Array classes have static methods like \link DenseBase::Zero() Zero()\endlink, which can be used to initialize all coefficients to zero. There are three variants. The first variant takes no arguments and can only be used for fixed-size objects. If you want to initialize a dynamic-size object to zero, you need to specify the size. Thus, the second variant requires one argument and can be used for one-dimensional dynamic-size objects, while the third variant requires two arguments and can be used for two-dimensional objects. All three variants are illustrated in the following example:
Example:Output:
\include Tutorial_AdvancedInitialization_Zero.cpp \verbinclude Tutorial_AdvancedInitialization_Zero.out
Similarly, the static method \link DenseBase::Constant() Constant\endlink(value) sets all coefficients to \c value. If the size of the object needs to be specified, the additional arguments go before the \c value argument, as in MatrixXd::Constant(rows, cols, value). The method \link DenseBase::Random() Random() \endlink fills the matrix or array with random coefficients. The identity matrix can be obtained by calling \link MatrixBase::Identity() Identity()\endlink; this method is only available for Matrix, not for Array, because "identity matrix" is a linear algebra concept. The method \link DenseBase::LinSpaced LinSpaced\endlink(size, low, high) is only available for vectors and one-dimensional arrays; it yields a vector of the specified size whose coefficients are equally spaced between \c low and \c high. The method \c LinSpaced() is illustrated in the following example, which prints a table with angles in degrees, the corresponding angle in radians, and their sine and cosine.
Example:Output:
\include Tutorial_AdvancedInitialization_LinSpaced.cpp \verbinclude Tutorial_AdvancedInitialization_LinSpaced.out
This example shows that objects like the ones returned by LinSpaced() can be assigned to variables (and expressions). Eigen defines utility functions like \link DenseBase::setZero() setZero()\endlink, \link MatrixBase::setIdentity() \endlink and \link DenseBase::setLinSpaced() \endlink to do this conveniently. The following example contrasts three ways to construct the matrix \f$ J = \bigl[ \begin{smallmatrix} O & I \\ I & O \end{smallmatrix} \bigr] \f$: using static methods and assignment, using static methods and the comma-initializer, or using the setXxx() methods.
Example:Output:
\include Tutorial_AdvancedInitialization_ThreeWays.cpp \verbinclude Tutorial_AdvancedInitialization_ThreeWays.out
A summary of all pre-defined matrix, vector and array objects can be found in the \ref QuickRefPage. \section TutorialAdvancedInitializationTemporaryObjects Usage as temporary objects As shown above, static methods as Zero() and Constant() can be used to initialize variables at the time of declaration or at the right-hand side of an assignment operator. You can think of these methods as returning a matrix or array; in fact, they return so-called \ref TopicEigenExpressionTemplates "expression objects" which evaluate to a matrix or array when needed, so that this syntax does not incur any overhead. These expressions can also be used as a temporary object. The second example in the \ref GettingStarted guide, which we reproduce here, already illustrates this.
Example:Output:
\include QuickStart_example2_dynamic.cpp \verbinclude QuickStart_example2_dynamic.out
The expression m + MatrixXf::Constant(3,3,1.2) constructs the 3-by-3 matrix expression with all its coefficients equal to 1.2 plus the corresponding coefficient of \a m. The comma-initializer, too, can also be used to construct temporary objects. The following example constructs a random matrix of size 2-by-3, and then multiplies this matrix on the left with \f$ \bigl[ \begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix} \bigr] \f$.
Example:Output:
\include Tutorial_AdvancedInitialization_CommaTemporary.cpp \verbinclude Tutorial_AdvancedInitialization_CommaTemporary.out
The \link CommaInitializer::finished() finished() \endlink method is necessary here to get the actual matrix object once the comma initialization of our temporary submatrix is done. */ } piqp-octave/src/eigen/doc/examples/0000755000175100001770000000000014107270226017115 5ustar runnerdockerpiqp-octave/src/eigen/doc/examples/make_circulant.cpp.evaluator0000644000175100001770000000166414107270226024612 0ustar runnerdockernamespace Eigen { namespace internal { template struct evaluator > : evaluator_base > { typedef Circulant XprType; typedef typename nested_eval::type ArgTypeNested; typedef typename remove_all::type ArgTypeNestedCleaned; typedef typename XprType::CoeffReturnType CoeffReturnType; enum { CoeffReadCost = evaluator::CoeffReadCost, Flags = Eigen::ColMajor }; evaluator(const XprType& xpr) : m_argImpl(xpr.m_arg), m_rows(xpr.rows()) { } CoeffReturnType coeff(Index row, Index col) const { Index index = row - col; if (index < 0) index += m_rows; return m_argImpl.coeff(index); } evaluator m_argImpl; const Index m_rows; }; } } piqp-octave/src/eigen/doc/examples/tut_matrix_resize_fixed_size.cpp0000644000175100001770000000034514107270226025615 0ustar runnerdocker#include #include using namespace Eigen; int main() { Matrix4d m; m.resize(4,4); // no operation std::cout << "The matrix m is of size " << m.rows() << "x" << m.cols() << std::endl; } piqp-octave/src/eigen/doc/examples/Tutorial_BlockOperations_block_assignment.cpp0000644000175100001770000000101314107270226030177 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { Array22f m; m << 1,2, 3,4; Array44f a = Array44f::Constant(0.6); cout << "Here is the array a:" << endl << a << endl << endl; a.block<2,2>(1,1) = m; cout << "Here is now a with m copied into its central 2x2 block:" << endl << a << endl << endl; a.block(0,0,2,3) = a.block(2,1,2,3); cout << "Here is now a with bottom-right 2x3 block copied into top-left 2x3 block:" << endl << a << endl << endl; } piqp-octave/src/eigen/doc/examples/DenseBase_template_int_middleCols.cpp0000644000175100001770000000043314107270226026356 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main(void) { int const N = 5; MatrixXi A(N,N); A.setRandom(); cout << "A =\n" << A << '\n' << endl; cout << "A(:,1..3) =\n" << A.middleCols<3>(1) << endl; return 0; } piqp-octave/src/eigen/doc/examples/Tutorial_ArrayClass_cwise_other.cpp0000644000175100001770000000063214107270226026144 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main() { ArrayXf a = ArrayXf::Random(5); a *= 2; cout << "a =" << endl << a << endl; cout << "a.abs() =" << endl << a.abs() << endl; cout << "a.abs().sqrt() =" << endl << a.abs().sqrt() << endl; cout << "a.min(a.abs().sqrt()) =" << endl << a.min(a.abs().sqrt()) << endl; } piqp-octave/src/eigen/doc/examples/Tutorial_ArrayClass_interop.cpp0000644000175100001770000000067414107270226025317 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main() { MatrixXf m(2,2); MatrixXf n(2,2); MatrixXf result(2,2); m << 1,2, 3,4; n << 5,6, 7,8; result = (m.array() + 4).matrix() * m; cout << "-- Combination 1: --" << endl << result << endl << endl; result = (m.array() * n.array()).matrix() * m; cout << "-- Combination 2: --" << endl << result << endl << endl; } piqp-octave/src/eigen/doc/examples/class_Block.cpp0000644000175100001770000000134114107270226022037 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; template 
Eigen::Block topLeftCorner(MatrixBase& m, int rows, int cols) { return Eigen::Block(m.derived(), 0, 0, rows, cols); } template const Eigen::Block topLeftCorner(const MatrixBase& m, int rows, int cols) { return Eigen::Block(m.derived(), 0, 0, rows, cols); } int main(int, char**) { Matrix4d m = Matrix4d::Identity(); cout << topLeftCorner(4*m, 2, 3) << endl; // calls the const version topLeftCorner(m, 2, 3) *= 5; // calls the non-const version cout << "Now the matrix m is:" << endl << m << endl; return 0; } piqp-octave/src/eigen/doc/examples/class_FixedReshaped.cpp0000644000175100001770000000072314107270226023523 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; template Eigen::Reshaped reshape_helper(MatrixBase& m) { return Eigen::Reshaped(m.derived()); } int main(int, char**) { MatrixXd m(2, 4); m << 1, 2, 3, 4, 5, 6, 7, 8; MatrixXd n = reshape_helper(m); cout << "matrix m is:" << endl << m << endl; cout << "matrix n is:" << endl << n << endl; return 0; } piqp-octave/src/eigen/doc/examples/Tutorial_BlockOperations_corner.cpp0000644000175100001770000000070014107270226026147 0ustar runnerdocker#include #include using namespace std; int main() { Eigen::Matrix4f m; m << 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11,12, 13,14,15,16; cout << "m.leftCols(2) =" << endl << m.leftCols(2) << endl << endl; cout << "m.bottomRows<2>() =" << endl << m.bottomRows<2>() << endl << endl; m.topLeftCorner(1,3) = m.bottomRightCorner(3,1).transpose(); cout << "After assignment, m = " << endl << m << endl; } piqp-octave/src/eigen/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_colwise.cpp0000644000175100001770000000036714107270226031442 0ustar runnerdocker#include #include using namespace std; int main() { Eigen::MatrixXf mat(2,4); mat << 1, 2, 6, 9, 3, 1, 7, 2; std::cout << "Column's maximum: " << std::endl << mat.colwise().maxCoeff() << std::endl; } piqp-octave/src/eigen/doc/examples/make_circulant.cpp0000644000175100001770000000055614107270226022610 0ustar runnerdocker/* This program is presented in several fragments in the doc page. Every fragment is in its own file; this file simply combines them. 
*/ #include "make_circulant.cpp.preamble" #include "make_circulant.cpp.traits" #include "make_circulant.cpp.expression" #include "make_circulant.cpp.evaluator" #include "make_circulant.cpp.entry" #include "make_circulant.cpp.main" piqp-octave/src/eigen/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.cpp0000644000175100001770000000124314107270226033201 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { VectorXf v(2); MatrixXf m(2,2), n(2,2); v << -1, 2; m << 1,-2, -3,4; cout << "v.squaredNorm() = " << v.squaredNorm() << endl; cout << "v.norm() = " << v.norm() << endl; cout << "v.lpNorm<1>() = " << v.lpNorm<1>() << endl; cout << "v.lpNorm() = " << v.lpNorm() << endl; cout << endl; cout << "m.squaredNorm() = " << m.squaredNorm() << endl; cout << "m.norm() = " << m.norm() << endl; cout << "m.lpNorm<1>() = " << m.lpNorm<1>() << endl; cout << "m.lpNorm() = " << m.lpNorm() << endl; } piqp-octave/src/eigen/doc/examples/TutorialLinAlgSVDSolve.cpp0000644000175100001770000000062514107270226024104 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { MatrixXf A = MatrixXf::Random(3, 2); cout << "Here is the matrix A:\n" << A << endl; VectorXf b = VectorXf::Random(3); cout << "Here is the right hand side b:\n" << b << endl; cout << "The least-squares solution is:\n" << A.bdcSvd(ComputeThinU | ComputeThinV).solve(b) << endl; } piqp-octave/src/eigen/doc/examples/tut_arithmetic_scalar_mul_div.cpp0000644000175100001770000000054114107270226025712 0ustar runnerdocker#include #include using namespace Eigen; int main() { Matrix2d a; a << 1, 2, 3, 4; Vector3d v(1,2,3); std::cout << "a * 2.5 =\n" << a * 2.5 << std::endl; std::cout << "0.1 * v =\n" << 0.1 * v << std::endl; std::cout << "Doing v *= 2;" << std::endl; v *= 2; std::cout << "Now v =\n" << v << std::endl; } piqp-octave/src/eigen/doc/examples/make_circulant.cpp.main0000644000175100001770000000022214107270226023521 0ustar runnerdockerint main() { Eigen::VectorXd vec(4); vec << 1, 2, 4, 8; Eigen::MatrixXd mat; mat = makeCirculant(vec); std::cout << mat << std::endl; } piqp-octave/src/eigen/doc/examples/make_circulant2.cpp0000644000175100001770000000245014107270226022665 0ustar runnerdocker#include #include using namespace Eigen; // [circulant_func] template class circulant_functor { const ArgType &m_vec; public: circulant_functor(const ArgType& arg) : m_vec(arg) {} const typename ArgType::Scalar& operator() (Index row, Index col) const { Index index = row - col; if (index < 0) index += m_vec.size(); return m_vec(index); } }; // [circulant_func] // [square] template struct circulant_helper { typedef Matrix MatrixType; }; // [square] // [makeCirculant] template CwiseNullaryOp, typename circulant_helper::MatrixType> makeCirculant(const Eigen::MatrixBase& arg) { typedef typename circulant_helper::MatrixType MatrixType; return MatrixType::NullaryExpr(arg.size(), arg.size(), circulant_functor(arg.derived())); } // [makeCirculant] // [main] int main() { Eigen::VectorXd vec(4); vec << 1, 2, 4, 8; Eigen::MatrixXd mat; mat = makeCirculant(vec); std::cout << mat << std::endl; } // [main] piqp-octave/src/eigen/doc/examples/DenseBase_middleRows_int.cpp0000644000175100001770000000043214107270226024514 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main(void) { int const N = 5; MatrixXi A(N,N); A.setRandom(); cout << "A =\n" << A << '\n' << endl; cout << "A(2..3,:) =\n" << A.middleRows(2,2) << endl; return 0; } 
piqp-octave/src/eigen/doc/examples/Tutorial_BlockOperations_colrow.cpp0000644000175100001770000000060614107270226026171 0ustar runnerdocker#include #include using namespace std; int main() { Eigen::MatrixXf m(3,3); m << 1,2,3, 4,5,6, 7,8,9; cout << "Here is the matrix m:" << endl << m << endl; cout << "2nd Row: " << m.row(1) << endl; m.col(2) += 3 * m.col(0); cout << "After adding 3 times the first column into the third column, the matrix m is:\n"; cout << m << endl; } piqp-octave/src/eigen/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_rowwise.cpp0000644000175100001770000000036414107270226031471 0ustar runnerdocker#include #include using namespace std; int main() { Eigen::MatrixXf mat(2,4); mat << 1, 2, 6, 9, 3, 1, 7, 2; std::cout << "Row's maximum: " << std::endl << mat.rowwise().maxCoeff() << std::endl; } ././@LongLink0000644000000000000000000000015000000000000011577 Lustar rootrootpiqp-octave/src/eigen/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.cpppiqp-octave/src/eigen/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.0000644000175100001770000000055114107270226034357 0ustar runnerdocker#include #include using namespace std; int main() { Eigen::MatrixXf mat(2,4); Eigen::VectorXf v(4); mat << 1, 2, 6, 9, 3, 1, 7, 2; v << 0,1,2,3; //add v to each row of m mat.rowwise() += v.transpose(); std::cout << "Broadcasting result: " << std::endl; std::cout << mat << std::endl; } piqp-octave/src/eigen/doc/examples/TemplateKeyword_flexible.cpp0000644000175100001770000000124514107270226024615 0ustar runnerdocker#include #include using namespace Eigen; template void copyUpperTriangularPart(MatrixBase& dst, const MatrixBase& src) { /* Note the 'template' keywords in the following line! */ dst.template triangularView() = src.template triangularView(); } int main() { MatrixXi m1 = MatrixXi::Ones(5,5); MatrixXi m2 = MatrixXi::Random(4,4); std::cout << "m2 before copy:" << std::endl; std::cout << m2 << std::endl << std::endl; copyUpperTriangularPart(m2, m1.topLeftCorner(4,4)); std::cout << "m2 after copy:" << std::endl; std::cout << m2 << std::endl << std::endl; } piqp-octave/src/eigen/doc/examples/TutorialLinAlgExSolveLDLT.cpp0000644000175100001770000000054414107270226024504 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { Matrix2f A, b; A << 2, -1, -1, 3; b << 1, 2, 3, 1; cout << "Here is the matrix A:\n" << A << endl; cout << "Here is the right hand side b:\n" << b << endl; Matrix2f x = A.ldlt().solve(b); cout << "The solution is:\n" << x << endl; } piqp-octave/src/eigen/doc/examples/TutorialLinAlgExComputeSolveError.cpp0000644000175100001770000000056314107270226026374 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { MatrixXd A = MatrixXd::Random(100,100); MatrixXd b = MatrixXd::Random(100,50); MatrixXd x = A.fullPivLu().solve(b); double relative_error = (A*x - b).norm() / b.norm(); // norm() is L2 norm cout << "The relative error is:\n" << relative_error << endl; } piqp-octave/src/eigen/doc/examples/Tutorial_ArrayClass_mult.cpp0000644000175100001770000000035514107270226024614 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main() { ArrayXXf a(2,2); ArrayXXf b(2,2); a << 1,2, 3,4; b << 5,6, 7,8; cout << "a * b = " << endl << a * b << endl; } piqp-octave/src/eigen/doc/examples/TutorialLinAlgInverseDeterminant.cpp0000644000175100001770000000053414107270226026244 0ustar runnerdocker#include #include using namespace 
std; using namespace Eigen; int main() { Matrix3f A; A << 1, 2, 1, 2, 1, 0, -1, 1, 2; cout << "Here is the matrix A:\n" << A << endl; cout << "The determinant of A is " << A.determinant() << endl; cout << "The inverse of A is:\n" << A.inverse() << endl; } piqp-octave/src/eigen/doc/examples/class_VectorBlock.cpp0000644000175100001770000000140714107270226023225 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; template Eigen::VectorBlock segmentFromRange(MatrixBase& v, int start, int end) { return Eigen::VectorBlock(v.derived(), start, end-start); } template const Eigen::VectorBlock segmentFromRange(const MatrixBase& v, int start, int end) { return Eigen::VectorBlock(v.derived(), start, end-start); } int main(int, char**) { Matrix v; v << 1,2,3,4,5,6; cout << segmentFromRange(2*v, 2, 4) << endl; // calls the const version segmentFromRange(v, 1, 3) *= 5; // calls the non-const version cout << "Now the vector v is:" << endl << v << endl; return 0; } piqp-octave/src/eigen/doc/examples/Tutorial_ArrayClass_interop_matrix.cpp0000644000175100001770000000111714107270226026674 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main() { MatrixXf m(2,2); MatrixXf n(2,2); MatrixXf result(2,2); m << 1,2, 3,4; n << 5,6, 7,8; result = m * n; cout << "-- Matrix m*n: --" << endl << result << endl << endl; result = m.array() * n.array(); cout << "-- Array m*n: --" << endl << result << endl << endl; result = m.cwiseProduct(n); cout << "-- With cwiseProduct: --" << endl << result << endl << endl; result = m.array() + 4; cout << "-- Array m + 4: --" << endl << result << endl << endl; } piqp-octave/src/eigen/doc/examples/DenseBase_middleCols_int.cpp0000644000175100001770000000043214107270226024462 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main(void) { int const N = 5; MatrixXi A(N,N); A.setRandom(); cout << "A =\n" << A << '\n' << endl; cout << "A(1..3,:) =\n" << A.middleCols(1,3) << endl; return 0; } piqp-octave/src/eigen/doc/examples/function_taking_eigenbase.cpp0000644000175100001770000000064214107270226025007 0ustar runnerdocker#include #include using namespace Eigen; template void print_size(const EigenBase& b) { std::cout << "size (rows, cols): " << b.size() << " (" << b.rows() << ", " << b.cols() << ")" << std::endl; } int main() { Vector3f v; print_size(v); // v.asDiagonal() returns a 3x3 diagonal matrix pseudo-expression print_size(v.asDiagonal()); } piqp-octave/src/eigen/doc/examples/class_CwiseUnaryOp.cpp0000644000175100001770000000106114107270226023374 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; // define a custom template unary functor template struct CwiseClampOp { CwiseClampOp(const Scalar& inf, const Scalar& sup) : m_inf(inf), m_sup(sup) {} const Scalar operator()(const Scalar& x) const { return xm_sup ? 
m_sup : x); } Scalar m_inf, m_sup; }; int main(int, char**) { Matrix4d m1 = Matrix4d::Random(); cout << m1 << endl << "becomes: " << endl << m1.unaryExpr(CwiseClampOp(-0.5,0.5)) << endl; return 0; } piqp-octave/src/eigen/doc/examples/Tutorial_simple_example_fixed_size.cpp0000644000175100001770000000043214107270226026720 0ustar runnerdocker#include #include using namespace Eigen; int main() { Matrix3f m3; m3 << 1, 2, 3, 4, 5, 6, 7, 8, 9; Matrix4f m4 = Matrix4f::Identity(); Vector4i v4(1, 2, 3, 4); std::cout << "m3\n" << m3 << "\nm4:\n" << m4 << "\nv4:\n" << v4 << std::endl; } piqp-octave/src/eigen/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_maxnorm.cpp0000644000175100001770000000076614107270226031461 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { MatrixXf mat(2,4); mat << 1, 2, 6, 9, 3, 1, 7, 2; MatrixXf::Index maxIndex; float maxNorm = mat.colwise().sum().maxCoeff(&maxIndex); std::cout << "Maximum sum at position " << maxIndex << std::endl; std::cout << "The corresponding vector is: " << std::endl; std::cout << mat.col( maxIndex ) << std::endl; std::cout << "And its sum is is: " << maxNorm << std::endl; } ././@LongLink0000644000000000000000000000014700000000000011605 Lustar rootrootpiqp-octave/src/eigen/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_reductions_operatornorm.cpppiqp-octave/src/eigen/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_reductions_operatornorm.c0000644000175100001770000000067714107270226034427 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main() { MatrixXf m(2,2); m << 1,-2, -3,4; cout << "1-norm(m) = " << m.cwiseAbs().colwise().sum().maxCoeff() << " == " << m.colwise().lpNorm<1>().maxCoeff() << endl; cout << "infty-norm(m) = " << m.cwiseAbs().rowwise().sum().maxCoeff() << " == " << m.rowwise().lpNorm<1>().maxCoeff() << endl; } piqp-octave/src/eigen/doc/examples/TutorialLinAlgExSolveColPivHouseholderQR.cpp0000644000175100001770000000057514107270226027612 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { Matrix3f A; Vector3f b; A << 1,2,3, 4,5,6, 7,8,10; b << 3, 3, 4; cout << "Here is the matrix A:\n" << A << endl; cout << "Here is the vector b:\n" << b << endl; Vector3f x = A.colPivHouseholderQr().solve(b); cout << "The solution is:\n" << x << endl; } piqp-octave/src/eigen/doc/examples/DenseBase_template_int_middleRows.cpp0000644000175100001770000000043314107270226026410 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main(void) { int const N = 5; MatrixXi A(N,N); A.setRandom(); cout << "A =\n" << A << '\n' << endl; cout << "A(1..3,:) =\n" << A.middleRows<3>(1) << endl; return 0; } piqp-octave/src/eigen/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_visitors.cpp0000644000175100001770000000102314107270226031645 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { Eigen::MatrixXf m(2,2); m << 1, 2, 3, 4; //get location of maximum MatrixXf::Index maxRow, maxCol; float max = m.maxCoeff(&maxRow, &maxCol); //get location of minimum MatrixXf::Index minRow, minCol; float min = m.minCoeff(&minRow, &minCol); cout << "Max: " << max << ", at: " << maxRow << "," << maxCol << endl; cout << "Min: " << min << ", at: " << minRow << "," << minCol << endl; } piqp-octave/src/eigen/doc/examples/make_circulant.cpp.preamble0000644000175100001770000000012514107270226024366 0ustar runnerdocker#include #include template class Circulant; 
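// This preamble fragment only forward-declares the Circulant expression template;
// the traits, expression, evaluator and entry fragments pulled in by make_circulant.cpp
// supply its definition and the makeCirculant() entry point.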
piqp-octave/src/eigen/doc/examples/class_FixedVectorBlock.cpp0000644000175100001770000000124114107270226024201 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; template Eigen::VectorBlock firstTwo(MatrixBase& v) { return Eigen::VectorBlock(v.derived(), 0); } template const Eigen::VectorBlock firstTwo(const MatrixBase& v) { return Eigen::VectorBlock(v.derived(), 0); } int main(int, char**) { Matrix v; v << 1,2,3,4,5,6; cout << firstTwo(4*v) << endl; // calls the const version firstTwo(v) *= 2; // calls the non-const version cout << "Now the vector v is:" << endl << v << endl; return 0; } piqp-octave/src/eigen/doc/examples/QuickStart_example.cpp0000644000175100001770000000031614107270226023426 0ustar runnerdocker#include #include using Eigen::MatrixXd; int main() { MatrixXd m(2,2); m(0,0) = 3; m(1,0) = 2.5; m(0,1) = -1; m(1,1) = m(1,0) + m(0,1); std::cout << m << std::endl; } piqp-octave/src/eigen/doc/examples/.krazy0000644000175100001770000000004214107270226020252 0ustar runnerdockerEXCLUDE copyright EXCLUDE license piqp-octave/src/eigen/doc/examples/Tutorial_simple_example_dynamic_size.cpp0000644000175100001770000000125014107270226027244 0ustar runnerdocker#include #include using namespace Eigen; int main() { for (int size=1; size<=4; ++size) { MatrixXi m(size,size+1); // a (size)x(size+1)-matrix of int's for (int j=0; j #include using namespace Eigen; // [functor] template class indexing_functor { const ArgType &m_arg; const RowIndexType &m_rowIndices; const ColIndexType &m_colIndices; public: typedef Matrix MatrixType; indexing_functor(const ArgType& arg, const RowIndexType& row_indices, const ColIndexType& col_indices) : m_arg(arg), m_rowIndices(row_indices), m_colIndices(col_indices) {} const typename ArgType::Scalar& operator() (Index row, Index col) const { return m_arg(m_rowIndices[row], m_colIndices[col]); } }; // [functor] // [function] template CwiseNullaryOp, typename indexing_functor::MatrixType> mat_indexing(const Eigen::MatrixBase& arg, const RowIndexType& row_indices, const ColIndexType& col_indices) { typedef indexing_functor Func; typedef typename Func::MatrixType MatrixType; return MatrixType::NullaryExpr(row_indices.size(), col_indices.size(), Func(arg.derived(), row_indices, col_indices)); } // [function] int main() { std::cout << "[main1]\n"; Eigen::MatrixXi A = Eigen::MatrixXi::Random(4,4); Array3i ri(1,2,1); ArrayXi ci(6); ci << 3,2,1,0,0,2; Eigen::MatrixXi B = mat_indexing(A, ri, ci); std::cout << "A =" << std::endl; std::cout << A << std::endl << std::endl; std::cout << "A([" << ri.transpose() << "], [" << ci.transpose() << "]) =" << std::endl; std::cout << B << std::endl; std::cout << "[main1]\n"; std::cout << "[main2]\n"; B = mat_indexing(A, ri+1, ci); std::cout << "A(ri+1,ci) =" << std::endl; std::cout << B << std::endl << std::endl; #if EIGEN_COMP_CXXVER >= 11 B = mat_indexing(A, ArrayXi::LinSpaced(13,0,12).unaryExpr([](int x){return x%4;}), ArrayXi::LinSpaced(4,0,3)); std::cout << "A(ArrayXi::LinSpaced(13,0,12).unaryExpr([](int x){return x%4;}), ArrayXi::LinSpaced(4,0,3)) =" << std::endl; std::cout << B << std::endl << std::endl; #endif std::cout << "[main2]\n"; } piqp-octave/src/eigen/doc/examples/make_circulant.cpp.traits0000644000175100001770000000113514107270226024107 0ustar runnerdockernamespace Eigen { namespace internal { template struct traits > { typedef Eigen::Dense StorageKind; typedef Eigen::MatrixXpr XprKind; typedef typename ArgType::StorageIndex StorageIndex; typedef typename ArgType::Scalar Scalar; enum { Flags = 
Eigen::ColMajor, RowsAtCompileTime = ArgType::RowsAtCompileTime, ColsAtCompileTime = ArgType::RowsAtCompileTime, MaxRowsAtCompileTime = ArgType::MaxRowsAtCompileTime, MaxColsAtCompileTime = ArgType::MaxRowsAtCompileTime }; }; } } piqp-octave/src/eigen/doc/examples/Cwise_lgamma.cpp0000644000175100001770000000030014107270226022202 0ustar runnerdocker#include #include #include using namespace Eigen; int main() { Array4d v(0.5,10,0,-1); std::cout << v.lgamma() << std::endl; } piqp-octave/src/eigen/doc/examples/tut_arithmetic_redux_basic.cpp0000644000175100001770000000102114107270226025210 0ustar runnerdocker#include #include using namespace std; int main() { Eigen::Matrix2d mat; mat << 1, 2, 3, 4; cout << "Here is mat.sum(): " << mat.sum() << endl; cout << "Here is mat.prod(): " << mat.prod() << endl; cout << "Here is mat.mean(): " << mat.mean() << endl; cout << "Here is mat.minCoeff(): " << mat.minCoeff() << endl; cout << "Here is mat.maxCoeff(): " << mat.maxCoeff() << endl; cout << "Here is mat.trace(): " << mat.trace() << endl; } piqp-octave/src/eigen/doc/examples/Cwise_erfc.cpp0000644000175100001770000000027614107270226021677 0ustar runnerdocker#include #include #include using namespace Eigen; int main() { Array4d v(-0.5,2,0,-7); std::cout << v.erfc() << std::endl; } piqp-octave/src/eigen/doc/examples/make_circulant.cpp.expression0000644000175100001770000000111714107270226025000 0ustar runnerdockertemplate class Circulant : public Eigen::MatrixBase > { public: Circulant(const ArgType& arg) : m_arg(arg) { EIGEN_STATIC_ASSERT(ArgType::ColsAtCompileTime == 1, YOU_TRIED_CALLING_A_VECTOR_METHOD_ON_A_MATRIX); } typedef typename Eigen::internal::ref_selector::type Nested; typedef Eigen::Index Index; Index rows() const { return m_arg.rows(); } Index cols() const { return m_arg.rows(); } typedef typename Eigen::internal::ref_selector::type ArgTypeNested; ArgTypeNested m_arg; }; piqp-octave/src/eigen/doc/examples/tut_arithmetic_dot_cross.cpp0000644000175100001770000000061114107270226024723 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main() { Vector3d v(1,2,3); Vector3d w(0,1,2); cout << "Dot product: " << v.dot(w) << endl; double dp = v.adjoint()*w; // automatic conversion of the inner product to a scalar cout << "Dot product via a matrix product: " << dp << endl; cout << "Cross product:\n" << v.cross(w) << endl; } piqp-octave/src/eigen/doc/examples/TutorialLinAlgComputeTwice.cpp0000644000175100001770000000115614107270226025047 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { Matrix2f A, b; LLT llt; A << 2, -1, -1, 3; b << 1, 2, 3, 1; cout << "Here is the matrix A:\n" << A << endl; cout << "Here is the right hand side b:\n" << b << endl; cout << "Computing LLT decomposition..." << endl; llt.compute(A); cout << "The solution is:\n" << llt.solve(b) << endl; A(1,1)++; cout << "The matrix A is now:\n" << A << endl; cout << "Computing LLT decomposition..." 
<< endl; llt.compute(A); cout << "The solution is now:\n" << llt.solve(b) << endl; } piqp-octave/src/eigen/doc/examples/matrixfree_cg.cpp0000644000175100001770000001026314107270226022442 0ustar runnerdocker#include #include #include #include #include class MatrixReplacement; using Eigen::SparseMatrix; namespace Eigen { namespace internal { // MatrixReplacement looks-like a SparseMatrix, so let's inherits its traits: template<> struct traits : public Eigen::internal::traits > {}; } } // Example of a matrix-free wrapper from a user type to Eigen's compatible type // For the sake of simplicity, this example simply wrap a Eigen::SparseMatrix. class MatrixReplacement : public Eigen::EigenBase { public: // Required typedefs, constants, and method: typedef double Scalar; typedef double RealScalar; typedef int StorageIndex; enum { ColsAtCompileTime = Eigen::Dynamic, MaxColsAtCompileTime = Eigen::Dynamic, IsRowMajor = false }; Index rows() const { return mp_mat->rows(); } Index cols() const { return mp_mat->cols(); } template Eigen::Product operator*(const Eigen::MatrixBase& x) const { return Eigen::Product(*this, x.derived()); } // Custom API: MatrixReplacement() : mp_mat(0) {} void attachMyMatrix(const SparseMatrix &mat) { mp_mat = &mat; } const SparseMatrix my_matrix() const { return *mp_mat; } private: const SparseMatrix *mp_mat; }; // Implementation of MatrixReplacement * Eigen::DenseVector though a specialization of internal::generic_product_impl: namespace Eigen { namespace internal { template struct generic_product_impl // GEMV stands for matrix-vector : generic_product_impl_base > { typedef typename Product::Scalar Scalar; template static void scaleAndAddTo(Dest& dst, const MatrixReplacement& lhs, const Rhs& rhs, const Scalar& alpha) { // This method should implement "dst += alpha * lhs * rhs" inplace, // however, for iterative solvers, alpha is always equal to 1, so let's not bother about it. 
assert(alpha==Scalar(1) && "scaling is not implemented"); EIGEN_ONLY_USED_FOR_DEBUG(alpha); // Here we could simply call dst.noalias() += lhs.my_matrix() * rhs, // but let's do something fancier (and less efficient): for(Index i=0; i S = Eigen::MatrixXd::Random(n,n).sparseView(0.5,1); S = S.transpose()*S; MatrixReplacement A; A.attachMyMatrix(S); Eigen::VectorXd b(n), x; b.setRandom(); // Solve Ax = b using various iterative solver with matrix-free version: { Eigen::ConjugateGradient cg; cg.compute(A); x = cg.solve(b); std::cout << "CG: #iterations: " << cg.iterations() << ", estimated error: " << cg.error() << std::endl; } { Eigen::BiCGSTAB bicg; bicg.compute(A); x = bicg.solve(b); std::cout << "BiCGSTAB: #iterations: " << bicg.iterations() << ", estimated error: " << bicg.error() << std::endl; } { Eigen::GMRES gmres; gmres.compute(A); x = gmres.solve(b); std::cout << "GMRES: #iterations: " << gmres.iterations() << ", estimated error: " << gmres.error() << std::endl; } { Eigen::DGMRES gmres; gmres.compute(A); x = gmres.solve(b); std::cout << "DGMRES: #iterations: " << gmres.iterations() << ", estimated error: " << gmres.error() << std::endl; } { Eigen::MINRES minres; minres.compute(A); x = minres.solve(b); std::cout << "MINRES: #iterations: " << minres.iterations() << ", estimated error: " << minres.error() << std::endl; } } piqp-octave/src/eigen/doc/examples/CMakeLists.txt0000644000175100001770000000113714107270226021657 0ustar runnerdockerfile(GLOB examples_SRCS "*.cpp") foreach(example_src ${examples_SRCS}) get_filename_component(example ${example_src} NAME_WE) add_executable(${example} ${example_src}) if(EIGEN_STANDARD_LIBRARIES_TO_LINK_TO) target_link_libraries(${example} ${EIGEN_STANDARD_LIBRARIES_TO_LINK_TO}) endif() add_custom_command( TARGET ${example} POST_BUILD COMMAND ${example} ARGS >${CMAKE_CURRENT_BINARY_DIR}/${example}.out ) add_dependencies(all_examples ${example}) endforeach() if(EIGEN_COMPILER_SUPPORT_CPP11) ei_add_target_property(nullary_indexing COMPILE_FLAGS "-std=c++11") endif()piqp-octave/src/eigen/doc/examples/CustomizingEigen_Inheritance.cpp0000644000175100001770000000137614107270226025424 0ustar runnerdocker#include #include class MyVectorType : public Eigen::VectorXd { public: MyVectorType(void):Eigen::VectorXd() {} // This constructor allows you to construct MyVectorType from Eigen expressions template MyVectorType(const Eigen::MatrixBase& other) : Eigen::VectorXd(other) { } // This method allows you to assign Eigen expressions to MyVectorType template MyVectorType& operator=(const Eigen::MatrixBase & other) { this->Eigen::VectorXd::operator=(other); return *this; } }; int main() { MyVectorType v = MyVectorType::Ones(4); v(2) += 10; v = 2 * v; std::cout << v.transpose() << std::endl; } piqp-octave/src/eigen/doc/examples/tut_arithmetic_add_sub.cpp0000644000175100001770000000072714107270226024335 0ustar runnerdocker#include #include using namespace Eigen; int main() { Matrix2d a; a << 1, 2, 3, 4; MatrixXd b(2,2); b << 2, 3, 1, 4; std::cout << "a + b =\n" << a + b << std::endl; std::cout << "a - b =\n" << a - b << std::endl; std::cout << "Doing a += b;" << std::endl; a += b; std::cout << "Now a =\n" << a << std::endl; Vector3d v(1,2,3); Vector3d w(1,0,0); std::cout << "-v + w - v =\n" << -v + w - v << std::endl; } piqp-octave/src/eigen/doc/examples/Tutorial_ArrayClass_addition.cpp0000644000175100001770000000062014107270226025421 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main() { ArrayXXf a(3,3); ArrayXXf b(3,3); a << 
1,2,3, 4,5,6, 7,8,9; b << 1,2,3, 1,2,3, 1,2,3; // Adding two arrays cout << "a + b = " << endl << a + b << endl << endl; // Subtracting a scalar from an array cout << "a - 2 = " << endl << a - 2 << endl; } piqp-octave/src/eigen/doc/examples/QuickStart_example2_fixed.cpp0000644000175100001770000000044114107270226024666 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main() { Matrix3d m = Matrix3d::Random(); m = (m + Matrix3d::Constant(1.2)) * 50; cout << "m =" << endl << m << endl; Vector3d v(1,2,3); cout << "m * v =" << endl << m * v << endl; } piqp-octave/src/eigen/doc/examples/tut_matrix_resize.cpp0000644000175100001770000000075114107270226023405 0ustar runnerdocker#include #include using namespace Eigen; int main() { MatrixXd m(2,5); m.resize(4,3); std::cout << "The matrix m is of size " << m.rows() << "x" << m.cols() << std::endl; std::cout << "It has " << m.size() << " coefficients" << std::endl; VectorXd v(2); v.resize(5); std::cout << "The vector v is of size " << v.size() << std::endl; std::cout << "As a matrix, v is of size " << v.rows() << "x" << v.cols() << std::endl; } piqp-octave/src/eigen/doc/examples/TutorialLinAlgSelfAdjointEigenSolver.cpp0000644000175100001770000000102614107270226027000 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { Matrix2f A; A << 1, 2, 2, 3; cout << "Here is the matrix A:\n" << A << endl; SelfAdjointEigenSolver eigensolver(A); if (eigensolver.info() != Success) abort(); cout << "The eigenvalues of A are:\n" << eigensolver.eigenvalues() << endl; cout << "Here's a matrix whose columns are eigenvectors of A \n" << "corresponding to these eigenvalues:\n" << eigensolver.eigenvectors() << endl; } piqp-octave/src/eigen/doc/examples/TutorialInplaceLU.cpp0000644000175100001770000000306514107270226023165 0ustar runnerdocker#include struct init { init() { std::cout << "[" << "init" << "]" << std::endl; } }; init init_obj; // [init] #include #include using namespace std; using namespace Eigen; int main() { MatrixXd A(2,2); A << 2, -1, 1, 3; cout << "Here is the input matrix A before decomposition:\n" << A << endl; cout << "[init]" << endl; cout << "[declaration]" << endl; PartialPivLU > lu(A); cout << "Here is the input matrix A after decomposition:\n" << A << endl; cout << "[declaration]" << endl; cout << "[matrixLU]" << endl; cout << "Here is the matrix storing the L and U factors:\n" << lu.matrixLU() << endl; cout << "[matrixLU]" << endl; cout << "[solve]" << endl; MatrixXd A0(2,2); A0 << 2, -1, 1, 3; VectorXd b(2); b << 1, 2; VectorXd x = lu.solve(b); cout << "Residual: " << (A0 * x - b).norm() << endl; cout << "[solve]" << endl; cout << "[modifyA]" << endl; A << 3, 4, -2, 1; x = lu.solve(b); cout << "Residual: " << (A0 * x - b).norm() << endl; cout << "[modifyA]" << endl; cout << "[recompute]" << endl; A0 = A; // save A lu.compute(A); x = lu.solve(b); cout << "Residual: " << (A0 * x - b).norm() << endl; cout << "[recompute]" << endl; cout << "[recompute_bis0]" << endl; MatrixXd A1(2,2); A1 << 5,-2,3,4; lu.compute(A1); cout << "Here is the input matrix A1 after decomposition:\n" << A1 << endl; cout << "[recompute_bis0]" << endl; cout << "[recompute_bis1]" << endl; x = lu.solve(b); cout << "Residual: " << (A1 * x - b).norm() << endl; cout << "[recompute_bis1]" << endl; } piqp-octave/src/eigen/doc/examples/tut_matrix_coefficient_accessors.cpp0000644000175100001770000000052714107270226026430 0ustar runnerdocker#include #include using namespace Eigen; int main() { MatrixXd m(2,2); 
m(0,0) = 3; m(1,0) = 2.5; m(0,1) = -1; m(1,1) = m(1,0) + m(0,1); std::cout << "Here is the matrix m:\n" << m << std::endl; VectorXd v(2); v(0) = 4; v(1) = v(0) - 1; std::cout << "Here is the vector v:\n" << v << std::endl; } piqp-octave/src/eigen/doc/examples/Tutorial_ArrayClass_accessors.cpp0000644000175100001770000000072214107270226025616 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main() { ArrayXXf m(2,2); // assign some values coefficient by coefficient m(0,0) = 1.0; m(0,1) = 2.0; m(1,0) = 3.0; m(1,1) = m(0,1) + m(1,0); // print values to standard output cout << m << endl << endl; // using the comma-initializer is also allowed m << 1.0,2.0, 3.0,4.0; // print values to standard output cout << m << endl; } piqp-octave/src/eigen/doc/examples/Tutorial_PartialLU_solve.cpp0000644000175100001770000000062114107270226024550 0ustar runnerdocker#include #include #include using namespace std; using namespace Eigen; int main() { Matrix3f A; Vector3f b; A << 1,2,3, 4,5,6, 7,8,10; b << 3, 3, 4; cout << "Here is the matrix A:" << endl << A << endl; cout << "Here is the vector b:" << endl << b << endl; Vector3f x = A.lu().solve(b); cout << "The solution is:" << endl << x << endl; } piqp-octave/src/eigen/doc/examples/QuickStart_example2_dynamic.cpp0000644000175100001770000000046114107270226025215 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; int main() { MatrixXd m = MatrixXd::Random(3,3); m = (m + MatrixXd::Constant(3,3,1.2)) * 50; cout << "m =" << endl << m << endl; VectorXd v(3); v << 1, 2, 3; cout << "m * v =" << endl << m * v << endl; } piqp-octave/src/eigen/doc/examples/function_taking_ref.cpp0000644000175100001770000000112214107270226023633 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; float inv_cond(const Ref& a) { const VectorXf sing_vals = a.jacobiSvd().singularValues(); return sing_vals(sing_vals.size()-1) / sing_vals(0); } int main() { Matrix4f m = Matrix4f::Random(); cout << "matrix m:" << endl << m << endl << endl; cout << "inv_cond(m): " << inv_cond(m) << endl; cout << "inv_cond(m(1:3,1:3)): " << inv_cond(m.topLeftCorner(3,3)) << endl; cout << "inv_cond(m+I): " << inv_cond(m+Matrix4f::Identity()) << endl; } piqp-octave/src/eigen/doc/examples/tut_arithmetic_matrix_mul.cpp0000644000175100001770000000114414107270226025107 0ustar runnerdocker#include #include using namespace Eigen; int main() { Matrix2d mat; mat << 1, 2, 3, 4; Vector2d u(-1,1), v(2,0); std::cout << "Here is mat*mat:\n" << mat*mat << std::endl; std::cout << "Here is mat*u:\n" << mat*u << std::endl; std::cout << "Here is u^T*mat:\n" << u.transpose()*mat << std::endl; std::cout << "Here is u^T*v:\n" << u.transpose()*v << std::endl; std::cout << "Here is u*v^T:\n" << u*v.transpose() << std::endl; std::cout << "Let's multiply mat by itself" << std::endl; mat = mat*mat; std::cout << "Now mat is mat:\n" << mat << std::endl; } piqp-octave/src/eigen/doc/examples/Tutorial_BlockOperations_vector.cpp0000644000175100001770000000053414107270226026166 0ustar runnerdocker#include #include using namespace std; int main() { Eigen::ArrayXf v(6); v << 1, 2, 3, 4, 5, 6; cout << "v.head(3) =" << endl << v.head(3) << endl << endl; cout << "v.tail<3>() = " << endl << v.tail<3>() << endl << endl; v.segment(1,4) *= 2; cout << "after 'v.segment(1,4) *= 2', v =" << endl << v << endl; } piqp-octave/src/eigen/doc/examples/TemplateKeyword_simple.cpp0000644000175100001770000000077414107270226024322 0ustar runnerdocker#include #include using 
namespace Eigen; void copyUpperTriangularPart(MatrixXf& dst, const MatrixXf& src) { dst.triangularView() = src.triangularView(); } int main() { MatrixXf m1 = MatrixXf::Ones(4,4); MatrixXf m2 = MatrixXf::Random(4,4); std::cout << "m2 before copy:" << std::endl; std::cout << m2 << std::endl << std::endl; copyUpperTriangularPart(m2, m1); std::cout << "m2 after copy:" << std::endl; std::cout << m2 << std::endl << std::endl; } piqp-octave/src/eigen/doc/examples/Tutorial_BlockOperations_print_block.cpp0000644000175100001770000000063514107270226027174 0ustar runnerdocker#include #include using namespace std; int main() { Eigen::MatrixXf m(4,4); m << 1, 2, 3, 4, 5, 6, 7, 8, 9,10,11,12, 13,14,15,16; cout << "Block in the middle" << endl; cout << m.block<2,2>(1,1) << endl << endl; for (int i = 1; i <= 3; ++i) { cout << "Block of size " << i << "x" << i << endl; cout << m.block(0,0,i,i) << endl << endl; } } piqp-octave/src/eigen/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.cpp0000644000175100001770000000067014107270226032510 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { Eigen::MatrixXf m(2,4); Eigen::VectorXf v(2); m << 1, 23, 6, 9, 3, 11, 7, 2; v << 2, 3; MatrixXf::Index index; // find nearest neighbour (m.colwise() - v).colwise().squaredNorm().minCoeff(&index); cout << "Nearest neighbour is column " << index << ":" << endl; cout << m.col(index) << endl; } piqp-octave/src/eigen/doc/examples/make_circulant.cpp.entry0000644000175100001770000000022114107270226023735 0ustar runnerdockertemplate Circulant makeCirculant(const Eigen::MatrixBase& arg) { return Circulant(arg.derived()); } piqp-octave/src/eigen/doc/examples/Cwise_erf.cpp0000644000175100001770000000027514107270226021533 0ustar runnerdocker#include #include #include using namespace Eigen; int main() { Array4d v(-0.5,2,0,-7); std::cout << v.erf() << std::endl; } piqp-octave/src/eigen/doc/examples/class_FixedBlock.cpp0000644000175100001770000000127114107270226023021 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; template Eigen::Block topLeft2x2Corner(MatrixBase& m) { return Eigen::Block(m.derived(), 0, 0); } template const Eigen::Block topLeft2x2Corner(const MatrixBase& m) { return Eigen::Block(m.derived(), 0, 0); } int main(int, char**) { Matrix3d m = Matrix3d::Identity(); cout << topLeft2x2Corner(4*m) << endl; // calls the const version topLeft2x2Corner(m) *= 2; // calls the non-const version cout << "Now the matrix m is:" << endl << m << endl; return 0; } piqp-octave/src/eigen/doc/examples/TutorialLinAlgSetThreshold.cpp0000644000175100001770000000057114107270226025047 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { Matrix2d A; A << 2, 1, 2, 0.9999999999; FullPivLU lu(A); cout << "By default, the rank of A is found to be " << lu.rank() << endl; lu.setThreshold(1e-5); cout << "With threshold 1e-5, the rank of A is found to be " << lu.rank() << endl; } piqp-octave/src/eigen/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.cpp0000644000175100001770000000054414107270226033305 0ustar runnerdocker#include #include using namespace std; int main() { Eigen::MatrixXf mat(2,4); Eigen::VectorXf v(2); mat << 1, 2, 6, 9, 3, 1, 7, 2; v << 0, 1; //add v to each column of m mat.colwise() += v; std::cout << "Broadcasting result: " << std::endl; std::cout << mat << std::endl; } piqp-octave/src/eigen/doc/examples/class_CwiseUnaryOp_ptrfun.cpp0000644000175100001770000000056314107270226025000 
0ustar runnerdocker#include #include using namespace Eigen; using namespace std; // define function to be applied coefficient-wise double ramp(double x) { if (x > 0) return x; else return 0; } int main(int, char**) { Matrix4d m1 = Matrix4d::Random(); cout << m1 << endl << "becomes: " << endl << m1.unaryExpr(ptr_fun(ramp)) << endl; return 0; } piqp-octave/src/eigen/doc/examples/class_CwiseBinaryOp.cpp0000644000175100001770000000101614107270226023522 0ustar runnerdocker#include #include using namespace Eigen; using namespace std; // define a custom template binary functor template struct MakeComplexOp { EIGEN_EMPTY_STRUCT_CTOR(MakeComplexOp) typedef complex result_type; complex operator()(const Scalar& a, const Scalar& b) const { return complex(a,b); } }; int main(int, char**) { Matrix4d m1 = Matrix4d::Random(), m2 = Matrix4d::Random(); cout << m1.binaryExpr(m2, MakeComplexOp()) << endl; return 0; } piqp-octave/src/eigen/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.cpp0000644000175100001770000000100114107270226033151 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { ArrayXXf a(2,2); a << 1,2, 3,4; cout << "(a > 0).all() = " << (a > 0).all() << endl; cout << "(a > 0).any() = " << (a > 0).any() << endl; cout << "(a > 0).count() = " << (a > 0).count() << endl; cout << endl; cout << "(a > 2).all() = " << (a > 2).all() << endl; cout << "(a > 2).any() = " << (a > 2).any() << endl; cout << "(a > 2).count() = " << (a > 2).count() << endl; } piqp-octave/src/eigen/doc/examples/TutorialLinAlgRankRevealing.cpp0000644000175100001770000000113014107270226025157 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; int main() { Matrix3f A; A << 1, 2, 5, 2, 1, 4, 3, 0, 3; cout << "Here is the matrix A:\n" << A << endl; FullPivLU lu_decomp(A); cout << "The rank of A is " << lu_decomp.rank() << endl; cout << "Here is a matrix whose columns form a basis of the null-space of A:\n" << lu_decomp.kernel() << endl; cout << "Here is a matrix whose columns form a basis of the column-space of A:\n" << lu_decomp.image(A) << endl; // yes, have to pass the original A } piqp-octave/src/eigen/doc/examples/class_Reshaped.cpp0000644000175100001770000000104114107270226022535 0ustar runnerdocker#include #include using namespace std; using namespace Eigen; template const Reshaped reshape_helper(const MatrixBase& m, int rows, int cols) { return Reshaped(m.derived(), rows, cols); } int main(int, char**) { MatrixXd m(3, 4); m << 1, 4, 7, 10, 2, 5, 8, 11, 3, 6, 9, 12; cout << m << endl; Ref n = reshape_helper(m, 2, 6); cout << "Matrix m is:" << endl << m << endl; cout << "Matrix n is:" << endl << n << endl; } piqp-octave/src/eigen/doc/PreprocessorDirectives.dox0000644000175100001770000003370514107270226022533 0ustar runnerdockernamespace Eigen { /** \page TopicPreprocessorDirectives Preprocessor directives You can control some aspects of %Eigen by defining the preprocessor tokens using \c \#define. These macros should be defined before any %Eigen headers are included. Often they are best set in the project options. This page lists the preprocessor tokens recognized by %Eigen. \eigenAutoToc \section TopicPreprocessorDirectivesMajor Macros with major effects These macros have a major effect and typically break the API (Application Programming Interface) and/or the ABI (Application Binary Interface). 
This can be rather dangerous: if parts of your program are compiled with one option, and other parts (or libraries that you use) are compiled with another option, your program may fail to link or exhibit subtle bugs. Nevertheless, these options can be useful for people who know what they are doing.

 - \b EIGEN2_SUPPORT and \b EIGEN2_SUPPORT_STAGEnn_xxx are disabled starting from the 3.3 release. Defining one of these will raise a compile-error. If you need to compile Eigen2 code, check this site.
 - \b EIGEN_DEFAULT_DENSE_INDEX_TYPE - the type for column and row indices in matrices, vectors and arrays (DenseBase::Index). Set to \c std::ptrdiff_t by default.
 - \b EIGEN_DEFAULT_IO_FORMAT - the IOFormat to use when printing a matrix if no %IOFormat is specified. Defaults to the %IOFormat constructed by the default constructor IOFormat::IOFormat().
 - \b EIGEN_INITIALIZE_MATRICES_BY_ZERO - if defined, all entries of newly constructed matrices and arrays are initialized to zero, as are new entries in matrices and arrays after resizing. Not defined by default.
   \warning The unary (resp. binary) constructor of \c 1x1 (resp. \c 2x1 or \c 1x2) fixed size matrices is always interpreted as an initialization constructor where the argument(s) are the coefficient values and not the sizes. For instance, \code Vector2d v(2,1); \endcode will create a vector with coefficients [2,1], and \b not a \c 2x1 vector initialized with zeros (i.e., [0,0]). If such cases might occur, then it is recommended to use the default constructor with an explicit call to resize:
   \code
   Matrix<Scalar,Dynamic,1> v; v.resize(size);
   Matrix<Scalar,Dynamic,Dynamic> m; m.resize(rows,cols);
   \endcode
 - \b EIGEN_INITIALIZE_MATRICES_BY_NAN - if defined, all entries of newly constructed matrices and arrays are initialized to NaN, as are new entries in matrices and arrays after resizing. This option is especially useful for debugging purposes, though a memory tool like valgrind is preferable. Not defined by default.
   \warning See the documentation of \c EIGEN_INITIALIZE_MATRICES_BY_ZERO for a discussion of the limitations of these macros when applied to \c 1x1, \c 1x2, and \c 2x1 fixed-size matrices.
 - \b EIGEN_NO_AUTOMATIC_RESIZING - if defined, the matrices (or arrays) on both sides of an assignment a = b have to be of the same size; otherwise, %Eigen automatically resizes \c a so that it is of the correct size. Not defined by default.

\section TopicPreprocessorDirectivesCppVersion C++ standard features

By default, %Eigen strives to automatically detect and enable language features at compile-time based on the information provided by the compiler.

 - \b EIGEN_MAX_CPP_VER - disables usage of C++ features requiring a version greater than EIGEN_MAX_CPP_VER. Possible values are: 03, 11, 14, 17, etc. If not defined (the default), %Eigen enables all features supported by the compiler.

Individual features can be explicitly enabled or disabled by defining the following tokens to 0 or 1 respectively. For instance, one might limit the C++ version to C++03 by defining EIGEN_MAX_CPP_VER=03, but still enable C99 math functions by defining EIGEN_HAS_C99_MATH=1.

 - \b EIGEN_HAS_C99_MATH - controls the usage of C99 math functions such as erf, erfc, lgamma, etc. Automatic detection disabled if EIGEN_MAX_CPP_VER<11.
 - \b EIGEN_HAS_CXX11_MATH - controls the implementation of some functions such as round, log1p, isinf, isnan, etc. Automatic detection disabled if EIGEN_MAX_CPP_VER<11.
 - \b EIGEN_HAS_RVALUE_REFERENCES - defines whether rvalue references are supported. Automatic detection disabled if EIGEN_MAX_CPP_VER<11.
 - \b EIGEN_HAS_STD_RESULT_OF - defines whether std::result_of is supported. Automatic detection disabled if EIGEN_MAX_CPP_VER<11.
 - \b EIGEN_HAS_VARIADIC_TEMPLATES - defines whether variadic templates are supported. Automatic detection disabled if EIGEN_MAX_CPP_VER<11.
 - \b EIGEN_HAS_CONSTEXPR - defines whether relaxed const expressions are supported. Automatic detection disabled if EIGEN_MAX_CPP_VER<14.
 - \b EIGEN_HAS_CXX11_CONTAINERS - defines whether STL's containers follow the C++11 specifications. Automatic detection disabled if EIGEN_MAX_CPP_VER<11.
 - \b EIGEN_HAS_CXX11_NOEXCEPT - defines whether noexcept is supported. Automatic detection disabled if EIGEN_MAX_CPP_VER<11.
 - \b EIGEN_NO_IO - Disables any usage and support for `<iostream>`.

\section TopicPreprocessorDirectivesAssertions Assertions

The %Eigen library contains many assertions to guard against programming errors, both at compile time and at run time. However, these assertions do cost time and can thus be turned off.

 - \b EIGEN_NO_DEBUG - disables %Eigen's assertions if defined. Not defined by default, unless the \c NDEBUG macro is defined (this is a standard C++ macro which disables all asserts).
 - \b EIGEN_NO_STATIC_ASSERT - if defined, compile-time static assertions are replaced by runtime assertions; this saves compilation time. Not defined by default.
 - \b eigen_assert - macro with one argument that is used inside %Eigen for assertions. By default, it is basically defined to be \c assert, which aborts the program if the assertion is violated. Redefine this macro if you want to do something else, like throwing an exception.
 - \b EIGEN_MPL2_ONLY - disable non MPL2 compatible features, or in other words disable the features which are still under the LGPL.

\section TopicPreprocessorDirectivesPerformance Alignment, vectorization and performance tweaking

 - \b \c EIGEN_MALLOC_ALREADY_ALIGNED - Can be set to 0 or 1 to tell whether default system \c malloc already returns aligned buffers. If not defined, then this information is automatically deduced from the compiler and system preprocessor tokens.
 - \b \c EIGEN_MAX_ALIGN_BYTES - Must be a power of two, or 0. Defines an upper bound on the memory boundary in bytes on which dynamically and statically allocated data may be aligned by %Eigen. If not defined, a default value is automatically computed based on architecture, compiler, and OS. This option is typically used to enforce binary compatibility between code/libraries compiled with different SIMD options. For instance, one may compile AVX code and enforce ABI compatibility with existing SSE code by defining \c EIGEN_MAX_ALIGN_BYTES=16. Conversely, since by default AVX implies 32-byte alignment for best performance, one can compile SSE code to be ABI compatible with AVX code by defining \c EIGEN_MAX_ALIGN_BYTES=32.
 - \b \c EIGEN_MAX_STATIC_ALIGN_BYTES - Same as \c EIGEN_MAX_ALIGN_BYTES but for statically allocated data only. By default, if only \c EIGEN_MAX_ALIGN_BYTES is defined, then \c EIGEN_MAX_STATIC_ALIGN_BYTES == \c EIGEN_MAX_ALIGN_BYTES, otherwise a default value is automatically computed based on architecture, compiler, and OS (can be smaller than the default value of EIGEN_MAX_ALIGN_BYTES on architectures that do not support stack alignment). Let us emphasize that \c EIGEN_MAX_*_ALIGN_BYTES define only a desirable upper bound.
In practice data is aligned to the largest power-of-two common divisor of \c EIGEN_MAX_STATIC_ALIGN_BYTES and the size of the data, such that memory is not wasted.

 - \b \c EIGEN_DONT_PARALLELIZE - if defined, this disables multi-threading. This is only relevant if you enabled OpenMP. See \ref TopicMultiThreading for details.
 - \b \c EIGEN_DONT_VECTORIZE - disables explicit vectorization when defined. Not defined by default, unless alignment is disabled by %Eigen's platform test or the user defining \c EIGEN_DONT_ALIGN.
 - \b \c EIGEN_UNALIGNED_VECTORIZE - disables/enables vectorization with unaligned stores. Default is 1 (enabled). If set to 0 (disabled), then expressions for which the destination cannot be aligned are not vectorized (e.g., unaligned small fixed-size vectors or matrices).
 - \b \c EIGEN_FAST_MATH - enables some optimizations which might affect the accuracy of the result. This currently enables the SSE vectorization of sin() and cos(), and speeds up sqrt() for single precision. Defined to 1 by default. Define it to 0 to disable.
 - \b \c EIGEN_UNROLLING_LIMIT - defines the size of a loop to enable meta unrolling. Set it to zero to disable unrolling. The size of a loop here is expressed in %Eigen's own notion of "number of FLOPS"; it does not correspond to the number of iterations or the number of instructions. The default value is 110.
 - \b \c EIGEN_STACK_ALLOCATION_LIMIT - defines the maximum bytes for a buffer to be allocated on the stack. For internal temporary buffers, dynamic memory allocation is employed as a fallback. For fixed-size matrices or arrays, exceeding this threshold raises a compile time assertion. Use 0 to set no limit. Default is 128 KB.
 - \b \c EIGEN_NO_CUDA - disables CUDA support when defined. Might be useful in .cu files for which Eigen is used on the host only, and never called from device code.
 - \b \c EIGEN_STRONG_INLINE - This macro is used to qualify critical functions and methods that we expect the compiler to inline. By default it is defined to \c __forceinline for MSVC and ICC, and to \c inline for other compilers. A typical usage is to define it to \c inline for MSVC users wanting faster compilation times, at the risk of performance degradations in some rare cases for which the MSVC inliner fails to do a good job.
 - \b \c EIGEN_DEFAULT_L1_CACHE_SIZE - Sets the default L1 cache size that is used in Eigen's GEBP kernel when the correct cache size cannot be determined at runtime.
 - \b \c EIGEN_DEFAULT_L2_CACHE_SIZE - Sets the default L2 cache size that is used in Eigen's GEBP kernel when the correct cache size cannot be determined at runtime.
 - \b \c EIGEN_DEFAULT_L3_CACHE_SIZE - Sets the default L3 cache size that is used in Eigen's GEBP kernel when the correct cache size cannot be determined at runtime.
 - \c EIGEN_DONT_ALIGN - Deprecated, it is a synonym for \c EIGEN_MAX_ALIGN_BYTES=0. It disables alignment completely. %Eigen will not try to align its objects and does not expect that any objects passed to it are aligned. This will turn off vectorization if \b \c EIGEN_UNALIGNED_VECTORIZE=1. Not defined by default.
 - \c EIGEN_DONT_ALIGN_STATICALLY - Deprecated, it is a synonym for \c EIGEN_MAX_STATIC_ALIGN_BYTES=0. It disables alignment of arrays on the stack. Not defined by default, unless \c EIGEN_DONT_ALIGN is defined.

\section TopicPreprocessorDirectivesPlugins Plugins

It is possible to add new methods to many fundamental classes in %Eigen by writing a plugin.
As explained in the section \ref TopicCustomizing_Plugins, the plugin is specified by defining a \c EIGEN_xxx_PLUGIN macro. The following macros are supported; none of them are defined by default.

 - \b EIGEN_ARRAY_PLUGIN - filename of plugin for extending the Array class.
 - \b EIGEN_ARRAYBASE_PLUGIN - filename of plugin for extending the ArrayBase class.
 - \b EIGEN_CWISE_PLUGIN - filename of plugin for extending the Cwise class.
 - \b EIGEN_DENSEBASE_PLUGIN - filename of plugin for extending the DenseBase class.
 - \b EIGEN_DYNAMICSPARSEMATRIX_PLUGIN - filename of plugin for extending the DynamicSparseMatrix class.
 - \b EIGEN_FUNCTORS_PLUGIN - filename of plugin for adding new functors and specializations of functor_traits.
 - \b EIGEN_MAPBASE_PLUGIN - filename of plugin for extending the MapBase class.
 - \b EIGEN_MATRIX_PLUGIN - filename of plugin for extending the Matrix class.
 - \b EIGEN_MATRIXBASE_PLUGIN - filename of plugin for extending the MatrixBase class.
 - \b EIGEN_PLAINOBJECTBASE_PLUGIN - filename of plugin for extending the PlainObjectBase class.
 - \b EIGEN_QUATERNION_PLUGIN - filename of plugin for extending the Quaternion class.
 - \b EIGEN_QUATERNIONBASE_PLUGIN - filename of plugin for extending the QuaternionBase class.
 - \b EIGEN_SPARSEMATRIX_PLUGIN - filename of plugin for extending the SparseMatrix class.
 - \b EIGEN_SPARSEMATRIXBASE_PLUGIN - filename of plugin for extending the SparseMatrixBase class.
 - \b EIGEN_SPARSEVECTOR_PLUGIN - filename of plugin for extending the SparseVector class.
 - \b EIGEN_TRANSFORM_PLUGIN - filename of plugin for extending the Transform class.
 - \b EIGEN_VECTORWISEOP_PLUGIN - filename of plugin for extending the VectorwiseOp class.

\section TopicPreprocessorDirectivesDevelopers Macros for Eigen developers

These macros are mainly meant for people developing %Eigen and for testing purposes. Even though they might be useful for power users and the curious for debugging and testing purposes, they \b should \b not \b be \b used by real-world code.

 - \b EIGEN_DEFAULT_TO_ROW_MAJOR - when defined, the default storage order for matrices becomes row-major instead of column-major. Not defined by default.
 - \b EIGEN_INTERNAL_DEBUGGING - if defined, enables assertions in %Eigen's internal routines. This is useful for debugging %Eigen itself. Not defined by default.
 - \b EIGEN_NO_MALLOC - if defined, any request from inside %Eigen to allocate memory from the heap results in an assertion failure. This is useful to check that some routine does not allocate memory dynamically. Not defined by default.
 - \b EIGEN_RUNTIME_NO_MALLOC - if defined, a new switch is introduced which can be turned on and off by calling set_is_malloc_allowed(bool). If malloc is not allowed and %Eigen tries to allocate memory dynamically anyway, an assertion failure results. Not defined by default.

*/ }
piqp-octave/src/eigen/doc/ftv2node.png0000644000175100001770000000012614107270226017533 0ustar runnerdocker
piqp-octave/src/eigen/doc/MatrixfreeSolverExample.dox0000644000175100001770000000136614107270226022636 0ustar runnerdocker
namespace Eigen { /** \eigenManualPage MatrixfreeSolverExample Matrix-free solvers Iterative solvers such as ConjugateGradient and BiCGSTAB can be used in a matrix-free context.
To this end, user must provide a wrapper class inheriting EigenBase<> and implementing the following methods: - \c Index \c rows() and \c Index \c cols(): returns number of rows and columns respectively - \c operator* with your type and an %Eigen dense column vector (its actual implementation goes in a specialization of the internal::generic_product_impl class) \c Eigen::internal::traits<> must also be specialized for the wrapper type. Here is a complete example wrapping an Eigen::SparseMatrix: \include matrixfree_cg.cpp Output: \verbinclude matrixfree_cg.out */ }piqp-octave/src/eigen/doc/TopicResizing.dox0000644000175100001770000000021114107270226020576 0ustar runnerdockernamespace Eigen { /** \page TopicResizing Resizing TODO: write this dox page! Is linked from the tutorial on the Matrix class. */ } piqp-octave/src/eigen/doc/QuickStartGuide.dox0000644000175100001770000001466214107270226021074 0ustar runnerdockernamespace Eigen { /** \page GettingStarted Getting started \eigenAutoToc This is a very short guide on how to get started with Eigen. It has a dual purpose. It serves as a minimal introduction to the Eigen library for people who want to start coding as soon as possible. You can also read this page as the first part of the Tutorial, which explains the library in more detail; in this case you will continue with \ref TutorialMatrixClass. \section GettingStartedInstallation How to "install" Eigen? In order to use Eigen, you just need to download and extract Eigen's source code (see the wiki for download instructions). In fact, the header files in the \c Eigen subdirectory are the only files required to compile programs using Eigen. The header files are the same for all platforms. It is not necessary to use CMake or install anything. \section GettingStartedFirstProgram A simple first program Here is a rather simple program to get you started. \include QuickStart_example.cpp We will explain the program after telling you how to compile it. \section GettingStartedCompiling Compiling and running your first program There is no library to link to. The only thing that you need to keep in mind when compiling the above program is that the compiler must be able to find the Eigen header files. The directory in which you placed Eigen's source code must be in the include path. With GCC you use the -I option to achieve this, so you can compile the program with a command like this: \code g++ -I /path/to/eigen/ my_program.cpp -o my_program \endcode On Linux or Mac OS X, another option is to symlink or copy the Eigen folder into /usr/local/include/. This way, you can compile the program with: \code g++ my_program.cpp -o my_program \endcode When you run the program, it produces the following output: \include QuickStart_example.out \section GettingStartedExplanation Explanation of the first program The Eigen header files define many types, but for simple applications it may be enough to use only the \c MatrixXd type. This represents a matrix of arbitrary size (hence the \c X in \c MatrixXd), in which every entry is a \c double (hence the \c d in \c MatrixXd). See the \ref QuickRef_Types "quick reference guide" for an overview of the different types you can use to represent a matrix. The \c Eigen/Dense header file defines all member functions for the MatrixXd type and related types (see also the \ref QuickRef_Headers "table of header files"). All classes and functions defined in this header file (and other Eigen header files) are in the \c Eigen namespace. 
The first line of the \c main function declares a variable of type \c MatrixXd and specifies that it is a matrix with 2 rows and 2 columns (the entries are not initialized). The statement m(0,0) = 3 sets the entry in the top-left corner to 3. You need to use round parentheses to refer to entries in the matrix. As usual in computer science, the index of the first index is 0, as opposed to the convention in mathematics that the first index is 1. The following three statements sets the other three entries. The final line outputs the matrix \c m to the standard output stream. \section GettingStartedExample2 Example 2: Matrices and vectors Here is another example, which combines matrices with vectors. Concentrate on the left-hand program for now; we will talk about the right-hand program later.
Size set at run time:Size set at compile time:
\include QuickStart_example2_dynamic.cpp \include QuickStart_example2_fixed.cpp
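Since the two included programs are not reproduced in this page, here is a rough sketch of the dynamic-size (left-hand) version; the actual QuickStart_example2_dynamic.cpp may differ in details:
\code
#include <iostream>
#include <Eigen/Dense>

using Eigen::MatrixXd;
using Eigen::VectorXd;

int main()
{
  MatrixXd m = MatrixXd::Random(3,3);         // coefficients in [-1,1]
  m = (m + MatrixXd::Constant(3,3,1.2)) * 50; // map them to roughly [10,110]
  std::cout << "m =" << std::endl << m << std::endl;
  VectorXd v(3);
  v << 1, 2, 3;                               // comma-initializer
  std::cout << "m * v =" << std::endl << m * v << std::endl;
}
\endcode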
The output is as follows: \include QuickStart_example2_dynamic.out \section GettingStartedExplanation2 Explanation of the second example The second example starts by declaring a 3-by-3 matrix \c m which is initialized using the \link DenseBase::Random(Index,Index) Random() \endlink method with random values between -1 and 1. The next line applies a linear mapping such that the values are between 10 and 110. The function call \link DenseBase::Constant(Index,Index,const Scalar&) MatrixXd::Constant\endlink(3,3,1.2) returns a 3-by-3 matrix expression having all coefficients equal to 1.2. The rest is standard arithmetic. The next line of the \c main function introduces a new type: \c VectorXd. This represents a (column) vector of arbitrary size. Here, the vector \c v is created to contain \c 3 coefficients which are left uninitialized. The one but last line uses the so-called comma-initializer, explained in \ref TutorialAdvancedInitialization, to set all coefficients of the vector \c v to be as follows: \f[ v = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}. \f] The final line of the program multiplies the matrix \c m with the vector \c v and outputs the result. Now look back at the second example program. We presented two versions of it. In the version in the left column, the matrix is of type \c MatrixXd which represents matrices of arbitrary size. The version in the right column is similar, except that the matrix is of type \c Matrix3d, which represents matrices of a fixed size (here 3-by-3). Because the type already encodes the size of the matrix, it is not necessary to specify the size in the constructor; compare MatrixXd m(3,3) with Matrix3d m. Similarly, we have \c VectorXd on the left (arbitrary size) versus \c Vector3d on the right (fixed size). Note that here the coefficients of vector \c v are directly set in the constructor, though the same syntax of the left example could be used too. The use of fixed-size matrices and vectors has two advantages. The compiler emits better (faster) code because it knows the size of the matrices and vectors. Specifying the size in the type also allows for more rigorous checking at compile-time. For instance, the compiler will complain if you try to multiply a \c Matrix4d (a 4-by-4 matrix) with a \c Vector3d (a vector of size 3). However, the use of many types increases compilation time and the size of the executable. The size of the matrix may also not be known at compile-time. A rule of thumb is to use fixed-size matrices for size 4-by-4 and smaller. \section GettingStartedConclusion Where to go from here? It's worth taking the time to read the \ref TutorialMatrixClass "long tutorial". However if you think you don't need it, you can directly use the classes documentation and our \ref QuickRefPage. \li \b Next: \ref TutorialMatrixClass */ } piqp-octave/src/eigen/doc/TopicCMakeGuide.dox0000644000175100001770000000357214107270226020757 0ustar runnerdockernamespace Eigen { /** \page TopicCMakeGuide Using %Eigen in CMake Projects %Eigen provides native CMake support which allows the library to be easily used in CMake projects. \note %CMake 3.0 (or later) is required to enable this functionality. 
%Eigen exports a CMake target called `Eigen3::Eigen` which can be imported using the `find_package` CMake command and used by calling `target_link_libraries` as in the following example: \code{.cmake} cmake_minimum_required (VERSION 3.0) project (myproject) find_package (Eigen3 3.3 REQUIRED NO_MODULE) add_executable (example example.cpp) target_link_libraries (example Eigen3::Eigen) \endcode The above code snippet must be placed in a file called `CMakeLists.txt` alongside `example.cpp`. After running \code{.sh} $ cmake path-to-example-directory \endcode CMake will produce project files that generate an executable called `example` which requires at least version 3.3 of %Eigen. Here, `path-to-example-directory` is the path to the directory that contains both `CMakeLists.txt` and `example.cpp`. Do not forget to set the \c CMAKE_PREFIX_PATH variable if Eigen is not installed in a default location or if you want to pick a specific version. For instance: \code{.sh} $ cmake path-to-example-directory -DCMAKE_PREFIX_PATH=$HOME/mypackages \endcode An alternative is to set the \c Eigen3_DIR cmake's variable to the respective path containing the \c Eigen3*.cmake files. For instance: \code{.sh} $ cmake path-to-example-directory -DEigen3_DIR=$HOME/mypackages/share/eigen3/cmake/ \endcode If the `REQUIRED` option is omitted when locating %Eigen using `find_package`, one can check whether the package was found as follows: \code{.cmake} find_package (Eigen3 3.3 NO_MODULE) if (TARGET Eigen3::Eigen) # Use the imported target endif (TARGET Eigen3::Eigen) \endcode */ } piqp-octave/src/eigen/doc/SparseQuickReference.dox0000644000175100001770000002017014107270226022064 0ustar runnerdockernamespace Eigen { /** \eigenManualPage SparseQuickRefPage Quick reference guide for sparse matrices \eigenAutoToc


In this page, we give a quick summary of the main operations available for sparse matrices in the class SparseMatrix. First, it is recommended to read the introductory tutorial at \ref TutorialSparse. The important point to keep in mind when working with sparse matrices is how they are stored: either row-major or column-major. The default is column-major. Most arithmetic operations on sparse matrices will assert that they have the same storage order. \section SparseMatrixInit Sparse Matrix Initialization
Category Operations Notes
Constructor \code SparseMatrix<double> sm1(1000,1000); SparseMatrix<std::complex<double>,RowMajor> sm2; \endcode Default is ColMajor
Resize/Reserve \code sm1.resize(m,n); // Change sm1 to a m x n matrix. sm1.reserve(nnz); // Allocate room for nnz nonzeros elements. \endcode Note that when calling reserve(), it is not required that nnz is the exact number of nonzero elements in the final matrix. However, an exact estimation will avoid multiple reallocations during the insertion phase.
Assignment \code SparseMatrix<double,ColMajor> sm1; // Initialize sm2 with sm1. SparseMatrix<double,RowMajor> sm2(sm1), sm3; // Assignment and evaluations modify the storage order. sm3 = sm1; \endcode The copy constructor can be used to convert from one storage order to another
Element-wise Insertion \code // Insert a new element; sm1.insert(i, j) = v_ij; // Update the value v_ij sm1.coeffRef(i,j) = v_ij; sm1.coeffRef(i,j) += v_ij; sm1.coeffRef(i,j) -= v_ij; \endcode insert() assumes that the element does not already exist; otherwise, use coeffRef()
Batch insertion \code std::vector< Eigen::Triplet<double> > tripletList; tripletList.reserve(estimation_of_entries); // -- Fill tripletList with nonzero elements... sm1.setFromTriplets(tripletList.begin(), tripletList.end()); \endcode A complete example is available at \link TutorialSparseFilling Triplet Insertion \endlink; a short sketch also follows this table.
Constant or Random Insertion \code sm1.setZero(); \endcode Remove all non-zero coefficients
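To make the constructor, reserve/insert, and triplet-based filling entries above concrete, here is a small self-contained sketch; the matrix sizes and values are made up for the example:
\code
#include <iostream>
#include <vector>
#include <Eigen/Sparse>

int main()
{
  // Build a 4x4 sparse matrix from a list of (row, col, value) triplets.
  std::vector< Eigen::Triplet<double> > triplets;
  triplets.reserve(5);
  triplets.push_back(Eigen::Triplet<double>(0, 0, 4.0));
  triplets.push_back(Eigen::Triplet<double>(1, 1, 3.0));
  triplets.push_back(Eigen::Triplet<double>(2, 2, 2.0));
  triplets.push_back(Eigen::Triplet<double>(3, 3, 1.0));
  triplets.push_back(Eigen::Triplet<double>(0, 3, 0.5));

  Eigen::SparseMatrix<double> sm(4, 4);
  sm.setFromTriplets(triplets.begin(), triplets.end());

  // Individual entries can still be updated afterwards.
  sm.coeffRef(0, 0) += 1.0;

  std::cout << sm << std::endl;
}
\endcode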
\section SparseBasicInfos Matrix properties Beyond the basic functions rows() and cols(), there are some useful functions that are available to easily get some information from the matrix.
\code sm1.rows(); // Number of rows sm1.cols(); // Number of columns sm1.nonZeros(); // Number of non zero values sm1.outerSize(); // Number of columns (resp. rows) for a column major (resp. row major) sm1.innerSize(); // Number of rows (resp. columns) for a column major (resp. row major) sm1.norm(); // Euclidean norm of the matrix sm1.squaredNorm(); // Squared norm of the matrix sm1.blueNorm(); sm1.isVector(); // Check if sm1 is a sparse vector or a sparse matrix sm1.isCompressed(); // Check if sm1 is in compressed form ... \endcode
\section SparseBasicOps Arithmetic operations It is easy to perform arithmetic operations on sparse matrices provided that the dimensions are adequate and that the matrices have the same storage order. Note that the evaluation can always be done in a matrix with a different storage order. In the following, \b sm denotes a sparse matrix, \b dm a dense matrix and \b dv a dense vector.
Operations Code Notes
add subtract \code sm3 = sm1 + sm2; sm3 = sm1 - sm2; sm2 += sm1; sm2 -= sm1; \endcode sm1 and sm2 should have the same storage order
scalar product\code sm3 = sm1 * s1; sm3 *= s1; sm3 = s1 * sm1 + s2 * sm2; sm3 /= s1;\endcode Many combinations are possible if the dimensions and the storage order agree.
%Sparse %Product \code sm3 = sm1 * sm2; dm2 = sm1 * dm1; dv2 = sm1 * dv1; \endcode
transposition, adjoint \code sm2 = sm1.transpose(); sm2 = sm1.adjoint(); \endcode Note that the transposition changes the storage order. There is no support for transposeInPlace().
Permutation \code perm.indices(); // Reference to the vector of indices sm1.twistedBy(perm); // Permute rows and columns sm2 = sm1 * perm; // Permute the columns sm2 = perm * sm1; // Permute the rows \endcode
Component-wise ops \code sm1.cwiseProduct(sm2); sm1.cwiseQuotient(sm2); sm1.cwiseMin(sm2); sm1.cwiseMax(sm2); sm1.cwiseAbs(); sm1.cwiseSqrt(); \endcode sm1 and sm2 should have the same storage order
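As a rough illustration of the table above, here is a short hedged sketch combining a few of these operations; the values are arbitrary:
\code
#include <iostream>
#include <Eigen/Dense>
#include <Eigen/Sparse>

int main()
{
  Eigen::SparseMatrix<double> sm1(3, 3), sm2(3, 3);
  sm1.insert(0, 0) = 1.0;  sm1.insert(1, 2) = 2.0;
  sm2.insert(0, 0) = 3.0;  sm2.insert(2, 1) = 4.0;

  Eigen::SparseMatrix<double> sum  = sm1 + sm2;   // same storage order required
  Eigen::SparseMatrix<double> prod = sm1 * sm2;   // sparse * sparse product

  Eigen::VectorXd dv  = Eigen::VectorXd::Ones(3);
  Eigen::VectorXd dv2 = sm1 * dv;                 // sparse * dense vector -> dense vector

  // The transpose of a column-major matrix is row-major.
  Eigen::SparseMatrix<double, Eigen::RowMajor> tr = sm1.transpose();

  std::cout << Eigen::MatrixXd(sum) << "\n\n" << dv2.transpose() << std::endl;
}
\endcode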
\section sparseotherops Other supported operations
Code Notes
Sub-matrices
\code sm1.block(startRow, startCol, rows, cols); sm1.block(startRow, startCol); sm1.topLeftCorner(rows, cols); sm1.topRightCorner(rows, cols); sm1.bottomLeftCorner( rows, cols); sm1.bottomRightCorner( rows, cols); \endcode Contrary to dense matrices, here all these methods are read-only.\n See \ref TutorialSparse_SubMatrices and below for read-write sub-matrices.
Range
\code sm1.innerVector(outer); // RW sm1.innerVectors(start, size); // RW sm1.leftCols(size); // RW sm2.rightCols(size); // RO because sm2 is row-major sm1.middleRows(start, numRows); // RO because sm1 is column-major sm1.middleCols(start, numCols); // RW sm1.col(j); // RW \endcode An inner vector is either a row (for row-major) or a column (for column-major).\n As stated earlier, for a read-write sub-matrix (RW), the evaluation can be done in a matrix with a different storage order.
Triangular and selfadjoint views
\code sm2 = sm1.triangularView<Lower>(); sm2 = sm1.selfadjointView<Lower>(); \endcode Several combinations of triangular views and block views are possible \code \endcode
Triangular solve
\code dv2 = sm1.triangularView<Upper>().solve(dv1); dv2 = sm1.topLeftCorner(size, size).triangularView<Upper>().solve(dv1); \endcode For a general sparse solve, use any suitable module described at \ref TopicSparseSystems
Low-level API
\code sm1.valuePtr(); // Pointer to the values sm1.innerIndexPtr(); // Pointer to the indices. sm1.outerIndexPtr(); // Pointer to the beginning of each inner vector \endcode If the matrix is not in compressed form, makeCompressed() should be called before.\n Note that these functions are mostly provided for interoperability purposes with external libraries.\n A better way to access the values of the matrix is to use the InnerIterator class as described in \link TutorialSparse the Tutorial Sparse \endlink section (a short iteration sketch follows this table).
Mapping external buffers
\code int outerIndexPtr[cols+1]; int innerIndices[nnz]; double values[nnz]; Map<SparseMatrix<double> > sm1(rows,cols,nnz,outerIndexPtr, // read-write innerIndices,values); Map<const SparseMatrix<double> > sm2(...); // read-only \endcode As for dense matrices, class Map<SparseMatrix<double> > can be used to see external buffers as an %Eigen SparseMatrix object.
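As a complement to the low-level API and the InnerIterator class mentioned above, here is a minimal sketch of iterating over the non-zeros of a column-major sparse matrix, assuming it has already been filled:
\code
#include <iostream>
#include <Eigen/Sparse>

void printNonZeros(const Eigen::SparseMatrix<double>& sm)
{
  // For a column-major matrix, the outer dimension is the columns and
  // each inner vector is a column.
  for (int k = 0; k < sm.outerSize(); ++k)
    for (Eigen::SparseMatrix<double>::InnerIterator it(sm, k); it; ++it)
      std::cout << "(" << it.row() << "," << it.col() << ") = " << it.value() << "\n";
}

int main()
{
  Eigen::SparseMatrix<double> sm(3, 3);
  sm.insert(0, 0) = 1.0;
  sm.insert(2, 1) = 2.0;
  sm.makeCompressed();
  printNonZeros(sm);
}
\endcode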
*/ } piqp-octave/src/eigen/doc/eigendoxy_header.html.in0000644000175100001770000000473114107270226022102 0ustar runnerdocker $projectname: $title $title $treeview $search $mathjax
Please, help us to better know about our user community by answering the following short survey: https://forms.gle/wpyrxWi18ox9Z5ae9
$projectname  $projectnumber
$projectbrief
$projectbrief
$searchbox
piqp-octave/src/eigen/doc/PassingByValue.dox0000644000175100001770000000221614107270226020710 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TopicPassingByValue Passing Eigen objects by value to functions Passing objects by value is almost always a very bad idea in C++, as this means useless copies, and one should pass them by reference instead. With %Eigen, this is even more important: passing \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen objects" by value is not only inefficient, it can be illegal or make your program crash! And the reason is that these %Eigen objects have alignment modifiers that aren't respected when they are passed by value. For example, a function like this, where \c v is passed by value: \code void my_function(Eigen::Vector2d v); \endcode needs to be rewritten as follows, passing \c v by const reference: \code void my_function(const Eigen::Vector2d& v); \endcode Likewise if you have a class having an %Eigen object as member: \code struct Foo { Eigen::Vector2d v; }; void my_function(Foo v); \endcode This function also needs to be rewritten like this: \code void my_function(const Foo& v); \endcode Note that on the other hand, there is no problem with functions that return objects by value. */ } piqp-octave/src/eigen/doc/eigendoxy.css0000644000175100001770000001075014107270226020007 0ustar runnerdocker /******** Eigen specific CSS code ************/ /**** Styles removing elements ****/ /* remove the "modules|classes" link for module pages (they are already in the TOC) */ div.summary { display:none; } /* remove */ div.contents hr { display:none; } /**** ****/ p, dl.warning, dl.attention, dl.note { max-width:60em; text-align:justify; } li { max-width:55em; text-align:justify; } img { border: 0; } div.fragment { display:table; /* this allows the element to be larger than its parent */ padding: 0pt; } pre.fragment { border: 1px solid #cccccc; margin: 2px 0px 2px 0px; padding: 3px 5px 3px 5px; } /* Common style for all Eigen's tables */ table.example, table.manual, table.manual-vl, table.manual-hl { max-width:100%; border-collapse: collapse; border-style: solid; border-width: 1px; border-color: #cccccc; font-size: 1em; box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15); -moz-box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15); -webkit-box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15); } table.example th, table.manual th, table.manual-vl th, table.manual-hl th { padding: 0.5em 0.5em 0.5em 0.5em; text-align: left; padding-right: 1em; color: #555555; background-color: #F4F4E5; background-image: -webkit-gradient(linear,center top,center bottom,from(#FFFFFF), color-stop(0.3,#FFFFFF), color-stop(0.30,#FFFFFF), color-stop(0.98,#F4F4E5), to(#ECECDE)); background-image: -moz-linear-gradient(center top, #FFFFFF 0%, #FFFFFF 30%, #F4F4E5 98%, #ECECDE); filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#FFFFFF', endColorstr='#F4F4E5'); } table.example td, table.manual td, table.manual-vl td, table.manual-hl td { vertical-align:top; border-width: 1px; border-color: #cccccc; } /* header of headers */ table th.meta { text-align:center; font-size: 1.2em; background-color:#FFFFFF; } /* intermediate header */ table th.inter { text-align:left; background-color:#FFFFFF; background-image:none; border-style:solid solid solid solid; border-width: 1px; border-color: #cccccc; } /** class for example / output tables **/ table.example { } table.example th { } table.example td { padding: 0.5em 0.5em 0.5em 0.5em; vertical-align:top; } /* standard class for the manual */ 
table.manual, table.manual-vl, table.manual-hl { padding: 0.2em 0em 0.5em 0em; } table.manual th, table.manual-vl th, table.manual-hl th { margin: 0em 0em 0.3em 0em; } table.manual td, table.manual-vl td, table.manual-hl td { padding: 0.3em 0.5em 0.3em 0.5em; vertical-align:top; border-width: 1px; } table.manual td.alt, table.manual tr.alt, table.manual-vl td.alt, table.manual-vl tr.alt { background-color: #F4F4E5; } table.manual-vl th, table.manual-vl td, table.manual-vl td.alt { border-color: #cccccc; border-width: 1px; border-style: none solid none solid; } table.manual-vl th.inter { border-style: solid solid solid solid; } table.manual-hl td { border-color: #cccccc; border-width: 1px; border-style: solid none solid none; } table td.code { font-family: monospace; } h2 { margin-top:2em; border-style: none none solid none; border-width: 1px; border-color: #cccccc; } /**** Table of content in the side-nav ****/ div.toc { margin:0; padding: 0.3em 0 0 0; width:100%; float:none; position:absolute; bottom:0; border-radius:0px; border-style: solid none none none; max-height:50%; overflow-y: scroll; } div.toc h3 { margin-left: 0.5em; margin-bottom: 0.2em; } div.toc ul { margin: 0.2em 0 0.4em 0.5em; } span.cpp11,span.cpp14,span.cpp17 { color: #119911; font-weight: bold; } .newin3x { color: #a37c1a; font-weight: bold; } div.warningbox { max-width:60em; border-style: solid solid solid solid; border-color: red; border-width: 3px; } /**** old Eigen's styles ****/ table.tutorial_code td { border-color: transparent; /* required for Firefox */ padding: 3pt 5pt 3pt 5pt; vertical-align: top; } /* Whenever doxygen meets a '\n' or a '
', it will put * the text containing the character into a

. * This little hack together with table.tutorial_code td.note * aims at fixing this issue. */ table.tutorial_code td.note p.starttd { margin: 0px; border: none; padding: 0px; } div.eimainmenu { text-align: center; } /* center version number on main page */ h3.version { text-align: center; } td.width20em p.endtd { width: 20em; } /* needed for huge screens */ .ui-resizable-e { background-repeat: repeat-y; } piqp-octave/src/eigen/doc/CMakeLists.txt0000644000175100001770000001150714107270226020043 0ustar runnerdockerproject(EigenDoc) set_directory_properties(PROPERTIES EXCLUDE_FROM_ALL TRUE) project(EigenDoc) if(CMAKE_COMPILER_IS_GNUCXX) if(CMAKE_SYSTEM_NAME MATCHES Linux) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O1 -g1") endif() endif() # some examples and snippets needs c++11, so let's check it once check_cxx_compiler_flag("-std=c++11" EIGEN_COMPILER_SUPPORT_CPP11) option(EIGEN_INTERNAL_DOCUMENTATION "Build internal documentation" OFF) option(EIGEN_DOC_USE_MATHJAX "Use MathJax for rendering math in HTML docs" ON) # Set some Doxygen flags set(EIGEN_DOXY_PROJECT_NAME "Eigen") set(EIGEN_DOXY_OUTPUT_DIRECTORY_SUFFIX "") set(EIGEN_DOXY_INPUT "\"${Eigen_SOURCE_DIR}/Eigen\" \"${Eigen_SOURCE_DIR}/doc\"") set(EIGEN_DOXY_HTML_COLORSTYLE_HUE "220") set(EIGEN_DOXY_TAGFILES "") if(EIGEN_INTERNAL_DOCUMENTATION) set(EIGEN_DOXY_INTERNAL "YES") else() set(EIGEN_DOXY_INTERNAL "NO") endif() if (EIGEN_DOC_USE_MATHJAX) set(EIGEN_DOXY_USE_MATHJAX "YES") else () set(EIGEN_DOXY_USE_MATHJAX "NO") endif() configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/Doxyfile.in ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile ) set(EIGEN_DOXY_PROJECT_NAME "Eigen-unsupported") set(EIGEN_DOXY_OUTPUT_DIRECTORY_SUFFIX "/unsupported") set(EIGEN_DOXY_INPUT "\"${Eigen_SOURCE_DIR}/unsupported/Eigen\" \"${Eigen_SOURCE_DIR}/unsupported/doc\"") set(EIGEN_DOXY_HTML_COLORSTYLE_HUE "0") set(EIGEN_DOXY_TAGFILES "\"${Eigen_BINARY_DIR}/doc/Eigen.doxytags=..\"") #set(EIGEN_DOXY_TAGFILES "") configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/Doxyfile.in ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile-unsupported ) configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/eigendoxy_header.html.in ${CMAKE_CURRENT_BINARY_DIR}/eigendoxy_header.html ) configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/eigendoxy_footer.html.in ${CMAKE_CURRENT_BINARY_DIR}/eigendoxy_footer.html ) configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/eigendoxy_layout.xml.in ${CMAKE_CURRENT_BINARY_DIR}/eigendoxy_layout.xml ) configure_file( ${Eigen_SOURCE_DIR}/unsupported/doc/eigendoxy_layout.xml.in ${Eigen_BINARY_DIR}/doc/unsupported/eigendoxy_layout.xml ) set(examples_targets "") set(snippets_targets "") add_definitions("-DEIGEN_MAKING_DOCS") add_custom_target(all_examples) add_subdirectory(examples) add_subdirectory(special_examples) add_subdirectory(snippets) add_custom_target( doc-eigen-prerequisites ALL COMMAND ${CMAKE_COMMAND} -E make_directory ${CMAKE_CURRENT_BINARY_DIR}/html/ COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/eigen_navtree_hacks.js ${CMAKE_CURRENT_BINARY_DIR}/html/ COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/Eigen_Silly_Professor_64x64.png ${CMAKE_CURRENT_BINARY_DIR}/html/ COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/ftv2pnode.png ${CMAKE_CURRENT_BINARY_DIR}/html/ COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/ftv2node.png ${CMAKE_CURRENT_BINARY_DIR}/html/ COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/AsciiQuickReference.txt ${CMAKE_CURRENT_BINARY_DIR}/html/ WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR} ) add_custom_target( 
doc-unsupported-prerequisites ALL COMMAND ${CMAKE_COMMAND} -E make_directory ${Eigen_BINARY_DIR}/doc/html/unsupported COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/eigen_navtree_hacks.js ${CMAKE_CURRENT_BINARY_DIR}/html/unsupported/ COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/Eigen_Silly_Professor_64x64.png ${CMAKE_CURRENT_BINARY_DIR}/html/unsupported/ COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/ftv2pnode.png ${CMAKE_CURRENT_BINARY_DIR}/html/unsupported/ COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/ftv2node.png ${CMAKE_CURRENT_BINARY_DIR}/html/unsupported/ WORKING_DIRECTORY ${Eigen_BINARY_DIR}/doc ) add_dependencies(doc-eigen-prerequisites all_snippets all_examples) add_dependencies(doc-unsupported-prerequisites unsupported_snippets unsupported_examples) add_custom_target(doc ALL COMMAND doxygen COMMAND doxygen Doxyfile-unsupported COMMAND ${CMAKE_COMMAND} -E copy ${Eigen_BINARY_DIR}/doc/html/group__TopicUnalignedArrayAssert.html ${Eigen_BINARY_DIR}/doc/html/TopicUnalignedArrayAssert.html COMMAND ${CMAKE_COMMAND} -E rename html eigen-doc COMMAND ${CMAKE_COMMAND} -E remove eigen-doc/eigen-doc.tgz eigen-doc/unsupported/_formulas.log eigen-doc/_formulas.log COMMAND ${CMAKE_COMMAND} -E tar cfz eigen-doc.tgz eigen-doc COMMAND ${CMAKE_COMMAND} -E rename eigen-doc.tgz eigen-doc/eigen-doc.tgz COMMAND ${CMAKE_COMMAND} -E rename eigen-doc html WORKING_DIRECTORY ${Eigen_BINARY_DIR}/doc) add_dependencies(doc doc-eigen-prerequisites doc-unsupported-prerequisites) piqp-octave/src/eigen/doc/B01_Experimental.dox0000644000175100001770000000457114107270226021061 0ustar runnerdockernamespace Eigen { /** \page Experimental Experimental parts of Eigen \eigenAutoToc \section Experimental_summary Summary With the 2.0 release, Eigen's API is, to a large extent, stable. However, we wish to retain the freedom to make API incompatible changes. To that effect, we call many parts of Eigen "experimental" which means that they are not subject to API stability guarantee. Our goal is that for the 2.1 release (expected in July 2009) most of these parts become API-stable too. We are aware that API stability is a major concern for our users. That's why it's a priority for us to reach it, but at the same time we're being serious about not calling Eigen API-stable too early. Experimental features may at any time: \li be removed; \li be subject to an API incompatible change; \li introduce API or ABI incompatible changes in your own code if you let them affect your API or ABI. \section Experimental_modules Experimental modules The following modules are considered entirely experimental, and we make no firm API stability guarantee about them for the time being: \li SVD \li QR \li Cholesky \li Sparse \li Geometry (this one should be mostly stable, but it's a little too early to make a formal guarantee) \section Experimental_core Experimental parts of the Core module In the Core module, the only classes subject to ABI stability guarantee (meaning that you can use it for data members in your public ABI) is: \li Matrix \li Map All other classes offer no ABI guarantee, e.g. the layout of their data can be changed. 
The only classes subject to (even partial) API stability guarantee (meaning that you can safely construct and use objects) are: \li MatrixBase : partial API stability (see below) \li Matrix : full API stability (except for experimental stuff inherited from MatrixBase) \li Map : full API stability (except for experimental stuff inherited from MatrixBase) All other classes offer no direct API guarantee, e.g. their methods can be changed; however notice that most classes inherit MatrixBase and that this is where most of their API comes from -- so in practice most of the API is stable. A few MatrixBase methods are considered experimental, hence not part of any API stability guarantee: \li all methods documented as internal \li all methods hidden in the Doxygen documentation \li all methods marked as experimental \li all methods defined in experimental modules */ } piqp-octave/src/eigen/doc/DenseDecompositionBenchmark.dox0000644000175100001770000001163514107270226023427 0ustar runnerdockernamespace Eigen { /** \eigenManualPage DenseDecompositionBenchmark Benchmark of dense decompositions This page presents a speed comparison of the dense matrix decompositions offered by %Eigen for a wide range of square matrices and overconstrained problems. For a more general overview on the features and numerical robustness of linear solvers and decompositions, check this \link TopicLinearAlgebraDecompositions table \endlink. This benchmark has been run on a laptop equipped with an Intel core i7 \@ 2,6 GHz, and compiled with clang with \b AVX and \b FMA instruction sets enabled but without multi-threading. It uses \b single \b precision \b float numbers. For double, you can get a good estimate by multiplying the timings by a factor 2. The square matrices are symmetric, and for the overconstrained matrices, the reported timmings include the cost to compute the symmetric covariance matrix \f$ A^T A \f$ for the first four solvers based on Cholesky and LU, as denoted by the \b * symbol (top-right corner part of the table). Timings are in \b milliseconds, and factors are relative to the LLT decomposition which is the fastest but also the least general and robust.
solver/size 8x8 100x100 1000x1000 4000x4000 10000x8 10000x100 10000x1000 10000x4000
LLT0.050.425.83374.556.79 *30.15 *236.34 *3847.17 *
LDLT0.07 (x1.3)0.65 (x1.5)26.86 (x4.6)2361.18 (x6.3)6.81 (x1) *31.91 (x1.1) *252.61 (x1.1) *5807.66 (x1.5) *
PartialPivLU0.08 (x1.5)0.69 (x1.6)15.63 (x2.7)709.32 (x1.9)6.81 (x1) *31.32 (x1) *241.68 (x1) *4270.48 (x1.1) *
FullPivLU0.1 (x1.9)4.48 (x10.6)281.33 (x48.2)-6.83 (x1) *32.67 (x1.1) *498.25 (x2.1) *-
HouseholderQR0.19 (x3.5)2.18 (x5.2)23.42 (x4)1337.52 (x3.6)34.26 (x5)129.01 (x4.3)377.37 (x1.6)4839.1 (x1.3)
ColPivHouseholderQR0.23 (x4.3)2.23 (x5.3)103.34 (x17.7)9987.16 (x26.7)36.05 (x5.3)163.18 (x5.4)2354.08 (x10)37860.5 (x9.8)
CompleteOrthogonalDecomposition0.23 (x4.3)2.22 (x5.2)99.44 (x17.1)10555.3 (x28.2)35.75 (x5.3)169.39 (x5.6)2150.56 (x9.1)36981.8 (x9.6)
FullPivHouseholderQR0.23 (x4.3)4.64 (x11)289.1 (x49.6)-69.38 (x10.2)446.73 (x14.8)4852.12 (x20.5)-
JacobiSVD1.01 (x18.6)71.43 (x168.4)--113.81 (x16.7)1179.66 (x39.1)--
BDCSVD1.07 (x19.7)21.83 (x51.5)331.77 (x56.9)18587.9 (x49.6)110.53 (x16.3)397.67 (x13.2)2975 (x12.6)48593.2 (x12.6)
\b *: These decompositions do not support direct least-squares solving for over-constrained problems, and the reported timings include the cost to form the symmetric covariance matrix \f$ A^T A \f$. \b Observations: + LLT is always the fastest solver. + For largely over-constrained problems, the cost of Cholesky/LU decompositions is dominated by the computation of the symmetric covariance matrix. + For large problem sizes, only the decompositions implementing a cache-friendly blocking strategy scale well. Those include LLT, PartialPivLU, HouseholderQR, and BDCSVD. This explains why for a 4k x 4k matrix, HouseholderQR is faster than LDLT. In the future, LDLT and ColPivHouseholderQR will also implement blocking strategies. + CompleteOrthogonalDecomposition is based on ColPivHouseholderQR and thus achieves the same level of performance. The above table has been generated by the bench/dense_solvers.cpp file; feel free to hack it to generate a table matching your hardware, compiler, and favorite problem sizes. */ } piqp-octave/src/eigen/doc/TutorialSlicingIndexing.dox0000644000175100001770000002002114107270226022610 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TutorialSlicingIndexing Slicing and Indexing This page presents the numerous possibilities offered by `operator()` to index sub-sets of rows and columns. This API has been introduced in %Eigen 3.4. It supports all the features proposed by the \link TutorialBlockOperations block API \endlink, and much more. In particular, it supports \b slicing, which consists in taking a set of rows, columns, or elements, uniformly spaced within a matrix or indexed from an array of indices. \eigenAutoToc \section TutorialSlicingOverview Overview All the aforementioned operations are handled through the generic DenseBase::operator()(const RowIndices&, const ColIndices&) method. Each argument can be: - An integer indexing a single row or column, including symbolic indices. - The symbol Eigen::all representing the whole set of respective rows or columns in increasing order. - An ArithmeticSequence as constructed by the Eigen::seq, Eigen::seqN, or Eigen::lastN functions. - Any 1D vector/array of integers including %Eigen's vector/array, expressions, std::vector<int>, std::array<int,N>, as well as plain C arrays: `int[N]`. More generally, it can accept any object exposing the following two member functions: \code operator[](<integral type>) const; size() const; \endcode where `<integral type>` stands for any integer type compatible with Eigen::Index (i.e. `std::ptrdiff_t`). \section TutorialSlicingBasic Basic slicing Taking a set of rows, columns, or elements, uniformly spaced within a matrix or vector, is achieved through the Eigen::seq or Eigen::seqN functions, where "seq" stands for arithmetic sequence. Their signatures are summarized below, and a short usage sketch follows the table:
function description example
\code seq(firstIdx,lastIdx) \endcode represents the sequence of integers ranging from \c firstIdx to \c lastIdx \code seq(2,5) <=> {2,3,4,5} \endcode
\code seq(firstIdx,lastIdx,incr) \endcode same but using the increment \c incr to advance from one index to the next \code seq(2,8,2) <=> {2,4,6,8} \endcode
\code seqN(firstIdx,size) \endcode represents the sequence of \c size integers starting from \c firstIdx \code seqN(2,5) <=> {2,3,4,5,6} \endcode
\code seqN(firstIdx,size,incr) \endcode same but using the increment \c incr to advance from one index to the next \code seqN(2,3,3) <=> {2,5,8} \endcode
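A minimal sketch of these functions in action (assuming %Eigen 3.4 or later); the values are arbitrary:
\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::VectorXi v = Eigen::VectorXi::LinSpaced(10, 0, 9);   // 0 1 2 ... 9
  std::cout << v(Eigen::seq(2, 5)).transpose() << "\n";       // 2 3 4 5
  std::cout << v(Eigen::seq(2, 8, 2)).transpose() << "\n";    // 2 4 6 8
  std::cout << v(Eigen::seqN(2, 3, 3)).transpose() << "\n";   // 2 5 8
}
\endcode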
The \c firstIdx and \c lastIdx parameters can also be defined with the help of the Eigen::last symbol representing the index of the last row, column or element of the underlying matrix/vector once the arithmetic sequence is passed to it through operator(). Here are some examples for a 2D array/matrix \c A and a 1D array/vector \c v.
Intent Code Block-API equivalence
Bottom-left corner starting at row \c i with \c n columns \code A(seq(i,last), seqN(0,n)) \endcode \code A.bottomLeftCorner(A.rows()-i,n) \endcode
%Block starting at \c i,j having \c m rows, and \c n columns \code A(seqN(i,m), seqN(j,n)) \endcode \code A.block(i,j,m,n) \endcode
%Block starting at \c i0,j0 and ending at \c i1,j1 \code A(seq(i0,i1), seq(j0,j1)) \endcode \code A.block(i0,j0,i1-i0+1,j1-j0+1) \endcode
Even columns of A \code A(all, seq(0,last,2)) \endcode
First \c n odd rows A \code A(seqN(1,n,2), all) \endcode
The last-but-one column \code A(all, last-1) \endcode \code A.col(A.cols()-2) \endcode
The middle row \code A(last/2,all) \endcode \code A.row((A.rows()-1)/2) \endcode
Last elements of v starting at i \code v(seq(i,last)) \endcode \code v.tail(v.size()-i) \endcode
Last \c n elements of v \code v(seq(last+1-n,last)) \endcode \code v.tail(n) \endcode
As seen in the last example, referencing the last n elements (or rows/columns) is a bit cumbersome to write. This becomes even more tricky and error-prone with a non-default increment. Here comes \link Eigen::lastN(SizeType) Eigen::lastN(size) \endlink, and \link Eigen::lastN(SizeType,IncrType) Eigen::lastN(size,incr) \endlink:
Intent Code Block-API equivalence
Last \c n elements of v \code v(lastN(n)) \endcode \code v.tail(n) \endcode
Bottom-right corner of A of size \c m times \c n \code A(lastN(m), lastN(n)) \endcode \code A.bottomRightCorner(m,n) \endcode
Last \c n columns, taking one column out of every three \code A(all, lastN(n,3)) \endcode
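The entries of the two tables above can be combined freely. Here is a short hedged sketch on a matrix (assuming %Eigen 3.4 or later); the variable names are illustrative only:
\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  using Eigen::all; using Eigen::last;
  using Eigen::seq; using Eigen::seqN; using Eigen::lastN;

  Eigen::MatrixXd A = Eigen::MatrixXd::Random(6, 6);

  Eigen::MatrixXd evenCols   = A(all, seq(0, last, 2));     // even columns of A
  Eigen::MatrixXd bottomLeft = A(seq(2, last), seqN(0, 3)); // rows 2..end, first 3 columns
  Eigen::MatrixXd corner     = A(lastN(2), lastN(3));       // bottom-right 2 x 3 corner

  std::cout << corner << std::endl;
}
\endcode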
\section TutorialSlicingFixed Compile time size and increment In terms of performance, %Eigen and the compiler can take advantage of compile-time size and increment. To this end, you can enforce compile-time parameters using Eigen::fix<val>. Such a compile-time value can be combined with the Eigen::last symbol: \code v(seq(last-fix<7>, last-fix<2>)) \endcode In this example %Eigen knows at compile-time that the returned expression has 6 elements. It is equivalent to: \code v(seqN(last-7, fix<6>)) \endcode We can revisit the even columns of A example as follows: \code A(all, seq(0,last,fix<2>)) \endcode \section TutorialSlicingReverse Reverse order Row/column indices can also be enumerated in decreasing order using a negative increment. For instance, to take every other column of A from column 20 down to column 10: \code A(all, seq(20, 10, fix<-2>)) \endcode The last \c n rows starting from the last one: \code A(seqN(last, n, fix<-1>), all) \endcode You can also use the ArithmeticSequence::reverse() method to reverse its order. The previous example can thus also be written as: \code A(lastN(n).reverse(), all) \endcode \section TutorialSlicingArray Array of indices The generic `operator()` can also take as input an arbitrary list of row or column indices stored as either an `ArrayXi`, a `std::vector<int>`, a `std::array<int,N>`, etc.
Example:Output:
\include Slicing_stdvector_cxx11.cpp \verbinclude Slicing_stdvector_cxx11.out
You can also directly pass a static array:
Example:Output:
\include Slicing_rawarray_cxx11.cpp \verbinclude Slicing_rawarray_cxx11.out
or expressions:
Example:Output:
\include Slicing_arrayexpr.cpp \verbinclude Slicing_arrayexpr.out
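Since the included snippets are not reproduced in this page, here is a rough sketch of passing explicit index lists (assuming %Eigen 3.4 or later); the actual snippet files may differ:
\code
#include <iostream>
#include <vector>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXi A = Eigen::MatrixXi::Random(4, 4);

  std::vector<int> rows{3, 1, 0};              // arbitrary row indices, possibly repeated
  Eigen::ArrayXi   cols(3); cols << 2, 2, 0;   // an Eigen array of column indices

  std::cout << A(rows, cols) << std::endl;     // 3x3 selection of A
}
\endcode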
When passing an object with a compile-time size such as `Array4i`, `std::array<int,N>`, or a static array, the returned expression also exhibits compile-time dimensions. \section TutorialSlicingCustomArray Custom index list More generally, `operator()` can accept as input any object \c ind of type \c T compatible with: \code Index s = ind.size(); or Index s = size(ind); Index i; i = ind[i]; \endcode This means you can easily build your own fancy sequence generator and pass it to `operator()`. Here is an example enlarging a given matrix while padding the additional first rows and columns through repetition (a standalone sketch of a similar generator follows the example below):
Example:Output:
\include Slicing_custom_padding_cxx11.cpp \verbinclude Slicing_custom_padding_cxx11.out
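Along the same lines, here is a hedged, self-contained sketch of a custom index generator; the class name PadFirstIndex and its behaviour are made up for illustration and are not part of %Eigen (assuming %Eigen 3.4 or later):
\code
#include <iostream>
#include <Eigen/Dense>

// Hypothetical index generator: yields 0,...,0,1,2,...,n-1, i.e. it repeats
// the first index 'pad' extra times. It only needs size() and operator[].
struct PadFirstIndex {
  Eigen::Index n, pad;
  Eigen::Index size() const { return n + pad; }
  Eigen::Index operator[](Eigen::Index i) const { return i < pad ? 0 : i - pad; }
};

int main()
{
  Eigen::MatrixXi A = Eigen::MatrixXi::Random(3, 3);
  // Enlarge A to 5x5 by repeating its first row and first column twice.
  Eigen::MatrixXi B = A(PadFirstIndex{3, 2}, PadFirstIndex{3, 2});
  std::cout << B << std::endl;
}
\endcode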

*/ /* TODO add: so_repeat_inner.cpp so_repeleme.cpp */ } piqp-octave/src/eigen/doc/StlContainers.dox0000644000175100001770000000752314107270226020612 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TopicStlContainers Using STL Containers with Eigen \eigenAutoToc \section StlContainers_summary Executive summary If you're compiling in \cpp17 mode only with a sufficiently recent compiler (e.g., GCC>=7, clang>=5, MSVC>=19.12), then everything is taken care by the compiler and you can stop reading. Otherwise, using STL containers on \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types", or classes having members of such types, requires the use of an over-aligned allocator. That is, an allocator capable of allocating buffers with 16, 32, or even 64 bytes alignment. %Eigen does provide one ready for use: aligned_allocator. Prior to \cpp11, if you want to use the `std::vector` container, then you also have to \#include . These issues arise only with \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types" and \ref TopicStructHavingEigenMembers "structures having such Eigen objects as member". For other %Eigen types, such as Vector3f or MatrixXd, no special care is needed when using STL containers. \section allocator Using an aligned allocator STL containers take an optional template parameter, the allocator type. When using STL containers on \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types", you need tell the container to use an allocator that will always allocate memory at 16-byte-aligned (or more) locations. Fortunately, %Eigen does provide such an allocator: Eigen::aligned_allocator. For example, instead of \code std::map \endcode you need to use \code std::map, Eigen::aligned_allocator > > \endcode Note that the third parameter `std::less` is just the default value, but we have to include it because we want to specify the fourth parameter, which is the allocator type. \section StlContainers_vector The case of std::vector This section is for c++98/03 users only. \cpp11 (or above) users can stop reading here. So in c++98/03, the situation with `std::vector` is more complicated because of a bug in the standard (explanation below). To workaround the issue, we had to specialize it for the Eigen::aligned_allocator type. In practice you \b must use the Eigen::aligned_allocator (not another aligned allocator), \b and \#include . Here is an example: \code #include /* ... */ std::vector > \endcode \b Explanation: The `resize()` method of `std::vector` takes a `value_type` argument (defaulting to `value_type()`). So with `std::vector`, some Eigen::Vector4d objects will be passed by value, which discards any alignment modifiers, so a Eigen::Vector4d can be created at an unaligned location. In order to avoid that, the only solution we saw was to specialize `std::vector` to make it work on a slight modification of, here, Eigen::Vector4d, that is able to deal properly with this situation. \subsection vector_spec An alternative - specializing std::vector for Eigen types As an alternative to the recommended approach described above, you have the option to specialize std::vector for Eigen types requiring alignment. The advantage is that you won't need to declare std::vector all over with Eigen::aligned_allocator. One drawback on the other hand side is that the specialization needs to be defined before all code pieces in which e.g. `std::vector` is used. 
Otherwise, without knowing the specialization the compiler will compile that particular instance with the default `std::allocator` and you program is most likely to crash. Here is an example: \code #include /* ... */ EIGEN_DEFINE_STL_VECTOR_SPECIALIZATION(Matrix2d) std::vector \endcode */ } piqp-octave/src/eigen/doc/eigendoxy_layout.xml.in0000644000175100001770000001233114107270226022016 0ustar runnerdocker piqp-octave/src/eigen/doc/ClassHierarchy.dox0000644000175100001770000001427414107270226020727 0ustar runnerdockernamespace Eigen { /** \page TopicClassHierarchy The class hierarchy This page explains the design of the core classes in Eigen's class hierarchy and how they fit together. Casual users probably need not concern themselves with these details, but it may be useful for both advanced users and Eigen developers. \eigenAutoToc \section TopicClassHierarchyPrinciples Principles Eigen's class hierarchy is designed so that virtual functions are avoided where their overhead would significantly impair performance. Instead, Eigen achieves polymorphism with the Curiously Recurring Template Pattern (CRTP). In this pattern, the base class (for instance, \c MatrixBase) is in fact a template class, and the derived class (for instance, \c Matrix) inherits the base class with the derived class itself as a template argument (in this case, \c Matrix inherits from \c MatrixBase<Matrix>). This allows Eigen to resolve the polymorphic function calls at compile time. In addition, the design avoids multiple inheritance. One reason for this is that in our experience, some compilers (like MSVC) fail to perform empty base class optimization, which is crucial for our fixed-size types. \section TopicClassHierarchyCoreClasses The core classes These are the classes that you need to know about if you want to write functions that accept or return Eigen objects. - Matrix means plain dense matrix. If \c m is a \c %Matrix, then, for instance, \c m+m is no longer a \c %Matrix, it is a "matrix expression". - MatrixBase means dense matrix expression. This means that a \c %MatrixBase is something that can be added, matrix-multiplied, LU-decomposed, QR-decomposed... All matrix expression classes, including \c %Matrix itself, inherit \c %MatrixBase. - Array means plain dense array. If \c x is an \c %Array, then, for instance, \c x+x is no longer an \c %Array, it is an "array expression". - ArrayBase means dense array expression. This means that an \c %ArrayBase is something that can be added, array-multiplied, and on which you can perform all sorts of array operations... All array expression classes, including \c %Array itself, inherit \c %ArrayBase. - DenseBase means dense (matrix or array) expression. Both \c %ArrayBase and \c %MatrixBase inherit \c %DenseBase. \c %DenseBase is where all the methods go that apply to dense expressions regardless of whether they are matrix or array expressions. For example, the \link DenseBase::block() block(...) \endlink methods are in \c %DenseBase. \section TopicClassHierarchyBaseClasses Base classes These classes serve as base classes for the five core classes mentioned above. They are more internal and so less interesting for users of the Eigen library. - PlainObjectBase means dense (matrix or array) plain object, i.e. something that stores its own dense array of coefficients. This is where, for instance, the \link PlainObjectBase::resize() resize() \endlink methods go. \c %PlainObjectBase is inherited by \c %Matrix and by \c %Array. 
But above, we said that \c %Matrix inherits \c %MatrixBase and \c %Array inherits \c %ArrayBase. So does that mean multiple inheritance? No, because \c %PlainObjectBase \e itself inherits \c %MatrixBase or \c %ArrayBase depending on whether we are in the matrix or array case. When we said above that \c %Matrix inherited \c %MatrixBase, we omitted to say it does so indirectly via \c %PlainObjectBase. Same for \c %Array. - DenseCoeffsBase means something that has dense coefficient accessors. It is a base class for \c %DenseBase. The reason for \c %DenseCoeffsBase to exist is that the set of available coefficient accessors is very different depending on whether a dense expression has direct memory access or not (the \c DirectAccessBit flag). For example, if \c x is a plain matrix, then \c x has direct access, and \c x.transpose() and \c x.block(...) also have direct access, because their coefficients can be read right off memory, but for example, \c x+x does not have direct memory access, because obtaining any of its coefficients requires a computation (an addition), it can't be just read off memory. - EigenBase means anything that can be evaluated into a plain dense matrix or array (even if that would be a bad idea). \c %EigenBase is really the absolute base class for anything that remotely looks like a matrix or array. It is a base class for \c %DenseCoeffsBase, so it sits below all our dense class hierarchy, but it is not limited to dense expressions. For example, \c %EigenBase is also inherited by diagonal matrices, sparse matrices, etc... \section TopicClassHierarchyInheritanceDiagrams Inheritance diagrams The inheritance diagram for Matrix looks as follows:

EigenBase<%Matrix>
  <-- DenseCoeffsBase<%Matrix>    (direct access case)
    <-- DenseBase<%Matrix>
      <-- MatrixBase<%Matrix>
        <-- PlainObjectBase<%Matrix>    (matrix case)
          <-- Matrix
The inheritance diagram for Array looks as follows:
EigenBase<%Array>
  <-- DenseCoeffsBase<%Array>    (direct access case)
    <-- DenseBase<%Array>
      <-- ArrayBase<%Array>
        <-- PlainObjectBase<%Array>    (array case)
          <-- Array
The inheritance diagram for some other matrix expression class, here denoted by \c SomeMatrixXpr, looks as follows:
EigenBase<SomeMatrixXpr>
  <-- DenseCoeffsBase<SomeMatrixXpr>    (direct access or no direct access case)
    <-- DenseBase<SomeMatrixXpr>
      <-- MatrixBase<SomeMatrixXpr>
        <-- SomeMatrixXpr
The inheritance diagram for some other array expression class, here denoted by \c SomeArrayXpr, looks as follows:
EigenBase<SomeArrayXpr>
  <-- DenseCoeffsBase<SomeArrayXpr>    (direct access or no direct access case)
    <-- DenseBase<SomeArrayXpr>
      <-- ArrayBase<SomeArrayXpr>
        <-- SomeArrayXpr
Finally, consider an example of something that is not a dense expression, for instance a diagonal matrix. The corresponding inheritance diagram is:
EigenBase<%DiagonalMatrix>
  <-- DiagonalBase<%DiagonalMatrix>
    <-- DiagonalMatrix
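The CRTP mechanism described above can be illustrated with a tiny standalone sketch; the class names here are made up for illustration and are not Eigen's actual classes:
\code
#include <iostream>

// Base class templated on the derived type: calls are resolved at compile time,
// with no virtual functions involved.
template <typename Derived>
struct XprBase {
  const Derived& derived() const { return static_cast<const Derived&>(*this); }
  int rows() const { return derived().rowsImpl(); }
};

struct MyMatrix : XprBase<MyMatrix> {
  int rowsImpl() const { return 3; }
};

template <typename Derived>
void printRows(const XprBase<Derived>& xpr) {
  std::cout << xpr.rows() << std::endl;   // statically dispatched to MyMatrix::rowsImpl
}

int main() { MyMatrix m; printRows(m); }
\endcode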
*/ } piqp-octave/src/eigen/doc/snippets/0000755000175100001770000000000014107270226017144 5ustar runnerdockerpiqp-octave/src/eigen/doc/snippets/MatrixBase_adjoint.cpp0000644000175100001770000000025114107270226023415 0ustar runnerdockerMatrix2cf m = Matrix2cf::Random(); cout << "Here is the 2x2 complex matrix m:" << endl << m << endl; cout << "Here is the adjoint of m:" << endl << m.adjoint() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_cwiseProduct.cpp0000644000175100001770000000023114107270226024436 0ustar runnerdockerMatrix3i a = Matrix3i::Random(), b = Matrix3i::Random(); Matrix3i c = a.cwiseProduct(b); cout << "a:\n" << a << "\nb:\n" << b << "\nc:\n" << c << endl; piqp-octave/src/eigen/doc/snippets/Cwise_minus_equal.cpp0000644000175100001770000000005514107270226023324 0ustar runnerdockerArray3d v(1,2,3); v -= 5; cout << v << endl; piqp-octave/src/eigen/doc/snippets/DenseBase_LinSpacedInt.cpp0000644000175100001770000000064414107270226024102 0ustar runnerdockercout << "Even spacing inputs:" << endl; cout << VectorXi::LinSpaced(8,1,4).transpose() << endl; cout << VectorXi::LinSpaced(8,1,8).transpose() << endl; cout << VectorXi::LinSpaced(8,1,15).transpose() << endl; cout << "Uneven spacing inputs:" << endl; cout << VectorXi::LinSpaced(8,1,7).transpose() << endl; cout << VectorXi::LinSpaced(8,1,9).transpose() << endl; cout << VectorXi::LinSpaced(8,1,16).transpose() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_tanh.cpp0000644000175100001770000000010014107270226021723 0ustar runnerdockerArrayXd v = ArrayXd::LinSpaced(5,0,1); cout << tanh(v) << endl; piqp-octave/src/eigen/doc/snippets/SelfAdjointEigenSolver_SelfAdjointEigenSolver.cpp0000644000175100001770000000055214107270226030644 0ustar runnerdockerSelfAdjointEigenSolver es; Matrix4f X = Matrix4f::Random(4,4); Matrix4f A = X + X.transpose(); es.compute(A); cout << "The eigenvalues of A are: " << es.eigenvalues().transpose() << endl; es.compute(A + Matrix4f::Identity(4,4)); // re-use es to compute eigenvalues of A+I cout << "The eigenvalues of A+I are: " << es.eigenvalues().transpose() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_bottomLeftCorner_int_int.cpp0000644000175100001770000000041714107270226027005 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.bottomLeftCorner(2, 2):" << endl; cout << m.bottomLeftCorner(2, 2) << endl; m.bottomLeftCorner(2, 2).setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_ones_int.cpp0000644000175100001770000000011514107270226023602 0ustar runnerdockercout << 6 * RowVectorXi::Ones(4) << endl; cout << VectorXf::Ones(2) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_cube.cpp0000644000175100001770000000005414107270226021717 0ustar runnerdockerArray3d v(2,3,4); cout << v.cube() << endl; piqp-octave/src/eigen/doc/snippets/LLT_example.cpp0000644000175100001770000000100714107270226022014 0ustar runnerdockerMatrixXd A(3,3); A << 4,-1,2, -1,6,0, 2,0,5; cout << "The matrix A is" << endl << A << endl; LLT lltOfA(A); // compute the Cholesky decomposition of A MatrixXd L = lltOfA.matrixL(); // retrieve factor L in the decomposition // The previous two lines can also be written as "L = A.llt().matrixL()" cout << "The Cholesky factor L is" << endl << L << endl; cout << "To check this, let us compute L * L.transpose()" << endl; cout << L * L.transpose() << endl; cout << "This should equal the matrix A" << endl; 
piqp-octave/src/eigen/doc/snippets/Tutorial_ReshapeMat2Mat.cpp0000644000175100001770000000025314107270226024310 0ustar runnerdockerMatrixXf M1(2,6); // Column-major storage M1 << 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12; Map M2(M1.data(), 6,2); cout << "M2:" << endl << M2 << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_triangularView.cpp0000644000175100001770000000107514107270226024775 0ustar runnerdockerMatrix3i m = Matrix3i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the upper-triangular matrix extracted from m:" << endl << Matrix3i(m.triangularView()) << endl; cout << "Here is the strictly-upper-triangular matrix extracted from m:" << endl << Matrix3i(m.triangularView()) << endl; cout << "Here is the unit-lower-triangular matrix extracted from m:" << endl << Matrix3i(m.triangularView()) << endl; // FIXME need to implement output for triangularViews (Bug 885) piqp-octave/src/eigen/doc/snippets/Cwise_boolean_xor.cpp0000644000175100001770000000007714107270226023315 0ustar runnerdockerArray3d v(-1,2,1), w(-3,2,3); cout << ((v #include #ifndef M_PI #define M_PI 3.1415926535897932384626433832795 #endif using namespace Eigen; using namespace std; int main(int, char**) { cout.precision(3); // intentionally remove indentation of snippet { ${snippet_source_code} } return 0; } piqp-octave/src/eigen/doc/snippets/Tutorial_SlicingCol.cpp0000644000175100001770000000114314107270226023560 0ustar runnerdockerMatrixXf M1 = MatrixXf::Random(3,8); cout << "Column major input:" << endl << M1 << "\n"; Map > M2(M1.data(), M1.rows(), (M1.cols()+2)/3, OuterStride<>(M1.outerStride()*3)); cout << "1 column over 3:" << endl << M2 << "\n"; typedef Matrix RowMajorMatrixXf; RowMajorMatrixXf M3(M1); cout << "Row major input:" << endl << M3 << "\n"; Map > M4(M3.data(), M3.rows(), (M3.cols()+2)/3, Stride(M3.outerStride(),3)); cout << "1 column over 3:" << endl << M4 << "\n"; piqp-octave/src/eigen/doc/snippets/MatrixBase_col.cpp0000644000175100001770000000012214107270226022537 0ustar runnerdockerMatrix3d m = Matrix3d::Identity(); m.col(1) = Vector3d(4,5,6); cout << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_applyOnTheRight.cpp0000644000175100001770000000044414107270226025052 0ustar runnerdockerMatrix3f A = Matrix3f::Random(3,3), B; B << 0,1,0, 0,0,1, 1,0,0; cout << "At start, A = " << endl << A << endl; A *= B; cout << "After A *= B, A = " << endl << A << endl; A.applyOnTheRight(B); // equivalent to A *= B cout << "After applyOnTheRight, A = " << endl << A << endl; piqp-octave/src/eigen/doc/snippets/Cwise_max.cpp0000644000175100001770000000006614107270226021571 0ustar runnerdockerArray3d v(2,3,4), w(4,2,3); cout << v.max(w) << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_range_for_loop_2d_cxx11.cpp0000644000175100001770000000022414107270226026135 0ustar runnerdockerMatrix2i A = Matrix2i::Random(); cout << "Here are the coeffs of the 2x2 matrix A:\n"; for(auto x : A.reshaped()) cout << x << " "; cout << "\n"; piqp-octave/src/eigen/doc/snippets/FullPivLU_solve.cpp0000644000175100001770000000063514107270226022706 0ustar runnerdockerMatrix m = Matrix::Random(); Matrix2f y = Matrix2f::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the matrix y:" << endl << y << endl; Matrix x = m.fullPivLu().solve(y); if((m*x).isApprox(y)) { cout << "Here is a solution x to the equation mx=y:" << endl << x << endl; } else cout << "The equation mx=y does not have any solution." 
<< endl; piqp-octave/src/eigen/doc/snippets/Cwise_slash_equal.cpp0000644000175100001770000000006714107270226023306 0ustar runnerdockerArray3d v(3,2,4), w(5,4,2); v /= w; cout << v << endl; piqp-octave/src/eigen/doc/snippets/Cwise_scalar_power_array.cpp0000644000175100001770000000012514107270226024657 0ustar runnerdockerArray e(2,-3,1./3.); cout << "10^[" << e << "] = " << pow(10,e) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_reshaped_to_vector.cpp0000644000175100001770000000043714107270226025652 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.reshaped().transpose():" << endl << m.reshaped().transpose() << endl; cout << "Here is m.reshaped().transpose(): " << endl << m.reshaped().transpose() << endl; piqp-octave/src/eigen/doc/snippets/DenseBase_setLinSpaced.cpp0000644000175100001770000000007414107270226024140 0ustar runnerdockerVectorXf v; v.setLinSpaced(5,0.5f,1.5f); cout << v << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_cwiseNotEqual.cpp0000644000175100001770000000043614107270226024555 0ustar runnerdockerMatrixXi m(2,2); m << 1, 0, 1, 1; cout << "Comparing m with identity matrix:" << endl; cout << m.cwiseNotEqual(MatrixXi::Identity(2,2)) << endl; Index count = m.cwiseNotEqual(MatrixXi::Identity(2,2)).count(); cout << "Number of coefficients that are not equal: " << count << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_int_bottomLeftCorner.cpp0000644000175100001770000000042214107270226030674 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.bottomLeftCorner<2,2>():" << endl; cout << m.bottomLeftCorner<2,2>() << endl; m.bottomLeftCorner<2,2>().setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_setRandom.cpp0000644000175100001770000000011014107270226023713 0ustar runnerdockerMatrix4i m = Matrix4i::Zero(); m.col(1).setRandom(); cout << m << endl; piqp-octave/src/eigen/doc/snippets/Vectorwise_reverse.cpp0000644000175100001770000000103014107270226023527 0ustar runnerdockerMatrixXi m = MatrixXi::Random(3,4); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the rowwise reverse of m:" << endl << m.rowwise().reverse() << endl; cout << "Here is the colwise reverse of m:" << endl << m.colwise().reverse() << endl; cout << "Here is the coefficient (1,0) in the rowise reverse of m:" << endl << m.rowwise().reverse()(1,0) << endl; cout << "Let us overwrite this coefficient with the value 4." 
<< endl; //m.colwise().reverse()(1,0) = 4; cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/Cwise_cos.cpp0000644000175100001770000000007214107270226021565 0ustar runnerdockerArray3d v(M_PI, M_PI/2, M_PI/3); cout << v.cos() << endl; piqp-octave/src/eigen/doc/snippets/SelfAdjointView_eigenvalues.cpp0000644000175100001770000000027014107270226025273 0ustar runnerdockerMatrixXd ones = MatrixXd::Ones(3,3); VectorXd eivals = ones.selfadjointView().eigenvalues(); cout << "The eigenvalues of the 3x3 matrix of ones are:" << endl << eivals << endl; piqp-octave/src/eigen/doc/snippets/Matrix_setOnes_int_int.cpp0000644000175100001770000000006014107270226024334 0ustar runnerdockerMatrixXf m; m.setOnes(3, 3); cout << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_transpose.cpp0000644000175100001770000000063614107270226024012 0ustar runnerdockerMatrix2i m = Matrix2i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the transpose of m:" << endl << m.transpose() << endl; cout << "Here is the coefficient (1,0) in the transpose of m:" << endl << m.transpose()(1,0) << endl; cout << "Let us overwrite this coefficient with the value 0." << endl; m.transpose()(1,0) = 0; cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/FullPivLU_kernel.cpp0000644000175100001770000000047514107270226023040 0ustar runnerdockerMatrixXf m = MatrixXf::Random(3,5); cout << "Here is the matrix m:" << endl << m << endl; MatrixXf ker = m.fullPivLu().kernel(); cout << "Here is a matrix whose columns form a basis of the kernel of m:" << endl << ker << endl; cout << "By definition of the kernel, m*ker is zero:" << endl << m*ker << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_AdvancedInitialization_Block.cpp0000644000175100001770000000020414107270226027256 0ustar runnerdockerMatrixXf matA(2, 2); matA << 1, 2, 3, 4; MatrixXf matB(4, 4); matB << matA, matA/10, matA/10, matA; std::cout << matB << std::endl; piqp-octave/src/eigen/doc/snippets/Tutorial_std_sort.cpp0000644000175100001770000000031314107270226023371 0ustar runnerdockerArray4i v = Array4i::Random().abs(); cout << "Here is the initial vector v:\n" << v.transpose() << "\n"; std::sort(v.begin(), v.end()); cout << "Here is the sorted vector v:\n" << v.transpose() << "\n"; piqp-octave/src/eigen/doc/snippets/TopicAliasing_block.cpp0000644000175100001770000000041314107270226023546 0ustar runnerdockerMatrixXi mat(3,3); mat << 1, 2, 3, 4, 5, 6, 7, 8, 9; cout << "Here is the matrix mat:\n" << mat << endl; // This assignment shows the aliasing problem mat.bottomRightCorner(2,2) = mat.topLeftCorner(2,2); cout << "After the assignment, mat = \n" << mat << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_isDiagonal.cpp0000644000175100001770000000036114107270226024041 0ustar runnerdockerMatrix3d m = 10000 * Matrix3d::Identity(); m(0,2) = 1; cout << "Here's the matrix m:" << endl << m << endl; cout << "m.isDiagonal() returns: " << m.isDiagonal() << endl; cout << "m.isDiagonal(1e-3) returns: " << m.isDiagonal(1e-3) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_greater.cpp0000644000175100001770000000006314107270226022432 0ustar runnerdockerArray3d v(1,2,3), w(3,2,1); cout << (v>w) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_pow.cpp0000644000175100001770000000006514107270226021610 0ustar runnerdockerArray3d v(8,27,64); cout << v.pow(0.333333) << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_Map_using.cpp0000644000175100001770000000160014107270226023452 0ustar 
runnerdockertypedef Matrix MatrixType; typedef Map MapType; typedef Map MapTypeConst; // a read-only map const int n_dims = 5; MatrixType m1(n_dims), m2(n_dims); m1.setRandom(); m2.setRandom(); float *p = &m2(0); // get the address storing the data for m2 MapType m2map(p,m2.size()); // m2map shares data with m2 MapTypeConst m2mapconst(p,m2.size()); // a read-only accessor for m2 cout << "m1: " << m1 << endl; cout << "m2: " << m2 << endl; cout << "Squared euclidean distance: " << (m1-m2).squaredNorm() << endl; cout << "Squared euclidean distance, using map: " << (m1-m2map).squaredNorm() << endl; m2map(3) = 7; // this will change m2, since they share the same array cout << "Updated m2: " << m2 << endl; cout << "m2 coefficient 2, constant accessor: " << m2mapconst(2) << endl; /* m2mapconst(2) = 5; */ // this yields a compile-time error piqp-octave/src/eigen/doc/snippets/TopicAliasing_mult5.cpp0000644000175100001770000000015514107270226023525 0ustar runnerdockerMatrixXf A(2,2), B(3,2); B << 2, 0, 0, 3, 1, 1; A << 2, 0, 0, -2; A = (B * A).eval().cwiseAbs(); cout << A; piqp-octave/src/eigen/doc/snippets/Cwise_min.cpp0000644000175100001770000000006614107270226021567 0ustar runnerdockerArray3d v(2,3,4), w(4,2,3); cout << v.min(w) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_identity_int_int.cpp0000644000175100001770000000005214107270226025341 0ustar runnerdockercout << MatrixXd::Identity(4, 3) << endl; piqp-octave/src/eigen/doc/snippets/HessenbergDecomposition_matrixH.cpp0000644000175100001770000000060714107270226026171 0ustar runnerdockerMatrix4f A = MatrixXf::Random(4,4); cout << "Here is a random 4x4 matrix:" << endl << A << endl; HessenbergDecomposition hessOfA(A); MatrixXf H = hessOfA.matrixH(); cout << "The Hessenberg matrix H is:" << endl << H << endl; MatrixXf Q = hessOfA.matrixQ(); cout << "The orthogonal matrix Q is:" << endl << Q << endl; cout << "Q H Q^T is:" << endl << Q * H * Q.transpose() << endl; piqp-octave/src/eigen/doc/snippets/Triangular_solve.cpp0000644000175100001770000000101014107270226023160 0ustar runnerdockerMatrix3d m = Matrix3d::Zero(); m.triangularView().setOnes(); cout << "Here is the matrix m:\n" << m << endl; Matrix3d n = Matrix3d::Ones(); n.triangularView() *= 2; cout << "Here is the matrix n:\n" << n << endl; cout << "And now here is m.inverse()*n, taking advantage of the fact that" " m is upper-triangular:\n" << m.triangularView().solve(n) << endl; cout << "And this is n*m.inverse():\n" << m.triangularView().solve(n); piqp-octave/src/eigen/doc/snippets/DenseBase_LinSpaced.cpp0000644000175100001770000000016514107270226023425 0ustar runnerdockercout << VectorXi::LinSpaced(4,7,10).transpose() << endl; cout << VectorXd::LinSpaced(5,0.0,1.0).transpose() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_cwiseEqual.cpp0000644000175100001770000000042414107270226024071 0ustar runnerdockerMatrixXi m(2,2); m << 1, 0, 1, 1; cout << "Comparing m with identity matrix:" << endl; cout << m.cwiseEqual(MatrixXi::Identity(2,2)) << endl; Index count = m.cwiseEqual(MatrixXi::Identity(2,2)).count(); cout << "Number of coefficients that are equal: " << count << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_eigenvalues.cpp0000644000175100001770000000024014107270226024272 0ustar runnerdockerMatrixXd ones = MatrixXd::Ones(3,3); VectorXcd eivals = ones.eigenvalues(); cout << "The eigenvalues of the 3x3 matrix of ones are:" << endl << eivals << endl; piqp-octave/src/eigen/doc/snippets/FullPivHouseholderQR_solve.cpp0000644000175100001770000000050514107270226025106 0ustar 
runnerdockerMatrix3f m = Matrix3f::Random(); Matrix3f y = Matrix3f::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the matrix y:" << endl << y << endl; Matrix3f x; x = m.fullPivHouseholderQr().solve(y); assert(y.isApprox(m*x)); cout << "Here is a solution x to the equation mx=y:" << endl << x << endl; piqp-octave/src/eigen/doc/snippets/RealSchur_RealSchur_MatrixType.cpp0000644000175100001770000000065514107270226025704 0ustar runnerdockerMatrixXd A = MatrixXd::Random(6,6); cout << "Here is a random 6x6 matrix, A:" << endl << A << endl << endl; RealSchur schur(A); cout << "The orthogonal matrix U is:" << endl << schur.matrixU() << endl; cout << "The quasi-triangular matrix T is:" << endl << schur.matrixT() << endl << endl; MatrixXd U = schur.matrixU(); MatrixXd T = schur.matrixT(); cout << "U * T * U^T = " << endl << U * T * U.transpose() << endl; piqp-octave/src/eigen/doc/snippets/EigenSolver_EigenSolver_MatrixType.cpp0000644000175100001770000000144014107270226026561 0ustar runnerdockerMatrixXd A = MatrixXd::Random(6,6); cout << "Here is a random 6x6 matrix, A:" << endl << A << endl << endl; EigenSolver es(A); cout << "The eigenvalues of A are:" << endl << es.eigenvalues() << endl; cout << "The matrix of eigenvectors, V, is:" << endl << es.eigenvectors() << endl << endl; complex lambda = es.eigenvalues()[0]; cout << "Consider the first eigenvalue, lambda = " << lambda << endl; VectorXcd v = es.eigenvectors().col(0); cout << "If v is the corresponding eigenvector, then lambda * v = " << endl << lambda * v << endl; cout << "... and A * v = " << endl << A.cast >() * v << endl << endl; MatrixXcd D = es.eigenvalues().asDiagonal(); MatrixXcd V = es.eigenvectors(); cout << "Finally, V * D * V^(-1) = " << endl << V * D * V.inverse() << endl; piqp-octave/src/eigen/doc/snippets/TopicAliasing_block_correct.cpp0000644000175100001770000000041614107270226025272 0ustar runnerdockerMatrixXi mat(3,3); mat << 1, 2, 3, 4, 5, 6, 7, 8, 9; cout << "Here is the matrix mat:\n" << mat << endl; // The eval() solves the aliasing problem mat.bottomRightCorner(2,2) = mat.topLeftCorner(2,2).eval(); cout << "After the assignment, mat = \n" << mat << endl; piqp-octave/src/eigen/doc/snippets/Matrix_setConstant_int.cpp0000644000175100001770000000006414107270226024353 0ustar runnerdockerVectorXf v; v.setConstant(3, 5); cout << v << endl; piqp-octave/src/eigen/doc/snippets/EigenSolver_compute.cpp0000644000175100001770000000055114107270226023627 0ustar runnerdockerEigenSolver es; MatrixXf A = MatrixXf::Random(4,4); es.compute(A, /* computeEigenvectors = */ false); cout << "The eigenvalues of A are: " << es.eigenvalues().transpose() << endl; es.compute(A + MatrixXf::Identity(4,4), false); // re-use es to compute eigenvalues of A+I cout << "The eigenvalues of A+I are: " << es.eigenvalues().transpose() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_random.cpp0000644000175100001770000000005214107270226023244 0ustar runnerdockercout << 100 * Matrix2i::Random() << endl; piqp-octave/src/eigen/doc/snippets/SelfAdjointEigenSolver_eigenvalues.cpp0000644000175100001770000000026414107270226026606 0ustar runnerdockerMatrixXd ones = MatrixXd::Ones(3,3); SelfAdjointEigenSolver es(ones); cout << "The eigenvalues of the 3x3 matrix of ones are:" << endl << es.eigenvalues() << endl; piqp-octave/src/eigen/doc/snippets/Slicing_custom_padding_cxx11.cpp0000644000175100001770000000056114107270226025346 0ustar runnerdockerstruct pad { Index size() const { return out_size; } Index operator[] (Index i) const { 
return std::max(0,i-(out_size-in_size)); } Index in_size, out_size; }; Matrix3i A; A.reshaped() = VectorXi::LinSpaced(9,1,9); cout << "Initial matrix A:\n" << A << "\n\n"; MatrixXi B(5,5); B = A(pad{3,5}, pad{3,5}); cout << "A(pad{3,N}, pad{3,N}):\n" << B << "\n\n"; piqp-octave/src/eigen/doc/snippets/Matrix_setRandom_int.cpp0000644000175100001770000000005714107270226024004 0ustar runnerdockerVectorXf v; v.setRandom(3); cout << v << endl; piqp-octave/src/eigen/doc/snippets/Matrix_setZero_int_int.cpp0000644000175100001770000000006014107270226024347 0ustar runnerdockerMatrixXf m; m.setZero(3, 3); cout << m << endl; piqp-octave/src/eigen/doc/snippets/PartialRedux_squaredNorm.cpp0000644000175100001770000000026414107270226024636 0ustar runnerdockerMatrix3d m = Matrix3d::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the square norm of each row:" << endl << m.rowwise().squaredNorm() << endl; piqp-octave/src/eigen/doc/snippets/TopicAliasing_mult2.cpp0000644000175100001770000000034614107270226023524 0ustar runnerdockerMatrixXf matA(2,2), matB(2,2); matA << 2, 0, 0, 2; // Simple but not quite as efficient matB = matA * matA; cout << matB << endl << endl; // More complicated but also more efficient matB.noalias() = matA * matA; cout << matB; piqp-octave/src/eigen/doc/snippets/MatrixBase_cwiseMax.cpp0000644000175100001770000000007414107270226023550 0ustar runnerdockerVector3d v(2,3,4), w(4,2,3); cout << v.cwiseMax(w) << endl; piqp-octave/src/eigen/doc/snippets/DirectionWise_hnormalized.cpp0000644000175100001770000000056014107270226025015 0ustar runnerdockerMatrix4Xd M = Matrix4Xd::Random(4,5); Projective3d P(Matrix4d::Random()); cout << "The matrix M is:" << endl << M << endl << endl; cout << "M.colwise().hnormalized():" << endl << M.colwise().hnormalized() << endl << endl; cout << "P*M:" << endl << P*M << endl << endl; cout << "(P*M).colwise().hnormalized():" << endl << (P*M).colwise().hnormalized() << endl << endl; piqp-octave/src/eigen/doc/snippets/Cwise_plus.cpp0000644000175100001770000000004714107270226021766 0ustar runnerdockerArray3d v(1,2,3); cout << v+5 << endl; piqp-octave/src/eigen/doc/snippets/SelfAdjointEigenSolver_compute_MatrixType2.cpp0000644000175100001770000000061414107270226030222 0ustar runnerdockerMatrixXd X = MatrixXd::Random(5,5); MatrixXd A = X * X.transpose(); X = MatrixXd::Random(5,5); MatrixXd B = X * X.transpose(); GeneralizedSelfAdjointEigenSolver es(A,B,EigenvaluesOnly); cout << "The eigenvalues of the pencil (A,B) are:" << endl << es.eigenvalues() << endl; es.compute(B,A,false); cout << "The eigenvalues of the pencil (B,A) are:" << endl << es.eigenvalues() << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_Map_rowmajor.cpp0000644000175100001770000000045314107270226024172 0ustar runnerdockerint array[8]; for(int i = 0; i < 8; ++i) array[i] = i; cout << "Column-major:\n" << Map >(array) << endl; cout << "Row-major:\n" << Map >(array) << endl; cout << "Row-major using stride:\n" << Map, Unaligned, Stride<1,4> >(array) << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_solve_singular.cpp0000644000175100001770000000040014107270226024561 0ustar runnerdockerMatrix3f A; Vector3f b; A << 1,2,3, 4,5,6, 7,8,9; b << 3, 3, 4; cout << "Here is the matrix A:" << endl << A << endl; cout << "Here is the vector b:" << endl << b << endl; Vector3f x; x = A.lu().solve(b); cout << "The solution is:" << endl << x << endl; piqp-octave/src/eigen/doc/snippets/TopicAliasing_mult1.cpp0000644000175100001770000000011414107270226023514 0ustar runnerdockerMatrixXf 
matA(2,2); matA << 2, 0, 0, 2; matA = matA * matA; cout << matA; piqp-octave/src/eigen/doc/snippets/HessenbergDecomposition_packedMatrix.cpp0000644000175100001770000000074214107270226027171 0ustar runnerdockerMatrix4d A = Matrix4d::Random(4,4); cout << "Here is a random 4x4 matrix:" << endl << A << endl; HessenbergDecomposition hessOfA(A); Matrix4d pm = hessOfA.packedMatrix(); cout << "The packed matrix M is:" << endl << pm << endl; cout << "The upper Hessenberg part corresponds to the matrix H, which is:" << endl << hessOfA.matrixH() << endl; Vector3d hc = hessOfA.householderCoefficients(); cout << "The vector of Householder coefficients is:" << endl << hc << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_setZero.cpp0000644000175100001770000000011014107270226023412 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); m.row(1).setZero(); cout << m << endl; piqp-octave/src/eigen/doc/snippets/AngleAxis_mimic_euler.cpp0000644000175100001770000000032214107270226024072 0ustar runnerdockerMatrix3f m; m = AngleAxisf(0.25*M_PI, Vector3f::UnitX()) * AngleAxisf(0.5*M_PI, Vector3f::UnitY()) * AngleAxisf(0.33*M_PI, Vector3f::UnitZ()); cout << m << endl << "is unitary: " << m.isUnitary() << endl; piqp-octave/src/eigen/doc/snippets/class_FullPivLU.cpp0000644000175100001770000000133414107270226022660 0ustar runnerdockertypedef Matrix Matrix5x3; typedef Matrix Matrix5x5; Matrix5x3 m = Matrix5x3::Random(); cout << "Here is the matrix m:" << endl << m << endl; Eigen::FullPivLU lu(m); cout << "Here is, up to permutations, its LU decomposition matrix:" << endl << lu.matrixLU() << endl; cout << "Here is the L part:" << endl; Matrix5x5 l = Matrix5x5::Identity(); l.block<5,3>(0,0).triangularView() = lu.matrixLU(); cout << l << endl; cout << "Here is the U part:" << endl; Matrix5x3 u = lu.matrixLU().triangularView(); cout << u << endl; cout << "Let us now reconstruct the original matrix m:" << endl; cout << lu.permutationP().inverse() * l * u * lu.permutationQ().inverse() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_start_int.cpp0000644000175100001770000000034214107270226023775 0ustar runnerdockerRowVector4i v = RowVector4i::Random(); cout << "Here is the vector v:" << endl << v << endl; cout << "Here is v.head(2):" << endl << v.head(2) << endl; v.head(2).setZero(); cout << "Now the vector v is:" << endl << v << endl; piqp-octave/src/eigen/doc/snippets/Cwise_not_equal.cpp0000644000175100001770000000006414107270226022771 0ustar runnerdockerArray3d v(1,2,3), w(3,2,1); cout << (v!=w) << endl; piqp-octave/src/eigen/doc/snippets/EigenSolver_pseudoEigenvectors.cpp0000644000175100001770000000065614107270226026036 0ustar runnerdockerMatrixXd A = MatrixXd::Random(6,6); cout << "Here is a random 6x6 matrix, A:" << endl << A << endl << endl; EigenSolver es(A); MatrixXd D = es.pseudoEigenvalueMatrix(); MatrixXd V = es.pseudoEigenvectors(); cout << "The pseudo-eigenvalue matrix D is:" << endl << D << endl; cout << "The pseudo-eigenvector matrix V is:" << endl << V << endl; cout << "Finally, V * D * V^(-1) = " << endl << V * D * V.inverse() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_cwiseQuotient.cpp0000644000175100001770000000010114107270226024622 0ustar runnerdockerVector3d v(2,3,4), w(4,2,3); cout << v.cwiseQuotient(w) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_times_equal.cpp0000644000175100001770000000006714107270226023315 0ustar runnerdockerArray3d v(1,2,3), w(2,3,0); v *= w; cout << v << endl; 
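Editorial aside (not one of the archived snippets): the TopicAliasing_mult1 snippet above works because Eigen evaluates a matrix product into a temporary before assigning, so matA = matA * matA; is safe even though the destination also appears on the right-hand side. The following minimal standalone sketch contrasts that default behaviour with noalias(), which skips the temporary and is only valid when the destination does not alias the operands.
#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;

int main()
{
  MatrixXf A(2,2), C(2,2);
  A << 2, 0,
       0, 2;
  // Safe: the product is first evaluated into a temporary, then copied into A.
  A = A * A;
  // noalias() skips the temporary; valid here only because C does not overlap A.
  C.noalias() = A * A;
  std::cout << A << "\n\n" << C << std::endl;
  return 0;
}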
piqp-octave/src/eigen/doc/snippets/ComplexSchur_matrixT.cpp0000644000175100001770000000040714107270226023775 0ustar runnerdockerMatrixXcf A = MatrixXcf::Random(4,4); cout << "Here is a random 4x4 matrix, A:" << endl << A << endl << endl; ComplexSchur schurOfA(A, false); // false means do not compute U cout << "The triangular matrix T is:" << endl << schurOfA.matrixT() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_boolean_not.cpp0000644000175100001770000000015114107270226023276 0ustar runnerdockerArray3d v(1,2,3); v(1) *= 0.0/0.0; v(2) /= 0.0; cout << v << endl << endl; cout << !isfinite(v) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_minus.cpp0000644000175100001770000000004714107270226022136 0ustar runnerdockerArray3d v(1,2,3); cout << v-5 << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_rightCols_int.cpp0000644000175100001770000000035714107270226024604 0ustar runnerdockerArray44i a = Array44i::Random(); cout << "Here is the array a:" << endl << a << endl; cout << "Here is a.rightCols(2):" << endl; cout << a.rightCols(2) << endl; a.rightCols(2).setZero(); cout << "Now the array a is:" << endl << a << endl; piqp-octave/src/eigen/doc/snippets/PartialRedux_prod.cpp0000644000175100001770000000025114107270226023276 0ustar runnerdockerMatrix3d m = Matrix3d::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the product of each row:" << endl << m.rowwise().prod() << endl; piqp-octave/src/eigen/doc/snippets/PartialRedux_count.cpp0000644000175100001770000000040714107270226023465 0ustar runnerdockerMatrix3d m = Matrix3d::Random(); cout << "Here is the matrix m:" << endl << m << endl; Matrix res = (m.array() >= 0.5).rowwise().count(); cout << "Here is the count of elements larger or equal than 0.5 of each row:" << endl; cout << res << endl; piqp-octave/src/eigen/doc/snippets/Array_variadic_ctor_cxx11.cpp0000644000175100001770000000014614107270226024644 0ustar runnerdockerArray a(1, 2, 3, 4, 5, 6); Array b {1, 2, 3}; cout << a << "\n\n" << b << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_ones_int_int.cpp0000644000175100001770000000004514107270226024456 0ustar runnerdockercout << MatrixXi::Ones(2,3) << endl; piqp-octave/src/eigen/doc/snippets/tut_arithmetic_transpose_conjugate.cpp0000644000175100001770000000042514107270226027033 0ustar runnerdockerMatrixXcf a = MatrixXcf::Random(2,2); cout << "Here is the matrix a\n" << a << endl; cout << "Here is the matrix a^T\n" << a.transpose() << endl; cout << "Here is the conjugate of a\n" << a.conjugate() << endl; cout << "Here is the matrix a^*\n" << a.adjoint() << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_solve_matrix_inverse.cpp0000644000175100001770000000022214107270226025776 0ustar runnerdockerMatrix3f A; Vector3f b; A << 1,2,3, 4,5,6, 7,8,10; b << 3, 3, 4; Vector3f x = A.inverse() * b; cout << "The solution is:" << endl << x << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_SlicingVec.cpp0000644000175100001770000000026414107270226023563 0ustar runnerdockerRowVectorXf v = RowVectorXf::LinSpaced(20,0,19); cout << "Input:" << endl << v << endl; Map > v2(v.data(), v.size()/2); cout << "Even:" << v2 << endl; piqp-octave/src/eigen/doc/snippets/LeastSquaresQR.cpp0000644000175100001770000000026114107270226022526 0ustar runnerdockerMatrixXf A = MatrixXf::Random(3, 2); VectorXf b = VectorXf::Random(3); cout << "The solution using the QR decomposition is:\n" << A.colPivHouseholderQr().solve(b) << endl; 
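Editorial aside (not one of the archived snippets): LeastSquaresQR.cpp above solves an overdetermined system with a column-pivoting QR factorization. As a hedged alternative, the same least-squares problem can be solved with a thin SVD, generally the most accurate dense approach at a higher computational cost; a minimal standalone sketch:
#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;

int main()
{
  MatrixXf A = MatrixXf::Random(3, 2);
  VectorXf b = VectorXf::Random(3);
  // Thin U and V are sufficient for solving min ||A*x - b||.
  VectorXf x = A.jacobiSvd(ComputeThinU | ComputeThinV).solve(b);
  std::cout << "The least-squares solution using the SVD is:\n" << x << std::endl;
  return 0;
}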
piqp-octave/src/eigen/doc/snippets/Map_outer_stride.cpp0000644000175100001770000000021214107270226023150 0ustar runnerdockerint array[12]; for(int i = 0; i < 12; ++i) array[i] = i; cout << Map >(array, 3, 3, OuterStride<>(4)) << endl; piqp-octave/src/eigen/doc/snippets/DenseBase_LinSpaced_seq_deprecated.cpp0000644000175100001770000000021314107270226026447 0ustar runnerdockercout << VectorXi::LinSpaced(Sequential,4,7,10).transpose() << endl; cout << VectorXd::LinSpaced(Sequential,5,0.0,1.0).transpose() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_row.cpp0000644000175100001770000000012214107270226022571 0ustar runnerdockerMatrix3d m = Matrix3d::Identity(); m.row(1) = Vector3d(4,5,6); cout << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_selfadjointView.cpp0000644000175100001770000000055114107270226025125 0ustar runnerdockerMatrix3i m = Matrix3i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the symmetric matrix extracted from the upper part of m:" << endl << Matrix3i(m.selfadjointView()) << endl; cout << "Here is the symmetric matrix extracted from the lower part of m:" << endl << Matrix3i(m.selfadjointView()) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_inverse.cpp0000644000175100001770000000005714107270226022457 0ustar runnerdockerArray3d v(2,3,4); cout << v.inverse() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_atan.cpp0000644000175100001770000000010114107270226021715 0ustar runnerdockerArrayXd v = ArrayXd::LinSpaced(5,0,1); cout << v.atan() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_topLeftCorner_int_int.cpp0000644000175100001770000000040614107270226026301 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.topLeftCorner(2, 2):" << endl; cout << m.topLeftCorner(2, 2) << endl; m.topLeftCorner(2, 2).setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/TopicAliasing_mult3.cpp0000644000175100001770000000012614107270226023521 0ustar runnerdockerMatrixXf matA(2,2); matA << 2, 0, 0, 2; matA.noalias() = matA * matA; cout << matA; piqp-octave/src/eigen/doc/snippets/MatrixBase_identity.cpp0000644000175100001770000000006214107270226023616 0ustar runnerdockercout << Matrix::Identity() << endl; piqp-octave/src/eigen/doc/snippets/DirectionWise_replicate.cpp0000644000175100001770000000027214107270226024451 0ustar runnerdockerMatrixXi m = MatrixXi::Random(2,3); cout << "Here is the matrix m:" << endl << m << endl; cout << "m.colwise().replicate<3>() = ..." 
<< endl; cout << m.colwise().replicate<3>() << endl; piqp-octave/src/eigen/doc/snippets/PartialRedux_minCoeff.cpp0000644000175100001770000000026014107270226024060 0ustar runnerdockerMatrix3d m = Matrix3d::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the minimum of each column:" << endl << m.colwise().minCoeff() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_reshaped_auto.cpp0000644000175100001770000000044314107270226024613 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.reshaped(2, AutoSize):" << endl << m.reshaped(2, AutoSize) << endl; cout << "Here is m.reshaped(AutoSize, fix<8>):" << endl << m.reshaped(AutoSize, fix<8>) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_end.cpp0000644000175100001770000000034614107270226025305 0ustar runnerdockerRowVector4i v = RowVector4i::Random(); cout << "Here is the vector v:" << endl << v << endl; cout << "Here is v.tail(2):" << endl << v.tail<2>() << endl; v.tail<2>().setZero(); cout << "Now the vector v is:" << endl << v << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_isIdentity.cpp0000644000175100001770000000035314107270226024115 0ustar runnerdockerMatrix3d m = Matrix3d::Identity(); m(0,2) = 1e-4; cout << "Here's the matrix m:" << endl << m << endl; cout << "m.isIdentity() returns: " << m.isIdentity() << endl; cout << "m.isIdentity(1e-3) returns: " << m.isIdentity(1e-3) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_random_int.cpp0000644000175100001770000000004514107270226024120 0ustar runnerdockercout << VectorXi::Random(2) << endl; piqp-octave/src/eigen/doc/snippets/SelfAdjointEigenSolver_SelfAdjointEigenSolver_MatrixType.cpp0000644000175100001770000000146014107270226033031 0ustar runnerdockerMatrixXd X = MatrixXd::Random(5,5); MatrixXd A = X + X.transpose(); cout << "Here is a random symmetric 5x5 matrix, A:" << endl << A << endl << endl; SelfAdjointEigenSolver es(A); cout << "The eigenvalues of A are:" << endl << es.eigenvalues() << endl; cout << "The matrix of eigenvectors, V, is:" << endl << es.eigenvectors() << endl << endl; double lambda = es.eigenvalues()[0]; cout << "Consider the first eigenvalue, lambda = " << lambda << endl; VectorXd v = es.eigenvectors().col(0); cout << "If v is the corresponding eigenvector, then lambda * v = " << endl << lambda * v << endl; cout << "... 
and A * v = " << endl << A * v << endl << endl; MatrixXd D = es.eigenvalues().asDiagonal(); MatrixXd V = es.eigenvectors(); cout << "Finally, V * D * V^(-1) = " << endl << V * D * V.inverse() << endl; piqp-octave/src/eigen/doc/snippets/ComplexSchur_compute.cpp0000644000175100001770000000045514107270226024024 0ustar runnerdockerMatrixXcf A = MatrixXcf::Random(4,4); ComplexSchur schur(4); schur.compute(A); cout << "The matrix T in the decomposition of A is:" << endl << schur.matrixT() << endl; schur.compute(A.inverse()); cout << "The matrix T in the decomposition of A^(-1) is:" << endl << schur.matrixT() << endl; piqp-octave/src/eigen/doc/snippets/Tridiagonalization_compute.cpp0000644000175100001770000000061014107270226025234 0ustar runnerdockerTridiagonalization tri; MatrixXf X = MatrixXf::Random(4,4); MatrixXf A = X + X.transpose(); tri.compute(A); cout << "The matrix T in the tridiagonal decomposition of A is: " << endl; cout << tri.matrixT() << endl; tri.compute(2*A); // re-use tri to compute eigenvalues of 2A cout << "The matrix T in the tridiagonal decomposition of 2A is: " << endl; cout << tri.matrixT() << endl; piqp-octave/src/eigen/doc/snippets/FullPivLU_image.cpp0000644000175100001770000000056114107270226022636 0ustar runnerdockerMatrix3d m; m << 1,1,0, 1,3,2, 0,1,1; cout << "Here is the matrix m:" << endl << m << endl; cout << "Notice that the middle column is the sum of the two others, so the " << "columns are linearly dependent." << endl; cout << "Here is a matrix whose columns have the same span but are linearly independent:" << endl << m.fullPivLu().image(m) << endl; piqp-octave/src/eigen/doc/snippets/SelfAdjointEigenSolver_operatorInverseSqrt.cpp0000644000175100001770000000065214107270226030341 0ustar runnerdockerMatrixXd X = MatrixXd::Random(4,4); MatrixXd A = X * X.transpose(); cout << "Here is a random positive-definite matrix, A:" << endl << A << endl << endl; SelfAdjointEigenSolver es(A); cout << "The inverse square root of A is: " << endl; cout << es.operatorInverseSqrt() << endl; cout << "We can also compute it with operatorSqrt() and inverse(). 
That yields: " << endl; cout << es.operatorSqrt().inverse() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_leftCols_int.cpp0000644000175100001770000000035414107270226024416 0ustar runnerdockerArray44i a = Array44i::Random(); cout << "Here is the array a:" << endl << a << endl; cout << "Here is a.leftCols(2):" << endl; cout << a.leftCols(2) << endl; a.leftCols(2).setZero(); cout << "Now the array a is:" << endl << a << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_int_block_int_int_int_int.cpp0000644000175100001770000000041014107270226031743 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the block:" << endl << m.block<2, Dynamic>(1, 1, 2, 3) << endl; m.block<2, Dynamic>(1, 1, 2, 3).setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/Tridiagonalization_householderCoefficients.cpp0000644000175100001770000000045714107270226030434 0ustar runnerdockerMatrix4d X = Matrix4d::Random(4,4); Matrix4d A = X + X.transpose(); cout << "Here is a random symmetric 4x4 matrix:" << endl << A << endl; Tridiagonalization triOfA(A); Vector3d hc = triOfA.householderCoefficients(); cout << "The vector of Householder coefficients is:" << endl << hc << endl; piqp-octave/src/eigen/doc/snippets/PartialRedux_sum.cpp0000644000175100001770000000024414107270226023140 0ustar runnerdockerMatrix3d m = Matrix3d::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the sum of each row:" << endl << m.rowwise().sum() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_greater_equal.cpp0000644000175100001770000000006414107270226023622 0ustar runnerdockerArray3d v(1,2,3), w(3,2,1); cout << (v>=w) << endl; piqp-octave/src/eigen/doc/snippets/HouseholderQR_solve.cpp0000644000175100001770000000054514107270226023610 0ustar runnerdockertypedef Matrix Matrix3x3; Matrix3x3 m = Matrix3x3::Random(); Matrix3f y = Matrix3f::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the matrix y:" << endl << y << endl; Matrix3f x; x = m.householderQr().solve(y); assert(y.isApprox(m*x)); cout << "Here is a solution x to the equation mx=y:" << endl << x << endl; piqp-octave/src/eigen/doc/snippets/Matrix_resize_int.cpp0000644000175100001770000000035314107270226023350 0ustar runnerdockerVectorXd v(10); v.resize(3); RowVector3d w; w.resize(3); // this is legal, but has no effect cout << "v: " << v.rows() << " rows, " << v.cols() << " cols" << endl; cout << "w: " << w.rows() << " rows, " << w.cols() << " cols" << endl; piqp-octave/src/eigen/doc/snippets/ComplexSchur_matrixU.cpp0000644000175100001770000000033514107270226023776 0ustar runnerdockerMatrixXcf A = MatrixXcf::Random(4,4); cout << "Here is a random 4x4 matrix, A:" << endl << A << endl << endl; ComplexSchur schurOfA(A); cout << "The unitary matrix U is:" << endl << schurOfA.matrixU() << endl; piqp-octave/src/eigen/doc/snippets/Matrix_variadic_ctor_cxx11.cpp0000644000175100001770000000015014107270226025025 0ustar runnerdockerMatrix a(1, 2, 3, 4, 5, 6); Matrix b {1, 2, 3}; cout << a << "\n\n" << b << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_rightCols.cpp0000644000175100001770000000036514107270226026476 0ustar runnerdockerArray44i a = Array44i::Random(); cout << "Here is the array a:" << endl << a << endl; cout << "Here is a.rightCols<2>():" << endl; cout << a.rightCols<2>() << endl; a.rightCols<2>().setZero(); cout << "Now the array a is:" << endl << a << endl; 
piqp-octave/src/eigen/doc/snippets/HessenbergDecomposition_compute.cpp0000644000175100001770000000052314107270226026226 0ustar runnerdockerMatrixXcf A = MatrixXcf::Random(4,4); HessenbergDecomposition hd(4); hd.compute(A); cout << "The matrix H in the decomposition of A is:" << endl << hd.matrixH() << endl; hd.compute(2*A); // re-use hd to compute and store decomposition of 2A cout << "The matrix H in the decomposition of 2A is:" << endl << hd.matrixH() << endl; piqp-octave/src/eigen/doc/snippets/Slicing_rawarray_cxx11.cpp0000644000175100001770000000027614107270226024201 0ustar runnerdocker#if EIGEN_HAS_STATIC_ARRAY_TEMPLATE MatrixXi A = MatrixXi::Random(4,6); cout << "Initial matrix A:\n" << A << "\n\n"; cout << "A(all,{4,2,5,5,3}):\n" << A(all,{4,2,5,5,3}) << "\n\n"; #endif piqp-octave/src/eigen/doc/snippets/Tutorial_AdvancedInitialization_LinSpaced.cpp0000644000175100001770000000042014107270226030066 0ustar runnerdockerArrayXXf table(10, 4); table.col(0) = ArrayXf::LinSpaced(10, 0, 90); table.col(1) = M_PI / 180 * table.col(0); table.col(2) = table.col(1).sin(); table.col(3) = table.col(1).cos(); std::cout << " Degrees Radians Sine Cosine\n"; std::cout << table << std::endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_hnormalized.cpp0000644000175100001770000000057114107270226024306 0ustar runnerdockerVector4d v = Vector4d::Random(); Projective3d P(Matrix4d::Random()); cout << "v = " << v.transpose() << "]^T" << endl; cout << "v.hnormalized() = " << v.hnormalized().transpose() << "]^T" << endl; cout << "P*v = " << (P*v).transpose() << "]^T" << endl; cout << "(P*v).hnormalized() = " << (P*v).hnormalized().transpose() << "]^T" << endl; piqp-octave/src/eigen/doc/snippets/VectorwiseOp_homogeneous.cpp0000644000175100001770000000071514107270226024714 0ustar runnerdockerMatrix3Xd M = Matrix3Xd::Random(3,5); Projective3d P(Matrix4d::Random()); cout << "The matrix M is:" << endl << M << endl << endl; cout << "M.colwise().homogeneous():" << endl << M.colwise().homogeneous() << endl << endl; cout << "P * M.colwise().homogeneous():" << endl << P * M.colwise().homogeneous() << endl << endl; cout << "P * M.colwise().homogeneous().hnormalized(): " << endl << (P * M.colwise().homogeneous()).colwise().hnormalized() << endl << endl; piqp-octave/src/eigen/doc/snippets/PartialRedux_norm.cpp0000644000175100001770000000025114107270226023305 0ustar runnerdockerMatrix3d m = Matrix3d::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the norm of each column:" << endl << m.colwise().norm() << endl; piqp-octave/src/eigen/doc/snippets/IOFormat.cpp0000644000175100001770000000113314107270226021326 0ustar runnerdockerstd::string sep = "\n----------------------------------------\n"; Matrix3d m1; m1 << 1.111111, 2, 3.33333, 4, 5, 6, 7, 8.888888, 9; IOFormat CommaInitFmt(StreamPrecision, DontAlignCols, ", ", ", ", "", "", " << ", ";"); IOFormat CleanFmt(4, 0, ", ", "\n", "[", "]"); IOFormat OctaveFmt(StreamPrecision, 0, ", ", ";\n", "", "", "[", "]"); IOFormat HeavyFmt(FullPrecision, 0, ", ", ";\n", "[", "]", "[", "]"); std::cout << m1 << sep; std::cout << m1.format(CommaInitFmt) << sep; std::cout << m1.format(CleanFmt) << sep; std::cout << m1.format(OctaveFmt) << sep; std::cout << m1.format(HeavyFmt) << sep; piqp-octave/src/eigen/doc/snippets/Cwise_floor.cpp0000644000175100001770000000013514107270226022122 0ustar runnerdockerArrayXd v = ArrayXd::LinSpaced(7,-2,2); cout << v << endl << endl; cout << floor(v) << endl; 
piqp-octave/src/eigen/doc/snippets/MatrixBase_setOnes.cpp0000644000175100001770000000011014107270226023377 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); m.row(1).setOnes(); cout << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_diagonal_int.cpp0000644000175100001770000000041614107270226024420 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here are the coefficients on the 1st super-diagonal and 2nd sub-diagonal of m:" << endl << m.diagonal(1).transpose() << endl << m.diagonal(-2).transpose() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_computeInverseAndDetWithCheck.cpp0000644000175100001770000000063214107270226027652 0ustar runnerdockerMatrix3d m = Matrix3d::Random(); cout << "Here is the matrix m:" << endl << m << endl; Matrix3d inverse; bool invertible; double determinant; m.computeInverseAndDetWithCheck(inverse,determinant,invertible); cout << "Its determinant is " << determinant << endl; if(invertible) { cout << "It is invertible, and its inverse is:" << endl << inverse << endl; } else { cout << "It is not invertible." << endl; } piqp-octave/src/eigen/doc/snippets/tut_arithmetic_redux_minmax.cpp0000644000175100001770000000072414107270226025460 0ustar runnerdocker Matrix3f m = Matrix3f::Random(); std::ptrdiff_t i, j; float minOfM = m.minCoeff(&i,&j); cout << "Here is the matrix m:\n" << m << endl; cout << "Its minimum coefficient (" << minOfM << ") is at position (" << i << "," << j << ")\n\n"; RowVector4i v = RowVector4i::Random(); int maxOfV = v.maxCoeff(&i); cout << "Here is the vector v: " << v << endl; cout << "Its maximum coefficient (" << maxOfV << ") is at position " << i << endl; piqp-octave/src/eigen/doc/snippets/ComplexEigenSolver_compute.cpp0000644000175100001770000000143014107270226025154 0ustar runnerdockerMatrixXcf A = MatrixXcf::Random(4,4); cout << "Here is a random 4x4 matrix, A:" << endl << A << endl << endl; ComplexEigenSolver ces; ces.compute(A); cout << "The eigenvalues of A are:" << endl << ces.eigenvalues() << endl; cout << "The matrix of eigenvectors, V, is:" << endl << ces.eigenvectors() << endl << endl; complex lambda = ces.eigenvalues()[0]; cout << "Consider the first eigenvalue, lambda = " << lambda << endl; VectorXcf v = ces.eigenvectors().col(0); cout << "If v is the corresponding eigenvector, then lambda * v = " << endl << lambda * v << endl; cout << "... 
and A * v = " << endl << A * v << endl << endl; cout << "Finally, V * D * V^(-1) = " << endl << ces.eigenvectors() * ces.eigenvalues().asDiagonal() * ces.eigenvectors().inverse() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_less_equal.cpp0000644000175100001770000000006414107270226023137 0ustar runnerdockerArray3d v(1,2,3), w(3,2,1); cout << (v<=w) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_cosh.cpp0000644000175100001770000000010014107270226021725 0ustar runnerdockerArrayXd v = ArrayXd::LinSpaced(5,0,1); cout << cosh(v) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_cwiseMin.cpp0000644000175100001770000000007414107270226023546 0ustar runnerdockerVector3d v(2,3,4), w(4,2,3); cout << v.cwiseMin(w) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_leftCols.cpp0000644000175100001770000000036214107270226026310 0ustar runnerdockerArray44i a = Array44i::Random(); cout << "Here is the array a:" << endl << a << endl; cout << "Here is a.leftCols<2>():" << endl; cout << a.leftCols<2>() << endl; a.leftCols<2>().setZero(); cout << "Now the array a is:" << endl << a << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_end_int.cpp0000644000175100001770000000034214107270226023406 0ustar runnerdockerRowVector4i v = RowVector4i::Random(); cout << "Here is the vector v:" << endl << v << endl; cout << "Here is v.tail(2):" << endl << v.tail(2) << endl; v.tail(2).setZero(); cout << "Now the vector v is:" << endl << v << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_reshaped_vs_resize_1.cpp0000644000175100001770000000037114107270226025640 0ustar runnerdockerMatrixXi m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.reshaped(2, 8):" << endl << m.reshaped(2, 8) << endl; m.resize(2,8); cout << "Here is the matrix m after m.resize(2,8):" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/EigenSolver_eigenvalues.cpp0000644000175100001770000000026014107270226024457 0ustar runnerdockerMatrixXd ones = MatrixXd::Ones(3,3); EigenSolver es(ones, false); cout << "The eigenvalues of the 3x3 matrix of ones are:" << endl << es.eigenvalues() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_exp.cpp0000644000175100001770000000005314107270226021574 0ustar runnerdockerArray3d v(1,2,3); cout << v.exp() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_block_int_int_int_int.cpp0000644000175100001770000000037214107270226026333 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.block(1, 1, 2, 2):" << endl << m.block(1, 1, 2, 2) << endl; m.block(1, 1, 2, 2).setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_topRows_int.cpp0000644000175100001770000000035114107270226024315 0ustar runnerdockerArray44i a = Array44i::Random(); cout << "Here is the array a:" << endl << a << endl; cout << "Here is a.topRows(2):" << endl; cout << a.topRows(2) << endl; a.topRows(2).setZero(); cout << "Now the array a is:" << endl << a << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_int_topLeftCorner_int_int.cpp0000644000175100001770000000044414107270226031722 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.topLeftCorner<2,Dynamic>(2,2):" << endl; cout << m.topLeftCorner<2,Dynamic>(2,2) << endl; m.topLeftCorner<2,Dynamic>(2,2).setZero(); cout << "Now the matrix m is:" << endl << m << endl; 
piqp-octave/src/eigen/doc/snippets/TopicAliasing_cwise.cpp0000644000175100001770000000111714107270226023570 0ustar runnerdockerMatrixXf mat(2,2); mat << 1, 2, 4, 7; cout << "Here is the matrix mat:\n" << mat << endl << endl; mat = 2 * mat; cout << "After 'mat = 2 * mat', mat = \n" << mat << endl << endl; mat = mat - MatrixXf::Identity(2,2); cout << "After the subtraction, it becomes\n" << mat << endl << endl; ArrayXXf arr = mat; arr = arr.square(); cout << "After squaring, it becomes\n" << arr << endl << endl; // Combining all operations in one statement: mat << 1, 2, 4, 7; mat = (2 * mat - MatrixXf::Identity(2,2)).array().square(); cout << "Doing everything at once yields\n" << mat << endl << endl; piqp-octave/src/eigen/doc/snippets/HouseholderSequence_HouseholderSequence.cpp0000644000175100001770000000244414107270226027660 0ustar runnerdockerMatrix3d v = Matrix3d::Random(); cout << "The matrix v is:" << endl; cout << v << endl; Vector3d v0(1, v(1,0), v(2,0)); cout << "The first Householder vector is: v_0 = " << v0.transpose() << endl; Vector3d v1(0, 1, v(2,1)); cout << "The second Householder vector is: v_1 = " << v1.transpose() << endl; Vector3d v2(0, 0, 1); cout << "The third Householder vector is: v_2 = " << v2.transpose() << endl; Vector3d h = Vector3d::Random(); cout << "The Householder coefficients are: h = " << h.transpose() << endl; Matrix3d H0 = Matrix3d::Identity() - h(0) * v0 * v0.adjoint(); cout << "The first Householder reflection is represented by H_0 = " << endl; cout << H0 << endl; Matrix3d H1 = Matrix3d::Identity() - h(1) * v1 * v1.adjoint(); cout << "The second Householder reflection is represented by H_1 = " << endl; cout << H1 << endl; Matrix3d H2 = Matrix3d::Identity() - h(2) * v2 * v2.adjoint(); cout << "The third Householder reflection is represented by H_2 = " << endl; cout << H2 << endl; cout << "Their product is H_0 H_1 H_2 = " << endl; cout << H0 * H1 * H2 << endl; HouseholderSequence hhSeq(v, h); Matrix3d hhSeqAsMatrix(hhSeq); cout << "If we construct a HouseholderSequence from v and h" << endl; cout << "and convert it to a matrix, we get:" << endl; cout << hhSeqAsMatrix << endl; piqp-octave/src/eigen/doc/snippets/tut_arithmetic_transpose_aliasing.cpp0000644000175100001770000000027414107270226026645 0ustar runnerdockerMatrix2i a; a << 1, 2, 3, 4; cout << "Here is the matrix a:\n" << a << endl; a = a.transpose(); // !!! do NOT do this !!! cout << "and the result of the aliasing effect:\n" << a << endl; piqp-octave/src/eigen/doc/snippets/Tridiagonalization_Tridiagonalization_MatrixType.cpp0000644000175100001770000000067514107270226031614 0ustar runnerdockerMatrixXd X = MatrixXd::Random(5,5); MatrixXd A = X + X.transpose(); cout << "Here is a random symmetric 5x5 matrix:" << endl << A << endl << endl; Tridiagonalization triOfA(A); MatrixXd Q = triOfA.matrixQ(); cout << "The orthogonal matrix Q is:" << endl << Q << endl; MatrixXd T = triOfA.matrixT(); cout << "The tridiagonal matrix T is:" << endl << T << endl << endl; cout << "Q * T * Q^T = " << endl << Q * T * Q.transpose() << endl; piqp-octave/src/eigen/doc/snippets/Tridiagonalization_diagonal.cpp0000644000175100001770000000105014107270226025335 0ustar runnerdockerMatrixXcd X = MatrixXcd::Random(4,4); MatrixXcd A = X + X.adjoint(); cout << "Here is a random self-adjoint 4x4 matrix:" << endl << A << endl << endl; Tridiagonalization triOfA(A); MatrixXd T = triOfA.matrixT(); cout << "The tridiagonal matrix T is:" << endl << T << endl << endl; cout << "We can also extract the diagonals of T directly ..." 
<< endl; VectorXd diag = triOfA.diagonal(); cout << "The diagonal is:" << endl << diag << endl; VectorXd subdiag = triOfA.subDiagonal(); cout << "The subdiagonal is:" << endl << subdiag << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_AdvancedInitialization_ThreeWays.cpp0000644000175100001770000000155614107270226030152 0ustar runnerdockerconst int size = 6; MatrixXd mat1(size, size); mat1.topLeftCorner(size/2, size/2) = MatrixXd::Zero(size/2, size/2); mat1.topRightCorner(size/2, size/2) = MatrixXd::Identity(size/2, size/2); mat1.bottomLeftCorner(size/2, size/2) = MatrixXd::Identity(size/2, size/2); mat1.bottomRightCorner(size/2, size/2) = MatrixXd::Zero(size/2, size/2); std::cout << mat1 << std::endl << std::endl; MatrixXd mat2(size, size); mat2.topLeftCorner(size/2, size/2).setZero(); mat2.topRightCorner(size/2, size/2).setIdentity(); mat2.bottomLeftCorner(size/2, size/2).setIdentity(); mat2.bottomRightCorner(size/2, size/2).setZero(); std::cout << mat2 << std::endl << std::endl; MatrixXd mat3(size, size); mat3 << MatrixXd::Zero(size/2, size/2), MatrixXd::Identity(size/2, size/2), MatrixXd::Identity(size/2, size/2), MatrixXd::Zero(size/2, size/2); std::cout << mat3 << std::endl; piqp-octave/src/eigen/doc/snippets/GeneralizedEigenSolver.cpp0000644000175100001770000000071014107270226024242 0ustar runnerdockerGeneralizedEigenSolver<MatrixXf> ges; MatrixXf A = MatrixXf::Random(4,4); MatrixXf B = MatrixXf::Random(4,4); ges.compute(A, B); cout << "The (complex) numerators of the generalized eigenvalues are: " << ges.alphas().transpose() << endl; cout << "The (real) denominators of the generalized eigenvalues are: " << ges.betas().transpose() << endl; cout << "The (complex) generalized eigenvalues are (alphas./beta): " << ges.eigenvalues().transpose() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_int_bottomRightCorner.cpp0000644000175100001770000000042514107270226031062 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.bottomRightCorner<2,2>():" << endl; cout << m.bottomRightCorner<2,2>() << endl; m.bottomRightCorner<2,2>().setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_AdvancedInitialization_Zero.cpp0000644000175100001770000000051414107270226027147 0ustar runnerdockerstd::cout << "A fixed-size array:\n"; Array33f a1 = Array33f::Zero(); std::cout << a1 << "\n\n"; std::cout << "A one-dimensional dynamic-size array:\n"; ArrayXf a2 = ArrayXf::Zero(3); std::cout << a2 << "\n\n"; std::cout << "A two-dimensional dynamic-size array:\n"; ArrayXXf a3 = ArrayXXf::Zero(3, 4); std::cout << a3 << "\n"; piqp-octave/src/eigen/doc/snippets/ColPivHouseholderQR_solve.cpp0000644000175100001770000000050414107270226024720 0ustar runnerdockerMatrix3f m = Matrix3f::Random(); Matrix3f y = Matrix3f::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the matrix y:" << endl << y << endl; Matrix3f x; x = m.colPivHouseholderQr().solve(y); assert(y.isApprox(m*x)); cout << "Here is a solution x to the equation mx=y:" << endl << x << endl; piqp-octave/src/eigen/doc/snippets/Cwise_isNaN.cpp0000644000175100001770000000014514107270226022012 0ustar runnerdockerArray3d v(1,2,3); v(1) *= 0.0/0.0; v(2) /= 0.0; cout << v << endl << endl; cout << isnan(v) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_ceil.cpp0000644000175100001770000000013414107270226021714 0ustar runnerdockerArrayXd v = ArrayXd::LinSpaced(7,-2,2); cout << v << endl << endl; cout << 
ceil(v) << endl; piqp-octave/src/eigen/doc/snippets/HouseholderQR_householderQ.cpp0000644000175100001770000000045414107270226025121 0ustar runnerdockerMatrixXf A(MatrixXf::Random(5,3)), thinQ(MatrixXf::Identity(5,3)), Q; A.setRandom(); HouseholderQR qr(A); Q = qr.householderQ(); thinQ = qr.householderQ() * thinQ; std::cout << "The complete unitary matrix Q is:\n" << Q << "\n\n"; std::cout << "The thin matrix Q is:\n" << thinQ << "\n\n"; piqp-octave/src/eigen/doc/snippets/Matrix_setRandom_int_int.cpp0000644000175100001770000000006214107270226024652 0ustar runnerdockerMatrixXf m; m.setRandom(3, 3); cout << m << endl; piqp-octave/src/eigen/doc/snippets/Cwise_log10.cpp0000644000175100001770000000005714107270226021726 0ustar runnerdockerArray4d v(-1,0,1,2); cout << log10(v) << endl; piqp-octave/src/eigen/doc/snippets/.krazy0000644000175100001770000000004214107270226020301 0ustar runnerdockerEXCLUDE copyright EXCLUDE license piqp-octave/src/eigen/doc/snippets/LeastSquaresNormalEquations.cpp0000644000175100001770000000030014107270226025317 0ustar runnerdockerMatrixXf A = MatrixXf::Random(3, 2); VectorXf b = VectorXf::Random(3); cout << "The solution using normal equations is:\n" << (A.transpose() * A).ldlt().solve(A.transpose() * b) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_arg.cpp0000644000175100001770000000012514107270226021551 0ustar runnerdockerArrayXcf v = ArrayXcf::Random(3); cout << v << endl << endl; cout << arg(v) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_operatorNorm.cpp0000644000175100001770000000020414107270226024452 0ustar runnerdockerMatrixXd ones = MatrixXd::Ones(3,3); cout << "The operator norm of the 3x3 matrix of ones is " << ones.operatorNorm() << endl; piqp-octave/src/eigen/doc/snippets/Array_initializer_list_vector_cxx11.cpp0000644000175100001770000000007714107270226026776 0ustar runnerdockerArray v {{1, 2, 3, 4, 5}}; cout << v << endl; piqp-octave/src/eigen/doc/snippets/JacobiSVD_basic.cpp0000644000175100001770000000114614107270226022557 0ustar runnerdockerMatrixXf m = MatrixXf::Random(3,2); cout << "Here is the matrix m:" << endl << m << endl; JacobiSVD svd(m, ComputeThinU | ComputeThinV); cout << "Its singular values are:" << endl << svd.singularValues() << endl; cout << "Its left singular vectors are the columns of the thin U matrix:" << endl << svd.matrixU() << endl; cout << "Its right singular vectors are the columns of the thin V matrix:" << endl << svd.matrixV() << endl; Vector3f rhs(1, 0, 0); cout << "Now consider this rhs vector:" << endl << rhs << endl; cout << "A least-squares solution of m*x = rhs is:" << endl << svd.solve(rhs) << endl; piqp-octave/src/eigen/doc/snippets/Matrix_resize_int_NoChange.cpp0000644000175100001770000000015714107270226025114 0ustar runnerdockerMatrixXd m(3,4); m.resize(5, NoChange); cout << "m: " << m.rows() << " rows, " << m.cols() << " cols" << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_rowwise.cpp0000644000175100001770000000043114107270226023464 0ustar runnerdockerMatrix3d m = Matrix3d::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the sum of each row:" << endl << m.rowwise().sum() << endl; cout << "Here is the maximum absolute value of each row:" << endl << m.cwiseAbs().rowwise().maxCoeff() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_array_power_array.cpp0000644000175100001770000000035014107270226024530 0ustar runnerdockerArray x(8,25,3), e(1./3.,0.5,2.); cout << "[" << x << "]^[" << e << "] = " << x.pow(e) << endl; // using ArrayBase::pow cout << "[" << x << "]^[" 
<< e << "] = " << pow(x,e) << endl; // using Eigen::pow piqp-octave/src/eigen/doc/snippets/SelfAdjointEigenSolver_operatorSqrt.cpp0000644000175100001770000000055314107270226027005 0ustar runnerdockerMatrixXd X = MatrixXd::Random(4,4); MatrixXd A = X * X.transpose(); cout << "Here is a random positive-definite matrix, A:" << endl << A << endl << endl; SelfAdjointEigenSolver es(A); MatrixXd sqrtA = es.operatorSqrt(); cout << "The square root of A is: " << endl << sqrtA << endl; cout << "If we square this, we get: " << endl << sqrtA*sqrtA << endl; piqp-octave/src/eigen/doc/snippets/BiCGSTAB_simple.cpp0000644000175100001770000000061214107270226022436 0ustar runnerdocker int n = 10000; VectorXd x(n), b(n); SparseMatrix A(n,n); /* ... fill A and b ... */ BiCGSTAB > solver; solver.compute(A); x = solver.solve(b); std::cout << "#iterations: " << solver.iterations() << std::endl; std::cout << "estimated error: " << solver.error() << std::endl; /* ... update b ... */ x = solver.solve(b); // solve again piqp-octave/src/eigen/doc/snippets/MatrixBase_cwiseInverse.cpp0000644000175100001770000000012714107270226024435 0ustar runnerdockerMatrixXd m(2,3); m << 2, 0.5, 1, 3, 0.25, 1; cout << m.cwiseInverse() << endl; piqp-octave/src/eigen/doc/snippets/Tridiagonalization_decomposeInPlace.cpp0000644000175100001770000000104614107270226026776 0ustar runnerdockerMatrixXd X = MatrixXd::Random(5,5); MatrixXd A = X + X.transpose(); cout << "Here is a random symmetric 5x5 matrix:" << endl << A << endl << endl; VectorXd diag(5); VectorXd subdiag(4); VectorXd hcoeffs(4); // Scratch space for householder reflector. internal::tridiagonalization_inplace(A, diag, subdiag, hcoeffs, true); cout << "The orthogonal matrix Q is:" << endl << A << endl; cout << "The diagonal of the tridiagonal matrix T is:" << endl << diag << endl; cout << "The subdiagonal of the tridiagonal matrix T is:" << endl << subdiag << endl; piqp-octave/src/eigen/doc/snippets/Cwise_isInf.cpp0000644000175100001770000000014514107270226022052 0ustar runnerdockerArray3d v(1,2,3); v(1) *= 0.0/0.0; v(2) /= 0.0; cout << v << endl << endl; cout << isinf(v) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_zero_int_int.cpp0000644000175100001770000000004514107270226024471 0ustar runnerdockercout << MatrixXi::Zero(2,3) << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_solve_reuse_decomposition.cpp0000644000175100001770000000056014107270226027023 0ustar runnerdockerMatrix3f A(3,3); A << 1,2,3, 4,5,6, 7,8,10; PartialPivLU luOfA(A); // compute LU decomposition of A Vector3f b; b << 3,3,4; Vector3f x; x = luOfA.solve(b); cout << "The solution with right-hand side (3,3,4) is:" << endl; cout << x << endl; b << 1,1,1; x = luOfA.solve(b); cout << "The solution with right-hand side (1,1,1) is:" << endl; cout << x << endl; piqp-octave/src/eigen/doc/snippets/SelfAdjointEigenSolver_eigenvectors.cpp0000644000175100001770000000030114107270226026764 0ustar runnerdockerMatrixXd ones = MatrixXd::Ones(3,3); SelfAdjointEigenSolver es(ones); cout << "The first eigenvector of the 3x3 matrix of ones is:" << endl << es.eigenvectors().col(0) << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_std_sort_rows_cxx11.cpp0000644000175100001770000000033214107270226025470 0ustar runnerdockerArrayXXi A = ArrayXXi::Random(4,4).abs(); cout << "Here is the initial matrix A:\n" << A << "\n"; for(auto row : A.rowwise()) std::sort(row.begin(), row.end()); cout << "Here is the sorted matrix A:\n" << A << "\n"; 
piqp-octave/src/eigen/doc/snippets/Matrix_resize_int_int.cpp0000644000175100001770000000062714107270226024226 0ustar runnerdockerMatrixXd m(2,3); m << 1,2,3,4,5,6; cout << "here's the 2x3 matrix m:" << endl << m << endl; cout << "let's resize m to 3x2. This is a conservative resizing because 2*3==3*2." << endl; m.resize(3,2); cout << "here's the 3x2 matrix m:" << endl << m << endl; cout << "now let's resize m to size 2x2. This is NOT a conservative resizing, so it becomes uninitialized:" << endl; m.resize(2,2); cout << m << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_AdvancedInitialization_Join.cpp0000644000175100001770000000041214107270226027124 0ustar runnerdockerRowVectorXd vec1(3); vec1 << 1, 2, 3; std::cout << "vec1 = " << vec1 << std::endl; RowVectorXd vec2(4); vec2 << 1, 4, 9, 16; std::cout << "vec2 = " << vec2 << std::endl; RowVectorXd joined(7); joined << vec1, vec2; std::cout << "joined = " << joined << std::endl; piqp-octave/src/eigen/doc/snippets/TopicAliasing_mult4.cpp0000644000175100001770000000014614107270226023524 0ustar runnerdockerMatrixXf A(2,2), B(3,2); B << 2, 0, 0, 3, 1, 1; A << 2, 0, 0, -2; A = (B * A).cwiseAbs(); cout << A; piqp-octave/src/eigen/doc/snippets/Slicing_arrayexpr.cpp0000644000175100001770000000024714107270226023340 0ustar runnerdockerArrayXi ind(5); ind<<4,2,5,5,3; MatrixXi A = MatrixXi::Random(4,6); cout << "Initial matrix A:\n" << A << "\n\n"; cout << "A(all,ind-1):\n" << A(all,ind-1) << "\n\n"; piqp-octave/src/eigen/doc/snippets/Tutorial_commainit_01b.cpp0000644000175100001770000000016114107270226024153 0ustar runnerdockerMatrix3f m; m.row(0) << 1, 2, 3; m.block(1,0,2,2) << 4, 5, 7, 8; m.col(2).tail(2) << 6, 9; std::cout << m; piqp-octave/src/eigen/doc/snippets/MatrixBase_zero.cpp0000644000175100001770000000010714107270226022744 0ustar runnerdockercout << Matrix2d::Zero() << endl; cout << RowVector4i::Zero() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_square.cpp0000644000175100001770000000005614107270226022303 0ustar runnerdockerArray3d v(2,3,4); cout << v.square() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_array.cpp0000644000175100001770000000010614107270226023102 0ustar runnerdockerVector3d v(1,2,3); v.array() += 3; v.array() -= 2; cout << v << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_random_int_int.cpp0000644000175100001770000000004714107270226024774 0ustar runnerdockercout << MatrixXi::Random(2,3) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_reverse.cpp0000644000175100001770000000062714107270226023447 0ustar runnerdockerMatrixXi m = MatrixXi::Random(3,4); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the reverse of m:" << endl << m.reverse() << endl; cout << "Here is the coefficient (1,0) in the reverse of m:" << endl << m.reverse()(1,0) << endl; cout << "Let us overwrite this coefficient with the value 4." 
<< endl; m.reverse()(1,0) = 4; cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/CMakeLists.txt0000644000175100001770000000313014107270226021701 0ustar runnerdockerfile(GLOB snippets_SRCS "*.cpp") add_custom_target(all_snippets) foreach(snippet_src ${snippets_SRCS}) get_filename_component(snippet ${snippet_src} NAME_WE) set(compile_snippet_target compile_${snippet}) set(compile_snippet_src ${compile_snippet_target}.cpp) if((NOT ${snippet_src} MATCHES "cxx11") OR EIGEN_COMPILER_SUPPORT_CPP11) file(READ ${snippet_src} snippet_source_code) configure_file(${CMAKE_CURRENT_SOURCE_DIR}/compile_snippet.cpp.in ${CMAKE_CURRENT_BINARY_DIR}/${compile_snippet_src}) add_executable(${compile_snippet_target} ${CMAKE_CURRENT_BINARY_DIR}/${compile_snippet_src}) if(EIGEN_STANDARD_LIBRARIES_TO_LINK_TO) target_link_libraries(${compile_snippet_target} ${EIGEN_STANDARD_LIBRARIES_TO_LINK_TO}) endif() if(${snippet_src} MATCHES "cxx11") set_target_properties(${compile_snippet_target} PROPERTIES COMPILE_FLAGS "-std=c++11") endif() if(${snippet_src} MATCHES "deprecated") set_target_properties(${compile_snippet_target} PROPERTIES COMPILE_FLAGS "-DEIGEN_NO_DEPRECATED_WARNING") endif() add_custom_command( TARGET ${compile_snippet_target} POST_BUILD COMMAND ${compile_snippet_target} ARGS >${CMAKE_CURRENT_BINARY_DIR}/${snippet}.out ) add_dependencies(all_snippets ${compile_snippet_target}) set_source_files_properties(${CMAKE_CURRENT_BINARY_DIR}/${compile_snippet_src} PROPERTIES OBJECT_DEPENDS ${snippet_src}) else() message("skip snippet ${snippet_src} because compiler does not support C++11") endif() endforeach() piqp-octave/src/eigen/doc/snippets/Cwise_sign.cpp0000644000175100001770000000005514107270226021742 0ustar runnerdockerArray3d v(-3,5,0); cout << v.sign() << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_solve_multiple_rhs.cpp0000644000175100001770000000047614107270226025461 0ustar runnerdockerMatrix3f A(3,3); A << 1,2,3, 4,5,6, 7,8,10; Matrix B; B << 3,1, 3,1, 4,1; Matrix X; X = A.fullPivLu().solve(B); cout << "The solution with right-hand side (3,3,4) is:" << endl; cout << X.col(0) << endl; cout << "The solution with right-hand side (1,1,1) is:" << endl; cout << X.col(1) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_zero_int.cpp0000644000175100001770000000011114107270226023611 0ustar runnerdockercout << RowVectorXi::Zero(4) << endl; cout << VectorXf::Zero(2) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_log.cpp0000644000175100001770000000005314107270226021561 0ustar runnerdockerArray3d v(1,2,3); cout << v.log() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_all.cpp0000644000175100001770000000101314107270226022532 0ustar runnerdockerVector3f boxMin(Vector3f::Zero()), boxMax(Vector3f::Ones()); Vector3f p0 = Vector3f::Random(), p1 = Vector3f::Random().cwiseAbs(); // let's check if p0 and p1 are inside the axis aligned box defined by the corners boxMin,boxMax: cout << "Is (" << p0.transpose() << ") inside the box: " << ((boxMin.array()p0.array()).all()) << endl; cout << "Is (" << p1.transpose() << ") inside the box: " << ((boxMin.array()p1.array()).all()) << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_solve_triangular_inplace.cpp0000644000175100001770000000023714107270226026610 0ustar runnerdockerMatrix3f A; Vector3f b; A << 1,2,3, 0,5,6, 0,0,10; b << 3, 3, 4; A.triangularView().solveInPlace(b); cout << "The solution is:" << endl << b << endl; 
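Editorial aside (not one of the archived snippets): Tutorial_solve_triangular_inplace.cpp above solves a triangular system directly, which cannot fail for a nonsingular triangular matrix. For factorizations that can fail, such as a Cholesky decomposition applied to a matrix that is not positive definite, info() reports whether the computation succeeded; a minimal standalone sketch of that pattern with LLT:
#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;

int main()
{
  Matrix3f A;
  Vector3f b;
  A << 4, 1, 0,
       1, 3, 1,
       0, 1, 2;      // symmetric positive definite
  b << 3, 3, 4;
  LLT<Matrix3f> llt(A);
  if (llt.info() == Success)
    std::cout << "The solution is:\n" << llt.solve(b) << std::endl;
  else
    std::cout << "LLT failed: A is not positive definite." << std::endl;
  return 0;
}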
piqp-octave/src/eigen/doc/snippets/MatrixBase_colwise_iterator_cxx11.cpp0000644000175100001770000000072714107270226026377 0ustar runnerdockerMatrix3i m = Matrix3i::Random(); cout << "Here is the initial matrix m:" << endl << m << endl; int i = -1; for(auto c: m.colwise()) { c *= i; ++i; } cout << "Here is the matrix m after the for-range-loop:" << endl << m << endl; auto cols = m.colwise(); auto it = std::find_if(cols.cbegin(), cols.cend(), [](Matrix3i::ConstColXpr x) { return x.squaredNorm() == 0; }); cout << "The first empty column is: " << distance(cols.cbegin(),it) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_ones.cpp0000644000175100001770000000011314107270226022726 0ustar runnerdockercout << Matrix2d::Ones() << endl; cout << 6 * RowVector4i::Ones() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_boolean_or.cpp0000644000175100001770000000010014107270226023110 0ustar runnerdockerArray3d v(-1,2,1), w(-3,2,3); cout << ((v DataMatrix; // let's generate some samples on the 3D plane of equation z = 2x+3y (with some noise) DataMatrix samples = DataMatrix::Random(12,2); VectorXf elevations = 2*samples.col(0) + 3*samples.col(1) + VectorXf::Random(12)*0.1; // and let's solve samples * [x y]^T = elevations in least square sense: Matrix xy = (samples.adjoint() * samples).llt().solve((samples.adjoint()*elevations)); cout << xy << endl; piqp-octave/src/eigen/doc/snippets/RealSchur_compute.cpp0000644000175100001770000000052714107270226023300 0ustar runnerdockerMatrixXf A = MatrixXf::Random(4,4); RealSchur schur(4); schur.compute(A, /* computeU = */ false); cout << "The matrix T in the decomposition of A is:" << endl << schur.matrixT() << endl; schur.compute(A.inverse(), /* computeU = */ false); cout << "The matrix T in the decomposition of A^(-1) is:" << endl << schur.matrixT() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_set.cpp0000644000175100001770000000044214107270226022562 0ustar runnerdockerMatrix3i m1; m1 << 1, 2, 3, 4, 5, 6, 7, 8, 9; cout << m1 << endl << endl; Matrix3i m2 = Matrix3i::Identity(); m2.block(0,0, 2,2) << 10, 11, 12, 13; cout << m2 << endl << endl; Vector2i v1; v1 << 14, 15; m2 << v1.transpose(), 16, v1, m1.block(1,1,2,2); cout << m2 << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_fixedBlock_int_int.cpp0000644000175100001770000000042214107270226025563 0ustar runnerdockerMatrix4d m = Vector4d(1,2,3,4).asDiagonal(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.fixed<2, 2>(2, 2):" << endl << m.block<2, 2>(2, 2) << endl; m.block<2, 2>(2, 0) = m.block<2, 2>(2, 2); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/ComplexEigenSolver_eigenvalues.cpp0000644000175100001770000000033014107270226026005 0ustar runnerdockerMatrixXcf ones = MatrixXcf::Ones(3,3); ComplexEigenSolver ces(ones, /* computeEigenvectors = */ false); cout << "The eigenvalues of the 3x3 matrix of ones are:" << endl << ces.eigenvalues() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_reshaped_fixed.cpp0000644000175100001770000000026214107270226024741 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.reshaped(fix<2>,fix<8>):" << endl << m.reshaped(fix<2>,fix<8>) << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_ReshapeMat2Vec.cpp0000644000175100001770000000045314107270226024306 0ustar runnerdockerMatrixXf M1(3,3); // Column-major storage M1 << 1, 2, 3, 4, 5, 6, 7, 8, 9; Map v1(M1.data(), M1.size()); cout << "v1:" << endl << v1 << endl; Matrix 
M2(M1); Map v2(M2.data(), M2.size()); cout << "v2:" << endl << v2 << endl; piqp-octave/src/eigen/doc/snippets/Map_simple.cpp0000644000175100001770000000013514107270226021735 0ustar runnerdockerint array[9]; for(int i = 0; i < 9; ++i) array[i] = i; cout << Map(array) << endl; piqp-octave/src/eigen/doc/snippets/tut_arithmetic_transpose_inplace.cpp0000644000175100001770000000025614107270226026471 0ustar runnerdockerMatrixXf a(2,3); a << 1, 2, 3, 4, 5, 6; cout << "Here is the initial matrix a:\n" << a << endl; a.transposeInPlace(); cout << "and after being transposed:\n" << a << endl; piqp-octave/src/eigen/doc/snippets/Matrix_setConstant_int_int.cpp0000644000175100001770000000006714107270226025230 0ustar runnerdockerMatrixXf m; m.setConstant(3, 3, 5); cout << m << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_solve_triangular.cpp0000644000175100001770000000042114107270226025110 0ustar runnerdockerMatrix3f A; Vector3f b; A << 1,2,3, 0,5,6, 0,0,10; b << 3, 3, 4; cout << "Here is the matrix A:" << endl << A << endl; cout << "Here is the vector b:" << endl << b << endl; Vector3f x = A.triangularView().solve(b); cout << "The solution is:" << endl << x << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_colwise.cpp0000644000175100001770000000043714107270226023440 0ustar runnerdockerMatrix3d m = Matrix3d::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the sum of each column:" << endl << m.colwise().sum() << endl; cout << "Here is the maximum absolute value of each column:" << endl << m.cwiseAbs().colwise().maxCoeff() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_int_bottomRightCorner_int_int.cpp0000644000175100001770000000046014107270226032605 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.bottomRightCorner<2,Dynamic>(2,2):" << endl; cout << m.bottomRightCorner<2,Dynamic>(2,2) << endl; m.bottomRightCorner<2,Dynamic>(2,2).setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/Cwise_round.cpp0000644000175100001770000000013514107270226022130 0ustar runnerdockerArrayXd v = ArrayXd::LinSpaced(7,-2,2); cout << v << endl << endl; cout << round(v) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_tan.cpp0000644000175100001770000000007214107270226021563 0ustar runnerdockerArray3d v(M_PI, M_PI/2, M_PI/3); cout << v.tan() << endl; piqp-octave/src/eigen/doc/snippets/Matrix_Map_stride.cpp0000644000175100001770000000023514107270226023263 0ustar runnerdockerMatrix4i A; A << 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16; std::cout << Matrix2i::Map(&A(1,1),Stride<8,2>()) << std::endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_reshaped_int_int.cpp0000644000175100001770000000024014107270226025302 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.reshaped(2, 8):" << endl << m.reshaped(2, 8) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_less.cpp0000644000175100001770000000006314107270226021747 0ustar runnerdockerArray3d v(1,2,3), w(3,2,1); cout << (v(1,1):" << endl << m.block<2,2>(1,1) << endl; m.block<2,2>(1,1).setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_int_topLeftCorner.cpp0000644000175100001770000000041114107270226030170 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.topLeftCorner<2,2>():" << 
endl; cout << m.topLeftCorner<2,2>() << endl; m.topLeftCorner<2,2>().setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_cwiseAbs.cpp0000644000175100001770000000012014107270226023520 0ustar runnerdockerMatrixXd m(2,3); m << 2, -4, 6, -5, 1, 0; cout << m.cwiseAbs() << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_AdvancedInitialization_CommaTemporary.cpp0000644000175100001770000000025014107270226031164 0ustar runnerdockerMatrixXf mat = MatrixXf::Random(2, 3); std::cout << mat << std::endl << std::endl; mat = (MatrixXf(2,2) << 0, 1, 1, 0).finished() * mat; std::cout << mat << std::endl; piqp-octave/src/eigen/doc/snippets/Matrix_initializer_list_vector_cxx11.cpp0000644000175100001770000000005014107270226027153 0ustar runnerdockerVectorXi v {{1, 2}}; cout << v << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_isOnes.cpp0000644000175100001770000000033014107270226023223 0ustar runnerdockerMatrix3d m = Matrix3d::Ones(); m(0,2) += 1e-4; cout << "Here's the matrix m:" << endl << m << endl; cout << "m.isOnes() returns: " << m.isOnes() << endl; cout << "m.isOnes(1e-3) returns: " << m.isOnes(1e-3) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_asDiagonal.cpp0000644000175100001770000000007014107270226024026 0ustar runnerdockercout << Matrix3i(Vector3i(2,5,6).asDiagonal()) << endl; piqp-octave/src/eigen/doc/snippets/Matrix_initializer_list_23_cxx11.cpp0000644000175100001770000000007414107270226026103 0ustar runnerdockerMatrixXd m { {1, 2, 3}, {4, 5, 6} }; cout << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_cwiseSign.cpp0000644000175100001770000000012014107270226023713 0ustar runnerdockerMatrixXd m(2,3); m << 2, -4, 6, -5, 1, 0; cout << m.cwiseSign() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_bottomRightCorner_int_int.cpp0000644000175100001770000000042214107270226027164 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.bottomRightCorner(2, 2):" << endl; cout << m.bottomRightCorner(2, 2) << endl; m.bottomRightCorner(2, 2).setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_homogeneous.cpp0000644000175100001770000000074714107270226024327 0ustar runnerdockerVector3d v = Vector3d::Random(), w; Projective3d P(Matrix4d::Random()); cout << "v = [" << v.transpose() << "]^T" << endl; cout << "h.homogeneous() = [" << v.homogeneous().transpose() << "]^T" << endl; cout << "(P * v.homogeneous()) = [" << (P * v.homogeneous()).transpose() << "]^T" << endl; cout << "(P * v.homogeneous()).hnormalized() = [" << (P * v.homogeneous()).eval().hnormalized().transpose() << "]^T" << endl; piqp-octave/src/eigen/doc/snippets/Map_general_stride.cpp0000644000175100001770000000024414107270226023434 0ustar runnerdockerint array[24]; for(int i = 0; i < 24; ++i) array[i] = i; cout << Map > (array, 3, 3, Stride(8, 2)) << endl; piqp-octave/src/eigen/doc/snippets/DirectionWise_replicate_int.cpp0000644000175100001770000000026314107270226025323 0ustar runnerdockerVector3i v = Vector3i::Random(); cout << "Here is the vector v:" << endl << v << endl; cout << "v.rowwise().replicate(5) = ..." 
<< endl; cout << v.rowwise().replicate(5) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_eval.cpp0000644000175100001770000000072314107270226022720 0ustar runnerdockerMatrix2f M = Matrix2f::Random(); Matrix2f m; m = M; cout << "Here is the matrix m:" << endl << m << endl; cout << "Now we want to copy a column into a row." << endl; cout << "If we do m.col(1) = m.row(0), then m becomes:" << endl; m.col(1) = m.row(0); cout << m << endl << "which is wrong!" << endl; cout << "Now let us instead do m.col(1) = m.row(0).eval(). Then m becomes" << endl; m = M; m.col(1) = m.row(0).eval(); cout << m << endl << "which is right." << endl; piqp-octave/src/eigen/doc/snippets/Cwise_acos.cpp0000644000175100001770000000006714107270226021732 0ustar runnerdockerArray3d v(0, sqrt(2.)/2, 1); cout << v.acos() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_isOrthogonal.cpp0000644000175100001770000000044514107270226024442 0ustar runnerdockerVector3d v(1,0,0); Vector3d w(1e-4,0,1); cout << "Here's the vector v:" << endl << v << endl; cout << "Here's the vector w:" << endl << w << endl; cout << "v.isOrthogonal(w) returns: " << v.isOrthogonal(w) << endl; cout << "v.isOrthogonal(w,1e-3) returns: " << v.isOrthogonal(w,1e-3) << endl; piqp-octave/src/eigen/doc/snippets/Matrix_setZero_int.cpp0000644000175100001770000000005514107270226023501 0ustar runnerdockerVectorXf v; v.setZero(3); cout << v << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_setIdentity.cpp0000644000175100001770000000012314107270226024270 0ustar runnerdockerMatrix4i m = Matrix4i::Zero(); m.block<3,3>(1,0).setIdentity(); cout << m << endl; piqp-octave/src/eigen/doc/snippets/SparseMatrix_coeffs.cpp0000644000175100001770000000063314107270226023621 0ustar runnerdockerSparseMatrix A(3,3); A.insert(1,2) = 0; A.insert(0,1) = 1; A.insert(2,0) = 2; A.makeCompressed(); cout << "The matrix A is:" << endl << MatrixXd(A) << endl; cout << "it has " << A.nonZeros() << " stored non zero coefficients that are: " << A.coeffs().transpose() << endl; A.coeffs() += 10; cout << "After adding 10 to every stored non zero coefficient, the matrix A is:" << endl << MatrixXd(A) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_select.cpp0000644000175100001770000000016314107270226023246 0ustar runnerdockerMatrixXi m(3, 3); m << 1, 2, 3, 4, 5, 6, 7, 8, 9; m = (m.array() >= 5).select(-m, m); cout << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_topRightCorner_int_int.cpp0000644000175100001770000000041114107270226026460 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.topRightCorner(2, 2):" << endl; cout << m.topRightCorner(2, 2) << endl; m.topRightCorner(2, 2).setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_cwiseSqrt.cpp0000644000175100001770000000006214107270226023751 0ustar runnerdockerVector3d v(1,2,4); cout << v.cwiseSqrt() << endl; piqp-octave/src/eigen/doc/snippets/Map_placement_new.cpp0000644000175100001770000000026714107270226023273 0ustar runnerdockerint data[] = {1,2,3,4,5,6,7,8,9}; Map v(data,4); cout << "The mapped vector v is: " << v << "\n"; new (&v) Map(data+4,5); cout << "Now v is: " << v << "\n"; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_start.cpp0000644000175100001770000000034614107270226025674 0ustar runnerdockerRowVector4i v = RowVector4i::Random(); cout << "Here is the vector v:" << endl << v << endl; cout << "Here is v.head(2):" << endl << v.head<2>() << endl; 
v.head<2>().setZero(); cout << "Now the vector v is:" << endl << v << endl; piqp-octave/src/eigen/doc/snippets/Array_initializer_list_23_cxx11.cpp0000644000175100001770000000007414107270226025715 0ustar runnerdockerArrayXXi a { {1, 2, 3}, {3, 4, 5} }; cout << a << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_segment.cpp0000644000175100001770000000036414107270226026201 0ustar runnerdockerRowVector4i v = RowVector4i::Random(); cout << "Here is the vector v:" << endl << v << endl; cout << "Here is v.segment<2>(1):" << endl << v.segment<2>(1) << endl; v.segment<2>(2).setZero(); cout << "Now the vector v is:" << endl << v << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_reshaped_vs_resize_2.cpp0000644000175100001770000000056414107270226025645 0ustar runnerdockerMatrix m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.reshaped(2, 8):" << endl << m.reshaped(2, 8) << endl; cout << "Here is m.reshaped(2, 8):" << endl << m.reshaped(2, 8) << endl; m.resize(2,8); cout << "Here is the matrix m after m.resize(2,8):" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/Cwise_rint.cpp0000644000175100001770000000013414107270226021754 0ustar runnerdockerArrayXd v = ArrayXd::LinSpaced(7,-2,2); cout << v << endl << endl; cout << rint(v) << endl; piqp-octave/src/eigen/doc/snippets/SelfAdjointView_operatorNorm.cpp0000644000175100001770000000023514107270226025454 0ustar runnerdockerMatrixXd ones = MatrixXd::Ones(3,3); cout << "The operator norm of the 3x3 matrix of ones is " << ones.selfadjointView().operatorNorm() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_segment_int_int.cpp0000644000175100001770000000036414107270226025160 0ustar runnerdockerRowVector4i v = RowVector4i::Random(); cout << "Here is the vector v:" << endl << v << endl; cout << "Here is v.segment(1, 2):" << endl << v.segment(1, 2) << endl; v.segment(1, 2).setZero(); cout << "Now the vector v is:" << endl << v << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_applyOnTheLeft.cpp0000644000175100001770000000031714107270226024666 0ustar runnerdockerMatrix3f A = Matrix3f::Random(3,3), B; B << 0,1,0, 0,0,1, 1,0,0; cout << "At start, A = " << endl << A << endl; A.applyOnTheLeft(B); cout << "After applyOnTheLeft, A = " << endl << A << endl; piqp-octave/src/eigen/doc/snippets/Matrix_setIdentity_int_int.cpp0000644000175100001770000000006414107270226025225 0ustar runnerdockerMatrixXf m; m.setIdentity(3, 3); cout << m << endl; piqp-octave/src/eigen/doc/snippets/tut_matrix_assignment_resizing.cpp0000644000175100001770000000030114107270226026204 0ustar runnerdockerMatrixXf a(2,2); std::cout << "a is of size " << a.rows() << "x" << a.cols() << std::endl; MatrixXf b(3,3); a = b; std::cout << "a is now of size " << a.rows() << "x" << a.cols() << std::endl; piqp-octave/src/eigen/doc/snippets/Cwise_equal_equal.cpp0000644000175100001770000000006414107270226023300 0ustar runnerdockerArray3d v(1,2,3), w(3,2,1); cout << (v==w) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_abs.cpp0000644000175100001770000000005514107270226021547 0ustar runnerdockerArray3d v(1,-2,-3); cout << v.abs() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_diagonal.cpp0000644000175100001770000000027414107270226023550 0ustar runnerdockerMatrix3i m = Matrix3i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here are the coefficients on the main diagonal of m:" << endl << m.diagonal() << endl; 
piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_int_topRightCorner.cpp0000644000175100001770000000041414107270226030356 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.topRightCorner<2,2>():" << endl; cout << m.topRightCorner<2,2>() << endl; m.topRightCorner<2,2>().setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/PartialRedux_maxCoeff.cpp0000644000175100001770000000026014107270226024062 0ustar runnerdockerMatrix3d m = Matrix3d::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is the maximum of each column:" << endl << m.colwise().maxCoeff() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_replicate_int_int.cpp0000644000175100001770000000024314107270226025462 0ustar runnerdockerVector3i v = Vector3i::Random(); cout << "Here is the vector v:" << endl << v << endl; cout << "v.replicate(2,5) = ..." << endl; cout << v.replicate(2,5) << endl; piqp-octave/src/eigen/doc/snippets/PartialPivLU_solve.cpp0000644000175100001770000000056414107270226023401 0ustar runnerdockerMatrixXd A = MatrixXd::Random(3,3); MatrixXd B = MatrixXd::Random(3,2); cout << "Here is the invertible matrix A:" << endl << A << endl; cout << "Here is the matrix B:" << endl << B << endl; MatrixXd X = A.lu().solve(B); cout << "Here is the (unique) solution X to the equation AX=B:" << endl << X << endl; cout << "Relative error: " << (A*X-B).norm() / B.norm() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_diagonal_template_int.cpp0000644000175100001770000000042214107270226026310 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here are the coefficients on the 1st super-diagonal and 2nd sub-diagonal of m:" << endl << m.diagonal<1>().transpose() << endl << m.diagonal<-2>().transpose() << endl; piqp-octave/src/eigen/doc/snippets/Tridiagonalization_packedMatrix.cpp0000644000175100001770000000061214107270226026176 0ustar runnerdockerMatrix4d X = Matrix4d::Random(4,4); Matrix4d A = X + X.transpose(); cout << "Here is a random symmetric 4x4 matrix:" << endl << A << endl; Tridiagonalization triOfA(A); Matrix4d pm = triOfA.packedMatrix(); cout << "The packed matrix M is:" << endl << pm << endl; cout << "The diagonal and subdiagonal corresponds to the matrix T, which is:" << endl << triOfA.matrixT() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_abs2.cpp0000644000175100001770000000005614107270226021632 0ustar runnerdockerArray3d v(1,-2,-3); cout << v.abs2() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_plus_equal.cpp0000644000175100001770000000005514107270226023154 0ustar runnerdockerArray3d v(1,2,3); v += 5; cout << v << endl; piqp-octave/src/eigen/doc/snippets/Slicing_stdvector_cxx11.cpp0000644000175100001770000000024414107270226024361 0ustar runnerdockerstd::vector ind{4,2,5,5,3}; MatrixXi A = MatrixXi::Random(4,6); cout << "Initial matrix A:\n" << A << "\n\n"; cout << "A(all,ind):\n" << A(all,ind) << "\n\n"; piqp-octave/src/eigen/doc/snippets/Tutorial_commainit_02.cpp0000644000175100001770000000032714107270226024016 0ustar runnerdockerint rows=5, cols=5; MatrixXf m(rows,cols); m << (Matrix3f() << 1, 2, 3, 4, 5, 6, 7, 8, 9).finished(), MatrixXf::Zero(3,cols-3), MatrixXf::Zero(rows-3,3), MatrixXf::Identity(rows-3,cols-3); cout << m; piqp-octave/src/eigen/doc/snippets/MatrixBase_cwiseAbs2.cpp0000644000175100001770000000012114107270226023603 0ustar runnerdockerMatrixXd m(2,3); m << 2, -4, 6, 
-5, 1, 0; cout << m.cwiseAbs2() << endl; piqp-octave/src/eigen/doc/snippets/SelfAdjointEigenSolver_SelfAdjointEigenSolver_MatrixType2.cpp0000644000175100001770000000147214107270226033116 0ustar runnerdockerMatrixXd X = MatrixXd::Random(5,5); MatrixXd A = X + X.transpose(); cout << "Here is a random symmetric matrix, A:" << endl << A << endl; X = MatrixXd::Random(5,5); MatrixXd B = X * X.transpose(); cout << "and a random postive-definite matrix, B:" << endl << B << endl << endl; GeneralizedSelfAdjointEigenSolver es(A,B); cout << "The eigenvalues of the pencil (A,B) are:" << endl << es.eigenvalues() << endl; cout << "The matrix of eigenvectors, V, is:" << endl << es.eigenvectors() << endl << endl; double lambda = es.eigenvalues()[0]; cout << "Consider the first eigenvalue, lambda = " << lambda << endl; VectorXd v = es.eigenvectors().col(0); cout << "If v is the corresponding eigenvector, then A * v = " << endl << A * v << endl; cout << "... and lambda * B * v = " << endl << lambda * B * v << endl << endl; piqp-octave/src/eigen/doc/snippets/RealQZ_compute.cpp0000644000175100001770000000146314107270226022546 0ustar runnerdockerMatrixXf A = MatrixXf::Random(4,4); MatrixXf B = MatrixXf::Random(4,4); RealQZ qz(4); // preallocate space for 4x4 matrices qz.compute(A,B); // A = Q S Z, B = Q T Z // print original matrices and result of decomposition cout << "A:\n" << A << "\n" << "B:\n" << B << "\n"; cout << "S:\n" << qz.matrixS() << "\n" << "T:\n" << qz.matrixT() << "\n"; cout << "Q:\n" << qz.matrixQ() << "\n" << "Z:\n" << qz.matrixZ() << "\n"; // verify precision cout << "\nErrors:" << "\n|A-QSZ|: " << (A-qz.matrixQ()*qz.matrixS()*qz.matrixZ()).norm() << ", |B-QTZ|: " << (B-qz.matrixQ()*qz.matrixT()*qz.matrixZ()).norm() << "\n|QQ* - I|: " << (qz.matrixQ()*qz.matrixQ().adjoint() - MatrixXf::Identity(4,4)).norm() << ", |ZZ* - I|: " << (qz.matrixZ()*qz.matrixZ().adjoint() - MatrixXf::Identity(4,4)).norm() << "\n"; piqp-octave/src/eigen/doc/snippets/Matrix_resize_NoChange_int.cpp0000644000175100001770000000015714107270226025114 0ustar runnerdockerMatrixXd m(3,4); m.resize(NoChange, 5); cout << "m: " << m.rows() << " rows, " << m.cols() << " cols" << endl; piqp-octave/src/eigen/doc/snippets/Cwise_product.cpp0000644000175100001770000000021514107270226022460 0ustar runnerdockerArray33i a = Array33i::Random(), b = Array33i::Random(); Array33i c = a * b; cout << "a:\n" << a << "\nb:\n" << b << "\nc:\n" << c << endl; piqp-octave/src/eigen/doc/snippets/TopicStorageOrders_example.cpp0000644000175100001770000000101514107270226025142 0ustar runnerdockerMatrix Acolmajor; Acolmajor << 8, 2, 2, 9, 9, 1, 4, 4, 3, 5, 4, 5; cout << "The matrix A:" << endl; cout << Acolmajor << endl << endl; cout << "In memory (column-major):" << endl; for (int i = 0; i < Acolmajor.size(); i++) cout << *(Acolmajor.data() + i) << " "; cout << endl << endl; Matrix Arowmajor = Acolmajor; cout << "In memory (row-major):" << endl; for (int i = 0; i < Arowmajor.size(); i++) cout << *(Arowmajor.data() + i) << " "; cout << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_commainit_01.cpp0000644000175100001770000000010614107270226024010 0ustar runnerdockerMatrix3f m; m << 1, 2, 3, 4, 5, 6, 7, 8, 9; std::cout << m; piqp-octave/src/eigen/doc/snippets/BiCGSTAB_step_by_step.cpp0000644000175100001770000000062714107270226023653 0ustar runnerdocker int n = 10000; VectorXd x(n), b(n); SparseMatrix A(n,n); /* ... fill A and b ... 
*/ BiCGSTAB > solver(A); // start from a random solution x = VectorXd::Random(n); solver.setMaxIterations(1); int i = 0; do { x = solver.solveWithGuess(b,x); std::cout << i << " : " << solver.error() << std::endl; ++i; } while (solver.info()!=Success && i<100); piqp-octave/src/eigen/doc/snippets/Jacobi_makeJacobi.cpp0000644000175100001770000000044514107270226023147 0ustar runnerdockerMatrix2f m = Matrix2f::Random(); m = (m + m.adjoint()).eval(); JacobiRotation J; J.makeJacobi(m, 0, 1); cout << "Here is the matrix m:" << endl << m << endl; m.applyOnTheLeft(0, 1, J.adjoint()); m.applyOnTheRight(0, 1, J); cout << "Here is the matrix J' * m * J:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/Cwise_boolean_and.cpp0000644000175100001770000000010014107270226023232 0ustar runnerdockerArray3d v(-1,2,1), w(-3,2,3); cout << ((v > (array, 6) // the inner stride has already been passed as template parameter << endl; piqp-octave/src/eigen/doc/snippets/SelfAdjointEigenSolver_compute_MatrixType.cpp0000644000175100001770000000055514107270226030144 0ustar runnerdockerSelfAdjointEigenSolver es(4); MatrixXf X = MatrixXf::Random(4,4); MatrixXf A = X + X.transpose(); es.compute(A); cout << "The eigenvalues of A are: " << es.eigenvalues().transpose() << endl; es.compute(A + MatrixXf::Identity(4,4)); // re-use es to compute eigenvalues of A+I cout << "The eigenvalues of A+I are: " << es.eigenvalues().transpose() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_bottomRows_int.cpp0000644000175100001770000000036214107270226025021 0ustar runnerdockerArray44i a = Array44i::Random(); cout << "Here is the array a:" << endl << a << endl; cout << "Here is a.bottomRows(2):" << endl; cout << a.bottomRows(2) << endl; a.bottomRows(2).setZero(); cout << "Now the array a is:" << endl << a << endl; piqp-octave/src/eigen/doc/snippets/Cwise_sinh.cpp0000644000175100001770000000010014107270226021732 0ustar runnerdockerArrayXd v = ArrayXd::LinSpaced(5,0,1); cout << sinh(v) << endl; piqp-octave/src/eigen/doc/snippets/Cwise_isFinite.cpp0000644000175100001770000000015014107270226022550 0ustar runnerdockerArray3d v(1,2,3); v(1) *= 0.0/0.0; v(2) /= 0.0; cout << v << endl << endl; cout << isfinite(v) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_array_const.cpp0000644000175100001770000000035214107270226024313 0ustar runnerdockerVector3d v(-1,2,-3); cout << "the absolute values:" << endl << v.array().abs() << endl; cout << "the absolute values plus one:" << endl << v.array().abs()+1 << endl; cout << "sum of the squares: " << v.array().square().sum() << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_noalias.cpp0000644000175100001770000000020114107270226023406 0ustar runnerdockerMatrix2d a, b, c; a << 1,2,3,4; b << 5,6,7,8; c.noalias() = a * b; // this computes the product directly to c cout << c << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_replicate.cpp0000644000175100001770000000025214107270226023736 0ustar runnerdockerMatrixXi m = MatrixXi::Random(2,3); cout << "Here is the matrix m:" << endl << m << endl; cout << "m.replicate<3,2>() = ..." 
<< endl; cout << m.replicate<3,2>() << endl; piqp-octave/src/eigen/doc/snippets/Tutorial_range_for_loop_1d_cxx11.cpp0000644000175100001770000000016514107270226026140 0ustar runnerdockerVectorXi v = VectorXi::Random(4); cout << "Here is the vector v:\n"; for(auto x : v) cout << x << " "; cout << "\n"; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_topRows.cpp0000644000175100001770000000035714107270226026216 0ustar runnerdockerArray44i a = Array44i::Random(); cout << "Here is the array a:" << endl << a << endl; cout << "Here is a.topRows<2>():" << endl; cout << a.topRows<2>() << endl; a.topRows<2>().setZero(); cout << "Now the array a is:" << endl << a << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_int_bottomLeftCorner_int_int.cpp0000644000175100001770000000045514107270226032426 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.bottomLeftCorner<2,Dynamic>(2,2):" << endl; cout << m.bottomLeftCorner<2,Dynamic>(2,2) << endl; m.bottomLeftCorner<2,Dynamic>(2,2).setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_inverse.cpp0000644000175100001770000000022114107270226023435 0ustar runnerdockerMatrix3d m = Matrix3d::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Its inverse is:" << endl << m.inverse() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_asin.cpp0000644000175100001770000000006714107270226021737 0ustar runnerdockerArray3d v(0, sqrt(2.)/2, 1); cout << v.asin() << endl; piqp-octave/src/eigen/doc/snippets/EigenSolver_eigenvectors.cpp0000644000175100001770000000026514107270226024652 0ustar runnerdockerMatrixXd ones = MatrixXd::Ones(3,3); EigenSolver es(ones); cout << "The first eigenvector of the 3x3 matrix of ones is:" << endl << es.eigenvectors().col(0) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_isUnitary.cpp0000644000175100001770000000034714107270226023762 0ustar runnerdockerMatrix3d m = Matrix3d::Identity(); m(0,2) = 1e-4; cout << "Here's the matrix m:" << endl << m << endl; cout << "m.isUnitary() returns: " << m.isUnitary() << endl; cout << "m.isUnitary(1e-3) returns: " << m.isUnitary(1e-3) << endl; piqp-octave/src/eigen/doc/snippets/ComplexEigenSolver_eigenvectors.cpp0000644000175100001770000000030214107270226026172 0ustar runnerdockerMatrixXcf ones = MatrixXcf::Ones(3,3); ComplexEigenSolver ces(ones); cout << "The first eigenvector of the 3x3 matrix of ones is:" << endl << ces.eigenvectors().col(0) << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_int_topRightCorner_int_int.cpp0000644000175100001770000000044714107270226032110 0ustar runnerdockerMatrix4i m = Matrix4i::Random(); cout << "Here is the matrix m:" << endl << m << endl; cout << "Here is m.topRightCorner<2,Dynamic>(2,2):" << endl; cout << m.topRightCorner<2,Dynamic>(2,2) << endl; m.topRightCorner<2,Dynamic>(2,2).setZero(); cout << "Now the matrix m is:" << endl << m << endl; piqp-octave/src/eigen/doc/snippets/Cwise_quotient.cpp0000644000175100001770000000006114107270226022647 0ustar runnerdockerArray3d v(2,3,4), w(4,2,3); cout << v/w << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_computeInverseWithCheck.cpp0000644000175100001770000000047614107270226026600 0ustar runnerdockerMatrix3d m = Matrix3d::Random(); cout << "Here is the matrix m:" << endl << m << endl; Matrix3d inverse; bool invertible; m.computeInverseWithCheck(inverse,invertible); if(invertible) { cout << "It is invertible, and its inverse 
is:" << endl << inverse << endl; } else { cout << "It is not invertible." << endl; } piqp-octave/src/eigen/doc/snippets/MatrixBase_cast.cpp0000644000175100001770000000016714107270226022725 0ustar runnerdockerMatrix2d md = Matrix2d::Identity() * 0.45; Matrix2f mf = Matrix2f::Identity(); cout << md + mf.cast() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_sqrt.cpp0000644000175100001770000000005414107270226021772 0ustar runnerdockerArray3d v(1,2,4); cout << v.sqrt() << endl; piqp-octave/src/eigen/doc/snippets/Cwise_sin.cpp0000644000175100001770000000007214107270226021572 0ustar runnerdockerArray3d v(M_PI, M_PI/2, M_PI/3); cout << v.sin() << endl; piqp-octave/src/eigen/doc/snippets/Jacobi_makeGivens.cpp0000644000175100001770000000035414107270226023212 0ustar runnerdockerVector2f v = Vector2f::Random(); JacobiRotation G; G.makeGivens(v.x(), v.y()); cout << "Here is the vector v:" << endl << v << endl; v.applyOnTheLeft(0, 1, G.adjoint()); cout << "Here is the vector J' * v:" << endl << v << endl; piqp-octave/src/eigen/doc/snippets/MatrixBase_cwiseArg.cpp0000644000175100001770000000013714107270226023534 0ustar runnerdockerMatrixXcf v = MatrixXcf::Random(2, 3); cout << v << endl << endl; cout << v.cwiseArg() << endl;piqp-octave/src/eigen/doc/snippets/MatrixBase_template_int_bottomRows.cpp0000644000175100001770000000037014107270226026713 0ustar runnerdockerArray44i a = Array44i::Random(); cout << "Here is the array a:" << endl << a << endl; cout << "Here is a.bottomRows<2>():" << endl; cout << a.bottomRows<2>() << endl; a.bottomRows<2>().setZero(); cout << "Now the array a is:" << endl << a << endl; piqp-octave/src/eigen/doc/tutorial.cpp0000644000175100001770000000476014107270226017655 0ustar runnerdocker#include int main(int argc, char *argv[]) { std::cout.precision(2); // demo static functions Eigen::Matrix3f m3 = Eigen::Matrix3f::Random(); Eigen::Matrix4f m4 = Eigen::Matrix4f::Identity(); std::cout << "*** Step 1 ***\nm3:\n" << m3 << "\nm4:\n" << m4 << std::endl; // demo non-static set... functions m4.setZero(); m3.diagonal().setOnes(); std::cout << "*** Step 2 ***\nm3:\n" << m3 << "\nm4:\n" << m4 << std::endl; // demo fixed-size block() expression as lvalue and as rvalue m4.block<3,3>(0,1) = m3; m3.row(2) = m4.block<1,3>(2,0); std::cout << "*** Step 3 ***\nm3:\n" << m3 << "\nm4:\n" << m4 << std::endl; // demo dynamic-size block() { int rows = 3, cols = 3; m4.block(0,1,3,3).setIdentity(); std::cout << "*** Step 4 ***\nm4:\n" << m4 << std::endl; } // demo vector blocks m4.diagonal().block(1,2).setOnes(); std::cout << "*** Step 5 ***\nm4.diagonal():\n" << m4.diagonal() << std::endl; std::cout << "m4.diagonal().start(3)\n" << m4.diagonal().start(3) << std::endl; // demo coeff-wise operations m4 = m4.cwise()*m4; m3 = m3.cwise().cos(); std::cout << "*** Step 6 ***\nm3:\n" << m3 << "\nm4:\n" << m4 << std::endl; // sums of coefficients std::cout << "*** Step 7 ***\n m4.sum(): " << m4.sum() << std::endl; std::cout << "m4.col(2).sum(): " << m4.col(2).sum() << std::endl; std::cout << "m4.colwise().sum():\n" << m4.colwise().sum() << std::endl; std::cout << "m4.rowwise().sum():\n" << m4.rowwise().sum() << std::endl; // demo intelligent auto-evaluation m4 = m4 * m4; // auto-evaluates so no aliasing problem (performance penalty is low) Eigen::Matrix4f other = (m4 * m4).lazy(); // forces lazy evaluation m4 = m4 + m4; // here Eigen goes for lazy evaluation, as with most expressions m4 = -m4 + m4 + 5 * m4; // same here, Eigen chooses lazy evaluation for all that. 
m4 = m4 * (m4 + m4); // here Eigen chooses to first evaluate m4 + m4 into a temporary. // indeed, here it is an optimization to cache this intermediate result. m3 = m3 * m4.block<3,3>(1,1); // here Eigen chooses NOT to evaluate block() into a temporary // because accessing coefficients of that block expression is not more costly than accessing // coefficients of a plain matrix. m4 = m4 * m4.transpose(); // same here, lazy evaluation of the transpose. m4 = m4 * m4.transpose().eval(); // forces immediate evaluation of the transpose std::cout << "*** Step 8 ***\nm3:\n" << m3 << "\nm4:\n" << m4 << std::endl; } piqp-octave/src/eigen/doc/CustomizingEigen_CustomScalar.dox0000644000175100001770000001050714107270226023761 0ustar runnerdockernamespace Eigen { /** \page TopicCustomizing_CustomScalar Using custom scalar types \anchor user_defined_scalars By default, Eigen currently supports standard floating-point types (\c float, \c double, \c std::complex, \c std::complex, \c long \c double), as well as all native integer types (e.g., \c int, \c unsigned \c int, \c short, etc.), and \c bool. On x86-64 systems, \c long \c double permits to locally enforces the use of x87 registers with extended accuracy (in comparison to SSE). In order to add support for a custom type \c T you need: -# make sure the common operator (+,-,*,/,etc.) are supported by the type \c T -# add a specialization of struct Eigen::NumTraits (see \ref NumTraits) -# define the math functions that makes sense for your type. This includes standard ones like sqrt, pow, sin, tan, conj, real, imag, etc, as well as abs2 which is Eigen specific. (see the file Eigen/src/Core/MathFunctions.h) The math function should be defined in the same namespace than \c T, or in the \c std namespace though that second approach is not recommended. Here is a concrete example adding support for the Adolc's \c adouble type. Adolc is an automatic differentiation library. The type \c adouble is basically a real value tracking the values of any number of partial derivatives. \code #ifndef ADOLCSUPPORT_H #define ADOLCSUPPORT_H #define ADOLC_TAPELESS #include #include namespace Eigen { template<> struct NumTraits : NumTraits // permits to get the epsilon, dummy_precision, lowest, highest functions { typedef adtl::adouble Real; typedef adtl::adouble NonInteger; typedef adtl::adouble Nested; enum { IsComplex = 0, IsInteger = 0, IsSigned = 1, RequireInitialization = 1, ReadCost = 1, AddCost = 3, MulCost = 3 }; }; } namespace adtl { inline const adouble& conj(const adouble& x) { return x; } inline const adouble& real(const adouble& x) { return x; } inline adouble imag(const adouble&) { return 0.; } inline adouble abs(const adouble& x) { return fabs(x); } inline adouble abs2(const adouble& x) { return x*x; } } #endif // ADOLCSUPPORT_H \endcode This other example adds support for the \c mpq_class type from GMP. It shows in particular how to change the way Eigen picks the best pivot during LU factorization. It selects the coefficient with the highest score, where the score is by default the absolute value of a number, but we can define a different score, for instance to prefer pivots with a more compact representation (this is an example, not a recommendation). Note that the scores should always be non-negative and only zero is allowed to have a score of zero. Also, this can interact badly with thresholds for inexact scalar types. 
\code #include #include #include namespace Eigen { template<> struct NumTraits : GenericNumTraits { typedef mpq_class Real; typedef mpq_class NonInteger; typedef mpq_class Nested; static inline Real epsilon() { return 0; } static inline Real dummy_precision() { return 0; } static inline int digits10() { return 0; } enum { IsInteger = 0, IsSigned = 1, IsComplex = 0, RequireInitialization = 1, ReadCost = 6, AddCost = 150, MulCost = 100 }; }; namespace internal { template<> struct scalar_score_coeff_op { struct result_type : boost::totally_ordered1 { std::size_t len; result_type(int i = 0) : len(i) {} // Eigen uses Score(0) and Score() result_type(mpq_class const& q) : len(mpz_size(q.get_num_mpz_t())+ mpz_size(q.get_den_mpz_t())-1) {} friend bool operator<(result_type x, result_type y) { // 0 is the worst possible pivot if (x.len == 0) return y.len > 0; if (y.len == 0) return false; // Prefer a pivot with a small representation return x.len > y.len; } friend bool operator==(result_type x, result_type y) { // Only used to test if the score is 0 return x.len == y.len; } }; result_type operator()(mpq_class const& x) const { return x; } }; } } \endcode */ } piqp-octave/src/eigen/doc/TopicLazyEvaluation.dox0000644000175100001770000001425214107270226021765 0ustar runnerdockernamespace Eigen { /** \page TopicLazyEvaluation Lazy Evaluation and Aliasing Executive summary: %Eigen has intelligent compile-time mechanisms to enable lazy evaluation and removing temporaries where appropriate. It will handle aliasing automatically in most cases, for example with matrix products. The automatic behavior can be overridden manually by using the MatrixBase::eval() and MatrixBase::noalias() methods. When you write a line of code involving a complex expression such as \code mat1 = mat2 + mat3 * (mat4 + mat5); \endcode %Eigen determines automatically, for each sub-expression, whether to evaluate it into a temporary variable. Indeed, in certain cases it is better to evaluate a sub-expression into a temporary variable, while in other cases it is better to avoid that. A traditional math library without expression templates always evaluates all sub-expressions into temporaries. So with this code, \code vec1 = vec2 + vec3; \endcode a traditional library would evaluate \c vec2 + vec3 into a temporary \c vec4 and then copy \c vec4 into \c vec1. This is of course inefficient: the arrays are traversed twice, so there are a lot of useless load/store operations. Expression-templates-based libraries can avoid evaluating sub-expressions into temporaries, which in many cases results in large speed improvements. This is called lazy evaluation as an expression is getting evaluated as late as possible. In %Eigen all expressions are lazy-evaluated. More precisely, an expression starts to be evaluated once it is assigned to a matrix. Until then nothing happens beyond constructing the abstract expression tree. In contrast to most other expression-templates-based libraries, however, %Eigen might choose to evaluate some sub-expressions into temporaries. There are two reasons for that: first, pure lazy evaluation is not always a good choice for performance; second, pure lazy evaluation can be very dangerous, for example with matrix products: doing mat = mat*mat gives a wrong result if the matrix product is directly evaluated within the destination matrix, because of the way matrix product works. 
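For instance, with a concrete 2x2 matrix (values chosen arbitrarily for illustration), the default assignment handles this aliasing correctly, whereas forcing the product to be evaluated directly into its destination does not:

\code
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
  Matrix2d a, b;
  a << 1, 2,
       3, 4;
  b = a;
  a = a * a;           // safe: the product is first evaluated into a temporary, then copied into a
  b.noalias() = b * b; // forces in-place evaluation; b aliases itself, so the result is wrong
  std::cout << "a:\n" << a << "\nb:\n" << b << "\n";
}
\endcode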
For these reasons, %Eigen has intelligent compile-time mechanisms to determine automatically which sub-expression should be evaluated into a temporary variable. So in the basic example, \code mat1 = mat2 + mat3; \endcode %Eigen chooses not to introduce any temporary. Thus the arrays are traversed only once, producing optimized code. If you really want to force immediate evaluation, use \link MatrixBase::eval() eval()\endlink: \code mat1 = (mat2 + mat3).eval(); \endcode Here is now a more involved example: \code mat1 = -mat2 + mat3 + 5 * mat4; \endcode Here again %Eigen won't introduce any temporary, thus producing a single fused evaluation loop, which is clearly the correct choice. \section TopicLazyEvaluationWhichExpr Which sub-expressions are evaluated into temporaries? The default evaluation strategy is to fuse the operations in a single loop, and %Eigen will choose it except in a few circumstances. The first circumstance in which %Eigen chooses to evaluate a sub-expression is when it sees an assignment a = b; and the expression \c b has the evaluate-before-assigning \link flags flag\endlink. The most important example of such an expression is the \link Product matrix product expression\endlink. For example, when you do \code mat = mat * mat; \endcode %Eigen will evaluate mat * mat into a temporary matrix, and then copies it into the original \c mat. This guarantees a correct result as we saw above that lazy evaluation gives wrong results with matrix products. It also doesn't cost much, as the cost of the matrix product itself is much higher. Note that this temporary is introduced at evaluation time only, that is, within operator= in this example. The expression mat * mat still return a abstract product type. What if you know that the result does no alias the operand of the product and want to force lazy evaluation? Then use \link MatrixBase::noalias() .noalias()\endlink instead. Here is an example: \code mat1.noalias() = mat2 * mat2; \endcode Here, since we know that mat2 is not the same matrix as mat1, we know that lazy evaluation is not dangerous, so we may force lazy evaluation. Concretely, the effect of noalias() here is to bypass the evaluate-before-assigning \link flags flag\endlink. The second circumstance in which %Eigen chooses to evaluate a sub-expression, is when it sees a nested expression such as a + b where \c b is already an expression having the evaluate-before-nesting \link flags flag\endlink. Again, the most important example of such an expression is the \link Product matrix product expression\endlink. For example, when you do \code mat1 = mat2 * mat3 + mat4 * mat5; \endcode the products mat2 * mat3 and mat4 * mat5 gets evaluated separately into temporary matrices before being summed up in mat1. Indeed, to be efficient matrix products need to be evaluated within a destination matrix at hand, and not as simple "dot products". For small matrices, however, you might want to enforce a "dot-product" based lazy evaluation with lazyProduct(). Again, it is important to understand that those temporaries are created at evaluation time only, that is in operator =. See TopicPitfalls_auto_keyword for common pitfalls regarding this remark. The third circumstance in which %Eigen chooses to evaluate a sub-expression, is when its cost model shows that the total cost of an operation is reduced if a sub-expression gets evaluated into a temporary. 
Indeed, in certain cases, an intermediate result is sufficiently costly to compute and is reused sufficiently many times, that is worth "caching". Here is an example: \code mat1 = mat2 * (mat3 + mat4); \endcode Here, provided the matrices have at least 2 rows and 2 columns, each coefficient of the expression mat3 + mat4 is going to be used several times in the matrix product. Instead of computing the sum every time, it is much better to compute it once and store it in a temporary variable. %Eigen understands this and evaluates mat3 + mat4 into a temporary variable before evaluating the product. */ } piqp-octave/src/eigen/doc/TemplateKeyword.dox0000644000175100001770000001401514107270226021134 0ustar runnerdockernamespace Eigen { /** \page TopicTemplateKeyword The template and typename keywords in C++ There are two uses for the \c template and \c typename keywords in C++. One of them is fairly well known amongst programmers: to define templates. The other use is more obscure: to specify that an expression refers to a template function or a type. This regularly trips up programmers that use the %Eigen library, often leading to error messages from the compiler that are difficult to understand, such as "expected expression" or "no match for operator<". \eigenAutoToc \section TopicTemplateKeywordToDefineTemplates Using the template and typename keywords to define templates The \c template and \c typename keywords are routinely used to define templates. This is not the topic of this page as we assume that the reader is aware of this (otherwise consult a C++ book). The following example should illustrate this use of the \c template keyword. \code template bool isPositive(T x) { return x > 0; } \endcode We could just as well have written template <class T>; the keywords \c typename and \c class have the same meaning in this context. \section TopicTemplateKeywordExample An example showing the second use of the template keyword Let us illustrate the second use of the \c template keyword with an example. Suppose we want to write a function which copies all entries in the upper triangular part of a matrix into another matrix, while keeping the lower triangular part unchanged. A straightforward implementation would be as follows:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include TemplateKeyword_simple.cpp
</td>
<td>
\verbinclude TemplateKeyword_simple.out
</td></tr></table>
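In essence, the straightforward version looks like this (an abridged sketch; the full listing is in TemplateKeyword_simple.cpp):

\code
#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;

// Copy the upper triangular part of src into dst, leaving the rest of dst unchanged.
void copyUpperTriangularPart(MatrixXf& dst, const MatrixXf& src)
{
  dst.triangularView<Upper>() = src.triangularView<Upper>();
}

int main()
{
  MatrixXf m1 = MatrixXf::Ones(4,4);
  MatrixXf m2 = MatrixXf::Random(4,4);
  copyUpperTriangularPart(m1, m2);
  std::cout << m1 << std::endl;
}
\endcode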
That works fine, but it is not very flexible. First, it only works with dynamic-size matrices of single-precision floats; the function \c copyUpperTriangularPart() does not accept static-size matrices or matrices with double-precision numbers. Second, if you use an expression such as mat.topLeftCorner(3,3) as the parameter \c src, then this is copied into a temporary variable of type MatrixXf; this copy can be avoided. As explained in \ref TopicFunctionTakingEigenTypes, both issues can be resolved by making \c copyUpperTriangularPart() accept any object of type MatrixBase. This leads to the following code:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include TemplateKeyword_flexible.cpp
</td>
<td>
\verbinclude TemplateKeyword_flexible.out
</td></tr></table>
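In essence, the flexible version looks like this (an abridged sketch following the MatrixBase-based approach described above; the full example is in TemplateKeyword_flexible.cpp):

\code
template <typename Derived1, typename Derived2>
void copyUpperTriangularPart(MatrixBase<Derived1>& dst, const MatrixBase<Derived2>& src)
{
  // 'template' is required here: dst and src have types that depend on Derived1 and Derived2.
  dst.template triangularView<Upper>() = src.template triangularView<Upper>();
}
\endcode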
The one line in the body of the function \c copyUpperTriangularPart() shows the second, more obscure use of the \c template keyword in C++. Even though it may look strange, the \c template keywords are necessary according to the standard. Without them, the compiler may reject the code with an error message like "no match for operator<".

\section TopicTemplateKeywordExplanation Explanation

The reason that the \c template keyword is necessary in the last example has to do with the rules for how templates are supposed to be compiled in C++. The compiler has to check the code for correct syntax at the point where the template is defined, without knowing the actual value of the template arguments (\c Derived1 and \c Derived2 in the example). That means that the compiler cannot know that dst.triangularView is a member template and that the following < symbol is part of the delimiter for the template parameter. Another possibility would be that dst.triangularView is a member variable with the < symbol referring to the operator<() function. In fact, the compiler should choose the second possibility, according to the standard. If dst.triangularView is a member template (as in our case), the programmer should specify this explicitly with the \c template keyword and write dst.template triangularView<Upper>().

The precise rules are rather complicated, but ignoring some subtleties we can summarize them as follows:
- A dependent name is a name that depends (directly or indirectly) on a template parameter. In the example, \c dst is a dependent name because it is of type MatrixBase<Derived1> which depends on the template parameter \c Derived1.
- If the code contains either one of the constructs xxx.yyy or xxx->yyy and \c xxx is a dependent name and \c yyy refers to a member template, then the \c template keyword must be used before \c yyy, leading to xxx.template yyy or xxx->template yyy.
- If the code contains the construct xxx::yyy and \c xxx is a dependent name and \c yyy refers to a member typedef, then the \c typename keyword must be used before the whole construct, leading to typename xxx::yyy.

As an example where the \c typename keyword is required, consider the following code in \ref TutorialSparse for iterating over the non-zero entries of a sparse matrix type:

\code
SparseMatrixType mat(rows,cols);
for (int k=0; k<mat.outerSize(); ++k)
  for (SparseMatrixType::InnerIterator it(mat,k); it; ++it)
  {
    /* ... */
  }
\endcode

If \c SparseMatrixType depends on a template parameter, then the \c typename keyword is required:

\code
template <typename T>
void iterateOverSparseMatrix(const SparseMatrix<T>& mat)
{
  for (int k=0; k<mat.outerSize(); ++k)
    for (typename SparseMatrix<T>::InnerIterator it(mat,k); it; ++it)
    {
      /* ... */
    }
}
\endcode

\section TopicTemplateKeywordResources Resources for further reading

For more information and a fuller explanation of this topic, the reader may consult the following sources:
- The book "C++ Template Metaprogramming" by David Abrahams and Aleksey Gurtovoy contains a very good explanation in Appendix B ("The typename and template Keywords") which formed the basis for this page.
- http://pages.cs.wisc.edu/~driscoll/typename.html
- http://www.parashift.com/c++-faq-lite/templates.html#faq-35.18
- http://www.comeaucomputing.com/techtalk/templates/#templateprefix
- http://www.comeaucomputing.com/techtalk/templates/#typename

*/
}
piqp-octave/src/eigen/doc/HiPerformance.dox0000644000175100001770000001246514107270226020545 0ustar  runnerdocker
namespace Eigen {

/** \page TopicWritingEfficientProductExpression Writing efficient matrix product expressions

In general, achieving good performance with Eigen does not require any special effort: simply write your expressions in the most high level way. This is especially true for small fixed size matrices. 
For large matrices, however, it might be useful to take some care when writing your expressions in order to minimize useless evaluations and optimize the performance. In this page we will give a brief overview of the Eigen's internal mechanism to simplify and evaluate complex product expressions, and discuss the current limitations. In particular we will focus on expressions matching level 2 and 3 BLAS routines, i.e, all kind of matrix products and triangular solvers. Indeed, in Eigen we have implemented a set of highly optimized routines which are very similar to BLAS's ones. Unlike BLAS, those routines are made available to user via a high level and natural API. Each of these routines can compute in a single evaluation a wide variety of expressions. Given an expression, the challenge is then to map it to a minimal set of routines. As explained latter, this mechanism has some limitations, and knowing them will allow you to write faster code by making your expressions more Eigen friendly. \section GEMM General Matrix-Matrix product (GEMM) Let's start with the most common primitive: the matrix product of general dense matrices. In the BLAS world this corresponds to the GEMM routine. Our equivalent primitive can perform the following operation: \f$ C.noalias() += \alpha op1(A) op2(B) \f$ where A, B, and C are column and/or row major matrices (or sub-matrices), alpha is a scalar value, and op1, op2 can be transpose, adjoint, conjugate, or the identity. When Eigen detects a matrix product, it analyzes both sides of the product to extract a unique scalar factor alpha, and for each side, its effective storage order, shape, and conjugation states. More precisely each side is simplified by iteratively removing trivial expressions such as scalar multiple, negation and conjugation. Transpose and Block expressions are not evaluated and they only modify the storage order and shape. All other expressions are immediately evaluated. For instance, the following expression: \code m1.noalias() -= s4 * (s1 * m2.adjoint() * (-(s3*m3).conjugate()*s2)) \endcode is automatically simplified to: \code m1.noalias() += (s1*s2*conj(s3)*s4) * m2.adjoint() * m3.conjugate() \endcode which exactly matches our GEMM routine. \subsection GEMM_Limitations Limitations Unfortunately, this simplification mechanism is not perfect yet and not all expressions which could be handled by a single GEMM-like call are correctly detected.
<table class="manual">
<tr>
<th>Not optimal expression</th>
<th>Evaluated as</th>
<th>Optimal version (single evaluation)</th>
<th>Comments</th>
</tr>
<tr>
<td>\code
m1 += m2 * m3; \endcode</td>
<td>\code temp = m2 * m3;
m1 += temp; \endcode</td>
<td>\code m1.noalias() += m2 * m3; \endcode</td>
<td>Use .noalias() to tell Eigen the result and right-hand-sides do not alias.
    Otherwise the product m2 * m3 is evaluated into a temporary.</td>
</tr>
<tr>
<td></td>
<td></td>
<td>\code m1.noalias() += s1 * (m2 * m3); \endcode</td>
<td>This is a special feature of Eigen. Here the product between a scalar
    and a matrix product does not evaluate the matrix product but instead it
    returns a matrix product expression tracking the scalar scaling factor. <br>
    Without this optimization, the matrix product would be evaluated into a
    temporary as in the next example.</td>
</tr>
<tr>
<td>\code m1.noalias() += (m2 * m3).adjoint(); \endcode</td>
<td>\code temp = m2 * m3;
m1 += temp.adjoint(); \endcode</td>
<td>\code m1.noalias() += m3.adjoint() * m2.adjoint(); \endcode</td>
<td>This is because the product expression has the EvalBeforeNesting bit which
    enforces the evaluation of the product by the Transpose expression.</td>
</tr>
<tr>
<td>\code m1 = m1 + m2 * m3; \endcode</td>
<td>\code temp = m2 * m3;
m1 = m1 + temp; \endcode</td>
<td>\code m1.noalias() += m2 * m3; \endcode</td>
<td>Here there is no way to detect at compile time that the two m1 are the same,
    and so the matrix product will be immediately evaluated.</td>
</tr>
<tr>
<td>\code m1.noalias() = m4 + m2 * m3; \endcode</td>
<td>\code temp = m2 * m3;
m1 = m4 + temp; \endcode</td>
<td>\code m1 = m4;
m1.noalias() += m2 * m3; \endcode</td>
<td>First of all, here the .noalias() in the first expression is useless because
    m2*m3 will be evaluated anyway. However, note how this expression can be rewritten
    so that no temporary is required. (Tip: for very small fixed size matrices it is
    slightly better to rewrite it like this: m1.noalias() = m2 * m3; m1 += m4;)</td>
</tr>
<tr>
<td>\code m1.noalias() += (s1*m2).block(..) * m3; \endcode</td>
<td>\code temp = (s1*m2).block(..);
m1 += temp * m3; \endcode</td>
<td>\code m1.noalias() += s1 * m2.block(..) * m3; \endcode</td>
<td>This is because our expression analyzer is currently not able to extract trivial
    expressions nested in a Block expression. Therefore the nested scalar multiple
    cannot be properly extracted.</td>
</tr>
</table>
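As a concrete illustration of the first row above (matrix sizes chosen arbitrarily), both statements below perform the same update, but only the second one maps to a single GEMM-like call without creating a temporary:

\code
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
  MatrixXf m1 = MatrixXf::Zero(64,64);
  MatrixXf m2 = MatrixXf::Random(64,64);
  MatrixXf m3 = MatrixXf::Random(64,64);

  m1 += m2 * m3;            // m2*m3 is first evaluated into a temporary, then added to m1
  m1.noalias() += m2 * m3;  // accumulated directly into m1 by a single GEMM-like routine
}
\endcode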
Of course all these remarks hold for all other kind of products involving triangular or selfadjoint matrices. */ } piqp-octave/src/eigen/doc/eigen_navtree_hacks.js0000644000175100001770000001750614107270226021632 0ustar runnerdocker // generate a table of contents in the side-nav based on the h1/h2 tags of the current page. function generate_autotoc() { var headers = $("h1, h2"); if(headers.length > 1) { var toc = $("#side-nav").append(''); toc = $("#nav-toc"); var footer = $("#nav-path"); var footerHeight = footer.height(); toc = toc.append('
    '); toc = toc.find('ul'); var indices = new Array(); indices[0] = 0; indices[1] = 0; var h1counts = $("h1").length; headers.each(function(i) { var current = $(this); var levelTag = current[0].tagName.charAt(1); if(h1counts==0) levelTag--; var cur_id = current.attr("id"); indices[levelTag-1]+=1; var prefix = indices[0]; if (levelTag >1) { prefix+="."+indices[1]; } // Uncomment to add number prefixes // current.html(prefix + " " + current.html()); for(var l = levelTag; l < 2; ++l){ indices[l] = 0; } if(cur_id == undefined) { current.attr('id', 'title' + i); current.addClass('anchor'); toc.append("
  • " + current.text() + "
  • "); } else { toc.append("
  • " + current.text() + "
  • "); } }); resizeHeight(); } } var global_navtree_object; // Overloaded to remove links to sections/subsections function getNode(o, po) { po.childrenVisited = true; var l = po.childrenData.length-1; for (var i in po.childrenData) { var nodeData = po.childrenData[i]; if((!nodeData[1]) || (nodeData[1].indexOf('#')==-1)) // <- we added this line po.children[i] = newNode(o, po, nodeData[0], nodeData[1], nodeData[2], i==l); } } // Overloaded to adjust the size of the navtree wrt the toc function resizeHeight() { var header = $("#top"); var sidenav = $("#side-nav"); var content = $("#doc-content"); var navtree = $("#nav-tree"); var footer = $("#nav-path"); var toc = $("#nav-toc"); var headerHeight = header.outerHeight(); var footerHeight = footer.outerHeight(); var tocHeight = toc.height(); var windowHeight = $(window).height() - headerHeight - footerHeight; content.css({height:windowHeight + "px"}); navtree.css({height:(windowHeight-tocHeight) + "px"}); sidenav.css({height:windowHeight + "px"}); } // Overloaded to save the root node into global_navtree_object function initNavTree(toroot,relpath) { var o = new Object(); global_navtree_object = o; // <- we added this line o.toroot = toroot; o.node = new Object(); o.node.li = document.getElementById("nav-tree-contents"); o.node.childrenData = NAVTREE; o.node.children = new Array(); o.node.childrenUL = document.createElement("ul"); o.node.getChildrenUL = function() { return o.node.childrenUL; }; o.node.li.appendChild(o.node.childrenUL); o.node.depth = 0; o.node.relpath = relpath; o.node.expanded = false; o.node.isLast = true; o.node.plus_img = document.createElement("img"); o.node.plus_img.src = relpath+"ftv2pnode.png"; o.node.plus_img.width = 16; o.node.plus_img.height = 22; if (localStorageSupported()) { var navSync = $('#nav-sync'); if (cachedLink()) { showSyncOff(navSync,relpath); navSync.removeClass('sync'); } else { showSyncOn(navSync,relpath); } navSync.click(function(){ toggleSyncButton(relpath); }); } navTo(o,toroot,window.location.hash,relpath); $(window).bind('hashchange', function(){ if (window.location.hash && window.location.hash.length>1){ var a; if ($(location).attr('hash')){ var clslink=stripPath($(location).attr('pathname'))+':'+ $(location).attr('hash').substring(1); a=$('.item a[class$="'+clslink+'"]'); } if (a==null || !$(a).parent().parent().hasClass('selected')){ $('.item').removeClass('selected'); $('.item').removeAttr('id'); } var link=stripPath2($(location).attr('pathname')); navTo(o,link,$(location).attr('hash'),relpath); } else if (!animationInProgress) { $('#doc-content').scrollTop(0); $('.item').removeClass('selected'); $('.item').removeAttr('id'); navTo(o,toroot,window.location.hash,relpath); } }) $(window).on("load", showRoot); } // return false if the the node has no children at all, or has only section/subsection children function checkChildrenData(node) { if (!(typeof(node.childrenData)==='string')) { for (var i in node.childrenData) { var url = node.childrenData[i][1]; if(url.indexOf("#")==-1) return true; } return false; } return (node.childrenData); } // Modified to: // 1 - remove the root node // 2 - remove the section/subsection children function createIndent(o,domNode,node,level) { var level=-2; // <- we replaced level=-1 by level=-2 var n = node; while (n.parentNode) { level++; n=n.parentNode; } if (checkChildrenData(node)) { // <- we modified this line to use checkChildrenData(node) instead of node.childrenData var imgNode = document.createElement("span"); imgNode.className = 'arrow'; 
imgNode.style.paddingLeft=(16*level).toString()+'px'; imgNode.innerHTML=arrowRight; node.plus_img = imgNode; node.expandToggle = document.createElement("a"); node.expandToggle.href = "javascript:void(0)"; node.expandToggle.onclick = function() { if (node.expanded) { $(node.getChildrenUL()).slideUp("fast"); node.plus_img.innerHTML=arrowRight; node.expanded = false; } else { expandNode(o, node, false, false); } } node.expandToggle.appendChild(imgNode); domNode.appendChild(node.expandToggle); } else { var span = document.createElement("span"); span.className = 'arrow'; span.style.width = 16*(level+1)+'px'; span.innerHTML = ' '; domNode.appendChild(span); } } // Overloaded to automatically expand the selected node function selectAndHighlight(hash,n) { var a; if (hash) { var link=stripPath($(location).attr('pathname'))+':'+hash.substring(1); a=$('.item a[class$="'+link+'"]'); } if (a && a.length) { a.parent().parent().addClass('selected'); a.parent().parent().attr('id','selected'); highlightAnchor(); } else if (n) { $(n.itemDiv).addClass('selected'); $(n.itemDiv).attr('id','selected'); } if ($('#nav-tree-contents .item:first').hasClass('selected')) { $('#nav-sync').css('top','30px'); } else { $('#nav-sync').css('top','5px'); } expandNode(global_navtree_object, n, true, true); // <- we added this line showRoot(); } $(document).ready(function() { generate_autotoc(); (function (){ // wait until the first "selected" element has been created try { // this line will triger an exception if there is no #selected element, i.e., before the tree structure is complete. document.getElementById("selected").className = "item selected"; // ok, the default tree has been created, we can keep going... // expand the "Chapters" node if(window.location.href.indexOf('unsupported')==-1) expandNode(global_navtree_object, global_navtree_object.node.children[0].children[2], true, true); else expandNode(global_navtree_object, global_navtree_object.node.children[0].children[1], true, true); // Hide the root node "Eigen" $(document.getElementsByClassName('index.html')[0]).parent().parent().css({display:"none"}); } catch (err) { setTimeout(arguments.callee, 10); } })(); $(window).on("load", resizeHeight); }); piqp-octave/src/eigen/doc/UnalignedArrayAssert.dox0000644000175100001770000002104114107270226022100 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TopicUnalignedArrayAssert Explanation of the assertion on unaligned arrays Hello! You are seeing this webpage because your program terminated on an assertion failure like this one:
    my_program: path/to/eigen/Eigen/src/Core/DenseStorage.h:44:
    Eigen::internal::matrix_array<T, Size, MatrixOptions, Align>::internal::matrix_array()
    [with T = double, int Size = 2, int MatrixOptions = 2, bool Align = true]:
    Assertion `(reinterpret_cast<size_t>(array) & (sizemask)) == 0 && "this assertion
    is explained here: http://eigen.tuxfamily.org/dox-devel/group__TopicUnalignedArrayAssert.html
    **** READ THIS WEB PAGE !!! ****"' failed.
    
    There are 4 known causes for this issue. If you can target \cpp17 only with a recent compiler (e.g., GCC>=7, clang>=5, MSVC>=19.12), then you're lucky: enabling c++17 should be enough (if not, please report to us). Otherwise, please read on to understand those issues and learn how to fix them. \eigenAutoToc \section where Where in my own code is the cause of the problem? First of all, you need to find out where in your own code this assertion was triggered from. At first glance, the error message doesn't look helpful, as it refers to a file inside Eigen! However, since your program crashed, if you can reproduce the crash, you can get a backtrace using any debugger. For example, if you're using GCC, you can use the GDB debugger as follows: \code $ gdb ./my_program # Start GDB on your program > run # Start running your program ... # Now reproduce the crash! > bt # Obtain the backtrace \endcode Now that you know precisely where in your own code the problem is happening, read on to understand what you need to change. \section c1 Cause 1: Structures having Eigen objects as members If you have code like this, \code class Foo { //... Eigen::Vector4d v; //... }; //... Foo *foo = new Foo; \endcode then you need to read this separate page: \ref TopicStructHavingEigenMembers "Structures Having Eigen Members". Note that here, Eigen::Vector4d is only used as an example, more generally the issue arises for all \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types". \section c2 Cause 2: STL Containers or manual memory allocation If you use STL Containers such as std::vector, std::map, ..., with %Eigen objects, or with classes containing %Eigen objects, like this, \code std::vector my_vector; struct my_class { ... Eigen::Matrix2d m; ... }; std::map my_map; \endcode then you need to read this separate page: \ref TopicStlContainers "Using STL Containers with Eigen". Note that here, Eigen::Matrix2d is only used as an example, more generally the issue arises for all \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types" and \ref TopicStructHavingEigenMembers "structures having such Eigen objects as member". The same issue will be exhibited by any classes/functions by-passing operator new to allocate memory, that is, by performing custom memory allocation followed by calls to the placement new operator. This is for instance typically the case of \c `std::make_shared` or `std::allocate_shared` for which is the solution is to use an \ref aligned_allocator "aligned allocator" as detailed in the \ref TopicStlContainers "solution for STL containers". \section c3 Cause 3: Passing Eigen objects by value If some function in your code is getting an %Eigen object passed by value, like this, \code void func(Eigen::Vector4d v); \endcode then you need to read this separate page: \ref TopicPassingByValue "Passing Eigen objects by value to functions". Note that here, Eigen::Vector4d is only used as an example, more generally the issue arises for all \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types". \section c4 Cause 4: Compiler making a wrong assumption on stack alignment (for instance GCC on Windows) This is a must-read for people using GCC on Windows (like MinGW or TDM-GCC). If you have this assertion failure in an innocent function declaring a local variable like this: \code void foo() { Eigen::Quaternionf q; //... } \endcode then you need to read this separate page: \ref TopicWrongStackAlignment "Compiler making a wrong assumption on stack alignment". 
Note that here, Eigen::Quaternionf is only used as an example, more generally the issue arises for all \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types". \section explanation General explanation of this assertion \ref TopicFixedSizeVectorizable "Fixed-size vectorizable Eigen objects" must absolutely be created at properly aligned locations, otherwise SIMD instructions addressing them will crash. For instance, SSE/NEON/MSA/Altivec/VSX targets will require 16-byte-alignment, whereas AVX and AVX512 targets may require up to 32 and 64 byte alignment respectively. %Eigen normally takes care of these alignment issues for you, by setting an alignment attribute on them and by overloading their `operator new`. However there are a few corner cases where these alignment settings get overridden: they are the possible causes for this assertion. \section getrid I don't care about optimal vectorization, how do I get rid of that stuff? Three possibilities:
    • Use the \c DontAlign option to Matrix, Array, Quaternion, etc. objects that give you trouble. This way %Eigen won't try to over-align them, and thus won't assume any special alignment. On the down side, you will pay the cost of unaligned loads/stores for them, but on modern CPUs, the overhead is either null or marginal. See \link StructHavingEigenMembers_othersolutions here \endlink for an example, and the short sketch shown right after this list.
    • Define \link TopicPreprocessorDirectivesPerformance EIGEN_MAX_STATIC_ALIGN_BYTES \endlink to 0. That disables all 16-byte (and above) static alignment code, while keeping 16-byte (or above) heap alignment. This has the effect of vectorizing fixed-size objects (like Matrix4d) through unaligned stores (as controlled by \link TopicPreprocessorDirectivesPerformance EIGEN_UNALIGNED_VECTORIZE \endlink), while keeping unchanged the vectorization of dynamic-size objects (like MatrixXd). On 64-byte systems, you might also define it to 16 to disable only 32- and 64-byte over-alignment. But do note that this breaks ABI compatibility with the default behavior of static alignment.
    • Or define both \link TopicPreprocessorDirectivesPerformance EIGEN_DONT_VECTORIZE \endlink and `EIGEN_DISABLE_UNALIGNED_ARRAY_ASSERT`. This keeps the 16-byte (or above) alignment code and thus preserves ABI compatibility, but completely disables vectorization.
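    As a minimal sketch of the first option (the struct and member names are purely illustrative), requesting \c DontAlign on a fixed-size member removes the over-alignment requirement at the cost of unaligned loads/stores:
    \code
    #include <Eigen/Dense>

    struct Foo {
      // DontAlign: no special alignment is requested for this fixed-size member,
      // so Foo does not need an over-aligned operator new.
      Eigen::Matrix<float, 4, 1, Eigen::DontAlign> v;
    };

    Foo *foo = new Foo; // safe with respect to the assertion discussed on this page
    \endcode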
    If you want to know why defining `EIGEN_DONT_VECTORIZE` does not by itself disable 16-byte (or above) alignment and the assertion, here's the explanation: It doesn't disable the assertion, because otherwise code that runs fine without vectorization would suddenly crash when enabling vectorization. It doesn't disable 16-byte (or above) alignment, because that would mean that vectorized and non-vectorized code are not mutually ABI-compatible. This ABI compatibility is very important, even for people who develop only an in-house application, as for instance one may want to have in the same application a vectorized path and a non-vectorized path. \section checkmycode How can I check my code is safe regarding alignment issues? Unfortunately, there is no possibility in c++ to detect any of the aforementioned shortcoming at compile time (though static analyzers are becoming more and more powerful and could detect some of them). Even at runtime, all we can do is to catch invalid unaligned allocation and trigger the explicit assertion mentioned at the beginning of this page. Therefore, if your program runs fine on a given system with some given compilation flags, then this does not guarantee that your code is safe. For instance, on most 64 bits systems buffer are aligned on 16 bytes boundary and so, if you do not enable AVX instruction set, then your code will run fine. On the other hand, the same code may assert if moving to a more exotic platform, or enabling AVX instructions that required 32 bytes alignment by default. The situation is not hopeless though. Assuming your code is well covered by unit test, then you can check its alignment safety by linking it to a custom malloc library returning 8 bytes aligned buffers only. This way all alignment shortcomings should pop-up. To this end, you must also compile your program with \link TopicPreprocessorDirectivesPerformance EIGEN_MALLOC_ALREADY_ALIGNED=0 \endlink. */ } piqp-octave/src/eigen/doc/TutorialBlockOperations.dox0000644000175100001770000002014014107270226022632 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TutorialBlockOperations Block operations This page explains the essentials of block operations. A block is a rectangular part of a matrix or array. Blocks expressions can be used both as rvalues and as lvalues. As usual with Eigen expressions, this abstraction has zero runtime cost provided that you let your compiler optimize. \eigenAutoToc \section TutorialBlockOperationsUsing Using block operations The most general block operation in Eigen is called \link DenseBase::block() .block() \endlink. There are two versions, whose syntax is as follows:
    \b %Block \b operation Version constructing a \n dynamic-size block expression Version constructing a \n fixed-size block expression
    %Block of size (p,q), starting at (i,j) \code matrix.block(i,j,p,q);\endcode \code matrix.block<p,q>(i,j);\endcode
    As always in Eigen, indices start at 0. Both versions can be used on fixed-size and dynamic-size matrices and arrays. These two expressions are semantically equivalent. The only difference is that the fixed-size version will typically give you faster code if the block size is small, but requires this size to be known at compile time. The following program uses the dynamic-size and fixed-size versions to print the values of several blocks inside a matrix.
    Example:Output:
    \include Tutorial_BlockOperations_print_block.cpp \verbinclude Tutorial_BlockOperations_print_block.out
    In the above example the \link DenseBase::block() .block() \endlink function was employed as an \em rvalue, i.e. it was only read from. However, blocks can also be used as \em lvalues, meaning that you can assign to a block. This is illustrated in the following example. This example also demonstrates blocks in arrays, which work exactly like the above-demonstrated blocks in matrices.
    Example:Output:
    \include Tutorial_BlockOperations_block_assignment.cpp \verbinclude Tutorial_BlockOperations_block_assignment.out
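    Since the example programs above are included from separate files, here is a small inline sketch (values and sizes are arbitrary) of .block() used both as an rvalue and as an lvalue:
    \code
    Eigen::MatrixXf m = Eigen::MatrixXf::Constant(4, 4, 1.0f);
    Eigen::MatrixXf sub = m.block(1, 1, 2, 2);          // dynamic-size version, read (rvalue)
    m.block<2, 2>(0, 0) = Eigen::Matrix2f::Identity();  // fixed-size version, write (lvalue)
    \endcode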
    While the \link DenseBase::block() .block() \endlink method can be used for any block operation, there are other methods for special cases, providing more specialized API and/or better performance. On the topic of performance, all that matters is that you give Eigen as much information as possible at compile time. For example, if your block is a single whole column in a matrix, using the specialized \link DenseBase::col() .col() \endlink function described below lets Eigen know that, which can give it optimization opportunities. The rest of this page describes these specialized methods. \section TutorialBlockOperationsSyntaxColumnRows Columns and rows Individual columns and rows are special cases of blocks. Eigen provides methods to easily address them: \link DenseBase::col() .col() \endlink and \link DenseBase::row() .row()\endlink.
    %Block operation Method
    ith row \link DenseBase::row() * \endlink \code matrix.row(i);\endcode
    jth column \link DenseBase::col() * \endlink \code matrix.col(j);\endcode
    The argument for \p col() and \p row() is the index of the column or row to be accessed. As always in Eigen, indices start at 0.
    Example:Output:
    \include Tutorial_BlockOperations_colrow.cpp \verbinclude Tutorial_BlockOperations_colrow.out
    That example also demonstrates that block expressions (here columns) can be used in arithmetic like any other expression. \section TutorialBlockOperationsSyntaxCorners Corner-related operations Eigen also provides special methods for blocks that are flushed against one of the corners or sides of a matrix or array. For instance, \link DenseBase::topLeftCorner() .topLeftCorner() \endlink can be used to refer to a block in the top-left corner of a matrix. The different possibilities are summarized in the following table:
    %Block \b operation Version constructing a \n dynamic-size block expression Version constructing a \n fixed-size block expression
    Top-left p by q block \link DenseBase::topLeftCorner() * \endlink \code matrix.topLeftCorner(p,q);\endcode \code matrix.topLeftCorner<p,q>();\endcode
    Bottom-left p by q block \link DenseBase::bottomLeftCorner() * \endlink \code matrix.bottomLeftCorner(p,q);\endcode \code matrix.bottomLeftCorner<p,q>();\endcode
    Top-right p by q block \link DenseBase::topRightCorner() * \endlink \code matrix.topRightCorner(p,q);\endcode \code matrix.topRightCorner<p,q>();\endcode
    Bottom-right p by q block \link DenseBase::bottomRightCorner() * \endlink \code matrix.bottomRightCorner(p,q);\endcode \code matrix.bottomRightCorner<p,q>();\endcode
    %Block containing the first q rows \link DenseBase::topRows() * \endlink \code matrix.topRows(q);\endcode \code matrix.topRows<q>();\endcode
    %Block containing the last q rows \link DenseBase::bottomRows() * \endlink \code matrix.bottomRows(q);\endcode \code matrix.bottomRows<q>();\endcode
    %Block containing the first p columns \link DenseBase::leftCols() * \endlink \code matrix.leftCols(p);\endcode \code matrix.leftCols<p>();\endcode
    %Block containing the last q columns \link DenseBase::rightCols() * \endlink \code matrix.rightCols(q);\endcode \code matrix.rightCols<q>();\endcode
    %Block containing the q columns starting from i \link DenseBase::middleCols() * \endlink \code matrix.middleCols(i,q);\endcode \code matrix.middleCols<q>(i);\endcode
    %Block containing the q rows starting from i \link DenseBase::middleRows() * \endlink \code matrix.middleRows(i,q);\endcode \code matrix.middleRows<q>(i);\endcode
    Here is a simple example illustrating the use of the operations presented above:
    Example:Output:
    \include Tutorial_BlockOperations_corner.cpp \verbinclude Tutorial_BlockOperations_corner.out
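    As a compact inline sketch of a few of the corner, row, and column operations listed above (values and sizes are arbitrary):
    \code
    Eigen::MatrixXf m = Eigen::MatrixXf::Random(4, 4);
    Eigen::MatrixXf tl = m.topLeftCorner(2, 3); // dynamic-size 2x3 block in the top-left corner
    m.bottomRows<1>().setZero();                // fixed-size: set the last row to zero
    Eigen::MatrixXf rc = m.rightCols(2);        // the last two columns
    \endcode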
    \section TutorialBlockOperationsSyntaxVectors Block operations for vectors Eigen also provides a set of block operations designed specifically for the special case of vectors and one-dimensional arrays:
    %Block operation Version constructing a \n dynamic-size block expression Version constructing a \n fixed-size block expression
    %Block containing the first \p n elements \link DenseBase::head() * \endlink \code vector.head(n);\endcode \code vector.head<n>();\endcode
    %Block containing the last \p n elements \link DenseBase::tail() * \endlink \code vector.tail(n);\endcode \code vector.tail<n>();\endcode
    %Block containing \p n elements, starting at position \p i \link DenseBase::segment() * \endlink \code vector.segment(i,n);\endcode \code vector.segment<n>(i);\endcode
    An example is presented below:
    Example:Output:
    \include Tutorial_BlockOperations_vector.cpp \verbinclude Tutorial_BlockOperations_vector.out
    */ } piqp-octave/src/eigen/doc/InplaceDecomposition.dox0000644000175100001770000000732514107270226022132 0ustar runnerdockernamespace Eigen { /** \eigenManualPage InplaceDecomposition Inplace matrix decompositions Starting from %Eigen 3.3, the LU, Cholesky, and QR decompositions can operate \em inplace, that is, directly within the given input matrix. This feature is especially useful when dealing with huge matrices, and or when the available memory is very limited (embedded systems). To this end, the respective decomposition class must be instantiated with a Ref<> matrix type, and the decomposition object must be constructed with the input matrix as argument. As an example, let us consider an inplace LU decomposition with partial pivoting. Let's start with the basic inclusions, and declaration of a 2x2 matrix \c A:
    codeoutput
    \snippet TutorialInplaceLU.cpp init \snippet TutorialInplaceLU.out init
    No surprise here! Then, let's declare our inplace LU object \c lu, and check the content of the matrix \c A:
    \snippet TutorialInplaceLU.cpp declaration \snippet TutorialInplaceLU.out declaration
    Here, the \c lu object computes and stores the \c L and \c U factors within the memory held by the matrix \c A. The coefficients of \c A have thus been destroyed during the factorization, and replaced by the L and U factors as one can verify:
    \snippet TutorialInplaceLU.cpp matrixLU \snippet TutorialInplaceLU.out matrixLU
    Then, one can use the \c lu object as usual, for instance to solve the Ax=b problem:
    \snippet TutorialInplaceLU.cpp solve \snippet TutorialInplaceLU.out solve
    Here, since the content of the original matrix \c A has been lost, we had to declare a new matrix \c A0 to verify the result. Since the memory is shared between \c A and \c lu, modifying the matrix \c A will make \c lu invalid. This can easily be verified by modifying the content of \c A and trying to solve the initial problem again:
    \snippet TutorialInplaceLU.cpp modifyA \snippet TutorialInplaceLU.out modifyA
    Note that there is no shared pointer under the hood; it is the \b responsibility \b of \b the \b user to keep the input matrix \c A alive as long as \c lu is being used. If one wants to update the factorization with the modified A, one has to call the compute method as usual:
    \snippet TutorialInplaceLU.cpp recompute \snippet TutorialInplaceLU.out recompute
    Note that calling compute does not change the memory which is referenced by the \c lu object. Therefore, if the compute method is called with another matrix \c A1 different from \c A, then the content of \c A1 won't be modified. It is still the content of \c A that will be used to store the L and U factors of the matrix \c A1. This can easily be verified as follows:
    \snippet TutorialInplaceLU.cpp recompute_bis0 \snippet TutorialInplaceLU.out recompute_bis0
    The matrix \c A1 is unchanged, and one can thus solve A1*x=b, and directly check the residual without any copy of \c A1:
    \snippet TutorialInplaceLU.cpp recompute_bis1 \snippet TutorialInplaceLU.out recompute_bis1
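    Since the snippets above are included from external files, here is a compact, self-contained sketch of the same inplace workflow (matrix sizes and values are illustrative):
    \code
    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      Eigen::MatrixXd A = Eigen::MatrixXd::Random(3, 3), A0 = A;     // keep a copy to check the result
      Eigen::PartialPivLU<Eigen::Ref<Eigen::MatrixXd> > lu(A);       // L and U are stored inside A's memory
      Eigen::VectorXd b = Eigen::VectorXd::Random(3);
      Eigen::VectorXd x = lu.solve(b);
      std::cout << "residual: " << (A0 * x - b).norm() << std::endl; // use the copy A0, A has been overwritten
    }
    \endcode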
    Here is the list of matrix decompositions supporting this inplace mechanism: - class LLT - class LDLT - class PartialPivLU - class FullPivLU - class HouseholderQR - class ColPivHouseholderQR - class FullPivHouseholderQR - class CompleteOrthogonalDecomposition */ }piqp-octave/src/eigen/doc/TopicVectorization.dox0000644000175100001770000000014114107270226021646 0ustar runnerdockernamespace Eigen { /** \page TopicVectorization Vectorization TODO: write this dox page! */ } piqp-octave/src/eigen/doc/Overview.dox0000644000175100001770000000361114107270226017622 0ustar runnerdockernamespace Eigen { /** \mainpage notitle This is the API documentation for Eigen3. You can download it as a tgz archive for offline reading. For a first contact with Eigen, the best place is to have a look at the \link GettingStarted getting started \endlink page that show you how to write and compile your first program with Eigen. Then, the \b quick \b reference \b pages give you a quite complete description of the API in a very condensed format that is specially useful to recall the syntax of a particular feature, or to have a quick look at the API. They currently cover the two following feature sets, and more will come in the future: - \link QuickRefPage [QuickRef] Dense matrix and array manipulations \endlink - \link SparseQuickRefPage [QuickRef] Sparse linear algebra \endlink You're a MatLab user? There is also a short ASCII reference with Matlab translations. The \b main \b documentation is organized into \em chapters covering different domains of features. They are themselves composed of \em user \em manual pages describing the different features in a comprehensive way, and \em reference pages that gives you access to the API documentation through the related Eigen's \em modules and \em classes. Under the \subpage UserManual_CustomizingEigen section, you will find discussions and examples on extending %Eigen's features and supporting custom scalar types. Under the \subpage UserManual_Generalities section, you will find documentation on more general topics such as preprocessor directives, controlling assertions, multi-threading, MKL support, some Eigen's internal insights, and much more... Finally, do not miss the search engine, useful to quickly get to the documentation of a given class or function. Want more? Checkout the \em unsupported \em modules documentation. */ } piqp-octave/src/eigen/doc/UsingBlasLapackBackends.dox0000644000175100001770000001475114107270226022461 0ustar runnerdocker/* Copyright (c) 2011, Intel Corporation. All rights reserved. Copyright (C) 2011-2016 Gael Guennebaud Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Intel Corporation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ******************************************************************************** * Content : Documentation on the use of BLAS/LAPACK libraries through Eigen ******************************************************************************** */ namespace Eigen { /** \page TopicUsingBlasLapack Using BLAS/LAPACK from %Eigen Since %Eigen version 3.3 and later, any F77 compatible BLAS or LAPACK libraries can be used as backends for dense matrix products and dense matrix decompositions. For instance, one can use Intel® MKL, Apple's Accelerate framework on OSX, OpenBLAS, Netlib LAPACK, etc. Do not miss this \link TopicUsingIntelMKL page \endlink for further discussions on the specific use of Intel® MKL (also includes VML, PARDISO, etc.) In order to use an external BLAS and/or LAPACK library, you must link your own application to the respective libraries and their dependencies. For LAPACK, you must also link to the standard Lapacke library, which is used as a convenient thin layer between %Eigen's C++ code and LAPACK F77 interface. Then you must activate their usage by defining one or multiple of the following macros (\b before including any %Eigen's header): \note For Mac users, in order to use the lapack version shipped with the Accelerate framework, you also need the lapacke library. Using MacPorts, this is as easy as: \code sudo port install lapack \endcode and then use the following link flags: \c -framework \c Accelerate \c /opt/local/lib/lapack/liblapacke.dylib
    \c EIGEN_USE_BLAS Enables the use of external BLAS level 2 and 3 routines (compatible with any F77 BLAS interface)
    \c EIGEN_USE_LAPACKE Enables the use of external Lapack routines via the Lapacke C interface to Lapack (compatible with any F77 LAPACK interface)
    \c EIGEN_USE_LAPACKE_STRICT Same as \c EIGEN_USE_LAPACKE but algorithms of lower numerical robustness are disabled. \n This currently concerns only JacobiSVD which otherwise would be replaced by \c gesvd that is less robust than Jacobi rotations.
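    As a hedged illustration of how these macros are typically used (the matrix sizes and the exact BLAS library you link against are assumptions of your build environment), it is enough to define the macro before including any %Eigen header and to link the BLAS library:
    \code
    #define EIGEN_USE_BLAS          // must come before any Eigen include
    #include <Eigen/Dense>

    int main() {
      Eigen::MatrixXd a = Eigen::MatrixXd::Random(512, 512);
      Eigen::MatrixXd b = Eigen::MatrixXd::Random(512, 512);
      Eigen::MatrixXd c = a * b;    // large double product: dispatched to the BLAS ?gemm routine
      return c(0, 0) != 0.0;
    }
    \endcode
    and build with, e.g., \c -lblas (or \c -framework \c Accelerate on macOS, as mentioned in the note above).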
    When doing so, a number of %Eigen's algorithms are silently substituted with calls to BLAS or LAPACK routines. These substitutions apply only for \b Dynamic \b or \b large enough objects with one of the following four standard scalar types: \c float, \c double, \c complex<float>, and \c complex<double>. Operations on other scalar types or mixing reals and complexes will continue to use the built-in algorithms. The breadth of %Eigen functionality that can be substituted is listed in the table below.
    Functional domainCode exampleBLAS/LAPACK routines
    Matrix-matrix operations \n \c EIGEN_USE_BLAS \code m1*m2.transpose(); m1.selfadjointView<Lower>()*m2; m1*m2.triangularView<Upper>(); m1.selfadjointView<Lower>().rankUpdate(m2,1.0); \endcode\code ?gemm ?symm/?hemm ?trmm dsyrk/ssyrk \endcode
    Matrix-vector operations \n \c EIGEN_USE_BLAS \code m1.adjoint()*b; m1.selfadjointView<Lower>()*b; m1.triangularView<Upper>()*b; \endcode\code ?gemv ?symv/?hemv ?trmv \endcode
    LU decomposition \n \c EIGEN_USE_LAPACKE \n \c EIGEN_USE_LAPACKE_STRICT \code v1 = m1.lu().solve(v2); \endcode\code ?getrf \endcode
    Cholesky decomposition \n \c EIGEN_USE_LAPACKE \n \c EIGEN_USE_LAPACKE_STRICT \code v1 = m2.selfadjointView<Upper>().llt().solve(v2); \endcode\code ?potrf \endcode
    QR decomposition \n \c EIGEN_USE_LAPACKE \n \c EIGEN_USE_LAPACKE_STRICT \code m1.householderQr(); m1.colPivHouseholderQr(); \endcode\code ?geqrf ?geqp3 \endcode
    Singular value decomposition \n \c EIGEN_USE_LAPACKE \code JacobiSVD<MatrixXf> svd; svd.compute(m1, ComputeThinV); \endcode\code ?gesvd \endcode
    Eigen-value decompositions \n \c EIGEN_USE_LAPACKE \n \c EIGEN_USE_LAPACKE_STRICT \code EigenSolver<MatrixXd> es(m1); ComplexEigenSolver<MatrixXcd> ces(m1); SelfAdjointEigenSolver<MatrixXd> saes(m1+m1.transpose()); GeneralizedSelfAdjointEigenSolver<MatrixXd> gsaes(m1+m1.transpose(),m2+m2.transpose()); \endcode\code ?gees ?gees ?syev/?heev ?syev/?heev, ?potrf \endcode
    Schur decomposition \n \c EIGEN_USE_LAPACKE \n \c EIGEN_USE_LAPACKE_STRICT \code RealSchur<MatrixXd> schurR(m1); ComplexSchur<MatrixXcd> schurC(m1); \endcode\code ?gees \endcode
    In the examples, m1 and m2 are dense matrices and v1 and v2 are dense vectors. */ } piqp-octave/src/eigen/doc/TutorialSTL.dox0000644000175100001770000000413614107270226020205 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TutorialSTL STL iterators and algorithms Since the version 3.4, %Eigen's dense matrices and arrays provide STL compatible iterators. As demonstrated below, this makes them naturally compatible with range-for-loops and STL's algorithms. \eigenAutoToc \section TutorialSTLVectors Iterating over 1D arrays and vectors Any dense 1D expressions exposes the pair of `begin()/end()` methods to iterate over them. This directly enables c++11 range for loops:
    Example:Output:
    \include Tutorial_range_for_loop_1d_cxx11.cpp \verbinclude Tutorial_range_for_loop_1d_cxx11.out
    One dimensional expressions can also easily be passed to STL algorithms:
    Example:Output:
    \include Tutorial_std_sort.cpp \verbinclude Tutorial_std_sort.out
    Similar to `std::vector`, 1D expressions also expose the pair of `cbegin()/cend()` methods to conveniently get const iterators on a non-const object. \section TutorialSTLMatrices Iterating over coefficients of 2D arrays and matrices STL iterators are intrinsically designed to iterate over 1D structures. This is why `begin()/end()` methods are disabled for 2D expressions. Iterating over all coefficients of a 2D expression is still easily accomplished by creating a 1D linear view through `reshaped()`:
    Example:Output:
    \include Tutorial_range_for_loop_2d_cxx11.cpp \verbinclude Tutorial_range_for_loop_2d_cxx11.out
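    For instance, a minimal inline sketch (independent of the included example; values are arbitrary) summing all coefficients of a matrix through a 1D reshaped view:
    \code
    Eigen::Matrix2i m;
    m << 1, 2,
         3, 4;
    int sum = 0;
    for (int x : m.reshaped())  // column-major linear view over all coefficients
      sum += x;                 // sum == 10 after the loop
    \endcode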
    \section TutorialSTLRowsColumns Iterating over rows or columns of 2D arrays and matrices It is also possible to get iterators over rows or columns of 2D expressions. Those are available through the `rowwise()` and `colwise()` proxies. Here is an example sorting each row of a matrix:
    Example:Output:
    \include Tutorial_std_sort_rows_cxx11.cpp \verbinclude Tutorial_std_sort_rows_cxx11.out
    */ } piqp-octave/src/eigen/doc/TutorialReshape.dox0000644000175100001770000000564514107270226021140 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TutorialReshape Reshape Since the version 3.4, %Eigen exposes convenient methods to reshape a matrix to another matrix of different sizes or vector. All cases are handled via the DenseBase::reshaped(NRowsType,NColsType) and DenseBase::reshaped() functions. Those functions do not perform in-place reshaping, but instead return a view on the input expression. \eigenAutoToc \section TutorialReshapeMat2Mat Reshaped 2D views The more general reshaping transformation is handled via: `reshaped(nrows,ncols)`. Here is an example reshaping a 4x4 matrix to a 2x8 one:
    Example:Output:
    \include MatrixBase_reshaped_int_int.cpp \verbinclude MatrixBase_reshaped_int_int.out
    By default, the input coefficients are always interpreted in column-major order regardless of the storage order of the input expression. For more control on ordering, compile-time sizes, and automatic size deduction, please see the documentation of DenseBase::reshaped(NRowsType,NColsType) that contains all the details with many examples. \section TutorialReshapeMat2Vec 1D linear views A very common usage of reshaping is to create a 1D linear view over a given 2D matrix or expression. In this case, sizes can be deduced and thus omitted as in the following example:
    Example:
    \include MatrixBase_reshaped_to_vector.cpp
    Output:
    \verbinclude MatrixBase_reshaped_to_vector.out
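    As an additional inline sketch (matrix values are arbitrary), the 1D view can be copied into a plain vector:
    \code
    Eigen::MatrixXi m(2, 2);
    m << 1, 2,
         3, 4;
    Eigen::VectorXi v = m.reshaped(); // column-major order: contains 1, 3, 2, 4
    \endcode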
    This shortcut always returns a column vector and by default input coefficients are always interpreted in column-major order. Again, see the documentation of DenseBase::reshaped() for more control on the ordering. \section TutorialReshapeInPlace Reshaping in place The above examples create reshaped views, but what about reshaping a given matrix in place? Of course this task is only conceivable for matrices and arrays having runtime dimensions. In many cases, this can be accomplished via PlainObjectBase::resize(Index,Index):
    Example:
    \include Tutorial_reshaped_vs_resize_1.cpp
    Output:
    \verbinclude Tutorial_reshaped_vs_resize_1.out
    However, beware that unlike \c reshaped, the result of \c resize depends on the input storage order. It thus behaves similarly to `reshaped<AutoOrder>`:
    Example:
    \include Tutorial_reshaped_vs_resize_2.cpp
    Output:
    \verbinclude Tutorial_reshaped_vs_resize_2.out
    Finally, assigning a reshaped matrix to itself is currently not supported and will result to undefined-behavior because of \link TopicAliasing aliasing \endlink. The following is forbidden: \code A = A.reshaped(2,8); \endcode This is OK: \code A = A.reshaped(2,8).eval(); \endcode */ } piqp-octave/src/eigen/doc/TutorialMatrixClass.dox0000644000175100001770000003256014107270226021777 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TutorialMatrixClass The Matrix class \eigenAutoToc In Eigen, all matrices and vectors are objects of the Matrix template class. Vectors are just a special case of matrices, with either 1 row or 1 column. \section TutorialMatrixFirst3Params The first three template parameters of Matrix The Matrix class takes six template parameters, but for now it's enough to learn about the first three first parameters. The three remaining parameters have default values, which for now we will leave untouched, and which we \ref TutorialMatrixOptTemplParams "discuss below". The three mandatory template parameters of Matrix are: \code Matrix \endcode \li \c Scalar is the scalar type, i.e. the type of the coefficients. That is, if you want a matrix of floats, choose \c float here. See \ref TopicScalarTypes "Scalar types" for a list of all supported scalar types and for how to extend support to new types. \li \c RowsAtCompileTime and \c ColsAtCompileTime are the number of rows and columns of the matrix as known at compile time (see \ref TutorialMatrixDynamic "below" for what to do if the number is not known at compile time). We offer a lot of convenience typedefs to cover the usual cases. For example, \c Matrix4f is a 4x4 matrix of floats. Here is how it is defined by Eigen: \code typedef Matrix Matrix4f; \endcode We discuss \ref TutorialMatrixTypedefs "below" these convenience typedefs. \section TutorialMatrixVectors Vectors As mentioned above, in Eigen, vectors are just a special case of matrices, with either 1 row or 1 column. The case where they have 1 column is the most common; such vectors are called column-vectors, often abbreviated as just vectors. In the other case where they have 1 row, they are called row-vectors. For example, the convenience typedef \c Vector3f is a (column) vector of 3 floats. It is defined as follows by Eigen: \code typedef Matrix Vector3f; \endcode We also offer convenience typedefs for row-vectors, for example: \code typedef Matrix RowVector2i; \endcode \section TutorialMatrixDynamic The special value Dynamic Of course, Eigen is not limited to matrices whose dimensions are known at compile time. The \c RowsAtCompileTime and \c ColsAtCompileTime template parameters can take the special value \c Dynamic which indicates that the size is unknown at compile time, so must be handled as a run-time variable. In Eigen terminology, such a size is referred to as a \em dynamic \em size; while a size that is known at compile time is called a \em fixed \em size. For example, the convenience typedef \c MatrixXd, meaning a matrix of doubles with dynamic size, is defined as follows: \code typedef Matrix MatrixXd; \endcode And similarly, we define a self-explanatory typedef \c VectorXi as follows: \code typedef Matrix VectorXi; \endcode You can perfectly have e.g. a fixed number of rows with a dynamic number of columns, as in: \code Matrix \endcode \section TutorialMatrixConstructors Constructors A default constructor is always available, never performs any dynamic memory allocation, and never initializes the matrix coefficients. 
You can do: \code Matrix3f a; MatrixXf b; \endcode Here, \li \c a is a 3-by-3 matrix, with a plain float[9] array of uninitialized coefficients, \li \c b is a dynamic-size matrix whose size is currently 0-by-0, and whose array of coefficients hasn't yet been allocated at all. Constructors taking sizes are also available. For matrices, the number of rows is always passed first. For vectors, just pass the vector size. They allocate the array of coefficients with the given size, but don't initialize the coefficients themselves: \code MatrixXf a(10,15); VectorXf b(30); \endcode Here, \li \c a is a 10x15 dynamic-size matrix, with allocated but currently uninitialized coefficients. \li \c b is a dynamic-size vector of size 30, with allocated but currently uninitialized coefficients. In order to offer a uniform API across fixed-size and dynamic-size matrices, it is legal to use these constructors on fixed-size matrices, even if passing the sizes is useless in this case. So this is legal: \code Matrix3f a(3,3); \endcode and is a no-operation. Matrices and vectors can also be initialized from lists of coefficients. Prior to C++11, this feature is limited to small fixed-size column or vectors up to size 4: \code Vector2d a(5.0, 6.0); Vector3d b(5.0, 6.0, 7.0); Vector4d c(5.0, 6.0, 7.0, 8.0); \endcode If C++11 is enabled, fixed-size column or row vectors of arbitrary size can be initialized by passing an arbitrary number of coefficients: \code Vector2i a(1, 2); // A column vector containing the elements {1, 2} Matrix b {1, 2, 3, 4, 5}; // A row-vector containing the elements {1, 2, 3, 4, 5} Matrix c = {1, 2, 3, 4, 5}; // A column vector containing the elements {1, 2, 3, 4, 5} \endcode In the general case of matrices and vectors with either fixed or runtime sizes, coefficients have to be grouped by rows and passed as an initializer list of initializer list (\link Matrix::Matrix(const std::initializer_list>&) details \endlink): \code MatrixXi a { // construct a 2x2 matrix {1, 2}, // first row {3, 4} // second row }; Matrix b { {2, 3, 4}, {5, 6, 7}, }; \endcode For column or row vectors, implicit transposition is allowed. This means that a column vector can be initialized from a single row: \code VectorXd a {{1.5, 2.5, 3.5}}; // A column-vector with 3 coefficients RowVectorXd b {{1.0, 2.0, 3.0, 4.0}}; // A row-vector with 4 coefficients \endcode \section TutorialMatrixCoeffAccessors Coefficient accessors The primary coefficient accessors and mutators in Eigen are the overloaded parenthesis operators. For matrices, the row index is always passed first. For vectors, just pass one index. The numbering starts at 0. This example is self-explanatory:
    Example:Output:
    \include tut_matrix_coefficient_accessors.cpp \verbinclude tut_matrix_coefficient_accessors.out
    Note that the syntax m(index) is not restricted to vectors, it is also available for general matrices, meaning index-based access in the array of coefficients. This however depends on the matrix's storage order. All Eigen matrices default to column-major storage order, but this can be changed to row-major, see \ref TopicStorageOrders "Storage orders". The operator[] is also overloaded for index-based access in vectors, but keep in mind that C++ doesn't allow operator[] to take more than one argument. We restrict operator[] to vectors, because an awkwardness in the C++ language would make matrix[i,j] compile to the same thing as matrix[j] ! \section TutorialMatrixCommaInitializer Comma-initialization %Matrix and vector coefficients can be conveniently set using the so-called \em comma-initializer syntax. For now, it is enough to know this example:
    Example:Output:
    \include Tutorial_commainit_01.cpp \verbinclude Tutorial_commainit_01.out
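    In the spirit of the included example (a sketch only; the actual example file may differ slightly), the comma-initializer fills a matrix coefficient by coefficient, row by row:
    \code
    Eigen::Matrix3f m;
    m << 1, 2, 3,
         4, 5, 6,
         7, 8, 9;  // coefficients are listed row by row
    \endcode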
    The right-hand side can also contain matrix expressions as discussed in \ref TutorialAdvancedInitialization "this page". \section TutorialMatrixSizesResizing Resizing The current size of a matrix can be retrieved by \link EigenBase::rows() rows()\endlink, \link EigenBase::cols() cols() \endlink and \link EigenBase::size() size()\endlink. These methods return the number of rows, the number of columns and the number of coefficients, respectively. Resizing a dynamic-size matrix is done by the \link PlainObjectBase::resize(Index,Index) resize() \endlink method.
    Example:Output:
    \include tut_matrix_resize.cpp \verbinclude tut_matrix_resize.out
    The resize() method is a no-operation if the actual matrix size doesn't change; otherwise it is destructive: the values of the coefficients may change. If you want a conservative variant of resize() which does not change the coefficients, use \link PlainObjectBase::conservativeResize() conservativeResize()\endlink, see \ref TopicResizing "this page" for more details. All these methods are still available on fixed-size matrices, for the sake of API uniformity. Of course, you can't actually resize a fixed-size matrix. Trying to change a fixed size to an actually different value will trigger an assertion failure; but the following code is legal:
    Example:Output:
    \include tut_matrix_resize_fixed_size.cpp \verbinclude tut_matrix_resize_fixed_size.out
    \section TutorialMatrixAssignment Assignment and resizing Assignment is the action of copying a matrix into another, using \c operator=. Eigen resizes the matrix on the left-hand side automatically so that it matches the size of the matrix on the right-hand side. For example:
    Example:Output:
    \include tut_matrix_assignment_resizing.cpp \verbinclude tut_matrix_assignment_resizing.out
    Of course, if the left-hand side is of fixed size, resizing it is not allowed. If you do not want this automatic resizing to happen (for example for debugging purposes), you can disable it, see \ref TopicResizing "this page". \section TutorialMatrixFixedVsDynamic Fixed vs. Dynamic size When should one use fixed sizes (e.g. \c Matrix4f), and when should one prefer dynamic sizes (e.g. \c MatrixXf)? The simple answer is: use fixed sizes for very small sizes where you can, and use dynamic sizes for larger sizes or where you have to. For small sizes, especially for sizes smaller than (roughly) 16, using fixed sizes is hugely beneficial to performance, as it allows Eigen to avoid dynamic memory allocation and to unroll loops. Internally, a fixed-size Eigen matrix is just a plain array, i.e. doing \code Matrix4f mymatrix; \endcode really amounts to just doing \code float mymatrix[16]; \endcode so this really has zero runtime cost. By contrast, the array of a dynamic-size matrix is always allocated on the heap, so doing \code MatrixXf mymatrix(rows,columns); \endcode amounts to doing \code float *mymatrix = new float[rows*columns]; \endcode and in addition to that, the MatrixXf object stores its number of rows and columns as member variables. The limitation of using fixed sizes, of course, is that this is only possible when you know the sizes at compile time. Also, for large enough sizes, say for sizes greater than (roughly) 32, the performance benefit of using fixed sizes becomes negligible. Worse, trying to create a very large matrix using fixed sizes inside a function could result in a stack overflow, since Eigen will try to allocate the array automatically as a local variable, and this is normally done on the stack. Finally, depending on circumstances, Eigen can also be more aggressive trying to vectorize (use SIMD instructions) when dynamic sizes are used, see \ref TopicVectorization "Vectorization". \section TutorialMatrixOptTemplParams Optional template parameters We mentioned at the beginning of this page that the Matrix class takes six template parameters, but so far we only discussed the first three. The remaining three parameters are optional. Here is the complete list of template parameters: \code Matrix \endcode \li \c Options is a bit field. Here, we discuss only one bit: \c RowMajor. It specifies that the matrices of this type use row-major storage order; by default, the storage order is column-major. See the page on \ref TopicStorageOrders "storage orders". For example, this type means row-major 3x3 matrices: \code Matrix \endcode \li \c MaxRowsAtCompileTime and \c MaxColsAtCompileTime are useful when you want to specify that, even though the exact sizes of your matrices are not known at compile time, a fixed upper bound is known at compile time. The biggest reason why you might want to do that is to avoid dynamic memory allocation. For example the following matrix type uses a plain array of 12 floats, without dynamic memory allocation: \code Matrix \endcode \section TutorialMatrixTypedefs Convenience typedefs Eigen defines the following Matrix typedefs: \li MatrixNt for Matrix. For example, MatrixXi for Matrix. \li VectorNt for Matrix. For example, Vector2f for Matrix. \li RowVectorNt for Matrix. For example, RowVector3d for Matrix. Where: \li N can be any one of \c 2, \c 3, \c 4, or \c X (meaning \c Dynamic). \li t can be any one of \c i (meaning int), \c f (meaning float), \c d (meaning double), \c cf (meaning complex), or \c cd (meaning complex). 
The fact that typedefs are only defined for these five types doesn't mean that they are the only supported scalar types. For example, all standard integer types are supported, see \ref TopicScalarTypes "Scalar types". */ } piqp-octave/src/eigen/doc/ftv2pnode.png0000644000175100001770000000034514107270226017716 0ustar runnerdocker‰PNG  IHDRɪ|¬IDATxí=QF‘Ø¥D«ÔkÄ:‰F©PK؃=V@§Õ³ Õ8SHxñÌ0bnrróŠ{ò½¿¾’$ ÀÏTŠP  ö¼X¬OÛd6êìòð"°²S´±O¥B€(¡àQé)š+YĈ ÒªËRÉÐ>VtÉsˆm9(ê„䜥k‚-@ȧ-Ü$ó b Ò[he ¿Kp-ôl|CùÿApRG'rÍ­aIEND®B`‚piqp-octave/src/eigen/doc/SparseLinearSystems.dox0000644000175100001770000004667614107270226022016 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TopicSparseSystems Solving Sparse Linear Systems In Eigen, there are several methods available to solve linear systems when the coefficient matrix is sparse. Because of the special representation of this class of matrices, special care should be taken in order to get a good performance. See \ref TutorialSparse for a detailed introduction about sparse matrices in Eigen. This page lists the sparse solvers available in Eigen. The main steps that are common to all these linear solvers are introduced as well. Depending on the properties of the matrix, the desired accuracy, the end-user is able to tune those steps in order to improve the performance of its code. Note that it is not required to know deeply what's hiding behind these steps: the last section presents a benchmark routine that can be easily used to get an insight on the performance of all the available solvers. \eigenAutoToc \section TutorialSparseSolverList List of sparse solvers %Eigen currently provides a wide set of built-in solvers, as well as wrappers to external solver libraries. They are summarized in the following tables: \subsection TutorialSparseSolverList_Direct Built-in direct solvers
    ClassSolver kindMatrix kindFeatures related to performance License

    Notes

    SimplicialLLT \n \#include<Eigen/SparseCholesky> Direct LLt factorization SPD Fill-in reducing LGPL SimplicialLDLT is often preferable
    SimplicialLDLT \n \#include<Eigen/SparseCholesky> Direct LDLt factorization SPD Fill-in reducing LGPL Recommended for very sparse and not too large problems (e.g., 2D Poisson eq.)
    SparseLU \n \#include<Eigen/SparseLU> LU factorization Square Fill-in reducing, Leverage fast dense algebra MPL2 optimized for small and large problems with irregular patterns
    SparseQR \n \#include<Eigen/SparseQR> QR factorization Any, rectangular Fill-in reducing MPL2 recommended for least-square problems, has a basic rank-revealing feature
    \subsection TutorialSparseSolverList_Iterative Built-in iterative solvers
    ClassSolver kindMatrix kindSupported preconditioners, [default] License

    Notes

    ConjugateGradient \n \#include<Eigen/IterativeLinearSolvers> Classic iterative CG SPD IdentityPreconditioner, [DiagonalPreconditioner], IncompleteCholesky MPL2 Recommended for large symmetric problems (e.g., 3D Poisson eq.)
    LeastSquaresConjugateGradient \n \#include<Eigen/IterativeLinearSolvers> CG for rectangular least-square problem Rectangular IdentityPreconditioner, [LeastSquareDiagonalPreconditioner] MPL2 Solve for min |Ax-b|^2 without forming A'A
    BiCGSTAB \n \#include<Eigen/IterativeLinearSolvers> Iterative stabilized bi-conjugate gradient Square IdentityPreconditioner, [DiagonalPreconditioner], IncompleteLUT MPL2 To speed up the convergence, try it with the \ref IncompleteLUT preconditioner.
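    For example, here is a minimal sketch (assuming \c A is an already assembled SparseMatrix<double> and \c b a VectorXd of matching size, which are not defined here) combining BiCGSTAB with the IncompleteLUT preconditioner:
    \code
    #include <Eigen/Sparse>
    // assumption: SparseMatrix<double> A and VectorXd b are already filled
    Eigen::BiCGSTAB<Eigen::SparseMatrix<double>, Eigen::IncompleteLUT<double> > solver;
    solver.compute(A);
    Eigen::VectorXd x = solver.solve(b);
    \endcode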
    \subsection TutorialSparseSolverList_Wrapper Wrappers to external solvers
    ClassModuleSolver kindMatrix kindFeatures related to performance Dependencies,License

    Notes

    PastixLLT \n PastixLDLT \n PastixLU\link PaStiXSupport_Module PaStiXSupport \endlinkDirect LLt, LDLt, LU factorizationsSPD \n SPD \n SquareFill-in reducing, Leverage fast dense algebra, Multithreading Requires the PaStiX package, \b CeCILL-C optimized for tough problems and symmetric patterns
    CholmodSupernodalLLT\link CholmodSupport_Module CholmodSupport \endlinkDirect LLt factorizationSPDFill-in reducing, Leverage fast dense algebra Requires the SuiteSparse package, \b GPL
    UmfPackLU\link UmfPackSupport_Module UmfPackSupport \endlinkDirect LU factorizationSquareFill-in reducing, Leverage fast dense algebra Requires the SuiteSparse package, \b GPL
    KLU\link KLUSupport_Module KLUSupport \endlinkDirect LU factorizationSquareFill-in reducing, suitted for circuit simulation Requires the SuiteSparse package, \b GPL
    SuperLU\link SuperLUSupport_Module SuperLUSupport \endlinkDirect LU factorizationSquareFill-in reducing, Leverage fast dense algebra Requires the SuperLU library, (BSD-like)
    SPQR\link SPQRSupport_Module SPQRSupport \endlink QR factorization Any, rectangularfill-in reducing, multithreaded, fast dense algebra requires the SuiteSparse package, \b GPL recommended for linear least-squares problems, has a rank-revealing feature
    PardisoLLT \n PardisoLDLT \n PardisoLU\link PardisoSupport_Module PardisoSupport \endlinkDirect LLt, LDLt, LU factorizationsSPD \n SPD \n SquareFill-in reducing, Leverage fast dense algebra, Multithreading Requires the Intel MKL package, \b Proprietary optimized for tough problems patterns, see also \link TopicUsingIntelMKL using MKL with Eigen \endlink
    Here \c SPD means symmetric positive definite. \section TutorialSparseSolverConcept Sparse solver concept All these solvers follow the same general concept. Here is a typical and general example: \code #include // ... SparseMatrix A; // fill A VectorXd b, x; // fill b // solve Ax = b SolverClassName > solver; solver.compute(A); if(solver.info()!=Success) { // decomposition failed return; } x = solver.solve(b); if(solver.info()!=Success) { // solving failed return; } // solve for another right hand side: x1 = solver.solve(b1); \endcode For \c SPD solvers, a second optional template argument allows to specify which triangular part have to be used, e.g.: \code #include ConjugateGradient, Eigen::Upper> solver; x = solver.compute(A).solve(b); \endcode In the above example, only the upper triangular part of the input matrix A is considered for solving. The opposite triangle might either be empty or contain arbitrary values. In the case where multiple problems with the same sparsity pattern have to be solved, then the "compute" step can be decomposed as follow: \code SolverClassName > solver; solver.analyzePattern(A); // for this step the numerical values of A are not used solver.factorize(A); x1 = solver.solve(b1); x2 = solver.solve(b2); ... A = ...; // modify the values of the nonzeros of A, the nonzeros pattern must stay unchanged solver.factorize(A); x1 = solver.solve(b1); x2 = solver.solve(b2); ... \endcode The compute() method is equivalent to calling both analyzePattern() and factorize(). Each solver provides some specific features, such as determinant, access to the factors, controls of the iterations, and so on. More details are available in the documentations of the respective classes. Finally, most of the iterative solvers, can also be used in a \b matrix-free context, see the following \link MatrixfreeSolverExample example \endlink. \section TheSparseCompute The Compute Step In the compute() function, the matrix is generally factorized: LLT for self-adjoint matrices, LDLT for general hermitian matrices, LU for non hermitian matrices and QR for rectangular matrices. These are the results of using direct solvers. For this class of solvers precisely, the compute step is further subdivided into analyzePattern() and factorize(). The goal of analyzePattern() is to reorder the nonzero elements of the matrix, such that the factorization step creates less fill-in. This step exploits only the structure of the matrix. Hence, the results of this step can be used for other linear systems where the matrix has the same structure. Note however that sometimes, some external solvers (like SuperLU) require that the values of the matrix are set in this step, for instance to equilibrate the rows and columns of the matrix. In this situation, the results of this step should not be used with other matrices. Eigen provides a limited set of methods to reorder the matrix in this step, either built-in (COLAMD, AMD) or external (METIS). These methods are set in template parameter list of the solver : \code DirectSolverClassName, OrderingMethod > solver; \endcode See the \link OrderingMethods_Module OrderingMethods module \endlink for the list of available methods and the associated options. In factorize(), the factors of the coefficient matrix are computed. This step should be called each time the values of the matrix change. However, the structural pattern of the matrix should not change between multiple calls. For iterative solvers, the compute step is used to eventually setup a preconditioner. 
For instance, with the ILUT preconditioner, the incomplete factors L and U are computed in this step. Remember that, basically, the goal of the preconditioner is to speed up the convergence of an iterative method by solving a modified linear system where the coefficient matrix has more clustered eigenvalues. For real problems, an iterative solver should always be used with a preconditioner. In Eigen, a preconditioner is selected by simply adding it as a template parameter to the iterative solver object.
\code
IterativeSolverClassName<SparseMatrix<double>, PreconditionerName<Scalar> > solver;
\endcode
The member function preconditioner() returns a read-write reference to the preconditioner to directly interact with it. See the \link IterativeLinearSolvers_Module Iterative solvers module \endlink and the documentation of each class for the list of available methods.

\section TheSparseSolve The Solve step

The solve() function computes the solution of the linear systems with one or many right hand sides.
\code
X = solver.solve(B);
\endcode
Here, B can be a vector or a matrix where the columns form the different right hand sides. The solve() function can be called several times as well, for instance when all the right hand sides are not available at once.
\code
x1 = solver.solve(b1);
// Get the second right hand side b2
x2 = solver.solve(b2);
//  ...
\endcode
For direct methods, the solutions are computed at machine precision. Sometimes, the solution need not be that accurate. In this case, the iterative methods are more suitable and the desired accuracy can be set before the solve step using \b setTolerance(). For all the available functions, please refer to the documentation of the \link IterativeLinearSolvers_Module Iterative solvers module \endlink.

\section BenchmarkRoutine Most of the time, all you need is to know how much time it will take to solve your system, and hopefully, what is the most suitable solver. In Eigen, we provide a benchmark routine that can be used for this purpose. It is very easy to use. In the build directory, navigate to bench/spbench and compile the routine by typing \b make \e spbenchsolver. Run it with the --help option to get the list of all available options. Basically, the matrices to test should be in MatrixMarket Coordinate format, and the routine returns the statistics from all available solvers in Eigen.

To export your matrices and right-hand-side vectors in the matrix-market format, you can use the unsupported SparseExtra module:
\code
#include <unsupported/Eigen/SparseExtra>
...
Eigen::saveMarket(A, "filename.mtx");
Eigen::saveMarket(A, "filename_SPD.mtx", Eigen::Symmetric); // if A is symmetric-positive-definite
Eigen::saveMarketVector(B, "filename_b.mtx");
\endcode
The following table gives an example of XML statistics from several Eigen built-in and external solvers.
    Matrix N NNZ UMFPACK SUPERLU PASTIX LU BiCGSTAB BiCGSTAB+ILUT GMRES+ILUT LDLT CHOLMOD LDLT PASTIX LDLT LLT CHOLMOD SP LLT CHOLMOD LLT PASTIX LLT CG
    vector_graphics 12855 72069 Compute Time 0.0254549 0.0215677 0.0701827 0.000153388 0.0140107 0.0153709 0.0101601 0.00930502 0.0649689
    Solve Time 0.00337835 0.000951826 0.00484373 0.0374886 0.0046445 0.00847754 0.000541813 0.000293696 0.00485376
    Total Time 0.0288333 0.0225195 0.0750265 0.037642 0.0186552 0.0238484 0.0107019 0.00959871 0.0698227
    Error(Iter) 1.299e-16 2.04207e-16 4.83393e-15 3.94856e-11 (80) 1.03861e-12 (3) 5.81088e-14 (6) 1.97578e-16 1.83927e-16 4.24115e-15
    poisson_SPD 19788 308232 Compute Time 0.425026 1.82378 0.617367 0.000478921 1.34001 1.33471 0.796419 0.857573 0.473007 0.814826 0.184719 0.861555 0.470559 0.000458188
    Solve Time 0.0280053 0.0194402 0.0268747 0.249437 0.0548444 0.0926991 0.00850204 0.0053171 0.0258932 0.00874603 0.00578155 0.00530361 0.0248942 0.239093
    Total Time 0.453031 1.84322 0.644241 0.249916 1.39486 1.42741 0.804921 0.862891 0.4989 0.823572 0.190501 0.866859 0.495453 0.239551
    Error(Iter) 4.67146e-16 1.068e-15 1.3397e-15 6.29233e-11 (201) 3.68527e-11 (6) 3.3168e-15 (16) 1.86376e-15 1.31518e-16 1.42593e-15 3.45361e-15 3.14575e-16 2.21723e-15 7.21058e-16 9.06435e-12 (261)
    sherman2 1080 23094 Compute Time 0.00631754 0.015052 0.0247514 - 0.0214425 0.0217988
    Solve Time 0.000478424 0.000337998 0.0010291 - 0.00243152 0.00246152
    Total Time 0.00679597 0.01539 0.0257805 - 0.023874 0.0242603
    Error(Iter) 1.83099e-15 8.19351e-15 2.625e-14 1.3678e+69 (1080) 4.1911e-12 (7) 5.0299e-13 (12)
    bcsstk01_SPD 48 400 Compute Time 0.000169079 0.00010789 0.000572538 1.425e-06 9.1612e-05 8.3985e-05 5.6489e-05 7.0913e-05 0.000468251 5.7389e-05 8.0212e-05 5.8394e-05 0.000463017 1.333e-06
    Solve Time 1.2288e-05 1.1124e-05 0.000286387 8.5896e-05 1.6381e-05 1.6984e-05 3.095e-06 4.115e-06 0.000325438 3.504e-06 7.369e-06 3.454e-06 0.000294095 6.0516e-05
    Total Time 0.000181367 0.000119014 0.000858925 8.7321e-05 0.000107993 0.000100969 5.9584e-05 7.5028e-05 0.000793689 6.0893e-05 8.7581e-05 6.1848e-05 0.000757112 6.1849e-05
    Error(Iter) 1.03474e-16 2.23046e-16 2.01273e-16 4.87455e-07 (48) 1.03553e-16 (2) 3.55965e-16 (2) 2.48189e-16 1.88808e-16 1.97976e-16 2.37248e-16 1.82701e-16 2.71474e-16 2.11322e-16 3.547e-09 (48)
    sherman1 1000 3750 Compute Time 0.00228805 0.00209231 0.00528268 9.846e-06 0.00163522 0.00162155 0.000789259 0.000804495 0.00438269
    Solve Time 0.000213788 9.7983e-05 0.000938831 0.00629835 0.000361764 0.00078794 4.3989e-05 2.5331e-05 0.000917166
    Total Time 0.00250184 0.00219029 0.00622151 0.0063082 0.00199698 0.00240949 0.000833248 0.000829826 0.00529986
    Error(Iter) 1.16839e-16 2.25968e-16 2.59116e-16 3.76779e-11 (248) 4.13343e-11 (4) 2.22347e-14 (10) 2.05861e-16 1.83555e-16 1.02917e-15
    young1c 841 4089 Compute Time 0.00235843 0.00217228 0.00568075 1.2735e-05 0.00264866 0.00258236
    Solve Time 0.000329599 0.000168634 0.00080118 0.0534738 0.00187193 0.00450211
    Total Time 0.00268803 0.00234091 0.00648193 0.0534865 0.00452059 0.00708447
    Error(Iter) 1.27029e-16 2.81321e-16 5.0492e-15 8.0507e-11 (706) 3.00447e-12 (8) 1.46532e-12 (16)
    mhd1280b 1280 22778 Compute Time 0.00234898 0.00207079 0.00570918 2.5976e-05 0.00302563 0.00298036 0.00144525 0.000919922 0.00426444
    Solve Time 0.00103392 0.000211911 0.00105 0.0110432 0.000628287 0.00392089 0.000138303 6.2446e-05 0.00097564
    Total Time 0.0033829 0.0022827 0.00675918 0.0110692 0.00365392 0.00690124 0.00158355 0.000982368 0.00524008
    Error(Iter) 1.32953e-16 3.08646e-16 6.734e-16 8.83132e-11 (40) 1.51153e-16 (1) 6.08556e-16 (8) 1.89264e-16 1.97477e-16 6.68126e-09
    crashbasis 160000 1750416 Compute Time 3.2019 5.7892 15.7573 0.00383515 3.1006 3.09921
    Solve Time 0.261915 0.106225 0.402141 1.49089 0.24888 0.443673
    Total Time 3.46381 5.89542 16.1594 1.49473 3.34948 3.54288
    Error(Iter) 1.76348e-16 4.58395e-16 1.67982e-14 8.64144e-11 (61) 8.5996e-12 (2) 6.04042e-14 (5)
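    To try a single solver on your own problem without the spbenchsolver tool, the whole workflow described in this page fits in a few lines. The following is only a minimal, self-contained sketch: the identity matrix stands in for a real SPD matrix, and the solver, preconditioner, tolerance and sizes are illustrative choices rather than recommendations.
\code
#include <iostream>
#include <Eigen/Sparse>
using namespace Eigen;

int main()
{
  const int n = 100;
  SparseMatrix<double> A(n, n);
  A.setIdentity();                           // stand-in for a real SPD matrix
  VectorXd b = VectorXd::Constant(n, 1.0);   // stand-in for a real right-hand side

  ConjugateGradient<SparseMatrix<double>, Lower|Upper, IncompleteCholesky<double> > cg;
  cg.setTolerance(1e-8);                     // desired accuracy, see setTolerance() above
  cg.setMaxIterations(500);
  cg.compute(A);                             // the preconditioner is set up in this step
  VectorXd x = cg.solve(b);
  if(cg.info() != Success) return 1;         // decomposition or solve failed
  std::cout << "#iterations: " << cg.iterations()
            << "   estimated error: " << cg.error() << std::endl;
  return 0;
}
\endcode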
    */ } piqp-octave/src/eigen/doc/CoeffwiseMathFunctionsTable.dox0000644000175100001770000004440414107270226023406 0ustar runnerdockernamespace Eigen { /** \eigenManualPage CoeffwiseMathFunctions Catalog of coefficient-wise math functions This table presents a catalog of the coefficient-wise math functions supported by %Eigen. In this table, \c a, \c b, refer to Array objects or expressions, and \c m refers to a linear algebra Matrix/Vector object. Standard scalar types are abbreviated as follows: - \c int: \c i32 - \c float: \c f - \c double: \c d - \c std::complex<float>: \c cf - \c std::complex<double>: \c cd For each row, the first column lists the equivalent calls for arrays, and matrices when supported. Of course, all functions are available for matrices by first casting the matrix as an array: \c m.array(). The third column gives some hints on the underlying scalar implementation. In most cases, %Eigen does not implement the math function itself but relies on the STL for standard scalar types, or user-provided functions for custom scalar types. For instance, some simply call the respective function of the STL while preserving argument-dependent lookup for custom types. The following: \code using std::foo; foo(a[i]); \endcode means that the STL's function \c std::foo will be potentially called if it is compatible with the underlying scalar type. If not, then the user must ensure that an overload of the function foo is available for the given scalar type (usually defined in the same namespace as the given scalar type). This also means that, unless specified, if the function \c std::foo is available only in some recent c++ versions (e.g., c++11), then the respective %Eigen's function/method will be usable on standard types only if the compiler supports the required c++ version.
    API Description Default scalar implementation SIMD
    Basic operations
    \anchor cwisetable_abs a.\link ArrayBase::abs abs\endlink(); \n \link Eigen::abs abs\endlink(a); \n m.\link MatrixBase::cwiseAbs cwiseAbs\endlink(); absolute value (\f$ |a_i| \f$) using std::abs; \n abs(a[i]); SSE2, AVX (i32,f,d)
    \anchor cwisetable_inverse a.\link ArrayBase::inverse inverse\endlink(); \n \link Eigen::inverse inverse\endlink(a); \n m.\link MatrixBase::cwiseInverse cwiseInverse\endlink(); inverse value (\f$ 1/a_i \f$) 1/a[i]; All engines (f,d,fc,fd)
    \anchor cwisetable_conj a.\link ArrayBase::conjugate conjugate\endlink(); \n \link Eigen::conj conj\endlink(a); \n m.\link MatrixBase::conjugate conjugate\endlink(); complex conjugate (\f$ \bar{a_i} \f$),\n no-op for real using std::conj; \n conj(a[i]); All engines (fc,fd)
    \anchor cwisetable_arg a.\link ArrayBase::arg arg\endlink(); \n \link Eigen::arg arg\endlink(a); \n m.\link MatrixBase::cwiseArg cwiseArg\endlink(); phase angle of complex number using std::arg; \n arg(a[i]); All engines (fc,fd)
    Exponential functions
    \anchor cwisetable_exp a.\link ArrayBase::exp exp\endlink(); \n \link Eigen::exp exp\endlink(a); \f$ e \f$ raised to the given power (\f$ e^{a_i} \f$) using std::exp; \n exp(a[i]); SSE2, AVX (f,d)
    \anchor cwisetable_log a.\link ArrayBase::log log\endlink(); \n \link Eigen::log log\endlink(a); natural (base \f$ e \f$) logarithm (\f$ \ln({a_i}) \f$) using std::log; \n log(a[i]); SSE2, AVX (f)
    \anchor cwisetable_log1p a.\link ArrayBase::log1p log1p\endlink(); \n \link Eigen::log1p log1p\endlink(a); natural (base \f$ e \f$) logarithm of 1 plus \n the given number (\f$ \ln({1+a_i}) \f$) built-in generic implementation based on \c log,\n plus \c using \c std::log1p ; \cpp11
    \anchor cwisetable_log10 a.\link ArrayBase::log10 log10\endlink(); \n \link Eigen::log10 log10\endlink(a); base 10 logarithm (\f$ \log_{10}({a_i}) \f$) using std::log10; \n log10(a[i]);
    Power functions
    \anchor cwisetable_pow a.\link ArrayBase::pow pow\endlink(b); \n \link ArrayBase::pow(const Eigen::ArrayBase< Derived > &x, const Eigen::ArrayBase< ExponentDerived > &exponents) pow\endlink(a,b); raises a number to the given power (\f$ a_i ^ {b_i} \f$) \n \c a and \c b can be either an array or scalar. using std::pow; \n pow(a[i],b[i]);\n (plus builtin for integer types)
    \anchor cwisetable_sqrt a.\link ArrayBase::sqrt sqrt\endlink(); \n \link Eigen::sqrt sqrt\endlink(a);\n m.\link MatrixBase::cwiseSqrt cwiseSqrt\endlink(); computes square root (\f$ \sqrt a_i \f$) using std::sqrt; \n sqrt(a[i]); SSE2, AVX (f,d)
    \anchor cwisetable_rsqrt a.\link ArrayBase::rsqrt rsqrt\endlink(); \n \link Eigen::rsqrt rsqrt\endlink(a); reciprocal square root (\f$ 1/{\sqrt a_i} \f$) using std::sqrt; \n 1/sqrt(a[i]); \n SSE2, AVX, AltiVec, ZVector (f,d)\n (approx + 1 Newton iteration)
    \anchor cwisetable_square a.\link ArrayBase::square square\endlink(); \n \link Eigen::square square\endlink(a); computes square power (\f$ a_i^2 \f$) a[i]*a[i] All (i32,f,d,cf,cd)
    \anchor cwisetable_cube a.\link ArrayBase::cube cube\endlink(); \n \link Eigen::cube cube\endlink(a); computes cubic power (\f$ a_i^3 \f$) a[i]*a[i]*a[i] All (i32,f,d,cf,cd)
    \anchor cwisetable_abs2 a.\link ArrayBase::abs2 abs2\endlink(); \n \link Eigen::abs2 abs2\endlink(a);\n m.\link MatrixBase::cwiseAbs2 cwiseAbs2\endlink(); computes the squared absolute value (\f$ |a_i|^2 \f$) real: a[i]*a[i] \n complex: real(a[i])*real(a[i]) \n        + imag(a[i])*imag(a[i]) All (i32,f,d)
    Trigonometric functions
    \anchor cwisetable_sin a.\link ArrayBase::sin sin\endlink(); \n \link Eigen::sin sin\endlink(a); computes sine using std::sin; \n sin(a[i]); SSE2, AVX (f)
    \anchor cwisetable_cos a.\link ArrayBase::cos cos\endlink(); \n \link Eigen::cos cos\endlink(a); computes cosine using std::cos; \n cos(a[i]); SSE2, AVX (f)
    \anchor cwisetable_tan a.\link ArrayBase::tan tan\endlink(); \n \link Eigen::tan tan\endlink(a); computes tangent using std::tan; \n tan(a[i]);
    \anchor cwisetable_asin a.\link ArrayBase::asin asin\endlink(); \n \link Eigen::asin asin\endlink(a); computes arc sine (\f$ \sin^{-1} a_i \f$) using std::asin; \n asin(a[i]);
    \anchor cwisetable_acos a.\link ArrayBase::acos acos\endlink(); \n \link Eigen::acos acos\endlink(a); computes arc cosine (\f$ \cos^{-1} a_i \f$) using std::acos; \n acos(a[i]);
    \anchor cwisetable_atan a.\link ArrayBase::atan atan\endlink(); \n \link Eigen::atan atan\endlink(a); computes arc tangent (\f$ \tan^{-1} a_i \f$) using std::atan; \n atan(a[i]);
    Hyperbolic functions
    \anchor cwisetable_sinh a.\link ArrayBase::sinh sinh\endlink(); \n \link Eigen::sinh sinh\endlink(a); computes hyperbolic sine using std::sinh; \n sinh(a[i]);
    \anchor cwisetable_cosh a.\link ArrayBase::cosh cosh\endlink(); \n \link Eigen::cosh cosh\endlink(a); computes hyperbolic cosine using std::cosh; \n cosh(a[i]);
    \anchor cwisetable_tanh a.\link ArrayBase::tanh tanh\endlink(); \n \link Eigen::tanh tanh\endlink(a); computes hyperbolic tangent using std::tanh; \n tanh(a[i]);
    \anchor cwisetable_asinh a.\link ArrayBase::asinh asinh\endlink(); \n \link Eigen::asinh asinh\endlink(a); computes inverse hyperbolic sine using std::asinh; \n asinh(a[i]);
    \anchor cwisetable_acosh a.\link ArrayBase::acosh acosh\endlink(); \n \link Eigen::acosh acosh\endlink(a); computes inverse hyperbolic cosine using std::acosh; \n acosh(a[i]);
    \anchor cwisetable_atanh a.\link ArrayBase::atanh atanh\endlink(); \n \link Eigen::atanh atanh\endlink(a); computes inverse hyperbolic tangent using std::atanh; \n atanh(a[i]);
    Nearest integer floating point operations
    \anchor cwisetable_ceil a.\link ArrayBase::ceil ceil\endlink(); \n \link Eigen::ceil ceil\endlink(a); nearest integer not less than the given value using std::ceil; \n ceil(a[i]); SSE4,AVX,ZVector (f,d)
    \anchor cwisetable_floor a.\link ArrayBase::floor floor\endlink(); \n \link Eigen::floor floor\endlink(a); nearest integer not greater than the given value using std::floor; \n floor(a[i]); SSE4,AVX,ZVector (f,d)
    \anchor cwisetable_round a.\link ArrayBase::round round\endlink(); \n \link Eigen::round round\endlink(a); nearest integer, \n rounding away from zero in halfway cases built-in generic implementation \n based on \c floor and \c ceil,\n plus \c using \c std::round ; \cpp11 SSE4,AVX,ZVector (f,d)
    \anchor cwisetable_rint a.\link ArrayBase::rint rint\endlink(); \n \link Eigen::rint rint\endlink(a); nearest integer, \n rounding to nearest even in halfway cases built-in generic implementation using \c std::rint ; \cpp11 or \c rintf ; SSE4,AVX (f,d)
    Floating point manipulation functions
    Classification and comparison
    \anchor cwisetable_isfinite a.\link ArrayBase::isFinite isFinite\endlink(); \n \link Eigen::isfinite isfinite\endlink(a); checks if the given number has finite value built-in generic implementation,\n plus \c using \c std::isfinite ; \cpp11
    \anchor cwisetable_isinf a.\link ArrayBase::isInf isInf\endlink(); \n \link Eigen::isinf isinf\endlink(a); checks if the given number is infinite built-in generic implementation,\n plus \c using \c std::isinf ; \cpp11
    \anchor cwisetable_isnan a.\link ArrayBase::isNaN isNaN\endlink(); \n \link Eigen::isnan isnan\endlink(a); checks if the given number is not a number built-in generic implementation,\n plus \c using \c std::isnan ; \cpp11
    Error and gamma functions
    Require \c \#include \c <unsupported/Eigen/SpecialFunctions>
    \anchor cwisetable_erf a.\link ArrayBase::erf erf\endlink(); \n \link Eigen::erf erf\endlink(a); error function using std::erf; \cpp11 \n erf(a[i]);
    \anchor cwisetable_erfc a.\link ArrayBase::erfc erfc\endlink(); \n \link Eigen::erfc erfc\endlink(a); complementary error function using std::erfc; \cpp11 \n erfc(a[i]);
    \anchor cwisetable_lgamma a.\link ArrayBase::lgamma lgamma\endlink(); \n \link Eigen::lgamma lgamma\endlink(a); natural logarithm of the gamma function using std::lgamma; \cpp11 \n lgamma(a[i]);
    \anchor cwisetable_digamma a.\link ArrayBase::digamma digamma\endlink(); \n \link Eigen::digamma digamma\endlink(a); logarithmic derivative of the gamma function built-in for float and double
    \anchor cwisetable_igamma \link Eigen::igamma igamma\endlink(a,x); lower incomplete gamma integral \n \f$ \gamma(a_i,x_i)= \frac{1}{|a_i|} \int_{0}^{x_i}e^{\text{-}t} t^{a_i-1} \mathrm{d} t \f$ built-in for float and double,\n but requires \cpp11
    \anchor cwisetable_igammac \link Eigen::igammac igammac\endlink(a,x); upper incomplete gamma integral \n \f$ \Gamma(a_i,x_i) = \frac{1}{|a_i|} \int_{x_i}^{\infty}e^{\text{-}t} t^{a_i-1} \mathrm{d} t \f$ built-in for float and double,\n but requires \cpp11
    Special functions
    Require \c \#include \c <unsupported/Eigen/SpecialFunctions>
    \anchor cwisetable_polygamma \link Eigen::polygamma polygamma\endlink(n,x); n-th derivative of digamma at x built-in generic based on\n \c lgamma , \c digamma and \c zeta .
    \anchor cwisetable_betainc \link Eigen::betainc betainc\endlink(a,b,x); Incomplete beta function built-in for float and double,\n but requires \cpp11
    \anchor cwisetable_zeta \link Eigen::zeta zeta\endlink(a,b); \n a.\link ArrayBase::zeta zeta\endlink(b); Hurwitz zeta function \n \f$ \zeta(a_i,b_i)=\sum_{k=0}^{\infty}(b_i+k)^{\text{-}a_i} \f$ built-in for float and double
    \anchor cwisetable_ndtri a.\link ArrayBase::ndtri ndtri\endlink(); \n \link Eigen::ndtri ndtri\endlink(a); Inverse of the CDF of the Normal distribution function built-in for float and double
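    As a short usage sketch of the calls catalogued above (the values and sizes are arbitrary):
\code
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
  ArrayXd a = ArrayXd::LinSpaced(5, -2.0, 2.0);
  std::cout << a.abs().sqrt() << "\n\n";    // coefficient-wise |a_i|, then sqrt
  std::cout << a.exp()        << "\n\n";    // coefficient-wise e^{a_i}

  MatrixXd m = MatrixXd::Random(3, 3);
  std::cout << m.cwiseAbs()    << "\n\n";   // functions with a cwise* variant work on matrices directly
  std::cout << m.array().sin() << "\n";     // the others are reached by casting the matrix with .array()
}
\endcode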
    \n */ } piqp-octave/src/eigen/doc/StorageOrders.dox0000644000175100001770000001002714107270226020576 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TopicStorageOrders Storage orders There are two different storage orders for matrices and two-dimensional arrays: column-major and row-major. This page explains these storage orders and how to specify which one should be used. \eigenAutoToc \section TopicStorageOrdersIntro Column-major and row-major storage The entries of a matrix form a two-dimensional grid. However, when the matrix is stored in memory, the entries have to somehow be laid out linearly. There are two main ways to do this, by row and by column. We say that a matrix is stored in \b row-major order if it is stored row by row. The entire first row is stored first, followed by the entire second row, and so on. Consider for example the matrix \f[ A = \begin{bmatrix} 8 & 2 & 2 & 9 \\ 9 & 1 & 4 & 4 \\ 3 & 5 & 4 & 5 \end{bmatrix}. \f] If this matrix is stored in row-major order, then the entries are laid out in memory as follows: \code 8 2 2 9 9 1 4 4 3 5 4 5 \endcode On the other hand, a matrix is stored in \b column-major order if it is stored column by column, starting with the entire first column, followed by the entire second column, and so on. If the above matrix is stored in column-major order, it is laid out as follows: \code 8 9 3 2 1 5 2 4 4 9 4 5 \endcode This example is illustrated by the following Eigen code. It uses the PlainObjectBase::data() function, which returns a pointer to the memory location of the first entry of the matrix.
    Example Output
    \include TopicStorageOrders_example.cpp \verbinclude TopicStorageOrders_example.out
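    For a quick check outside of the documentation build, the following stand-alone sketch (modelled on, but not identical to, the TopicStorageOrders_example.cpp shipped with the docs; the RowMajor/ColMajor options it uses are explained in the next section) prints the raw storage of the same matrix in both orders via data():
\code
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
  Matrix<int, 3, 4, ColMajor> Acolmajor;
  Acolmajor << 8, 2, 2, 9,
               9, 1, 4, 4,
               3, 5, 4, 5;
  Matrix<int, 3, 4, RowMajor> Arowmajor = Acolmajor;

  std::cout << "column-major storage: ";
  for (int i = 0; i < Acolmajor.size(); ++i)
    std::cout << *(Acolmajor.data() + i) << " ";   // 8 9 3 2 1 5 2 4 4 9 4 5
  std::cout << "\nrow-major storage:    ";
  for (int i = 0; i < Arowmajor.size(); ++i)
    std::cout << *(Arowmajor.data() + i) << " ";   // 8 2 2 9 9 1 4 4 3 5 4 5
  std::cout << std::endl;
}
\endcode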
    \section TopicStorageOrdersInEigen Storage orders in Eigen The storage order of a matrix or a two-dimensional array can be set by specifying the \c Options template parameter for Matrix or Array. As \ref TutorialMatrixClass explains, the %Matrix class template has six template parameters, of which three are compulsory (\c Scalar, \c RowsAtCompileTime and \c ColsAtCompileTime) and three are optional (\c Options, \c MaxRowsAtCompileTime and \c MaxColsAtCompileTime). If the \c Options parameter is set to \c RowMajor, then the matrix or array is stored in row-major order; if it is set to \c ColMajor, then it is stored in column-major order. This mechanism is used in the above Eigen program to specify the storage order. If the storage order is not specified, then Eigen defaults to storing the entries in column-major order. This is also the case if one of the convenience typedefs (\c Matrix3f, \c ArrayXXd, etc.) is used. Matrices and arrays using one storage order can be assigned to matrices and arrays using the other storage order, as happens in the above program when \c Arowmajor is initialized using \c Acolmajor. Eigen will reorder the entries automatically. More generally, row-major and column-major matrices can be mixed in an expression as needed. \section TopicStorageOrdersWhich Which storage order to choose? So, which storage order should you use in your program? There is no simple answer to this question; it depends on your application. Here are some points to keep in mind: - Your users may expect you to use a specific storage order. Alternatively, you may use other libraries than Eigen, and these other libraries may expect a certain storage order. In these cases it may be easiest and fastest to use this storage order in your whole program. - Algorithms that traverse a matrix row by row will go faster when the matrix is stored in row-major order because of better data locality. Similarly, column-by-column traversal is faster for column-major matrices. It may be worthwhile to experiment a bit to find out what is faster for your particular application. - The default in Eigen is column-major. Naturally, most of the development and testing of the Eigen library is thus done with column-major matrices. This means that, even though we aim to support column-major and row-major storage orders transparently, the Eigen library may well work best with column-major matrices. */ } piqp-octave/src/eigen/doc/Manual.dox0000644000175100001770000001515114107270226017233 0ustar runnerdocker // This file structures pages and modules into a convenient hierarchical structure. namespace Eigen { /** \page UserManual_CustomizingEigen Extending/Customizing Eigen %Eigen can be extended in several ways, for instance, by defining global methods, by inserting custom methods within main %Eigen's classes through the \ref TopicCustomizing_Plugins "plugin" mechanism, by adding support to \ref TopicCustomizing_CustomScalar "custom scalar types" etc. See below for the respective sub-topics.
- \subpage TopicCustomizing_Plugins - \subpage TopicCustomizing_InheritingMatrix - \subpage TopicCustomizing_CustomScalar - \subpage TopicCustomizing_NullaryExpr - \subpage TopicNewExpressionType \sa \ref TopicPreprocessorDirectives */ /** \page UserManual_Generalities General topics - \subpage TopicFunctionTakingEigenTypes - \subpage TopicPreprocessorDirectives - \subpage TopicAssertions - \subpage TopicMultiThreading - \subpage TopicUsingBlasLapack - \subpage TopicUsingIntelMKL - \subpage TopicCUDA - \subpage TopicPitfalls - \subpage TopicTemplateKeyword - \subpage UserManual_UnderstandingEigen - \subpage TopicCMakeGuide */ /** \page UserManual_UnderstandingEigen Understanding Eigen - \subpage TopicInsideEigenExample - \subpage TopicClassHierarchy - \subpage TopicLazyEvaluation */ /** \page UnclassifiedPages Unclassified pages - \subpage TopicResizing - \subpage TopicVectorization - \subpage TopicEigenExpressionTemplates - \subpage TopicScalarTypes - \subpage GettingStarted - \subpage TutorialSparse_example_details - \subpage TopicWritingEfficientProductExpression - \subpage Experimental */ /** \defgroup Support_modules Support modules * Category of modules which add support for external libraries. */ /** \defgroup DenseMatrixManipulation_chapter Dense matrix and array manipulation */ /** \defgroup DenseMatrixManipulation_Alignement Alignment issues */ /** \defgroup DenseMatrixManipulation_Reference Reference */ /** \addtogroup TutorialMatrixClass \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup TutorialMatrixArithmetic \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup TutorialArrayClass \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup TutorialBlockOperations \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup TutorialSlicingIndexing \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup TutorialAdvancedInitialization \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup TutorialReductionsVisitorsBroadcasting \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup TutorialReshape \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup TutorialSTL \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup TutorialMapClass \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup TopicAliasing \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup TopicStorageOrders \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup DenseMatrixManipulation_Alignement \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup TopicUnalignedArrayAssert \ingroup DenseMatrixManipulation_Alignement */ /** \addtogroup TopicFixedSizeVectorizable \ingroup DenseMatrixManipulation_Alignement */ /** \addtogroup TopicStructHavingEigenMembers \ingroup DenseMatrixManipulation_Alignement */ /** \addtogroup TopicStlContainers \ingroup DenseMatrixManipulation_Alignement */ /** \addtogroup TopicPassingByValue \ingroup DenseMatrixManipulation_Alignement */ /** \addtogroup TopicWrongStackAlignment \ingroup DenseMatrixManipulation_Alignement */ /** \addtogroup DenseMatrixManipulation_Reference \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup Core_Module \ingroup DenseMatrixManipulation_Reference */ /** \addtogroup Jacobi_Module \ingroup DenseMatrixManipulation_Reference */ /** \addtogroup Householder_Module \ingroup DenseMatrixManipulation_Reference */ /** \addtogroup CoeffwiseMathFunctions \ingroup DenseMatrixManipulation_chapter */ /** \addtogroup QuickRefPage \ingroup DenseMatrixManipulation_chapter */ /** \defgroup 
DenseLinearSolvers_chapter Dense linear problems and decompositions */ /** \defgroup DenseLinearSolvers_Reference Reference */ /** \addtogroup TutorialLinearAlgebra \ingroup DenseLinearSolvers_chapter */ /** \addtogroup TopicLinearAlgebraDecompositions \ingroup DenseLinearSolvers_chapter */ /** \addtogroup LeastSquares \ingroup DenseLinearSolvers_chapter */ /** \addtogroup InplaceDecomposition \ingroup DenseLinearSolvers_chapter */ /** \addtogroup DenseDecompositionBenchmark \ingroup DenseLinearSolvers_chapter */ /** \addtogroup DenseLinearSolvers_Reference \ingroup DenseLinearSolvers_chapter */ /** \addtogroup Cholesky_Module \ingroup DenseLinearSolvers_Reference */ /** \addtogroup LU_Module \ingroup DenseLinearSolvers_Reference */ /** \addtogroup QR_Module \ingroup DenseLinearSolvers_Reference */ /** \addtogroup SVD_Module \ingroup DenseLinearSolvers_Reference*/ /** \addtogroup Eigenvalues_Module \ingroup DenseLinearSolvers_Reference */ /** \defgroup Sparse_chapter Sparse linear algebra */ /** \defgroup Sparse_Reference Reference */ /** \addtogroup TutorialSparse \ingroup Sparse_chapter */ /** \addtogroup TopicSparseSystems \ingroup Sparse_chapter */ /** \addtogroup MatrixfreeSolverExample \ingroup Sparse_chapter */ /** \addtogroup Sparse_Reference \ingroup Sparse_chapter */ /** \addtogroup SparseCore_Module \ingroup Sparse_Reference */ /** \addtogroup OrderingMethods_Module \ingroup Sparse_Reference */ /** \addtogroup SparseCholesky_Module \ingroup Sparse_Reference */ /** \addtogroup SparseLU_Module \ingroup Sparse_Reference */ /** \addtogroup SparseQR_Module \ingroup Sparse_Reference */ /** \addtogroup IterativeLinearSolvers_Module \ingroup Sparse_Reference */ /** \addtogroup Sparse_Module \ingroup Sparse_Reference */ /** \addtogroup Support_modules \ingroup Sparse_Reference */ /** \addtogroup SparseQuickRefPage \ingroup Sparse_chapter */ /** \defgroup Geometry_chapter Geometry */ /** \defgroup Geometry_Reference Reference */ /** \addtogroup TutorialGeometry \ingroup Geometry_chapter */ /** \addtogroup Geometry_Reference \ingroup Geometry_chapter */ /** \addtogroup Geometry_Module \ingroup Geometry_Reference */ /** \addtogroup Splines_Module \ingroup Geometry_Reference */ /** \internal \brief Namespace containing low-level routines from the %Eigen library. */ namespace internal {} } piqp-octave/src/eigen/doc/TopicEigenExpressionTemplates.dox0000644000175100001770000000025514107270226024002 0ustar runnerdockernamespace Eigen { /** \page TopicEigenExpressionTemplates Expression templates in Eigen TODO: write this dox page! Is linked from the tutorial on arithmetic ops. */ } piqp-octave/src/eigen/doc/UsingIntelMKL.dox0000644000175100001770000001374114107270226020446 0ustar runnerdocker/* Copyright (c) 2011, Intel Corporation. All rights reserved. Copyright (C) 2011 Gael Guennebaud Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Intel Corporation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ******************************************************************************** * Content : Documentation on the use of Intel MKL through Eigen ******************************************************************************** */ namespace Eigen { /** \page TopicUsingIntelMKL Using Intel® MKL from %Eigen Since %Eigen version 3.1 and later, users can benefit from built-in Intel® Math Kernel Library (MKL) optimizations with an installed copy of Intel MKL 10.3 (or later). Intel MKL provides highly optimized multi-threaded mathematical routines for x86-compatible architectures. Intel MKL is available on Linux, Mac and Windows for both Intel64 and IA32 architectures. \note Intel® MKL is a proprietary software and it is the responsibility of users to buy or register for community (free) Intel MKL licenses for their products. Moreover, the license of the user product has to allow linking to proprietary software that excludes any unmodified versions of the GPL. Using Intel MKL through %Eigen is easy: -# define the \c EIGEN_USE_MKL_ALL macro before including any %Eigen's header -# link your program to MKL libraries (see the MKL linking advisor) -# on a 64bits system, you must use the LP64 interface (not the ILP64 one) When doing so, a number of %Eigen's algorithms are silently substituted with calls to Intel MKL routines. These substitutions apply only for \b Dynamic \b or \b large enough objects with one of the following four standard scalar types: \c float, \c double, \c complex, and \c complex. Operations on other scalar types or mixing reals and complexes will continue to use the built-in algorithms. In addition you can choose which parts will be substituted by defining one or multiple of the following macros:
    \c EIGEN_USE_BLAS Enables the use of external BLAS level 2 and 3 routines
    \c EIGEN_USE_LAPACKE Enables the use of external Lapack routines via the Lapacke C interface to Lapack
    \c EIGEN_USE_LAPACKE_STRICT Same as \c EIGEN_USE_LAPACKE but algorithm of lower robustness are disabled. \n This currently concerns only JacobiSVD which otherwise would be replaced by \c gesvd that is less robust than Jacobi rotations.
    \c EIGEN_USE_MKL_VML Enables the use of Intel VML (vector operations)
    \c EIGEN_USE_MKL_ALL Defines \c EIGEN_USE_BLAS, \c EIGEN_USE_LAPACKE, and \c EIGEN_USE_MKL_VML
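    For instance, a minimal program enabling all MKL backends (step 1 of the list above) could look as follows; it still has to be compiled and linked against MKL as in steps 2 and 3, and the matrix size is only meant to be "large enough" in the sense described above:
\code
#define EIGEN_USE_MKL_ALL   // must come before any Eigen header; EIGEN_USE_BLAS etc. select a subset
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(512, 512);
  Eigen::MatrixXd B = Eigen::MatrixXd::Random(512, 512);
  Eigen::MatrixXd C = A * B;   // dynamic-size product, expected to be forwarded to a ?gemm call
  std::cout << C.norm() << std::endl;
  return 0;
}
\endcode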
    The \c EIGEN_USE_BLAS and \c EIGEN_USE_LAPACKE* macros can be combined with \c EIGEN_USE_MKL to explicitly tell Eigen that the underlying BLAS/Lapack implementation is Intel MKL. The main effect is to enable MKL direct call feature (\c MKL_DIRECT_CALL). This may help to increase performance of some MKL BLAS (?GEMM, ?GEMV, ?TRSM, ?AXPY and ?DOT) and LAPACK (LU, Cholesky and QR) routines for very small matrices. MKL direct call can be disabled by defining \c EIGEN_MKL_NO_DIRECT_CALL. Note that the BLAS and LAPACKE backends can be enabled for any F77 compatible BLAS and LAPACK libraries. See this \link TopicUsingBlasLapack page \endlink for the details. Finally, the PARDISO sparse solver shipped with Intel MKL can be used through the \ref PardisoLU, \ref PardisoLLT and \ref PardisoLDLT classes of the \ref PardisoSupport_Module. The following table summarizes the list of functions covered by \c EIGEN_USE_MKL_VML:
    Code example MKL routines
    \code v2=v1.array().sin(); v2=v1.array().asin(); v2=v1.array().cos(); v2=v1.array().acos(); v2=v1.array().tan(); v2=v1.array().exp(); v2=v1.array().log(); v2=v1.array().sqrt(); v2=v1.array().square(); v2=v1.array().pow(1.5); \endcode\code v?Sin v?Asin v?Cos v?Acos v?Tan v?Exp v?Ln v?Sqrt v?Sqr v?Powx \endcode
    In the examples, v1 and v2 are dense vectors. \section TopicUsingIntelMKL_Links Links - Intel MKL can be purchased and downloaded here. - Intel MKL is also bundled with Intel Composer XE. */ } piqp-octave/src/eigen/doc/TutorialMatrixArithmetic.dox0000644000175100001770000002321114107270226023014 0ustar runnerdockernamespace Eigen { /** \eigenManualPage TutorialMatrixArithmetic Matrix and vector arithmetic This page aims to provide an overview and some details on how to perform arithmetic between matrices, vectors and scalars with Eigen. \eigenAutoToc \section TutorialArithmeticIntroduction Introduction Eigen offers matrix/vector arithmetic operations either through overloads of common C++ arithmetic operators such as +, -, *, or through special methods such as dot(), cross(), etc. For the Matrix class (matrices and vectors), operators are only overloaded to support linear-algebraic operations. For example, \c matrix1 \c * \c matrix2 means matrix-matrix product, and \c vector \c + \c scalar is just not allowed. If you want to perform all kinds of array operations, not linear algebra, see the \ref TutorialArrayClass "next page". \section TutorialArithmeticAddSub Addition and subtraction The left hand side and right hand side must, of course, have the same numbers of rows and of columns. They must also have the same \c Scalar type, as Eigen doesn't do automatic type promotion. The operators at hand here are: \li binary operator + as in \c a+b \li binary operator - as in \c a-b \li unary operator - as in \c -a \li compound operator += as in \c a+=b \li compound operator -= as in \c a-=b
    Example: Output:
    \include tut_arithmetic_add_sub.cpp \verbinclude tut_arithmetic_add_sub.out
    \section TutorialArithmeticScalarMulDiv Scalar multiplication and division Multiplication and division by a scalar is very simple too. The operators at hand here are: \li binary operator * as in \c matrix*scalar \li binary operator * as in \c scalar*matrix \li binary operator / as in \c matrix/scalar \li compound operator *= as in \c matrix*=scalar \li compound operator /= as in \c matrix/=scalar
    Example: Output:
    \include tut_arithmetic_scalar_mul_div.cpp \verbinclude tut_arithmetic_scalar_mul_div.out
    \section TutorialArithmeticMentionXprTemplates A note about expression templates This is an advanced topic that we explain on \ref TopicEigenExpressionTemplates "this page", but it is useful to just mention it now. In Eigen, arithmetic operators such as \c operator+ don't perform any computation by themselves, they just return an "expression object" describing the computation to be performed. The actual computation happens later, when the whole expression is evaluated, typically in \c operator=. While this might sound heavy, any modern optimizing compiler is able to optimize away that abstraction and the result is perfectly optimized code. For example, when you do: \code VectorXf a(50), b(50), c(50), d(50); ... a = 3*b + 4*c + 5*d; \endcode Eigen compiles it to just one for loop, so that the arrays are traversed only once. Simplifying (e.g. ignoring SIMD optimizations), this loop looks like this: \code for(int i = 0; i < 50; ++i) a[i] = 3*b[i] + 4*c[i] + 5*d[i]; \endcode Thus, you should not be afraid of using relatively large arithmetic expressions with Eigen: it only gives Eigen more opportunities for optimization. \section TutorialArithmeticTranspose Transposition and conjugation The transpose \f$ a^T \f$, conjugate \f$ \bar{a} \f$, and adjoint (i.e., conjugate transpose) \f$ a^* \f$ of a matrix or vector \f$ a \f$ are obtained by the member functions \link DenseBase::transpose() transpose()\endlink, \link MatrixBase::conjugate() conjugate()\endlink, and \link MatrixBase::adjoint() adjoint()\endlink, respectively.
    Example: Output:
    \include tut_arithmetic_transpose_conjugate.cpp \verbinclude tut_arithmetic_transpose_conjugate.out
    For real matrices, \c conjugate() is a no-operation, and so \c adjoint() is equivalent to \c transpose(). As for basic arithmetic operators, \c transpose() and \c adjoint() simply return a proxy object without doing the actual transposition. If you do b = a.transpose(), then the transpose is evaluated at the same time as the result is written into \c b. However, there is a complication here. If you do a = a.transpose(), then Eigen starts writing the result into \c a before the evaluation of the transpose is finished. Therefore, the instruction a = a.transpose() does not replace \c a with its transpose, as one would expect:
    Example: Output:
    \include tut_arithmetic_transpose_aliasing.cpp \verbinclude tut_arithmetic_transpose_aliasing.out
    This is the so-called \ref TopicAliasing "aliasing issue". In "debug mode", i.e., when \ref TopicAssertions "assertions" have not been disabled, such common pitfalls are automatically detected. For \em in-place transposition, as for instance in a = a.transpose(), simply use the \link DenseBase::transposeInPlace() transposeInPlace()\endlink function:
    Example: Output:
    \include tut_arithmetic_transpose_inplace.cpp \verbinclude tut_arithmetic_transpose_inplace.out
    There is also the \link MatrixBase::adjointInPlace() adjointInPlace()\endlink function for complex matrices. \section TutorialArithmeticMatrixMul Matrix-matrix and matrix-vector multiplication Matrix-matrix multiplication is again done with \c operator*. Since vectors are a special case of matrices, they are implicitly handled there too, so matrix-vector product is really just a special case of matrix-matrix product, and so is vector-vector outer product. Thus, all these cases are handled by just two operators: \li binary operator * as in \c a*b \li compound operator *= as in \c a*=b (this multiplies on the right: \c a*=b is equivalent to a = a*b)
    Example: Output:
    \include tut_arithmetic_matrix_mul.cpp \verbinclude tut_arithmetic_matrix_mul.out
    Note: if you read the above paragraph on expression templates and are worried that doing \c m=m*m might cause aliasing issues, be reassured for now: Eigen treats matrix multiplication as a special case and takes care of introducing a temporary here, so it will compile \c m=m*m as: \code tmp = m*m; m = tmp; \endcode If you know your matrix product can be safely evaluated into the destination matrix without aliasing issue, then you can use the \link MatrixBase::noalias() noalias()\endlink function to avoid the temporary, e.g.: \code c.noalias() += a * b; \endcode For more details on this topic, see the page on \ref TopicAliasing "aliasing". \b Note: for BLAS users worried about performance, expressions such as c.noalias() -= 2 * a.adjoint() * b; are fully optimized and trigger a single gemm-like function call. \section TutorialArithmeticDotAndCross Dot product and cross product For dot product and cross product, you need the \link MatrixBase::dot() dot()\endlink and \link MatrixBase::cross() cross()\endlink methods. Of course, the dot product can also be obtained as a 1x1 matrix as u.adjoint()*v.
    Example: Output:
    \include tut_arithmetic_dot_cross.cpp \verbinclude tut_arithmetic_dot_cross.out
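    The equivalence mentioned above between dot() and the 1x1 matrix u.adjoint()*v can be verified with a short snippet (the vectors are arbitrary):
\code
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
  Vector3d u(1.0, 2.0, 3.0), v(0.5, -1.0, 2.0);
  Matrix<double, 1, 1> p = u.adjoint() * v;   // the inner product seen as a 1x1 matrix
  std::cout << "u.dot(v)      = " << u.dot(v) << "\n"
            << "u.adjoint()*v = " << p(0, 0) << "\n"
            << "u.cross(v)    = " << u.cross(v).transpose() << std::endl;
}
\endcode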
    Remember that cross product is only for vectors of size 3. Dot product is for vectors of any size. When using complex numbers, Eigen's dot product is conjugate-linear in the first variable and linear in the second variable. \section TutorialArithmeticRedux Basic arithmetic reduction operations Eigen also provides some reduction operations to reduce a given matrix or vector to a single value such as the sum (computed by \link DenseBase::sum() sum()\endlink), product (\link DenseBase::prod() prod()\endlink), or the maximum (\link DenseBase::maxCoeff() maxCoeff()\endlink) and minimum (\link DenseBase::minCoeff() minCoeff()\endlink) of all its coefficients.
    Example: Output:
    \include tut_arithmetic_redux_basic.cpp \verbinclude tut_arithmetic_redux_basic.out
    The \em trace of a matrix, as returned by the function \link MatrixBase::trace() trace()\endlink, is the sum of the diagonal coefficients and can also be computed as efficiently using a.diagonal().sum(), as we will see later on. There also exist variants of the \c minCoeff and \c maxCoeff functions returning the coordinates of the respective coefficient via the arguments:
    Example: Output:
    \include tut_arithmetic_redux_minmax.cpp \verbinclude tut_arithmetic_redux_minmax.out
    \section TutorialArithmeticValidity Validity of operations Eigen checks the validity of the operations that you perform. When possible, it checks them at compile time, producing compilation errors. These error messages can be long and ugly, but Eigen writes the important message in UPPERCASE_LETTERS_SO_IT_STANDS_OUT. For example: \code Matrix3f m; Vector4f v; v = m*v; // Compile-time error: YOU_MIXED_MATRICES_OF_DIFFERENT_SIZES \endcode Of course, in many cases, for example when checking dynamic sizes, the check cannot be performed at compile time. Eigen then uses runtime assertions. This means that the program will abort with an error message when executing an illegal operation if it is run in "debug mode", and it will probably crash if assertions are turned off. \code MatrixXf m(3,3); VectorXf v(4); v = m * v; // Run-time assertion failure here: "invalid matrix product" \endcode For more details on this topic, see \ref TopicAssertions "this page". */ } piqp-octave/src/eigen/INSTALL0000644000175100001770000000217114107270226015564 0ustar runnerdockerInstallation instructions for Eigen *********************************** Explanation before starting *************************** Eigen consists only of header files, hence there is nothing to compile before you can use it. Moreover, these header files do not depend on your platform, they are the same for everybody. Method 1. Installing without using CMake **************************************** You can use right away the headers in the Eigen/ subdirectory. In order to install, just copy this Eigen/ subdirectory to your favorite location. If you also want the unsupported features, copy the unsupported/ subdirectory too. Method 2. Installing using CMake ******************************** Let's call this directory 'source_dir' (where this INSTALL file is). Before starting, create another directory which we will call 'build_dir'. Do: cd build_dir cmake source_dir make install The "make install" step may require administrator privileges. You can adjust the installation destination (the "prefix") by passing the -DCMAKE_INSTALL_PREFIX=myprefix option to cmake, as is explained in the message that cmake prints at the end. piqp-octave/src/eigen/ci/0000755000175100001770000000000014107270226015125 5ustar runnerdockerpiqp-octave/src/eigen/ci/build.gitlab-ci.yml0000644000175100001770000001164014107270226020603 0ustar runnerdocker.build:linux:base: stage: build image: ubuntu:18.04 before_script: - apt-get update -y - apt-get install -y --no-install-recommends software-properties-common - add-apt-repository -y ppa:ubuntu-toolchain-r/test - apt-get update - apt-get install --no-install-recommends -y ${EIGEN_CI_CXX_COMPILER} ${EIGEN_CI_CC_COMPILER} cmake ninja-build script: - mkdir -p ${BUILDDIR} && cd ${BUILDDIR} - CXX=${EIGEN_CI_CXX_COMPILER} CC=${EIGEN_CI_CC_COMPILER} cmake -G ${EIGEN_CI_CMAKE_GENEATOR} -DEIGEN_TEST_CXX11=${EIGEN_TEST_CXX11} ${EIGEN_CI_ADDITIONAL_ARGS} .. - cmake --build . 
--target buildtests artifacts: name: "$CI_JOB_NAME-$CI_COMMIT_REF_NAME" paths: - ${BUILDDIR}/ expire_in: 5 days only: - schedules ######## x86-64 ################################################################ # GCC-4.8 (the oldest compiler we support) build:x86-64:linux:gcc-4.8:cxx11-off: extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "g++-4.8" EIGEN_CI_CC_COMPILER: "gcc-4.8" EIGEN_TEST_CXX11: "off" tags: - eigen-runner - linux - x86-64 build:x86-64:linux:gcc-4.8:cxx11-on: extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "g++-4.8" EIGEN_CI_CC_COMPILER: "gcc-4.8" EIGEN_TEST_CXX11: "on" tags: - eigen-runner - linux - x86-64 # GCC-9 build:x86-64:linux:gcc-9:cxx11-off: extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "g++-9" EIGEN_CI_CC_COMPILER: "gcc-9" EIGEN_TEST_CXX11: "off" tags: - eigen-runner - linux - x86-64 build:x86-64:linux:gcc-9:cxx11-on: extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "g++-9" EIGEN_CI_CC_COMPILER: "gcc-9" EIGEN_TEST_CXX11: "on" tags: - eigen-runner - linux - x86-64 # GCC-10 build:x86-64:linux:gcc-10:cxx11-off: extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "g++-10" EIGEN_CI_CC_COMPILER: "gcc-10" EIGEN_TEST_CXX11: "off" tags: - eigen-runner - linux - x86-64 build:x86-64:linux:gcc-10:cxx11-on: extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "g++-10" EIGEN_CI_CC_COMPILER: "gcc-10" EIGEN_TEST_CXX11: "on" tags: - eigen-runner - linux - x86-64 # Clang-10 build:x86-64:linux:clang-10:cxx11-off: extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "clang++-10" EIGEN_CI_CC_COMPILER: "clang-10" EIGEN_TEST_CXX11: "off" tags: - eigen-runner - linux - x86-64 build:x86-64:linux:clang-10:cxx11-on: extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "clang++-10" EIGEN_CI_CC_COMPILER: "clang-10" EIGEN_TEST_CXX11: "on" tags: - eigen-runner - linux - x86-64 ######## AArch64 ############################################################### # GCC-10 build:aarch64:linux:gcc-10:cxx11-off: extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "g++-10" EIGEN_CI_CC_COMPILER: "gcc-10" EIGEN_TEST_CXX11: "off" tags: - eigen-runner - linux - aarch64 build:aarch64:linux:gcc-10:cxx11-on: extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "g++-10" EIGEN_CI_CC_COMPILER: "gcc-10" EIGEN_TEST_CXX11: "on" tags: - eigen-runner - linux - aarch64 # Clang-10 build:aarch64:linux:clang-10:cxx11-off: extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "clang++-10" EIGEN_CI_CC_COMPILER: "clang-10" EIGEN_TEST_CXX11: "off" tags: - eigen-runner - linux - aarch64 build:aarch64:linux:clang-10:cxx11-on: extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "clang++-10" EIGEN_CI_CC_COMPILER: "clang-10" EIGEN_TEST_CXX11: "on" tags: - eigen-runner - linux - aarch64 ######## ppc64le ############################################################### # Currently all ppc64le jobs are allowed to fail # GCC-10 build:ppc64le:linux:gcc-10:cxx11-off: allow_failure: true extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "g++-10" EIGEN_CI_CC_COMPILER: "gcc-10" EIGEN_TEST_CXX11: "off" tags: - eigen-runner - linux - ppc64le build:ppc64le:linux:gcc-10:cxx11-on: allow_failure: true extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "g++-10" EIGEN_CI_CC_COMPILER: "gcc-10" EIGEN_TEST_CXX11: "on" tags: - eigen-runner - linux - ppc64le # # Clang-10 build:ppc64le:linux:clang-10:cxx11-off: allow_failure: true extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "clang++-10" 
EIGEN_CI_CC_COMPILER: "clang-10" EIGEN_TEST_CXX11: "off" tags: - eigen-runner - linux - ppc64le build:ppc64le:linux:clang-10:cxx11-on: allow_failure: true extends: .build:linux:base variables: EIGEN_CI_CXX_COMPILER: "clang++-10" EIGEN_CI_CC_COMPILER: "clang-10" EIGEN_TEST_CXX11: "on" tags: - eigen-runner - linux - ppc64le piqp-octave/src/eigen/ci/README.md0000644000175100001770000001145614107270226016413 0ustar runnerdocker## Eigen CI infrastructure Eigen's CI infrastructure uses two stages: A `build` stage to build the unit-test suite and a `test` stage to run the unit-tests. ### Build Stage The build stage consists of the following jobs: | Job Name | Arch | OS | Compiler | C++11 | |------------------------------------------|-----------|----------------|------------|---------| | `build:x86-64:linux:gcc-4.8:cxx11-off` | `x86-64` | `Ubuntu 18.04` | `GCC-4.8` | `Off` | | `build:x86-64:linux:gcc-4.8:cxx11-on` | `x86-64` | `Ubuntu 18.04` | `GCC-4.8` | `On` | | `build:x86-64:linux:gcc-9:cxx11-off` | `x86-64` | `Ubuntu 18.04` | `GCC-9` | `Off` | | `build:x86-64:linux:gcc-9:cxx11-on` | `x86-64` | `Ubuntu 18.04` | `GCC-9` | `On` | | `build:x86-64:linux:gcc-10:cxx11-off` | `x86-64` | `Ubuntu 18.04` | `GCC-10` | `Off` | | `build:x86-64:linux:gcc-10:cxx11-on` | `x86-64` | `Ubuntu 18.04` | `GCC-10` | `On` | | `build:x86-64:linux:clang-10:cxx11-off` | `x86-64` | `Ubuntu 18.04` | `Clang-10` | `Off` | | `build:x86-64:linux:clang-10:cxx11-on` | `x86-64` | `Ubuntu 18.04` | `Clang-10` | `On` | | `build:aarch64:linux:gcc-10:cxx11-off` | `AArch64` | `Ubuntu 18.04` | `GCC-10` | `Off` | | `build:aarch64:linux:gcc-10:cxx11-on` | `AArch64` | `Ubuntu 18.04` | `GCC-10` | `On` | | `build:aarch64:linux:clang-10:cxx11-off` | `AArch64` | `Ubuntu 18.04` | `Clang-10` | `Off` | | `build:aarch64:linux:clang-10:cxx11-on` | `AArch64` | `Ubuntu 18.04` | `Clang-10` | `On` | ### Test stage In principle every build-job has a corresponding test-job, however testing supported and unsupported modules is divided into separate jobs. 
The test jobs in detail: ### Job dependecies | Job Name | Arch | OS | Compiler | C++11 | Module |-----------------------------------------------------|-----------|----------------|------------|---------|-------- | `test:x86-64:linux:gcc-4.8:cxx11-off:official` | `x86-64` | `Ubuntu 18.04` | `GCC-4.8` | `Off` | `Official` | `test:x86-64:linux:gcc-4.8:cxx11-off:unsupported` | `x86-64` | `Ubuntu 18.04` | `GCC-4.8` | `Off` | `Unsupported` | `test:x86-64:linux:gcc-4.8:cxx11-on:official` | `x86-64` | `Ubuntu 18.04` | `GCC-4.8` | `On` | `Official` | `test:x86-64:linux:gcc-4.8:cxx11-on:unsupported` | `x86-64` | `Ubuntu 18.04` | `GCC-4.8` | `On` | `Unsupported` | `test:x86-64:linux:gcc-9:cxx11-off:official` | `x86-64` | `Ubuntu 18.04` | `GCC-9` | `Off` | `Official` | `test:x86-64:linux:gcc-9:cxx11-off:unsupported` | `x86-64` | `Ubuntu 18.04` | `GCC-9` | `Off` | `Unsupported` | `test:x86-64:linux:gcc-9:cxx11-on:official` | `x86-64` | `Ubuntu 18.04` | `GCC-9` | `On` | `Official` | `test:x86-64:linux:gcc-9:cxx11-on:unsupported` | `x86-64` | `Ubuntu 18.04` | `GCC-9` | `On` | `Unsupported` | `test:x86-64:linux:gcc-10:cxx11-off:official` | `x86-64` | `Ubuntu 18.04` | `GCC-10` | `Off` | `Official` | `test:x86-64:linux:gcc-10:cxx11-off:unsupported` | `x86-64` | `Ubuntu 18.04` | `GCC-10` | `Off` | `Unsupported` | `test:x86-64:linux:gcc-10:cxx11-on:official` | `x86-64` | `Ubuntu 18.04` | `GCC-10` | `On` | `Official` | `test:x86-64:linux:gcc-10:cxx11-on:unsupported` | `x86-64` | `Ubuntu 18.04` | `GCC-10` | `On` | `Unsupported` | `test:x86-64:linux:clang-10:cxx11-off:official` | `x86-64` | `Ubuntu 18.04` | `Clang-10` | `Off` | `Official` | `test:x86-64:linux:clang-10:cxx11-off:unsupported` | `x86-64` | `Ubuntu 18.04` | `Clang-10` | `Off` | `Unsupported` | `test:x86-64:linux:clang-10:cxx11-on:official` | `x86-64` | `Ubuntu 18.04` | `Clang-10` | `On` | `Official` | `test:x86-64:linux:clang-10:cxx11-on:unsupported` | `x86-64` | `Ubuntu 18.04` | `Clang-10` | `On` | `Unsupported` | `test:aarch64:linux:gcc-10:cxx11-off:official` | `AArch64` | `Ubuntu 18.04` | `GCC-10` | `Off` | `Official` | `test:aarch64:linux:gcc-10:cxx11-off:unsupported` | `AArch64` | `Ubuntu 18.04` | `GCC-10` | `Off` | `Unsupported` | `test:aarch64:linux:gcc-10:cxx11-on:official` | `AArch64` | `Ubuntu 18.04` | `GCC-10` | `On` | `Official` | `test:aarch64:linux:gcc-10:cxx11-on:unsupported` | `AArch64` | `Ubuntu 18.04` | `GCC-10` | `On` | `Unsupported` | `test:aarch64:linux:clang-10:cxx11-off:official` | `AArch64` | `Ubuntu 18.04` | `Clang-10` | `Off` | `Official` | `test:aarch64:linux:clang-10:cxx11-off:unsupported` | `AArch64` | `Ubuntu 18.04` | `Clang-10` | `Off` | `Unsupported` | `test:aarch64:linux:clang-10:cxx11-on:official` | `AArch64` | `Ubuntu 18.04` | `Clang-10` | `On` | `Official` | `test:aarch64:linux:clang-10:cxx11-on:unsupported` | `AArch64` | `Ubuntu 18.04` | `Clang-10` | `On` | `Unsupported` piqp-octave/src/eigen/ci/test.gitlab-ci.yml0000644000175100001770000002415514107270226020470 0ustar runnerdocker.test:linux:base: stage: test image: ubuntu:18.04 retry: 2 before_script: - apt-get update -y - apt-get install -y --no-install-recommends software-properties-common - add-apt-repository -y ppa:ubuntu-toolchain-r/test - apt-get update - apt-get install --no-install-recommends -y ${EIGEN_CI_CXX_COMPILER} ${EIGEN_CI_CC_COMPILER} cmake ninja-build xsltproc script: - export CXX=${EIGEN_CI_CXX_COMPILER} - export CC=${EIGEN_CI_CC_COMPILER} - cd ${BUILDDIR} && ctest --output-on-failure --no-compress-output --build-no-clean -T test -L 
${EIGEN_CI_TEST_LABEL} after_script: - apt-get update -y - apt-get install --no-install-recommends -y xsltproc - cd ${BUILDDIR} - xsltproc ../ci/CTest2JUnit.xsl Testing/`head -n 1 < Testing/TAG`/Test.xml > "JUnitTestResults_$CI_JOB_ID.xml" artifacts: reports: junit: - ${BUILDDIR}/JUnitTestResults_$CI_JOB_ID.xml expire_in: 5 days only: - schedules ##### x86-64 ################################################################### # GCC-4.8 .test:x86-64:linux:gcc-4.8:cxx11-off: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: g++-4.8 EIGEN_CI_CC_COMPILER: gcc-4.8 needs: [ "build:x86-64:linux:gcc-4.8:cxx11-off" ] tags: - eigen-runner - linux - x86-64 test:x86-64:linux:gcc-4.8:cxx11-off:official: extends: .test:x86-64:linux:gcc-4.8:cxx11-off variables: EIGEN_CI_TEST_LABEL: "Official" test:x86-64:linux:gcc-4.8:cxx11-off:unsupported: extends: .test:x86-64:linux:gcc-4.8:cxx11-off variables: EIGEN_CI_TEST_LABEL: "Unsupported" .test:x86-64:linux:gcc-4.8:cxx11-on: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: g++-4.8 EIGEN_CI_CC_COMPILER: gcc-4.8 needs: [ "build:x86-64:linux:gcc-4.8:cxx11-on" ] tags: - eigen-runner - linux - x86-64 test:x86-64:linux:gcc-4.8:cxx11-on:official: extends: .test:x86-64:linux:gcc-4.8:cxx11-on variables: EIGEN_CI_TEST_LABEL: "Official" test:x86-64:linux:gcc-4.8:cxx11-on:unsupported: extends: .test:x86-64:linux:gcc-4.8:cxx11-on variables: EIGEN_CI_TEST_LABEL: "Unsupported" # GCC-9 .test:x86-64:linux:gcc-9:cxx11-off: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: g++-9 EIGEN_CI_CC_COMPILER: gcc-9 needs: [ "build:x86-64:linux:gcc-9:cxx11-off" ] tags: - eigen-runner - linux - x86-64 test:x86-64:linux:gcc-9:cxx11-off:official: extends: .test:x86-64:linux:gcc-9:cxx11-off variables: EIGEN_CI_TEST_LABEL: "Official" test:x86-64:linux:gcc-9:cxx11-off:unsupported: extends: .test:x86-64:linux:gcc-9:cxx11-off variables: EIGEN_CI_TEST_LABEL: "Unsupported" .test:x86-64:linux:gcc-9:cxx11-on: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: g++-9 EIGEN_CI_CC_COMPILER: gcc-9 needs: [ "build:x86-64:linux:gcc-9:cxx11-on" ] tags: - eigen-runner - linux - x86-64 test:x86-64:linux:gcc-9:cxx11-on:official: extends: .test:x86-64:linux:gcc-9:cxx11-on variables: EIGEN_CI_TEST_LABEL: "Official" test:x86-64:linux:gcc-9:cxx11-on:unsupported: extends: .test:x86-64:linux:gcc-9:cxx11-on variables: EIGEN_CI_TEST_LABEL: "Unsupported" # GCC-10 .test:x86-64:linux:gcc-10:cxx11-off: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: g++-10 EIGEN_CI_CC_COMPILER: gcc-10 needs: [ "build:x86-64:linux:gcc-10:cxx11-off" ] tags: - eigen-runner - linux - x86-64 test:x86-64:linux:gcc-10:cxx11-off:official: extends: .test:x86-64:linux:gcc-10:cxx11-off allow_failure: true variables: EIGEN_CI_TEST_LABEL: "Official" test:x86-64:linux:gcc-10:cxx11-off:unsupported: extends: .test:x86-64:linux:gcc-10:cxx11-off variables: EIGEN_CI_TEST_LABEL: "Unsupported" .test:x86-64:linux:gcc-10:cxx11-on: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: g++-10 EIGEN_CI_CC_COMPILER: gcc-10 needs: [ "build:x86-64:linux:gcc-10:cxx11-on" ] tags: - eigen-runner - linux - x86-64 test:x86-64:linux:gcc-10:cxx11-on:official: extends: .test:x86-64:linux:gcc-10:cxx11-on allow_failure: true variables: EIGEN_CI_TEST_LABEL: "Official" test:x86-64:linux:gcc-10:cxx11-on:unsupported: extends: .test:x86-64:linux:gcc-10:cxx11-on allow_failure: true variables: EIGEN_CI_TEST_LABEL: "Unsupported" # Clang 10 .test:x86-64:linux:clang-10:cxx11-off: extends: .test:linux:base variables: 
EIGEN_CI_CXX_COMPILER: clang++-10 EIGEN_CI_CC_COMPILER: clang-10 needs: [ "build:x86-64:linux:clang-10:cxx11-off" ] tags: - eigen-runner - linux - x86-64 test:x86-64:linux:clang-10:cxx11-off:official: extends: .test:x86-64:linux:clang-10:cxx11-off variables: EIGEN_CI_TEST_LABEL: "Official" test:x86-64:linux:clang-10:cxx11-off:unsupported: extends: .test:x86-64:linux:clang-10:cxx11-off variables: EIGEN_CI_TEST_LABEL: "Unsupported" .test:x86-64:linux:clang-10:cxx11-on: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: clang++-10 EIGEN_CI_CC_COMPILER: clang-10 needs: [ "build:x86-64:linux:clang-10:cxx11-on" ] tags: - eigen-runner - linux - x86-64 test:x86-64:linux:clang-10:cxx11-on:official: extends: .test:x86-64:linux:clang-10:cxx11-on variables: EIGEN_CI_TEST_LABEL: "Official" test:x86-64:linux:clang-10:cxx11-on:unsupported: extends: .test:x86-64:linux:clang-10:cxx11-on variables: EIGEN_CI_TEST_LABEL: "Unsupported" ##### AArch64 ################################################################## # GCC-10 .test:aarch64:linux:gcc-10:cxx11-off: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: g++-10 EIGEN_CI_CC_COMPILER: gcc-10 needs: [ "build:aarch64:linux:gcc-10:cxx11-off" ] tags: - eigen-runner - linux - aarch64 test:aarch64:linux:gcc-10:cxx11-off:official: extends: .test:aarch64:linux:gcc-10:cxx11-off allow_failure: true variables: EIGEN_CI_TEST_LABEL: "Official" test:aarch64:linux:gcc-10:cxx11-off:unsupported: extends: .test:aarch64:linux:gcc-10:cxx11-off allow_failure: true variables: EIGEN_CI_TEST_LABEL: "Unsupported" .test:aarch64:linux:gcc-10:cxx11-on: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: g++-10 EIGEN_CI_CC_COMPILER: gcc-10 needs: [ "build:aarch64:linux:gcc-10:cxx11-on" ] tags: - eigen-runner - linux - aarch64 test:aarch64:linux:gcc-10:cxx11-on:official: extends: .test:aarch64:linux:gcc-10:cxx11-on allow_failure: true variables: EIGEN_CI_TEST_LABEL: "Official" test:aarch64:linux:gcc-10:cxx11-on:unsupported: extends: .test:aarch64:linux:gcc-10:cxx11-on allow_failure: true variables: EIGEN_CI_TEST_LABEL: "Unsupported" # Clang 10 .test:aarch64:linux:clang-10:cxx11-off: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: clang++-10 EIGEN_CI_CC_COMPILER: clang-10 needs: [ "build:aarch64:linux:clang-10:cxx11-off" ] tags: - eigen-runner - linux - aarch64 test:aarch64:linux:clang-10:cxx11-off:official: extends: .test:aarch64:linux:clang-10:cxx11-off allow_failure: true variables: EIGEN_CI_TEST_LABEL: "Official" test:aarch64:linux:clang-10:cxx11-off:unsupported: extends: .test:aarch64:linux:clang-10:cxx11-off variables: EIGEN_CI_TEST_LABEL: "Unsupported" .test:aarch64:linux:clang-10:cxx11-on: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: clang++-10 EIGEN_CI_CC_COMPILER: clang-10 needs: [ "build:aarch64:linux:clang-10:cxx11-on" ] tags: - eigen-runner - linux - aarch64 test:aarch64:linux:clang-10:cxx11-on:official: extends: .test:aarch64:linux:clang-10:cxx11-on allow_failure: true variables: EIGEN_CI_TEST_LABEL: "Official" test:aarch64:linux:clang-10:cxx11-on:unsupported: extends: .test:aarch64:linux:clang-10:cxx11-on variables: EIGEN_CI_TEST_LABEL: "Unsupported" ##### ppc64le ################################################################## # GCC-10 .test:ppc64le:linux:gcc-10:cxx11-off: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: g++-10 EIGEN_CI_CC_COMPILER: gcc-10 needs: [ "build:ppc64le:linux:gcc-10:cxx11-off" ] allow_failure: true tags: - eigen-runner - linux - ppc64le 
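# Note on the layout of this file: each hidden `.test:<arch>:linux:<compiler>:<cxx11>` template
# pins the compiler variables, the `needs:` link to the matching build job, and the runner tags,
# while the concrete `...:official` / `...:unsupported` jobs derived from it only set
# EIGEN_CI_TEST_LABEL, which the shared .test:linux:base script passes to `ctest -L`.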
test:ppc64le:linux:gcc-10:cxx11-off:official: extends: .test:ppc64le:linux:gcc-10:cxx11-off variables: EIGEN_CI_TEST_LABEL: "Official" test:ppc64le:linux:gcc-10:cxx11-off:unsupported: extends: .test:ppc64le:linux:gcc-10:cxx11-off variables: EIGEN_CI_TEST_LABEL: "Unsupported" .test:ppc64le:linux:gcc-10:cxx11-on: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: g++-10 EIGEN_CI_CC_COMPILER: gcc-10 needs: [ "build:ppc64le:linux:gcc-10:cxx11-on" ] allow_failure: true tags: - eigen-runner - linux - ppc64le test:ppc64le:linux:gcc-10:cxx11-on:official: extends: .test:ppc64le:linux:gcc-10:cxx11-on variables: EIGEN_CI_TEST_LABEL: "Official" test:ppc64le:linux:gcc-10:cxx11-on:unsupported: extends: .test:ppc64le:linux:gcc-10:cxx11-on variables: EIGEN_CI_TEST_LABEL: "Unsupported" # # Clang 10 .test:ppc64le:linux:clang-10:cxx11-off: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: clang++-10 EIGEN_CI_CC_COMPILER: clang-10 needs: [ "build:ppc64le:linux:clang-10:cxx11-off" ] allow_failure: true tags: - eigen-runner - linux - ppc64le test:ppc64le:linux:clang-10:cxx11-off:official: extends: .test:ppc64le:linux:clang-10:cxx11-off variables: EIGEN_CI_TEST_LABEL: "Official" test:ppc64le:linux:clang-10:cxx11-off:unsupported: extends: .test:ppc64le:linux:clang-10:cxx11-off variables: EIGEN_CI_TEST_LABEL: "Unsupported" .test:ppc64le:linux:clang-10:cxx11-on: extends: .test:linux:base variables: EIGEN_CI_CXX_COMPILER: clang++-10 EIGEN_CI_CC_COMPILER: clang-10 needs: [ "build:ppc64le:linux:clang-10:cxx11-on" ] allow_failure: true tags: - eigen-runner - linux - ppc64le test:ppc64le:linux:clang-10:cxx11-on:official: extends: .test:ppc64le:linux:clang-10:cxx11-on variables: EIGEN_CI_TEST_LABEL: "Official" test:ppc64le:linux:clang-10:cxx11-on:unsupported: extends: .test:ppc64le:linux:clang-10:cxx11-on variables: EIGEN_CI_TEST_LABEL: "Unsupported" piqp-octave/src/eigen/ci/smoketests.gitlab-ci.yml0000644000175100001770000000664214107270226021713 0ustar runnerdocker.buildsmoketests:linux:base: stage: buildsmoketests image: ubuntu:18.04 before_script: - apt-get update -y - apt-get install -y --no-install-recommends software-properties-common - add-apt-repository -y ppa:ubuntu-toolchain-r/test - apt-get update - apt-get install --no-install-recommends -y ${EIGEN_CI_CXX_COMPILER} ${EIGEN_CI_CC_COMPILER} cmake ninja-build script: - mkdir -p ${BUILDDIR} && cd ${BUILDDIR} - CXX=${EIGEN_CI_CXX_COMPILER} CC=${EIGEN_CI_CC_COMPILER} cmake -G ${EIGEN_CI_CMAKE_GENEATOR} -DEIGEN_TEST_CXX11=${EIGEN_TEST_CXX11} ${EIGEN_CI_ADDITIONAL_ARGS} .. - cmake --build . 
--target buildsmoketests artifacts: name: "$CI_JOB_NAME-$CI_COMMIT_REF_NAME" paths: - ${BUILDDIR}/ expire_in: 5 days only: - merge_requests buildsmoketests:x86-64:linux:gcc-10:cxx11-off: extends: .buildsmoketests:linux:base variables: EIGEN_CI_CXX_COMPILER: "g++-10" EIGEN_CI_CC_COMPILER: "gcc-10" EIGEN_TEST_CXX11: "off" buildsmoketests:x86-64:linux:gcc-10:cxx11-on: extends: .buildsmoketests:linux:base variables: EIGEN_CI_CXX_COMPILER: "g++-10" EIGEN_CI_CC_COMPILER: "gcc-10" EIGEN_TEST_CXX11: "on" buildsmoketests:x86-64:linux:clang-10:cxx11-off: extends: .buildsmoketests:linux:base variables: EIGEN_CI_CXX_COMPILER: "clang++-10" EIGEN_CI_CC_COMPILER: "clang-10" EIGEN_TEST_CXX11: "off" buildsmoketests:x86-64:linux:clang-10:cxx11-on: extends: .buildsmoketests:linux:base variables: EIGEN_CI_CXX_COMPILER: "clang++-10" EIGEN_CI_CC_COMPILER: "clang-10" EIGEN_TEST_CXX11: "on" .smoketests:linux:base: stage: smoketests image: ubuntu:18.04 before_script: - apt-get update -y - apt-get install -y --no-install-recommends software-properties-common - add-apt-repository -y ppa:ubuntu-toolchain-r/test - apt-get update - apt-get install --no-install-recommends -y ${EIGEN_CI_CXX_COMPILER} ${EIGEN_CI_CC_COMPILER} cmake ninja-build xsltproc script: - export CXX=${EIGEN_CI_CXX_COMPILER} - export CC=${EIGEN_CI_CC_COMPILER} - cd ${BUILDDIR} && ctest --output-on-failure --no-compress-output --build-no-clean -T test -L smoketest after_script: - apt-get update -y - apt-get install --no-install-recommends -y xsltproc - cd ${BUILDDIR} - xsltproc ../ci/CTest2JUnit.xsl Testing/`head -n 1 < Testing/TAG`/Test.xml > "JUnitTestResults_$CI_JOB_ID.xml" artifacts: reports: junit: - ${BUILDDIR}/JUnitTestResults_$CI_JOB_ID.xml expire_in: 5 days only: - merge_requests smoketests:x86-64:linux:gcc-10:cxx11-off: extends: .smoketests:linux:base variables: EIGEN_CI_CXX_COMPILER: g++-10 EIGEN_CI_CC_COMPILER: gcc-10 needs: [ "buildsmoketests:x86-64:linux:gcc-10:cxx11-off" ] smoketests:x86-64:linux:gcc-10:cxx11-on: extends: .smoketests:linux:base variables: EIGEN_CI_CXX_COMPILER: g++-10 EIGEN_CI_CC_COMPILER: gcc-10 needs: [ "buildsmoketests:x86-64:linux:gcc-10:cxx11-on" ] smoketests:x86-64:linux:clang-10:cxx11-off: extends: .smoketests:linux:base variables: EIGEN_CI_CXX_COMPILER: clang++-10 EIGEN_CI_CC_COMPILER: clang-10 needs: [ "buildsmoketests:x86-64:linux:clang-10:cxx11-off" ] smoketests:x86-64:linux:clang-10:cxx11-on: extends: .smoketests:linux:base variables: EIGEN_CI_CXX_COMPILER: clang++-10 EIGEN_CI_CC_COMPILER: clang-10 needs: [ "buildsmoketests:x86-64:linux:clang-10:cxx11-on" ] piqp-octave/src/eigen/ci/CTest2JUnit.xsl0000644000175100001770000001512014107270226017732 0ustar runnerdocker BuildName: BuildStamp: Name: Generator: CompilerName: OSName: Hostname: OSRelease: OSVersion: OSPlatform: Is64Bits: VendorString: VendorID: FamilyID: ModelID: ProcessorCacheSize: NumberOfLogicalCPU: NumberOfPhysicalCPU: TotalVirtualMemory: TotalPhysicalMemory: LogicalProcessorsPerPhysical: ProcessorClockFrequency: piqp-octave/src/eigen/Eigen/0000755000175100001770000000000014107270226015561 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/SparseLU0000644000175100001770000000342614107270226017207 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // Copyright (C) 2012 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. 
If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSELU_MODULE_H #define EIGEN_SPARSELU_MODULE_H #include "SparseCore" /** * \defgroup SparseLU_Module SparseLU module * This module defines a supernodal factorization of general sparse matrices. * The code is fully optimized for supernode-panel updates with specialized kernels. * Please, see the documentation of the SparseLU class for more details. */ // Ordering interface #include "OrderingMethods" #include "src/Core/util/DisableStupidWarnings.h" #include "src/SparseLU/SparseLU_gemm_kernel.h" #include "src/SparseLU/SparseLU_Structs.h" #include "src/SparseLU/SparseLU_SupernodalMatrix.h" #include "src/SparseLU/SparseLUImpl.h" #include "src/SparseCore/SparseColEtree.h" #include "src/SparseLU/SparseLU_Memory.h" #include "src/SparseLU/SparseLU_heap_relax_snode.h" #include "src/SparseLU/SparseLU_relax_snode.h" #include "src/SparseLU/SparseLU_pivotL.h" #include "src/SparseLU/SparseLU_panel_dfs.h" #include "src/SparseLU/SparseLU_kernel_bmod.h" #include "src/SparseLU/SparseLU_panel_bmod.h" #include "src/SparseLU/SparseLU_column_dfs.h" #include "src/SparseLU/SparseLU_column_bmod.h" #include "src/SparseLU/SparseLU_copy_to_ucol.h" #include "src/SparseLU/SparseLU_pruneL.h" #include "src/SparseLU/SparseLU_Utils.h" #include "src/SparseLU/SparseLU.h" #include "src/Core/util/ReenableStupidWarnings.h" #endif // EIGEN_SPARSELU_MODULE_H piqp-octave/src/eigen/Eigen/src/0000755000175100001770000000000014107270226016350 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/SparseLU/0000755000175100001770000000000014107270226020046 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/SparseLU/SparseLUImpl.h0000644000175100001770000001031714107270226022541 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
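// Illustrative usage sketch: the user-facing Eigen::SparseLU solver built on top of the
// internals declared below is typically driven as follows (see the documentation of the
// SparseLU class for details):
//
//   Eigen::SparseMatrix<double> A;   // assemble and compress the system matrix
//   Eigen::VectorXd b;               // right-hand side
//   Eigen::SparseLU<Eigen::SparseMatrix<double>, Eigen::COLAMDOrdering<int>> solver;
//   solver.analyzePattern(A);        // symbolic step: column ordering and elimination tree
//   solver.factorize(A);             // numeric step: supernodal LU factorization
//   Eigen::VectorXd x = solver.solve(b);
//
// SparseLUImpl below supplies the building blocks (memory management, panel/column DFS,
// supernodal updates) used by that factorization.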
#ifndef SPARSELU_IMPL_H #define SPARSELU_IMPL_H namespace Eigen { namespace internal { /** \ingroup SparseLU_Module * \class SparseLUImpl * Base class for sparseLU */ template class SparseLUImpl { public: typedef Matrix ScalarVector; typedef Matrix IndexVector; typedef Matrix ScalarMatrix; typedef Map > MappedMatrixBlock; typedef typename ScalarVector::RealScalar RealScalar; typedef Ref > BlockScalarVector; typedef Ref > BlockIndexVector; typedef LU_GlobalLU_t GlobalLU_t; typedef SparseMatrix MatrixType; protected: template Index expand(VectorType& vec, Index& length, Index nbElts, Index keep_prev, Index& num_expansions); Index memInit(Index m, Index n, Index annz, Index lwork, Index fillratio, Index panel_size, GlobalLU_t& glu); template Index memXpand(VectorType& vec, Index& maxlen, Index nbElts, MemType memtype, Index& num_expansions); void heap_relax_snode (const Index n, IndexVector& et, const Index relax_columns, IndexVector& descendants, IndexVector& relax_end); void relax_snode (const Index n, IndexVector& et, const Index relax_columns, IndexVector& descendants, IndexVector& relax_end); Index snode_dfs(const Index jcol, const Index kcol,const MatrixType& mat, IndexVector& xprune, IndexVector& marker, GlobalLU_t& glu); Index snode_bmod (const Index jcol, const Index fsupc, ScalarVector& dense, GlobalLU_t& glu); Index pivotL(const Index jcol, const RealScalar& diagpivotthresh, IndexVector& perm_r, IndexVector& iperm_c, Index& pivrow, GlobalLU_t& glu); template void dfs_kernel(const StorageIndex jj, IndexVector& perm_r, Index& nseg, IndexVector& panel_lsub, IndexVector& segrep, Ref repfnz_col, IndexVector& xprune, Ref marker, IndexVector& parent, IndexVector& xplore, GlobalLU_t& glu, Index& nextl_col, Index krow, Traits& traits); void panel_dfs(const Index m, const Index w, const Index jcol, MatrixType& A, IndexVector& perm_r, Index& nseg, ScalarVector& dense, IndexVector& panel_lsub, IndexVector& segrep, IndexVector& repfnz, IndexVector& xprune, IndexVector& marker, IndexVector& parent, IndexVector& xplore, GlobalLU_t& glu); void panel_bmod(const Index m, const Index w, const Index jcol, const Index nseg, ScalarVector& dense, ScalarVector& tempv, IndexVector& segrep, IndexVector& repfnz, GlobalLU_t& glu); Index column_dfs(const Index m, const Index jcol, IndexVector& perm_r, Index maxsuper, Index& nseg, BlockIndexVector lsub_col, IndexVector& segrep, BlockIndexVector repfnz, IndexVector& xprune, IndexVector& marker, IndexVector& parent, IndexVector& xplore, GlobalLU_t& glu); Index column_bmod(const Index jcol, const Index nseg, BlockScalarVector dense, ScalarVector& tempv, BlockIndexVector segrep, BlockIndexVector repfnz, Index fpanelc, GlobalLU_t& glu); Index copy_to_ucol(const Index jcol, const Index nseg, IndexVector& segrep, BlockIndexVector repfnz ,IndexVector& perm_r, BlockScalarVector dense, GlobalLU_t& glu); void pruneL(const Index jcol, const IndexVector& perm_r, const Index pivrow, const Index nseg, const IndexVector& segrep, BlockIndexVector repfnz, IndexVector& xprune, GlobalLU_t& glu); void countnz(const Index n, Index& nnzL, Index& nnzU, GlobalLU_t& glu); void fixupL(const Index n, const IndexVector& perm_r, GlobalLU_t& glu); template friend struct column_dfs_traits; }; } // end namespace internal } // namespace Eigen #endif piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_panel_dfs.h0000644000175100001770000002150414107270226023732 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. 
// // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* * NOTE: This file is the modified version of [s,d,c,z]panel_dfs.c file in SuperLU * -- SuperLU routine (version 2.0) -- * Univ. of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * November 15, 1997 * * Copyright (c) 1994 by Xerox Corporation. All rights reserved. * * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY * EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. * * Permission is hereby granted to use or copy this program for any * purpose, provided the above notices are retained on all copies. * Permission to modify the code and to distribute modified code is * granted, provided the above notices are retained, and a notice that * the code was modified is included with the above copyright notice. */ #ifndef SPARSELU_PANEL_DFS_H #define SPARSELU_PANEL_DFS_H namespace Eigen { namespace internal { template struct panel_dfs_traits { typedef typename IndexVector::Scalar StorageIndex; panel_dfs_traits(Index jcol, StorageIndex* marker) : m_jcol(jcol), m_marker(marker) {} bool update_segrep(Index krep, StorageIndex jj) { if(m_marker[krep] template void SparseLUImpl::dfs_kernel(const StorageIndex jj, IndexVector& perm_r, Index& nseg, IndexVector& panel_lsub, IndexVector& segrep, Ref repfnz_col, IndexVector& xprune, Ref marker, IndexVector& parent, IndexVector& xplore, GlobalLU_t& glu, Index& nextl_col, Index krow, Traits& traits ) { StorageIndex kmark = marker(krow); // For each unmarked krow of jj marker(krow) = jj; StorageIndex kperm = perm_r(krow); if (kperm == emptyIdxLU ) { // krow is in L : place it in structure of L(*, jj) panel_lsub(nextl_col++) = StorageIndex(krow); // krow is indexed into A traits.mem_expand(panel_lsub, nextl_col, kmark); } else { // krow is in U : if its supernode-representative krep // has been explored, update repfnz(*) // krep = supernode representative of the current row StorageIndex krep = glu.xsup(glu.supno(kperm)+1) - 1; // First nonzero element in the current column: StorageIndex myfnz = repfnz_col(krep); if (myfnz != emptyIdxLU ) { // Representative visited before if (myfnz > kperm ) repfnz_col(krep) = kperm; } else { // Otherwise, perform dfs starting at krep StorageIndex oldrep = emptyIdxLU; parent(krep) = oldrep; repfnz_col(krep) = kperm; StorageIndex xdfs = glu.xlsub(krep); Index maxdfs = xprune(krep); StorageIndex kpar; do { // For each unmarked kchild of krep while (xdfs < maxdfs) { StorageIndex kchild = glu.lsub(xdfs); xdfs++; StorageIndex chmark = marker(kchild); if (chmark != jj ) { marker(kchild) = jj; StorageIndex chperm = perm_r(kchild); if (chperm == emptyIdxLU) { // case kchild is in L: place it in L(*, j) panel_lsub(nextl_col++) = kchild; traits.mem_expand(panel_lsub, nextl_col, chmark); } else { // case kchild is in U : // chrep = its supernode-rep. If its rep has been explored, // update its repfnz(*) StorageIndex chrep = glu.xsup(glu.supno(chperm)+1) - 1; myfnz = repfnz_col(chrep); if (myfnz != emptyIdxLU) { // Visited before if (myfnz > chperm) repfnz_col(chrep) = chperm; } else { // Cont. 
dfs at snode-rep of kchild xplore(krep) = xdfs; oldrep = krep; krep = chrep; // Go deeper down G(L) parent(krep) = oldrep; repfnz_col(krep) = chperm; xdfs = glu.xlsub(krep); maxdfs = xprune(krep); } // end if myfnz != -1 } // end if chperm == -1 } // end if chmark !=jj } // end while xdfs < maxdfs // krow has no more unexplored nbrs : // Place snode-rep krep in postorder DFS, if this // segment is seen for the first time. (Note that // "repfnz(krep)" may change later.) // Baktrack dfs to its parent if(traits.update_segrep(krep,jj)) //if (marker1(krep) < jcol ) { segrep(nseg) = krep; ++nseg; //marker1(krep) = jj; } kpar = parent(krep); // Pop recursion, mimic recursion if (kpar == emptyIdxLU) break; // dfs done krep = kpar; xdfs = xplore(krep); maxdfs = xprune(krep); } while (kpar != emptyIdxLU); // Do until empty stack } // end if (myfnz = -1) } // end if (kperm == -1) } /** * \brief Performs a symbolic factorization on a panel of columns [jcol, jcol+w) * * A supernode representative is the last column of a supernode. * The nonzeros in U[*,j] are segments that end at supernodes representatives * * The routine returns a list of the supernodal representatives * in topological order of the dfs that generates them. This list is * a superset of the topological order of each individual column within * the panel. * The location of the first nonzero in each supernodal segment * (supernodal entry location) is also returned. Each column has * a separate list for this purpose. * * Two markers arrays are used for dfs : * marker[i] == jj, if i was visited during dfs of current column jj; * marker1[i] >= jcol, if i was visited by earlier columns in this panel; * * \param[in] m number of rows in the matrix * \param[in] w Panel size * \param[in] jcol Starting column of the panel * \param[in] A Input matrix in column-major storage * \param[in] perm_r Row permutation * \param[out] nseg Number of U segments * \param[out] dense Accumulate the column vectors of the panel * \param[out] panel_lsub Subscripts of the row in the panel * \param[out] segrep Segment representative i.e first nonzero row of each segment * \param[out] repfnz First nonzero location in each row * \param[out] xprune The pruned elimination tree * \param[out] marker work vector * \param parent The elimination tree * \param xplore work vector * \param glu The global data structure * */ template void SparseLUImpl::panel_dfs(const Index m, const Index w, const Index jcol, MatrixType& A, IndexVector& perm_r, Index& nseg, ScalarVector& dense, IndexVector& panel_lsub, IndexVector& segrep, IndexVector& repfnz, IndexVector& xprune, IndexVector& marker, IndexVector& parent, IndexVector& xplore, GlobalLU_t& glu) { Index nextl_col; // Next available position in panel_lsub[*,jj] // Initialize pointers VectorBlock marker1(marker, m, m); nseg = 0; panel_dfs_traits traits(jcol, marker1.data()); // For each column in the panel for (StorageIndex jj = StorageIndex(jcol); jj < jcol + w; jj++) { nextl_col = (jj - jcol) * m; VectorBlock repfnz_col(repfnz, nextl_col, m); // First nonzero location in each row VectorBlock dense_col(dense,nextl_col, m); // Accumulate a column vector here // For each nnz in A[*, jj] do depth first search for (typename MatrixType::InnerIterator it(A, jj); it; ++it) { Index krow = it.row(); dense_col(krow) = it.value(); StorageIndex kmark = marker(krow); if (kmark == jj) continue; // krow visited before, go to the next nonzero dfs_kernel(jj, perm_r, nseg, panel_lsub, segrep, repfnz_col, xprune, marker, parent, xplore, glu, nextl_col, 
krow, traits); }// end for nonzeros in column jj } // end for column jj } } // end namespace internal } // end namespace Eigen #endif // SPARSELU_PANEL_DFS_H piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_heap_relax_snode.h0000644000175100001770000001012514107270226025274 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* This file is a modified version of heap_relax_snode.c file in SuperLU * -- SuperLU routine (version 3.0) -- * Univ. of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * October 15, 2003 * * Copyright (c) 1994 by Xerox Corporation. All rights reserved. * * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY * EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. * * Permission is hereby granted to use or copy this program for any * purpose, provided the above notices are retained on all copies. * Permission to modify the code and to distribute modified code is * granted, provided the above notices are retained, and a notice that * the code was modified is included with the above copyright notice. */ #ifndef SPARSELU_HEAP_RELAX_SNODE_H #define SPARSELU_HEAP_RELAX_SNODE_H namespace Eigen { namespace internal { /** * \brief Identify the initial relaxed supernodes * * This routine applied to a symmetric elimination tree. * It assumes that the matrix has been reordered according to the postorder of the etree * \param n The number of columns * \param et elimination tree * \param relax_columns Maximum number of columns allowed in a relaxed snode * \param descendants Number of descendants of each node in the etree * \param relax_end last column in a supernode */ template void SparseLUImpl::heap_relax_snode (const Index n, IndexVector& et, const Index relax_columns, IndexVector& descendants, IndexVector& relax_end) { // The etree may not be postordered, but its heap ordered IndexVector post; internal::treePostorder(StorageIndex(n), et, post); // Post order etree IndexVector inv_post(n+1); for (StorageIndex i = 0; i < n+1; ++i) inv_post(post(i)) = i; // inv_post = post.inverse()??? 
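  // (Indeed it is: the loop above builds the inverse permutation, since it enforces
  //  inv_post(post(i)) == i for every i, answering the question in the preceding comment.)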
// Renumber etree in postorder IndexVector iwork(n); IndexVector et_save(n+1); for (Index i = 0; i < n; ++i) { iwork(post(i)) = post(et(i)); } et_save = et; // Save the original etree et = iwork; // compute the number of descendants of each node in the etree relax_end.setConstant(emptyIdxLU); Index j, parent; descendants.setZero(); for (j = 0; j < n; j++) { parent = et(j); if (parent != n) // not the dummy root descendants(parent) += descendants(j) + 1; } // Identify the relaxed supernodes by postorder traversal of the etree Index snode_start; // beginning of a snode StorageIndex k; Index nsuper_et_post = 0; // Number of relaxed snodes in postordered etree Index nsuper_et = 0; // Number of relaxed snodes in the original etree StorageIndex l; for (j = 0; j < n; ) { parent = et(j); snode_start = j; while ( parent != n && descendants(parent) < relax_columns ) { j = parent; parent = et(j); } // Found a supernode in postordered etree, j is the last column ++nsuper_et_post; k = StorageIndex(n); for (Index i = snode_start; i <= j; ++i) k = (std::min)(k, inv_post(i)); l = inv_post(j); if ( (l - k) == (j - snode_start) ) // Same number of columns in the snode { // This is also a supernode in the original etree relax_end(k) = l; // Record last column ++nsuper_et; } else { for (Index i = snode_start; i <= j; ++i) { l = inv_post(i); if (descendants(i) == 0) { relax_end(l) = l; ++nsuper_et; } } } j++; // Search for a new leaf while (descendants(j) != 0 && j < n) j++; } // End postorder traversal of the etree // Recover the original etree et = et_save; } } // end namespace internal } // end namespace Eigen #endif // SPARSELU_HEAP_RELAX_SNODE_H piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_gemm_kernel.h0000644000175100001770000002375114107270226024272 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSELU_GEMM_KERNEL_H #define EIGEN_SPARSELU_GEMM_KERNEL_H namespace Eigen { namespace internal { /** \internal * A general matrix-matrix product kernel optimized for the SparseLU factorization. * - A, B, and C must be column major * - lda and ldc must be multiples of the respective packet size * - C must have the same alignment as A */ template EIGEN_DONT_INLINE void sparselu_gemm(Index m, Index n, Index d, const Scalar* A, Index lda, const Scalar* B, Index ldb, Scalar* C, Index ldc) { using namespace Eigen::internal; typedef typename packet_traits::type Packet; enum { NumberOfRegisters = EIGEN_ARCH_DEFAULT_NUMBER_OF_REGISTERS, PacketSize = packet_traits::size, PM = 8, // peeling in M RN = 2, // register blocking RK = NumberOfRegisters>=16 ? 
4 : 2, // register blocking BM = 4096/sizeof(Scalar), // number of rows of A-C per chunk SM = PM*PacketSize // step along M }; Index d_end = (d/RK)*RK; // number of columns of A (rows of B) suitable for full register blocking Index n_end = (n/RN)*RN; // number of columns of B-C suitable for processing RN columns at once Index i0 = internal::first_default_aligned(A,m); eigen_internal_assert(((lda%PacketSize)==0) && ((ldc%PacketSize)==0) && (i0==internal::first_default_aligned(C,m))); // handle the non aligned rows of A and C without any optimization: for(Index i=0; i(BM, m-ib); // actual number of rows Index actual_b_end1 = (actual_b/SM)*SM; // actual number of rows suitable for peeling Index actual_b_end2 = (actual_b/PacketSize)*PacketSize; // actual number of rows suitable for vectorization // Let's process two columns of B-C at once for(Index j=0; j(Bc0[0]); } { b10 = pset1(Bc0[1]); } if(RK==4) { b20 = pset1(Bc0[2]); } if(RK==4) { b30 = pset1(Bc0[3]); } { b01 = pset1(Bc1[0]); } { b11 = pset1(Bc1[1]); } if(RK==4) { b21 = pset1(Bc1[2]); } if(RK==4) { b31 = pset1(Bc1[3]); } Packet a0, a1, a2, a3, c0, c1, t0, t1; const Scalar* A0 = A+ib+(k+0)*lda; const Scalar* A1 = A+ib+(k+1)*lda; const Scalar* A2 = A+ib+(k+2)*lda; const Scalar* A3 = A+ib+(k+3)*lda; Scalar* C0 = C+ib+(j+0)*ldc; Scalar* C1 = C+ib+(j+1)*ldc; a0 = pload(A0); a1 = pload(A1); if(RK==4) { a2 = pload(A2); a3 = pload(A3); } else { // workaround "may be used uninitialized in this function" warning a2 = a3 = a0; } #define KMADD(c, a, b, tmp) {tmp = b; tmp = pmul(a,tmp); c = padd(c,tmp);} #define WORK(I) \ c0 = pload(C0+i+(I)*PacketSize); \ c1 = pload(C1+i+(I)*PacketSize); \ KMADD(c0, a0, b00, t0) \ KMADD(c1, a0, b01, t1) \ a0 = pload(A0+i+(I+1)*PacketSize); \ KMADD(c0, a1, b10, t0) \ KMADD(c1, a1, b11, t1) \ a1 = pload(A1+i+(I+1)*PacketSize); \ if(RK==4){ KMADD(c0, a2, b20, t0) }\ if(RK==4){ KMADD(c1, a2, b21, t1) }\ if(RK==4){ a2 = pload(A2+i+(I+1)*PacketSize); }\ if(RK==4){ KMADD(c0, a3, b30, t0) }\ if(RK==4){ KMADD(c1, a3, b31, t1) }\ if(RK==4){ a3 = pload(A3+i+(I+1)*PacketSize); }\ pstore(C0+i+(I)*PacketSize, c0); \ pstore(C1+i+(I)*PacketSize, c1) // process rows of A' - C' with aggressive vectorization and peeling for(Index i=0; i0) { const Scalar* Bc0 = B+(n-1)*ldb; for(Index k=0; k(Bc0[0]); b10 = pset1(Bc0[1]); if(RK==4) b20 = pset1(Bc0[2]); if(RK==4) b30 = pset1(Bc0[3]); Packet a0, a1, a2, a3, c0, t0/*, t1*/; const Scalar* A0 = A+ib+(k+0)*lda; const Scalar* A1 = A+ib+(k+1)*lda; const Scalar* A2 = A+ib+(k+2)*lda; const Scalar* A3 = A+ib+(k+3)*lda; Scalar* C0 = C+ib+(n_end)*ldc; a0 = pload(A0); a1 = pload(A1); if(RK==4) { a2 = pload(A2); a3 = pload(A3); } else { // workaround "may be used uninitialized in this function" warning a2 = a3 = a0; } #define WORK(I) \ c0 = pload(C0+i+(I)*PacketSize); \ KMADD(c0, a0, b00, t0) \ a0 = pload(A0+i+(I+1)*PacketSize); \ KMADD(c0, a1, b10, t0) \ a1 = pload(A1+i+(I+1)*PacketSize); \ if(RK==4){ KMADD(c0, a2, b20, t0) }\ if(RK==4){ a2 = pload(A2+i+(I+1)*PacketSize); }\ if(RK==4){ KMADD(c0, a3, b30, t0) }\ if(RK==4){ a3 = pload(A3+i+(I+1)*PacketSize); }\ pstore(C0+i+(I)*PacketSize, c0); // aggressive vectorization and peeling for(Index i=0; i0) { for(Index j=0; j1 ? 
Aligned : 0 }; typedef Map, Alignment > MapVector; typedef Map, Alignment > ConstMapVector; if(rd==1) MapVector(C+j*ldc+ib,actual_b) += B[0+d_end+j*ldb] * ConstMapVector(A+(d_end+0)*lda+ib, actual_b); else if(rd==2) MapVector(C+j*ldc+ib,actual_b) += B[0+d_end+j*ldb] * ConstMapVector(A+(d_end+0)*lda+ib, actual_b) + B[1+d_end+j*ldb] * ConstMapVector(A+(d_end+1)*lda+ib, actual_b); else MapVector(C+j*ldc+ib,actual_b) += B[0+d_end+j*ldb] * ConstMapVector(A+(d_end+0)*lda+ib, actual_b) + B[1+d_end+j*ldb] * ConstMapVector(A+(d_end+1)*lda+ib, actual_b) + B[2+d_end+j*ldb] * ConstMapVector(A+(d_end+2)*lda+ib, actual_b); } } } // blocking on the rows of A and C } #undef KMADD } // namespace internal } // namespace Eigen #endif // EIGEN_SPARSELU_GEMM_KERNEL_H piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_Structs.h0000644000175100001770000001155614107270226023454 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* * NOTE: This file comes from a partly modified version of files slu_[s,d,c,z]defs.h * -- SuperLU routine (version 4.1) -- * Univ. of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * November, 2010 * * Global data structures used in LU factorization - * * nsuper: #supernodes = nsuper + 1, numbered [0, nsuper]. * (xsup,supno): supno[i] is the supernode no to which i belongs; * xsup(s) points to the beginning of the s-th supernode. * e.g. supno 0 1 2 2 3 3 3 4 4 4 4 4 (n=12) * xsup 0 1 2 4 7 12 * Note: dfs will be performed on supernode rep. relative to the new * row pivoting ordering * * (xlsub,lsub): lsub[*] contains the compressed subscript of * rectangular supernodes; xlsub[j] points to the starting * location of the j-th column in lsub[*]. Note that xlsub * is indexed by column. * Storage: original row subscripts * * During the course of sparse LU factorization, we also use * (xlsub,lsub) for the purpose of symmetric pruning. For each * supernode {s,s+1,...,t=s+r} with first column s and last * column t, the subscript set * lsub[j], j=xlsub[s], .., xlsub[s+1]-1 * is the structure of column s (i.e. structure of this supernode). * It is used for the storage of numerical values. * Furthermore, * lsub[j], j=xlsub[t], .., xlsub[t+1]-1 * is the structure of the last column t of this supernode. * It is for the purpose of symmetric pruning. Therefore, the * structural subscripts can be rearranged without making physical * interchanges among the numerical values. * * However, if the supernode has only one column, then we * only keep one set of subscripts. For any subscript interchange * performed, similar interchange must be done on the numerical * values. * * The last column structures (for pruning) will be removed * after the numercial LU factorization phase. * * (xlusup,lusup): lusup[*] contains the numerical values of the * rectangular supernodes; xlusup[j] points to the starting * location of the j-th column in storage vector lusup[*] * Note: xlusup is indexed by column. * Each rectangular supernode is stored by column-major * scheme, consistent with Fortran 2-dim array storage. * * (xusub,ucol,usub): ucol[*] stores the numerical values of * U-columns outside the rectangular supernodes. 
The row * subscript of nonzero ucol[k] is stored in usub[k]. * xusub[i] points to the starting location of column i in ucol. * Storage: new row subscripts; that is subscripts of PA. */ #ifndef EIGEN_LU_STRUCTS #define EIGEN_LU_STRUCTS namespace Eigen { namespace internal { typedef enum {LUSUP, UCOL, LSUB, USUB, LLVL, ULVL} MemType; template struct LU_GlobalLU_t { typedef typename IndexVector::Scalar StorageIndex; IndexVector xsup; //First supernode column ... xsup(s) points to the beginning of the s-th supernode IndexVector supno; // Supernode number corresponding to this column (column to supernode mapping) ScalarVector lusup; // nonzero values of L ordered by columns IndexVector lsub; // Compressed row indices of L rectangular supernodes. IndexVector xlusup; // pointers to the beginning of each column in lusup IndexVector xlsub; // pointers to the beginning of each column in lsub Index nzlmax; // Current max size of lsub Index nzlumax; // Current max size of lusup ScalarVector ucol; // nonzero values of U ordered by columns IndexVector usub; // row indices of U columns in ucol IndexVector xusub; // Pointers to the beginning of each column of U in ucol Index nzumax; // Current max size of ucol Index n; // Number of columns in the matrix Index num_expansions; }; // Values to set for performance struct perfvalues { Index panel_size; // a panel consists of at most consecutive columns Index relax; // To control degree of relaxing supernodes. If the number of nodes (columns) // in a subtree of the elimination tree is less than relax, this subtree is considered // as one supernode regardless of the row structures of those columns Index maxsuper; // The maximum size for a supernode in complete LU Index rowblk; // The minimum row dimension for 2-D blocking to be used; Index colblk; // The minimum column dimension for 2-D blocking to be used; Index fillfactor; // The estimated fills factors for L and U, compared with A }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_LU_STRUCTS piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_copy_to_ucol.h0000644000175100001770000000714114107270226024476 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* * NOTE: This file is the modified version of [s,d,c,z]copy_to_ucol.c file in SuperLU * -- SuperLU routine (version 2.0) -- * Univ. of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * November 15, 1997 * * Copyright (c) 1994 by Xerox Corporation. All rights reserved. * * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY * EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. * * Permission is hereby granted to use or copy this program for any * purpose, provided the above notices are retained on all copies. * Permission to modify the code and to distribute modified code is * granted, provided the above notices are retained, and a notice that * the code was modified is included with the above copyright notice. 
*/ #ifndef SPARSELU_COPY_TO_UCOL_H #define SPARSELU_COPY_TO_UCOL_H namespace Eigen { namespace internal { /** * \brief Performs numeric block updates (sup-col) in topological order * * \param jcol current column to update * \param nseg Number of segments in the U part * \param segrep segment representative ... * \param repfnz First nonzero column in each row ... * \param perm_r Row permutation * \param dense Store the full representation of the column * \param glu Global LU data. * \return 0 - successful return * > 0 - number of bytes allocated when run out of space * */ template Index SparseLUImpl::copy_to_ucol(const Index jcol, const Index nseg, IndexVector& segrep, BlockIndexVector repfnz ,IndexVector& perm_r, BlockScalarVector dense, GlobalLU_t& glu) { Index ksub, krep, ksupno; Index jsupno = glu.supno(jcol); // For each nonzero supernode segment of U[*,j] in topological order Index k = nseg - 1, i; StorageIndex nextu = glu.xusub(jcol); Index kfnz, isub, segsize; Index new_next,irow; Index fsupc, mem; for (ksub = 0; ksub < nseg; ksub++) { krep = segrep(k); k--; ksupno = glu.supno(krep); if (jsupno != ksupno ) // should go into ucol(); { kfnz = repfnz(krep); if (kfnz != emptyIdxLU) { // Nonzero U-segment fsupc = glu.xsup(ksupno); isub = glu.xlsub(fsupc) + kfnz - fsupc; segsize = krep - kfnz + 1; new_next = nextu + segsize; while (new_next > glu.nzumax) { mem = memXpand(glu.ucol, glu.nzumax, nextu, UCOL, glu.num_expansions); if (mem) return mem; mem = memXpand(glu.usub, glu.nzumax, nextu, USUB, glu.num_expansions); if (mem) return mem; } for (i = 0; i < segsize; i++) { irow = glu.lsub(isub); glu.usub(nextu) = perm_r(irow); // Unlike the L part, the U part is stored in its final order glu.ucol(nextu) = dense(irow); dense(irow) = Scalar(0.0); nextu++; isub++; } } // end nonzero U-segment } // end if jsupno } // end for each segment glu.xusub(jcol + 1) = nextu; // close U(*,jcol) return 0; } } // namespace internal } // end namespace Eigen #endif // SPARSELU_COPY_TO_UCOL_H piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_Memory.h0000644000175100001770000001666214107270226023260 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* * NOTE: This file is the modified version of [s,d,c,z]memory.c files in SuperLU * -- SuperLU routine (version 3.1) -- * Univ. of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * August 1, 2008 * * Copyright (c) 1994 by Xerox Corporation. All rights reserved. * * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY * EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. * * Permission is hereby granted to use or copy this program for any * purpose, provided the above notices are retained on all copies. * Permission to modify the code and to distribute modified code is * granted, provided the above notices are retained, and a notice that * the code was modified is included with the above copyright notice. 
*/ #ifndef EIGEN_SPARSELU_MEMORY #define EIGEN_SPARSELU_MEMORY namespace Eigen { namespace internal { enum { LUNoMarker = 3 }; enum {emptyIdxLU = -1}; inline Index LUnumTempV(Index& m, Index& w, Index& t, Index& b) { return (std::max)(m, (t+b)*w); } template< typename Scalar> inline Index LUTempSpace(Index&m, Index& w) { return (2*w + 4 + LUNoMarker) * m * sizeof(Index) + (w + 1) * m * sizeof(Scalar); } /** * Expand the existing storage to accommodate more fill-ins * \param vec Valid pointer to the vector to allocate or expand * \param[in,out] length At input, contain the current length of the vector that is to be increased. At output, length of the newly allocated vector * \param[in] nbElts Current number of elements in the factors * \param keep_prev 1: use length and do not expand the vector; 0: compute new_len and expand * \param[in,out] num_expansions Number of times the memory has been expanded */ template template Index SparseLUImpl::expand(VectorType& vec, Index& length, Index nbElts, Index keep_prev, Index& num_expansions) { float alpha = 1.5; // Ratio of the memory increase Index new_len; // New size of the allocated memory if(num_expansions == 0 || keep_prev) new_len = length ; // First time allocate requested else new_len = (std::max)(length+1,Index(alpha * length)); VectorType old_vec; // Temporary vector to hold the previous values if (nbElts > 0 ) old_vec = vec.segment(0,nbElts); //Allocate or expand the current vector #ifdef EIGEN_EXCEPTIONS try #endif { vec.resize(new_len); } #ifdef EIGEN_EXCEPTIONS catch(std::bad_alloc& ) #else if(!vec.size()) #endif { if (!num_expansions) { // First time to allocate from LUMemInit() // Let LUMemInit() deals with it. return -1; } if (keep_prev) { // In this case, the memory length should not not be reduced return new_len; } else { // Reduce the size and increase again Index tries = 0; // Number of attempts do { alpha = (alpha + 1)/2; new_len = (std::max)(length+1,Index(alpha * length)); #ifdef EIGEN_EXCEPTIONS try #endif { vec.resize(new_len); } #ifdef EIGEN_EXCEPTIONS catch(std::bad_alloc& ) #else if (!vec.size()) #endif { tries += 1; if ( tries > 10) return new_len; } } while (!vec.size()); } } //Copy the previous values to the newly allocated space if (nbElts > 0) vec.segment(0, nbElts) = old_vec; length = new_len; if(num_expansions) ++num_expansions; return 0; } /** * \brief Allocate various working space for the numerical factorization phase. * \param m number of rows of the input matrix * \param n number of columns * \param annz number of initial nonzeros in the matrix * \param lwork if lwork=-1, this routine returns an estimated size of the required memory * \param glu persistent data to facilitate multiple factors : will be deleted later ?? 
* \param fillratio estimated ratio of fill in the factors * \param panel_size Size of a panel * \return an estimated size of the required memory if lwork = -1; otherwise, return the size of actually allocated memory when allocation failed, and 0 on success * \note Unlike SuperLU, this routine does not support successive factorization with the same pattern and the same row permutation */ template Index SparseLUImpl::memInit(Index m, Index n, Index annz, Index lwork, Index fillratio, Index panel_size, GlobalLU_t& glu) { Index& num_expansions = glu.num_expansions; //No memory expansions so far num_expansions = 0; glu.nzumax = glu.nzlumax = (std::min)(fillratio * (annz+1) / n, m) * n; // estimated number of nonzeros in U glu.nzlmax = (std::max)(Index(4), fillratio) * (annz+1) / 4; // estimated nnz in L factor // Return the estimated size to the user if necessary Index tempSpace; tempSpace = (2*panel_size + 4 + LUNoMarker) * m * sizeof(Index) + (panel_size + 1) * m * sizeof(Scalar); if (lwork == emptyIdxLU) { Index estimated_size; estimated_size = (5 * n + 5) * sizeof(Index) + tempSpace + (glu.nzlmax + glu.nzumax) * sizeof(Index) + (glu.nzlumax+glu.nzumax) * sizeof(Scalar) + n; return estimated_size; } // Setup the required space // First allocate Integer pointers for L\U factors glu.xsup.resize(n+1); glu.supno.resize(n+1); glu.xlsub.resize(n+1); glu.xlusup.resize(n+1); glu.xusub.resize(n+1); // Reserve memory for L/U factors do { if( (expand(glu.lusup, glu.nzlumax, 0, 0, num_expansions)<0) || (expand(glu.ucol, glu.nzumax, 0, 0, num_expansions)<0) || (expand (glu.lsub, glu.nzlmax, 0, 0, num_expansions)<0) || (expand (glu.usub, glu.nzumax, 0, 1, num_expansions)<0) ) { //Reduce the estimated size and retry glu.nzlumax /= 2; glu.nzumax /= 2; glu.nzlmax /= 2; if (glu.nzlumax < annz ) return glu.nzlumax; } } while (!glu.lusup.size() || !glu.ucol.size() || !glu.lsub.size() || !glu.usub.size()); ++num_expansions; return 0; } // end LuMemInit /** * \brief Expand the existing storage * \param vec vector to expand * \param[in,out] maxlen On input, previous size of vec (Number of elements to copy ). on output, new size * \param nbElts current number of elements in the vector. * \param memtype Type of the element to expand * \param num_expansions Number of expansions * \return 0 on success, > 0 size of the memory allocated so far */ template template Index SparseLUImpl::memXpand(VectorType& vec, Index& maxlen, Index nbElts, MemType memtype, Index& num_expansions) { Index failed_size; if (memtype == USUB) failed_size = this->expand(vec, maxlen, nbElts, 1, num_expansions); else failed_size = this->expand(vec, maxlen, nbElts, 0, num_expansions); if (failed_size) return failed_size; return 0 ; } } // end namespace internal } // end namespace Eigen #endif // EIGEN_SPARSELU_MEMORY piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_Utils.h0000644000175100001770000000400114107270226023070 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
#ifndef EIGEN_SPARSELU_UTILS_H #define EIGEN_SPARSELU_UTILS_H namespace Eigen { namespace internal { /** * \brief Count Nonzero elements in the factors */ template void SparseLUImpl::countnz(const Index n, Index& nnzL, Index& nnzU, GlobalLU_t& glu) { nnzL = 0; nnzU = (glu.xusub)(n); Index nsuper = (glu.supno)(n); Index jlen; Index i, j, fsupc; if (n <= 0 ) return; // For each supernode for (i = 0; i <= nsuper; i++) { fsupc = glu.xsup(i); jlen = glu.xlsub(fsupc+1) - glu.xlsub(fsupc); for (j = fsupc; j < glu.xsup(i+1); j++) { nnzL += jlen; nnzU += j - fsupc + 1; jlen--; } } } /** * \brief Fix up the data storage lsub for L-subscripts. * * It removes the subscripts sets for structural pruning, * and applies permutation to the remaining subscripts * */ template void SparseLUImpl::fixupL(const Index n, const IndexVector& perm_r, GlobalLU_t& glu) { Index fsupc, i, j, k, jstart; StorageIndex nextl = 0; Index nsuper = (glu.supno)(n); // For each supernode for (i = 0; i <= nsuper; i++) { fsupc = glu.xsup(i); jstart = glu.xlsub(fsupc); glu.xlsub(fsupc) = nextl; for (j = jstart; j < glu.xlsub(fsupc + 1); j++) { glu.lsub(nextl) = perm_r(glu.lsub(j)); // Now indexed into P*A nextl++; } for (k = fsupc+1; k < glu.xsup(i+1); k++) glu.xlsub(k) = nextl; // other columns in supernode i } glu.xlsub(n) = nextl; } } // end namespace internal } // end namespace Eigen #endif // EIGEN_SPARSELU_UTILS_H piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_panel_bmod.h0000644000175100001770000002044514107270226024102 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // Copyright (C) 2012 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* * NOTE: This file is the modified version of [s,d,c,z]panel_bmod.c file in SuperLU * -- SuperLU routine (version 3.0) -- * Univ. of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * October 15, 2003 * * Copyright (c) 1994 by Xerox Corporation. All rights reserved. * * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY * EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. * * Permission is hereby granted to use or copy this program for any * purpose, provided the above notices are retained on all copies. * Permission to modify the code and to distribute modified code is * granted, provided the above notices are retained, and a notice that * the code was modified is included with the above copyright notice. */ #ifndef SPARSELU_PANEL_BMOD_H #define SPARSELU_PANEL_BMOD_H namespace Eigen { namespace internal { /** * \brief Performs numeric block updates (sup-panel) in topological order. * * Before entering this routine, the original nonzeros in the panel * were already copied into the spa[m,w] * * \param m number of rows in the matrix * \param w Panel size * \param jcol Starting column of the panel * \param nseg Number of segments in the U part * \param dense Store the full representation of the panel * \param tempv working array * \param segrep segment representative... first row in the segment * \param repfnz First nonzero rows * \param glu Global LU data. 
* * */ template void SparseLUImpl::panel_bmod(const Index m, const Index w, const Index jcol, const Index nseg, ScalarVector& dense, ScalarVector& tempv, IndexVector& segrep, IndexVector& repfnz, GlobalLU_t& glu) { Index ksub,jj,nextl_col; Index fsupc, nsupc, nsupr, nrow; Index krep, kfnz; Index lptr; // points to the row subscripts of a supernode Index luptr; // ... Index segsize,no_zeros ; // For each nonz supernode segment of U[*,j] in topological order Index k = nseg - 1; const Index PacketSize = internal::packet_traits::size; for (ksub = 0; ksub < nseg; ksub++) { // For each updating supernode /* krep = representative of current k-th supernode * fsupc = first supernodal column * nsupc = number of columns in a supernode * nsupr = number of rows in a supernode */ krep = segrep(k); k--; fsupc = glu.xsup(glu.supno(krep)); nsupc = krep - fsupc + 1; nsupr = glu.xlsub(fsupc+1) - glu.xlsub(fsupc); nrow = nsupr - nsupc; lptr = glu.xlsub(fsupc); // loop over the panel columns to detect the actual number of columns and rows Index u_rows = 0; Index u_cols = 0; for (jj = jcol; jj < jcol + w; jj++) { nextl_col = (jj-jcol) * m; VectorBlock repfnz_col(repfnz, nextl_col, m); // First nonzero column index for each row kfnz = repfnz_col(krep); if ( kfnz == emptyIdxLU ) continue; // skip any zero segment segsize = krep - kfnz + 1; u_cols++; u_rows = (std::max)(segsize,u_rows); } if(nsupc >= 2) { Index ldu = internal::first_multiple(u_rows, PacketSize); Map > U(tempv.data(), u_rows, u_cols, OuterStride<>(ldu)); // gather U Index u_col = 0; for (jj = jcol; jj < jcol + w; jj++) { nextl_col = (jj-jcol) * m; VectorBlock repfnz_col(repfnz, nextl_col, m); // First nonzero column index for each row VectorBlock dense_col(dense, nextl_col, m); // Scatter/gather entire matrix column from/to here kfnz = repfnz_col(krep); if ( kfnz == emptyIdxLU ) continue; // skip any zero segment segsize = krep - kfnz + 1; luptr = glu.xlusup(fsupc); no_zeros = kfnz - fsupc; Index isub = lptr + no_zeros; Index off = u_rows-segsize; for (Index i = 0; i < off; i++) U(i,u_col) = 0; for (Index i = 0; i < segsize; i++) { Index irow = glu.lsub(isub); U(i+off,u_col) = dense_col(irow); ++isub; } u_col++; } // solve U = A^-1 U luptr = glu.xlusup(fsupc); Index lda = glu.xlusup(fsupc+1) - glu.xlusup(fsupc); no_zeros = (krep - u_rows + 1) - fsupc; luptr += lda * no_zeros + no_zeros; MappedMatrixBlock A(glu.lusup.data()+luptr, u_rows, u_rows, OuterStride<>(lda) ); U = A.template triangularView().solve(U); // update luptr += u_rows; MappedMatrixBlock B(glu.lusup.data()+luptr, nrow, u_rows, OuterStride<>(lda) ); eigen_assert(tempv.size()>w*ldu + nrow*w + 1); Index ldl = internal::first_multiple(nrow, PacketSize); Index offset = (PacketSize-internal::first_default_aligned(B.data(), PacketSize)) % PacketSize; MappedMatrixBlock L(tempv.data()+w*ldu+offset, nrow, u_cols, OuterStride<>(ldl)); L.setZero(); internal::sparselu_gemm(L.rows(), L.cols(), B.cols(), B.data(), B.outerStride(), U.data(), U.outerStride(), L.data(), L.outerStride()); // scatter U and L u_col = 0; for (jj = jcol; jj < jcol + w; jj++) { nextl_col = (jj-jcol) * m; VectorBlock repfnz_col(repfnz, nextl_col, m); // First nonzero column index for each row VectorBlock dense_col(dense, nextl_col, m); // Scatter/gather entire matrix column from/to here kfnz = repfnz_col(krep); if ( kfnz == emptyIdxLU ) continue; // skip any zero segment segsize = krep - kfnz + 1; no_zeros = kfnz - fsupc; Index isub = lptr + no_zeros; Index off = u_rows-segsize; for (Index i = 0; i < segsize; i++) { Index 
irow = glu.lsub(isub++); dense_col(irow) = U.coeff(i+off,u_col); U.coeffRef(i+off,u_col) = 0; } // Scatter l into SPA dense[] for (Index i = 0; i < nrow; i++) { Index irow = glu.lsub(isub++); dense_col(irow) -= L.coeff(i,u_col); L.coeffRef(i,u_col) = 0; } u_col++; } } else // level 2 only { // Sequence through each column in the panel for (jj = jcol; jj < jcol + w; jj++) { nextl_col = (jj-jcol) * m; VectorBlock repfnz_col(repfnz, nextl_col, m); // First nonzero column index for each row VectorBlock dense_col(dense, nextl_col, m); // Scatter/gather entire matrix column from/to here kfnz = repfnz_col(krep); if ( kfnz == emptyIdxLU ) continue; // skip any zero segment segsize = krep - kfnz + 1; luptr = glu.xlusup(fsupc); Index lda = glu.xlusup(fsupc+1)-glu.xlusup(fsupc);// nsupr // Perform a trianglar solve and block update, // then scatter the result of sup-col update to dense[] no_zeros = kfnz - fsupc; if(segsize==1) LU_kernel_bmod<1>::run(segsize, dense_col, tempv, glu.lusup, luptr, lda, nrow, glu.lsub, lptr, no_zeros); else if(segsize==2) LU_kernel_bmod<2>::run(segsize, dense_col, tempv, glu.lusup, luptr, lda, nrow, glu.lsub, lptr, no_zeros); else if(segsize==3) LU_kernel_bmod<3>::run(segsize, dense_col, tempv, glu.lusup, luptr, lda, nrow, glu.lsub, lptr, no_zeros); else LU_kernel_bmod::run(segsize, dense_col, tempv, glu.lusup, luptr, lda, nrow, glu.lsub, lptr, no_zeros); } // End for each column in the panel } } // End for each updating supernode } // end panel bmod } // end namespace internal } // end namespace Eigen #endif // SPARSELU_PANEL_BMOD_H piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_SupernodalMatrix.h0000644000175100001770000003104514107270226025301 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // Copyright (C) 2012 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSELU_SUPERNODAL_MATRIX_H #define EIGEN_SPARSELU_SUPERNODAL_MATRIX_H namespace Eigen { namespace internal { /** \ingroup SparseLU_Module * \brief a class to manipulate the L supernodal factor from the SparseLU factorization * * This class contain the data to easily store * and manipulate the supernodes during the factorization and solution phase of Sparse LU. * Only the lower triangular matrix has supernodes. * * NOTE : This class corresponds to the SCformat structure in SuperLU * */ /* TODO * InnerIterator as for sparsematrix * SuperInnerIterator to iterate through all supernodes * Function for triangular solve */ template class MappedSuperNodalMatrix { public: typedef _Scalar Scalar; typedef _StorageIndex StorageIndex; typedef Matrix IndexVector; typedef Matrix ScalarVector; public: MappedSuperNodalMatrix() { } MappedSuperNodalMatrix(Index m, Index n, ScalarVector& nzval, IndexVector& nzval_colptr, IndexVector& rowind, IndexVector& rowind_colptr, IndexVector& col_to_sup, IndexVector& sup_to_col ) { setInfos(m, n, nzval, nzval_colptr, rowind, rowind_colptr, col_to_sup, sup_to_col); } ~MappedSuperNodalMatrix() { } /** * Set appropriate pointers for the lower triangular supernodal matrix * These infos are available at the end of the numerical factorization * FIXME This class will be modified such that it can be use in the course * of the factorization. 
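   * Note: only the raw data pointers of the supplied vectors are kept (no copies are made),
   * so those vectors must outlive this MappedSuperNodalMatrix.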
*/ void setInfos(Index m, Index n, ScalarVector& nzval, IndexVector& nzval_colptr, IndexVector& rowind, IndexVector& rowind_colptr, IndexVector& col_to_sup, IndexVector& sup_to_col ) { m_row = m; m_col = n; m_nzval = nzval.data(); m_nzval_colptr = nzval_colptr.data(); m_rowind = rowind.data(); m_rowind_colptr = rowind_colptr.data(); m_nsuper = col_to_sup(n); m_col_to_sup = col_to_sup.data(); m_sup_to_col = sup_to_col.data(); } /** * Number of rows */ Index rows() const { return m_row; } /** * Number of columns */ Index cols() const { return m_col; } /** * Return the array of nonzero values packed by column * * The size is nnz */ Scalar* valuePtr() { return m_nzval; } const Scalar* valuePtr() const { return m_nzval; } /** * Return the pointers to the beginning of each column in \ref valuePtr() */ StorageIndex* colIndexPtr() { return m_nzval_colptr; } const StorageIndex* colIndexPtr() const { return m_nzval_colptr; } /** * Return the array of compressed row indices of all supernodes */ StorageIndex* rowIndex() { return m_rowind; } const StorageIndex* rowIndex() const { return m_rowind; } /** * Return the location in \em rowvaluePtr() which starts each column */ StorageIndex* rowIndexPtr() { return m_rowind_colptr; } const StorageIndex* rowIndexPtr() const { return m_rowind_colptr; } /** * Return the array of column-to-supernode mapping */ StorageIndex* colToSup() { return m_col_to_sup; } const StorageIndex* colToSup() const { return m_col_to_sup; } /** * Return the array of supernode-to-column mapping */ StorageIndex* supToCol() { return m_sup_to_col; } const StorageIndex* supToCol() const { return m_sup_to_col; } /** * Return the number of supernodes */ Index nsuper() const { return m_nsuper; } class InnerIterator; template void solveInPlace( MatrixBase&X) const; template void solveTransposedInPlace( MatrixBase&X) const; protected: Index m_row; // Number of rows Index m_col; // Number of columns Index m_nsuper; // Number of supernodes Scalar* m_nzval; //array of nonzero values packed by column StorageIndex* m_nzval_colptr; //nzval_colptr[j] Stores the location in nzval[] which starts column j StorageIndex* m_rowind; // Array of compressed row indices of rectangular supernodes StorageIndex* m_rowind_colptr; //rowind_colptr[j] stores the location in rowind[] which starts column j StorageIndex* m_col_to_sup; // col_to_sup[j] is the supernode number to which column j belongs StorageIndex* m_sup_to_col; //sup_to_col[s] points to the starting column of the s-th supernode private : }; /** * \brief InnerIterator class to iterate over nonzero values of the current column in the supernodal matrix L * */ template class MappedSuperNodalMatrix::InnerIterator { public: InnerIterator(const MappedSuperNodalMatrix& mat, Index outer) : m_matrix(mat), m_outer(outer), m_supno(mat.colToSup()[outer]), m_idval(mat.colIndexPtr()[outer]), m_startidval(m_idval), m_endidval(mat.colIndexPtr()[outer+1]), m_idrow(mat.rowIndexPtr()[mat.supToCol()[mat.colToSup()[outer]]]), m_endidrow(mat.rowIndexPtr()[mat.supToCol()[mat.colToSup()[outer]]+1]) {} inline InnerIterator& operator++() { m_idval++; m_idrow++; return *this; } inline Scalar value() const { return m_matrix.valuePtr()[m_idval]; } inline Scalar& valueRef() { return const_cast(m_matrix.valuePtr()[m_idval]); } inline Index index() const { return m_matrix.rowIndex()[m_idrow]; } inline Index row() const { return index(); } inline Index col() const { return m_outer; } inline Index supIndex() const { return m_supno; } inline operator bool() const { return ( (m_idval < 
m_endidval) && (m_idval >= m_startidval) && (m_idrow < m_endidrow) ); } protected: const MappedSuperNodalMatrix& m_matrix; // Supernodal lower triangular matrix const Index m_outer; // Current column const Index m_supno; // Current SuperNode number Index m_idval; // Index to browse the values in the current column const Index m_startidval; // Start of the column value const Index m_endidval; // End of the column value Index m_idrow; // Index to browse the row indices Index m_endidrow; // End index of row indices of the current column }; /** * \brief Solve with the supernode triangular matrix * */ template template void MappedSuperNodalMatrix::solveInPlace( MatrixBase&X) const { /* Explicit type conversion as the Index type of MatrixBase may be wider than Index */ // eigen_assert(X.rows() <= NumTraits::highest()); // eigen_assert(X.cols() <= NumTraits::highest()); Index n = int(X.rows()); Index nrhs = Index(X.cols()); const Scalar * Lval = valuePtr(); // Nonzero values Matrix work(n, nrhs); // working vector work.setZero(); for (Index k = 0; k <= nsuper(); k ++) { Index fsupc = supToCol()[k]; // First column of the current supernode Index istart = rowIndexPtr()[fsupc]; // Pointer index to the subscript of the current column Index nsupr = rowIndexPtr()[fsupc+1] - istart; // Number of rows in the current supernode Index nsupc = supToCol()[k+1] - fsupc; // Number of columns in the current supernode Index nrow = nsupr - nsupc; // Number of rows in the non-diagonal part of the supernode Index irow; //Current index row if (nsupc == 1 ) { for (Index j = 0; j < nrhs; j++) { InnerIterator it(*this, fsupc); ++it; // Skip the diagonal element for (; it; ++it) { irow = it.row(); X(irow, j) -= X(fsupc, j) * it.value(); } } } else { // The supernode has more than one column Index luptr = colIndexPtr()[fsupc]; Index lda = colIndexPtr()[fsupc+1] - luptr; // Triangular solve Map, 0, OuterStride<> > A( &(Lval[luptr]), nsupc, nsupc, OuterStride<>(lda) ); Map< Matrix, 0, OuterStride<> > U (&(X(fsupc,0)), nsupc, nrhs, OuterStride<>(n) ); U = A.template triangularView().solve(U); // Matrix-vector product new (&A) Map, 0, OuterStride<> > ( &(Lval[luptr+nsupc]), nrow, nsupc, OuterStride<>(lda) ); work.topRows(nrow).noalias() = A * U; //Begin Scatter for (Index j = 0; j < nrhs; j++) { Index iptr = istart + nsupc; for (Index i = 0; i < nrow; i++) { irow = rowIndex()[iptr]; X(irow, j) -= work(i, j); // Scatter operation work(i, j) = Scalar(0); iptr++; } } } } } template template void MappedSuperNodalMatrix::solveTransposedInPlace( MatrixBase&X) const { using numext::conj; Index n = int(X.rows()); Index nrhs = Index(X.cols()); const Scalar * Lval = valuePtr(); // Nonzero values Matrix work(n, nrhs); // working vector work.setZero(); for (Index k = nsuper(); k >= 0; k--) { Index fsupc = supToCol()[k]; // First column of the current supernode Index istart = rowIndexPtr()[fsupc]; // Pointer index to the subscript of the current column Index nsupr = rowIndexPtr()[fsupc+1] - istart; // Number of rows in the current supernode Index nsupc = supToCol()[k+1] - fsupc; // Number of columns in the current supernode Index nrow = nsupr - nsupc; // Number of rows in the non-diagonal part of the supernode Index irow; //Current index row if (nsupc == 1 ) { for (Index j = 0; j < nrhs; j++) { InnerIterator it(*this, fsupc); ++it; // Skip the diagonal element for (; it; ++it) { irow = it.row(); X(fsupc,j) -= X(irow, j) * (Conjugate?conj(it.value()):it.value()); } } } else { // The supernode has more than one column Index luptr = 
colIndexPtr()[fsupc]; Index lda = colIndexPtr()[fsupc+1] - luptr; //Begin Gather for (Index j = 0; j < nrhs; j++) { Index iptr = istart + nsupc; for (Index i = 0; i < nrow; i++) { irow = rowIndex()[iptr]; work.topRows(nrow)(i,j)= X(irow,j); // Gather operation iptr++; } } // Matrix-vector product with transposed submatrix Map, 0, OuterStride<> > A( &(Lval[luptr+nsupc]), nrow, nsupc, OuterStride<>(lda) ); Map< Matrix, 0, OuterStride<> > U (&(X(fsupc,0)), nsupc, nrhs, OuterStride<>(n) ); if(Conjugate) U = U - A.adjoint() * work.topRows(nrow); else U = U - A.transpose() * work.topRows(nrow); // Triangular solve (of transposed diagonal block) new (&A) Map, 0, OuterStride<> > ( &(Lval[luptr]), nsupc, nsupc, OuterStride<>(lda) ); if(Conjugate) U = A.adjoint().template triangularView().solve(U); else U = A.transpose().template triangularView().solve(U); } } } } // end namespace internal } // end namespace Eigen #endif // EIGEN_SPARSELU_MATRIX_H piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_relax_snode.h0000644000175100001770000000551114107270226024302 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* This file is a modified version of heap_relax_snode.c file in SuperLU * -- SuperLU routine (version 3.0) -- * Univ. of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * October 15, 2003 * * Copyright (c) 1994 by Xerox Corporation. All rights reserved. * * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY * EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. * * Permission is hereby granted to use or copy this program for any * purpose, provided the above notices are retained on all copies. * Permission to modify the code and to distribute modified code is * granted, provided the above notices are retained, and a notice that * the code was modified is included with the above copyright notice. */ #ifndef SPARSELU_RELAX_SNODE_H #define SPARSELU_RELAX_SNODE_H namespace Eigen { namespace internal { /** * \brief Identify the initial relaxed supernodes * * This routine is applied to a column elimination tree. 
* It assumes that the matrix has been reordered according to the postorder of the etree * \param n the number of columns * \param et elimination tree * \param relax_columns Maximum number of columns allowed in a relaxed snode * \param descendants Number of descendants of each node in the etree * \param relax_end last column in a supernode */ template void SparseLUImpl::relax_snode (const Index n, IndexVector& et, const Index relax_columns, IndexVector& descendants, IndexVector& relax_end) { // compute the number of descendants of each node in the etree Index parent; relax_end.setConstant(emptyIdxLU); descendants.setZero(); for (Index j = 0; j < n; j++) { parent = et(j); if (parent != n) // not the dummy root descendants(parent) += descendants(j) + 1; } // Identify the relaxed supernodes by postorder traversal of the etree Index snode_start; // beginning of a snode for (Index j = 0; j < n; ) { parent = et(j); snode_start = j; while ( parent != n && descendants(parent) < relax_columns ) { j = parent; parent = et(j); } // Found a supernode in postordered etree, j is the last column relax_end(snode_start) = StorageIndex(j); // Record last column j++; // Search for a new leaf while (descendants(j) != 0 && j < n) j++; } // End postorder traversal of the etree } } // end namespace internal } // end namespace Eigen #endif piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_pivotL.h0000644000175100001770000001156314107270226023260 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* * NOTE: This file is the modified version of xpivotL.c file in SuperLU * -- SuperLU routine (version 3.0) -- * Univ. of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * October 15, 2003 * * Copyright (c) 1994 by Xerox Corporation. All rights reserved. * * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY * EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. * * Permission is hereby granted to use or copy this program for any * purpose, provided the above notices are retained on all copies. * Permission to modify the code and to distribute modified code is * granted, provided the above notices are retained, and a notice that * the code was modified is included with the above copyright notice. */ #ifndef SPARSELU_PIVOTL_H #define SPARSELU_PIVOTL_H namespace Eigen { namespace internal { /** * \brief Performs the numerical pivotin on the current column of L, and the CDIV operation. * * Pivot policy : * (1) Compute thresh = u * max_(i>=j) abs(A_ij); * (2) IF user specifies pivot row k and abs(A_kj) >= thresh THEN * pivot row = k; * ELSE IF abs(A_jj) >= thresh THEN * pivot row = j; * ELSE * pivot row = m; * * Note: If you absolutely want to use a given pivot order, then set u=0.0. 
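 *
 * The decision above can be sketched with a small, purely illustrative helper
 * (hypothetical code, not the actual implementation; \c col holds the candidate
 * column entries, \c diag_pos the position of the diagonal entry within \c col or
 * -1 if it is absent, and \c u the diagonal pivoting threshold):
 * \code
 * inline Eigen::Index choosePivot(const Eigen::VectorXd& col, Eigen::Index diag_pos, double u)
 * {
 *   Eigen::Index argmax;
 *   double pivmax = col.cwiseAbs().maxCoeff(&argmax); // max_(i>=j) |A_ij|
 *   double thresh = u * pivmax;
 *   if (diag_pos >= 0 && std::abs(col(diag_pos)) >= thresh)
 *     return diag_pos;  // keep the diagonal entry as pivot
 *   return argmax;      // otherwise take the entry of largest magnitude
 * }
 * \endcode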
* * \param jcol The current column of L * \param diagpivotthresh diagonal pivoting threshold * \param[in,out] perm_r Row permutation (threshold pivoting) * \param[in] iperm_c column permutation - used to finf diagonal of Pc*A*Pc' * \param[out] pivrow The pivot row * \param glu Global LU data * \return 0 if success, i > 0 if U(i,i) is exactly zero * */ template Index SparseLUImpl::pivotL(const Index jcol, const RealScalar& diagpivotthresh, IndexVector& perm_r, IndexVector& iperm_c, Index& pivrow, GlobalLU_t& glu) { Index fsupc = (glu.xsup)((glu.supno)(jcol)); // First column in the supernode containing the column jcol Index nsupc = jcol - fsupc; // Number of columns in the supernode portion, excluding jcol; nsupc >=0 Index lptr = glu.xlsub(fsupc); // pointer to the starting location of the row subscripts for this supernode portion Index nsupr = glu.xlsub(fsupc+1) - lptr; // Number of rows in the supernode Index lda = glu.xlusup(fsupc+1) - glu.xlusup(fsupc); // leading dimension Scalar* lu_sup_ptr = &(glu.lusup.data()[glu.xlusup(fsupc)]); // Start of the current supernode Scalar* lu_col_ptr = &(glu.lusup.data()[glu.xlusup(jcol)]); // Start of jcol in the supernode StorageIndex* lsub_ptr = &(glu.lsub.data()[lptr]); // Start of row indices of the supernode // Determine the largest abs numerical value for partial pivoting Index diagind = iperm_c(jcol); // diagonal index RealScalar pivmax(-1.0); Index pivptr = nsupc; Index diag = emptyIdxLU; RealScalar rtemp; Index isub, icol, itemp, k; for (isub = nsupc; isub < nsupr; ++isub) { using std::abs; rtemp = abs(lu_col_ptr[isub]); if (rtemp > pivmax) { pivmax = rtemp; pivptr = isub; } if (lsub_ptr[isub] == diagind) diag = isub; } // Test for singularity if ( pivmax <= RealScalar(0.0) ) { // if pivmax == -1, the column is structurally empty, otherwise it is only numerically zero pivrow = pivmax < RealScalar(0.0) ? diagind : lsub_ptr[pivptr]; perm_r(pivrow) = StorageIndex(jcol); return (jcol+1); } RealScalar thresh = diagpivotthresh * pivmax; // Choose appropriate pivotal element { // Test if the diagonal element can be used as a pivot (given the threshold value) if (diag >= 0 ) { // Diagonal element exists using std::abs; rtemp = abs(lu_col_ptr[diag]); if (rtemp != RealScalar(0.0) && rtemp >= thresh) pivptr = diag; } pivrow = lsub_ptr[pivptr]; } // Record pivot row perm_r(pivrow) = StorageIndex(jcol); // Interchange row subscripts if (pivptr != nsupc ) { std::swap( lsub_ptr[pivptr], lsub_ptr[nsupc] ); // Interchange numerical values as well, for the two rows in the whole snode // such that L is indexed the same way as A for (icol = 0; icol <= nsupc; icol++) { itemp = pivptr + icol * lda; std::swap(lu_sup_ptr[itemp], lu_sup_ptr[nsupc + icol * lda]); } } // cdiv operations Scalar temp = Scalar(1.0) / lu_col_ptr[nsupc]; for (k = nsupc+1; k < nsupr; k++) lu_col_ptr[k] *= temp; return 0; } } // end namespace internal } // end namespace Eigen #endif // SPARSELU_PIVOTL_H piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_pruneL.h0000644000175100001770000001070114107270226023241 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* * NOTE: This file is the modified version of [s,d,c,z]pruneL.c file in SuperLU * -- SuperLU routine (version 2.0) -- * Univ. 
of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * November 15, 1997 * * Copyright (c) 1994 by Xerox Corporation. All rights reserved. * * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY * EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. * * Permission is hereby granted to use or copy this program for any * purpose, provided the above notices are retained on all copies. * Permission to modify the code and to distribute modified code is * granted, provided the above notices are retained, and a notice that * the code was modified is included with the above copyright notice. */ #ifndef SPARSELU_PRUNEL_H #define SPARSELU_PRUNEL_H namespace Eigen { namespace internal { /** * \brief Prunes the L-structure. * * It prunes the L-structure of supernodes whose L-structure contains the current pivot row "pivrow" * * * \param jcol The current column of L * \param[in] perm_r Row permutation * \param[out] pivrow The pivot row * \param nseg Number of segments * \param segrep * \param repfnz * \param[out] xprune * \param glu Global LU data * */ template void SparseLUImpl::pruneL(const Index jcol, const IndexVector& perm_r, const Index pivrow, const Index nseg, const IndexVector& segrep, BlockIndexVector repfnz, IndexVector& xprune, GlobalLU_t& glu) { // For each supernode-rep irep in U(*,j] Index jsupno = glu.supno(jcol); Index i,irep,irep1; bool movnum, do_prune = false; Index kmin = 0, kmax = 0, minloc, maxloc,krow; for (i = 0; i < nseg; i++) { irep = segrep(i); irep1 = irep + 1; do_prune = false; // Don't prune with a zero U-segment if (repfnz(irep) == emptyIdxLU) continue; // If a snode overlaps with the next panel, then the U-segment // is fragmented into two parts -- irep and irep1. We should let // pruning occur at the rep-column in irep1s snode. if (glu.supno(irep) == glu.supno(irep1) ) continue; // don't prune // If it has not been pruned & it has a nonz in row L(pivrow,i) if (glu.supno(irep) != jsupno ) { if ( xprune (irep) >= glu.xlsub(irep1) ) { kmin = glu.xlsub(irep); kmax = glu.xlsub(irep1) - 1; for (krow = kmin; krow <= kmax; krow++) { if (glu.lsub(krow) == pivrow) { do_prune = true; break; } } } if (do_prune) { // do a quicksort-type partition // movnum=true means that the num values have to be exchanged movnum = false; if (irep == glu.xsup(glu.supno(irep)) ) // Snode of size 1 movnum = true; while (kmin <= kmax) { if (perm_r(glu.lsub(kmax)) == emptyIdxLU) kmax--; else if ( perm_r(glu.lsub(kmin)) != emptyIdxLU) kmin++; else { // kmin below pivrow (not yet pivoted), and kmax // above pivrow: interchange the two suscripts std::swap(glu.lsub(kmin), glu.lsub(kmax)); // If the supernode has only one column, then we // only keep one set of subscripts. For any subscript // intercnahge performed, similar interchange must be // done on the numerical values. if (movnum) { minloc = glu.xlusup(irep) + ( kmin - glu.xlsub(irep) ); maxloc = glu.xlusup(irep) + ( kmax - glu.xlsub(irep) ); std::swap(glu.lusup(minloc), glu.lusup(maxloc)); } kmin++; kmax--; } } // end while xprune(irep) = StorageIndex(kmin); //Pruning } // end if do_prune } // end pruning } // End for each U-segment } } // end namespace internal } // end namespace Eigen #endif // SPARSELU_PRUNEL_H piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_kernel_bmod.h0000644000175100001770000001313314107270226024257 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. 
// // Copyright (C) 2012 Désiré Nuentsa-Wakam // Copyright (C) 2012 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef SPARSELU_KERNEL_BMOD_H #define SPARSELU_KERNEL_BMOD_H namespace Eigen { namespace internal { template struct LU_kernel_bmod { /** \internal * \brief Performs numeric block updates from a given supernode to a single column * * \param segsize Size of the segment (and blocks ) to use for updates * \param[in,out] dense Packed values of the original matrix * \param tempv temporary vector to use for updates * \param lusup array containing the supernodes * \param lda Leading dimension in the supernode * \param nrow Number of rows in the rectangular part of the supernode * \param lsub compressed row subscripts of supernodes * \param lptr pointer to the first column of the current supernode in lsub * \param no_zeros Number of nonzeros elements before the diagonal part of the supernode */ template static EIGEN_DONT_INLINE void run(const Index segsize, BlockScalarVector& dense, ScalarVector& tempv, ScalarVector& lusup, Index& luptr, const Index lda, const Index nrow, IndexVector& lsub, const Index lptr, const Index no_zeros); }; template template EIGEN_DONT_INLINE void LU_kernel_bmod::run(const Index segsize, BlockScalarVector& dense, ScalarVector& tempv, ScalarVector& lusup, Index& luptr, const Index lda, const Index nrow, IndexVector& lsub, const Index lptr, const Index no_zeros) { typedef typename ScalarVector::Scalar Scalar; // First, copy U[*,j] segment from dense(*) to tempv(*) // The result of triangular solve is in tempv[*]; // The result of matric-vector update is in dense[*] Index isub = lptr + no_zeros; Index i; Index irow; for (i = 0; i < ((SegSizeAtCompileTime==Dynamic)?segsize:SegSizeAtCompileTime); i++) { irow = lsub(isub); tempv(i) = dense(irow); ++isub; } // Dense triangular solve -- start effective triangle luptr += lda * no_zeros + no_zeros; // Form Eigen matrix and vector Map, 0, OuterStride<> > A( &(lusup.data()[luptr]), segsize, segsize, OuterStride<>(lda) ); Map > u(tempv.data(), segsize); u = A.template triangularView().solve(u); // Dense matrix-vector product y <-- B*x luptr += segsize; const Index PacketSize = internal::packet_traits::size; Index ldl = internal::first_multiple(nrow, PacketSize); Map, 0, OuterStride<> > B( &(lusup.data()[luptr]), nrow, segsize, OuterStride<>(lda) ); Index aligned_offset = internal::first_default_aligned(tempv.data()+segsize, PacketSize); Index aligned_with_B_offset = (PacketSize-internal::first_default_aligned(B.data(), PacketSize))%PacketSize; Map, 0, OuterStride<> > l(tempv.data()+segsize+aligned_offset+aligned_with_B_offset, nrow, OuterStride<>(ldl) ); l.setZero(); internal::sparselu_gemm(l.rows(), l.cols(), B.cols(), B.data(), B.outerStride(), u.data(), u.outerStride(), l.data(), l.outerStride()); // Scatter tempv[] into SPA dense[] as a temporary storage isub = lptr + no_zeros; for (i = 0; i < ((SegSizeAtCompileTime==Dynamic)?segsize:SegSizeAtCompileTime); i++) { irow = lsub(isub++); dense(irow) = tempv(i); } // Scatter l into SPA dense[] for (i = 0; i < nrow; i++) { irow = lsub(isub++); dense(irow) -= l(i); } } template <> struct LU_kernel_bmod<1> { template static EIGEN_DONT_INLINE void run(const Index /*segsize*/, BlockScalarVector& dense, ScalarVector& /*tempv*/, ScalarVector& lusup, Index& luptr, const Index lda, const Index nrow, 
IndexVector& lsub, const Index lptr, const Index no_zeros); }; template EIGEN_DONT_INLINE void LU_kernel_bmod<1>::run(const Index /*segsize*/, BlockScalarVector& dense, ScalarVector& /*tempv*/, ScalarVector& lusup, Index& luptr, const Index lda, const Index nrow, IndexVector& lsub, const Index lptr, const Index no_zeros) { typedef typename ScalarVector::Scalar Scalar; typedef typename IndexVector::Scalar StorageIndex; Scalar f = dense(lsub(lptr + no_zeros)); luptr += lda * no_zeros + no_zeros + 1; const Scalar* a(lusup.data() + luptr); const StorageIndex* irow(lsub.data()+lptr + no_zeros + 1); Index i = 0; for (; i+1 < nrow; i+=2) { Index i0 = *(irow++); Index i1 = *(irow++); Scalar a0 = *(a++); Scalar a1 = *(a++); Scalar d0 = dense.coeff(i0); Scalar d1 = dense.coeff(i1); d0 -= f*a0; d1 -= f*a1; dense.coeffRef(i0) = d0; dense.coeffRef(i1) = d1; } if(i // Copyright (C) 2012-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSE_LU_H #define EIGEN_SPARSE_LU_H namespace Eigen { template > class SparseLU; template struct SparseLUMatrixLReturnType; template struct SparseLUMatrixUReturnType; template class SparseLUTransposeView : public SparseSolverBase > { protected: typedef SparseSolverBase > APIBase; using APIBase::m_isInitialized; public: typedef typename SparseLUType::Scalar Scalar; typedef typename SparseLUType::StorageIndex StorageIndex; typedef typename SparseLUType::MatrixType MatrixType; typedef typename SparseLUType::OrderingType OrderingType; enum { ColsAtCompileTime = MatrixType::ColsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; SparseLUTransposeView() : m_sparseLU(NULL) {} SparseLUTransposeView(const SparseLUTransposeView& view) { this->m_sparseLU = view.m_sparseLU; } void setIsInitialized(const bool isInitialized) {this->m_isInitialized = isInitialized;} void setSparseLU(SparseLUType* sparseLU) {m_sparseLU = sparseLU;} using APIBase::_solve_impl; template bool _solve_impl(const MatrixBase &B, MatrixBase &X_base) const { Dest& X(X_base.derived()); eigen_assert(m_sparseLU->info() == Success && "The matrix should be factorized first"); EIGEN_STATIC_ASSERT((Dest::Flags&RowMajorBit)==0, THIS_METHOD_IS_ONLY_FOR_COLUMN_MAJOR_MATRICES); // this ugly const_cast_derived() helps to detect aliasing when applying the permutations for(Index j = 0; j < B.cols(); ++j){ X.col(j) = m_sparseLU->colsPermutation() * B.const_cast_derived().col(j); } //Forward substitution with transposed or adjoint of U m_sparseLU->matrixU().template solveTransposedInPlace(X); //Backward substitution with transposed or adjoint of L m_sparseLU->matrixL().template solveTransposedInPlace(X); // Permute back the solution for (Index j = 0; j < B.cols(); ++j) X.col(j) = m_sparseLU->rowsPermutation().transpose() * X.col(j); return true; } inline Index rows() const { return m_sparseLU->rows(); } inline Index cols() const { return m_sparseLU->cols(); } private: SparseLUType *m_sparseLU; SparseLUTransposeView& operator=(const SparseLUTransposeView&); }; /** \ingroup SparseLU_Module * \class SparseLU * * \brief Sparse supernodal LU factorization for general matrices * * This class implements the supernodal LU factorization for general matrices. * It uses the main techniques from the sequential SuperLU package * (http://crd-legacy.lbl.gov/~xiaoye/SuperLU/). 
It handles transparently real * and complex arithmetic with single and double precision, depending on the * scalar type of your input matrix. * The code has been optimized to provide BLAS-3 operations during supernode-panel updates. * It benefits directly from the built-in high-performant Eigen BLAS routines. * Moreover, when the size of a supernode is very small, the BLAS calls are avoided to * enable a better optimization from the compiler. For best performance, * you should compile it with NDEBUG flag to avoid the numerous bounds checking on vectors. * * An important parameter of this class is the ordering method. It is used to reorder the columns * (and eventually the rows) of the matrix to reduce the number of new elements that are created during * numerical factorization. The cheapest method available is COLAMD. * See \link OrderingMethods_Module the OrderingMethods module \endlink for the list of * built-in and external ordering methods. * * Simple example with key steps * \code * VectorXd x(n), b(n); * SparseMatrix A; * SparseLU, COLAMDOrdering > solver; * // fill A and b; * // Compute the ordering permutation vector from the structural pattern of A * solver.analyzePattern(A); * // Compute the numerical factorization * solver.factorize(A); * //Use the factors to solve the linear system * x = solver.solve(b); * \endcode * * \warning The input matrix A should be in a \b compressed and \b column-major form. * Otherwise an expensive copy will be made. You can call the inexpensive makeCompressed() to get a compressed matrix. * * \note Unlike the initial SuperLU implementation, there is no step to equilibrate the matrix. * For badly scaled matrices, this step can be useful to reduce the pivoting during factorization. * If this is the case for your matrices, you can try the basic scaling method at * "unsupported/Eigen/src/IterativeSolvers/Scaling.h" * * \tparam _MatrixType The type of the sparse matrix. It must be a column-major SparseMatrix<> * \tparam _OrderingType The ordering method to use, either AMD, COLAMD or METIS. Default is COLMAD * * \implsparsesolverconcept * * \sa \ref TutorialSparseSolverConcept * \sa \ref OrderingMethods_Module */ template class SparseLU : public SparseSolverBase >, public internal::SparseLUImpl { protected: typedef SparseSolverBase > APIBase; using APIBase::m_isInitialized; public: using APIBase::_solve_impl; typedef _MatrixType MatrixType; typedef _OrderingType OrderingType; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef typename MatrixType::StorageIndex StorageIndex; typedef SparseMatrix NCMatrix; typedef internal::MappedSuperNodalMatrix SCMatrix; typedef Matrix ScalarVector; typedef Matrix IndexVector; typedef PermutationMatrix PermutationType; typedef internal::SparseLUImpl Base; enum { ColsAtCompileTime = MatrixType::ColsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; public: SparseLU():m_lastError(""),m_Ustore(0,0,0,0,0,0),m_symmetricmode(false),m_diagpivotthresh(1.0),m_detPermR(1) { initperfvalues(); } explicit SparseLU(const MatrixType& matrix) : m_lastError(""),m_Ustore(0,0,0,0,0,0),m_symmetricmode(false),m_diagpivotthresh(1.0),m_detPermR(1) { initperfvalues(); compute(matrix); } ~SparseLU() { // Free all explicit dynamic pointers } void analyzePattern (const MatrixType& matrix); void factorize (const MatrixType& matrix); void simplicialfactorize(const MatrixType& matrix); /** * Compute the symbolic and numeric factorization of the input sparse matrix. 
* The input matrix should be in column-major storage. */ void compute (const MatrixType& matrix) { // Analyze analyzePattern(matrix); //Factorize factorize(matrix); } /** \returns an expression of the transposed of the factored matrix. * * A typical usage is to solve for the transposed problem A^T x = b: * \code * solver.compute(A); * x = solver.transpose().solve(b); * \endcode * * \sa adjoint(), solve() */ const SparseLUTransposeView > transpose() { SparseLUTransposeView > transposeView; transposeView.setSparseLU(this); transposeView.setIsInitialized(this->m_isInitialized); return transposeView; } /** \returns an expression of the adjoint of the factored matrix * * A typical usage is to solve for the adjoint problem A' x = b: * \code * solver.compute(A); * x = solver.adjoint().solve(b); * \endcode * * For real scalar types, this function is equivalent to transpose(). * * \sa transpose(), solve() */ const SparseLUTransposeView > adjoint() { SparseLUTransposeView > adjointView; adjointView.setSparseLU(this); adjointView.setIsInitialized(this->m_isInitialized); return adjointView; } inline Index rows() const { return m_mat.rows(); } inline Index cols() const { return m_mat.cols(); } /** Indicate that the pattern of the input matrix is symmetric */ void isSymmetric(bool sym) { m_symmetricmode = sym; } /** \returns an expression of the matrix L, internally stored as supernodes * The only operation available with this expression is the triangular solve * \code * y = b; matrixL().solveInPlace(y); * \endcode */ SparseLUMatrixLReturnType matrixL() const { return SparseLUMatrixLReturnType(m_Lstore); } /** \returns an expression of the matrix U, * The only operation available with this expression is the triangular solve * \code * y = b; matrixU().solveInPlace(y); * \endcode */ SparseLUMatrixUReturnType > matrixU() const { return SparseLUMatrixUReturnType >(m_Lstore, m_Ustore); } /** * \returns a reference to the row matrix permutation \f$ P_r \f$ such that \f$P_r A P_c^T = L U\f$ * \sa colsPermutation() */ inline const PermutationType& rowsPermutation() const { return m_perm_r; } /** * \returns a reference to the column matrix permutation\f$ P_c^T \f$ such that \f$P_r A P_c^T = L U\f$ * \sa rowsPermutation() */ inline const PermutationType& colsPermutation() const { return m_perm_c; } /** Set the threshold used for a diagonal entry to be an acceptable pivot. */ void setPivotThreshold(const RealScalar& thresh) { m_diagpivotthresh = thresh; } #ifdef EIGEN_PARSED_BY_DOXYGEN /** \returns the solution X of \f$ A X = B \f$ using the current decomposition of A. * * \warning the destination matrix X in X = this->solve(B) must be colmun-major. * * \sa compute() */ template inline const Solve solve(const MatrixBase& B) const; #endif // EIGEN_PARSED_BY_DOXYGEN /** \brief Reports whether previous computation was successful. 
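 *
 * A typical check after factorization might look as follows (a minimal sketch; the
 * sparse matrix \c A and the right-hand side \c b are assumed to be filled elsewhere):
 * \code
 * SparseLU<SparseMatrix<double>, COLAMDOrdering<int> > solver;
 * solver.compute(A);
 * if(solver.info() != Success)
 * {
 *   // factorization failed, e.g. the matrix is structurally or numerically singular
 *   std::cerr << solver.lastErrorMessage() << std::endl;
 * }
 * else
 * {
 *   VectorXd x = solver.solve(b);
 * }
 * \endcode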
* * \returns \c Success if computation was successful, * \c NumericalIssue if the LU factorization reports a problem, zero diagonal for instance * \c InvalidInput if the input matrix is invalid * * \sa iparm() */ ComputationInfo info() const { eigen_assert(m_isInitialized && "Decomposition is not initialized."); return m_info; } /** * \returns A string describing the type of error */ std::string lastErrorMessage() const { return m_lastError; } template bool _solve_impl(const MatrixBase &B, MatrixBase &X_base) const { Dest& X(X_base.derived()); eigen_assert(m_factorizationIsOk && "The matrix should be factorized first"); EIGEN_STATIC_ASSERT((Dest::Flags&RowMajorBit)==0, THIS_METHOD_IS_ONLY_FOR_COLUMN_MAJOR_MATRICES); // Permute the right hand side to form X = Pr*B // on return, X is overwritten by the computed solution X.resize(B.rows(),B.cols()); // this ugly const_cast_derived() helps to detect aliasing when applying the permutations for(Index j = 0; j < B.cols(); ++j) X.col(j) = rowsPermutation() * B.const_cast_derived().col(j); //Forward substitution with L this->matrixL().solveInPlace(X); this->matrixU().solveInPlace(X); // Permute back the solution for (Index j = 0; j < B.cols(); ++j) X.col(j) = colsPermutation().inverse() * X.col(j); return true; } /** * \returns the absolute value of the determinant of the matrix of which * *this is the QR decomposition. * * \warning a determinant can be very big or small, so for matrices * of large enough dimension, there is a risk of overflow/underflow. * One way to work around that is to use logAbsDeterminant() instead. * * \sa logAbsDeterminant(), signDeterminant() */ Scalar absDeterminant() { using std::abs; eigen_assert(m_factorizationIsOk && "The matrix should be factorized first."); // Initialize with the determinant of the row matrix Scalar det = Scalar(1.); // Note that the diagonal blocks of U are stored in supernodes, // which are available in the L part :) for (Index j = 0; j < this->cols(); ++j) { for (typename SCMatrix::InnerIterator it(m_Lstore, j); it; ++it) { if(it.index() == j) { det *= abs(it.value()); break; } } } return det; } /** \returns the natural log of the absolute value of the determinant of the matrix * of which **this is the QR decomposition * * \note This method is useful to work around the risk of overflow/underflow that's * inherent to the determinant computation. * * \sa absDeterminant(), signDeterminant() */ Scalar logAbsDeterminant() const { using std::log; using std::abs; eigen_assert(m_factorizationIsOk && "The matrix should be factorized first."); Scalar det = Scalar(0.); for (Index j = 0; j < this->cols(); ++j) { for (typename SCMatrix::InnerIterator it(m_Lstore, j); it; ++it) { if(it.row() < j) continue; if(it.row() == j) { det += log(abs(it.value())); break; } } } return det; } /** \returns A number representing the sign of the determinant * * \sa absDeterminant(), logAbsDeterminant() */ Scalar signDeterminant() { eigen_assert(m_factorizationIsOk && "The matrix should be factorized first."); // Initialize with the determinant of the row matrix Index det = 1; // Note that the diagonal blocks of U are stored in supernodes, // which are available in the L part :) for (Index j = 0; j < this->cols(); ++j) { for (typename SCMatrix::InnerIterator it(m_Lstore, j); it; ++it) { if(it.index() == j) { if(it.value()<0) det = -det; else if(it.value()==0) return 0; break; } } } return det * m_detPermR * m_detPermC; } /** \returns The determinant of the matrix. 
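 *
 * For instance (a hedged sketch assuming \c solver holds a successful factorization
 * of a square matrix with \c double scalars):
 * \code
 * double det     = solver.determinant();        // may overflow/underflow for large matrices
 * double log_det = solver.logAbsDeterminant();  // log of the absolute determinant, safer
 * double sign    = solver.signDeterminant();    // -1, 0 or +1
 * \endcode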
* * \sa absDeterminant(), logAbsDeterminant() */ Scalar determinant() { eigen_assert(m_factorizationIsOk && "The matrix should be factorized first."); // Initialize with the determinant of the row matrix Scalar det = Scalar(1.); // Note that the diagonal blocks of U are stored in supernodes, // which are available in the L part :) for (Index j = 0; j < this->cols(); ++j) { for (typename SCMatrix::InnerIterator it(m_Lstore, j); it; ++it) { if(it.index() == j) { det *= it.value(); break; } } } return (m_detPermR * m_detPermC) > 0 ? det : -det; } Index nnzL() const { return m_nnzL; }; Index nnzU() const { return m_nnzU; }; protected: // Functions void initperfvalues() { m_perfv.panel_size = 16; m_perfv.relax = 1; m_perfv.maxsuper = 128; m_perfv.rowblk = 16; m_perfv.colblk = 8; m_perfv.fillfactor = 20; } // Variables mutable ComputationInfo m_info; bool m_factorizationIsOk; bool m_analysisIsOk; std::string m_lastError; NCMatrix m_mat; // The input (permuted ) matrix SCMatrix m_Lstore; // The lower triangular matrix (supernodal) MappedSparseMatrix m_Ustore; // The upper triangular matrix PermutationType m_perm_c; // Column permutation PermutationType m_perm_r ; // Row permutation IndexVector m_etree; // Column elimination tree typename Base::GlobalLU_t m_glu; // SparseLU options bool m_symmetricmode; // values for performance internal::perfvalues m_perfv; RealScalar m_diagpivotthresh; // Specifies the threshold used for a diagonal entry to be an acceptable pivot Index m_nnzL, m_nnzU; // Nonzeros in L and U factors Index m_detPermR, m_detPermC; // Determinants of the permutation matrices private: // Disable copy constructor SparseLU (const SparseLU& ); }; // End class SparseLU // Functions needed by the anaysis phase /** * Compute the column permutation to minimize the fill-in * * - Apply this permutation to the input matrix - * * - Compute the column elimination tree on the permuted matrix * * - Postorder the elimination tree and the column permutation * */ template void SparseLU::analyzePattern(const MatrixType& mat) { //TODO It is possible as in SuperLU to compute row and columns scaling vectors to equilibrate the matrix mat. // Firstly, copy the whole input matrix. m_mat = mat; // Compute fill-in ordering OrderingType ord; ord(m_mat,m_perm_c); // Apply the permutation to the column of the input matrix if (m_perm_c.size()) { m_mat.uncompress(); //NOTE: The effect of this command is only to create the InnerNonzeros pointers. FIXME : This vector is filled but not subsequently used. // Then, permute only the column pointers ei_declare_aligned_stack_constructed_variable(StorageIndex,outerIndexPtr,mat.cols()+1,mat.isCompressed()?const_cast(mat.outerIndexPtr()):0); // If the input matrix 'mat' is uncompressed, then the outer-indices do not match the ones of m_mat, and a copy is thus needed. if(!mat.isCompressed()) IndexVector::Map(outerIndexPtr, mat.cols()+1) = IndexVector::Map(m_mat.outerIndexPtr(),mat.cols()+1); // Apply the permutation and compute the nnz per column. 
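    // (Column i of the input ends up as column m_perm_c.indices()(i) of m_mat: only the
    //  outer index pointers and the per-column nonzero counts are permuted here; the
    //  inner indices and values of the copied matrix stay in place.)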
for (Index i = 0; i < mat.cols(); i++) { m_mat.outerIndexPtr()[m_perm_c.indices()(i)] = outerIndexPtr[i]; m_mat.innerNonZeroPtr()[m_perm_c.indices()(i)] = outerIndexPtr[i+1] - outerIndexPtr[i]; } } // Compute the column elimination tree of the permuted matrix IndexVector firstRowElt; internal::coletree(m_mat, m_etree,firstRowElt); // In symmetric mode, do not do postorder here if (!m_symmetricmode) { IndexVector post, iwork; // Post order etree internal::treePostorder(StorageIndex(m_mat.cols()), m_etree, post); // Renumber etree in postorder Index m = m_mat.cols(); iwork.resize(m+1); for (Index i = 0; i < m; ++i) iwork(post(i)) = post(m_etree(i)); m_etree = iwork; // Postmultiply A*Pc by post, i.e reorder the matrix according to the postorder of the etree PermutationType post_perm(m); for (Index i = 0; i < m; i++) post_perm.indices()(i) = post(i); // Combine the two permutations : postorder the permutation for future use if(m_perm_c.size()) { m_perm_c = post_perm * m_perm_c; } } // end postordering m_analysisIsOk = true; } // Functions needed by the numerical factorization phase /** * - Numerical factorization * - Interleaved with the symbolic factorization * On exit, info is * * = 0: successful factorization * * > 0: if info = i, and i is * * <= A->ncol: U(i,i) is exactly zero. The factorization has * been completed, but the factor U is exactly singular, * and division by zero will occur if it is used to solve a * system of equations. * * > A->ncol: number of bytes allocated when memory allocation * failure occurred, plus A->ncol. If lwork = -1, it is * the estimated amount of space needed, plus A->ncol. */ template void SparseLU::factorize(const MatrixType& matrix) { using internal::emptyIdxLU; eigen_assert(m_analysisIsOk && "analyzePattern() should be called first"); eigen_assert((matrix.rows() == matrix.cols()) && "Only for squared matrices"); m_isInitialized = true; // Apply the column permutation computed in analyzepattern() // m_mat = matrix * m_perm_c.inverse(); m_mat = matrix; if (m_perm_c.size()) { m_mat.uncompress(); //NOTE: The effect of this command is only to create the InnerNonzeros pointers. 
//Then, permute only the column pointers const StorageIndex * outerIndexPtr; if (matrix.isCompressed()) outerIndexPtr = matrix.outerIndexPtr(); else { StorageIndex* outerIndexPtr_t = new StorageIndex[matrix.cols()+1]; for(Index i = 0; i <= matrix.cols(); i++) outerIndexPtr_t[i] = m_mat.outerIndexPtr()[i]; outerIndexPtr = outerIndexPtr_t; } for (Index i = 0; i < matrix.cols(); i++) { m_mat.outerIndexPtr()[m_perm_c.indices()(i)] = outerIndexPtr[i]; m_mat.innerNonZeroPtr()[m_perm_c.indices()(i)] = outerIndexPtr[i+1] - outerIndexPtr[i]; } if(!matrix.isCompressed()) delete[] outerIndexPtr; } else { //FIXME This should not be needed if the empty permutation is handled transparently m_perm_c.resize(matrix.cols()); for(StorageIndex i = 0; i < matrix.cols(); ++i) m_perm_c.indices()(i) = i; } Index m = m_mat.rows(); Index n = m_mat.cols(); Index nnz = m_mat.nonZeros(); Index maxpanel = m_perfv.panel_size * m; // Allocate working storage common to the factor routines Index lwork = 0; Index info = Base::memInit(m, n, nnz, lwork, m_perfv.fillfactor, m_perfv.panel_size, m_glu); if (info) { m_lastError = "UNABLE TO ALLOCATE WORKING MEMORY\n\n" ; m_factorizationIsOk = false; return ; } // Set up pointers for integer working arrays IndexVector segrep(m); segrep.setZero(); IndexVector parent(m); parent.setZero(); IndexVector xplore(m); xplore.setZero(); IndexVector repfnz(maxpanel); IndexVector panel_lsub(maxpanel); IndexVector xprune(n); xprune.setZero(); IndexVector marker(m*internal::LUNoMarker); marker.setZero(); repfnz.setConstant(-1); panel_lsub.setConstant(-1); // Set up pointers for scalar working arrays ScalarVector dense; dense.setZero(maxpanel); ScalarVector tempv; tempv.setZero(internal::LUnumTempV(m, m_perfv.panel_size, m_perfv.maxsuper, /*m_perfv.rowblk*/m) ); // Compute the inverse of perm_c PermutationType iperm_c(m_perm_c.inverse()); // Identify initial relaxed snodes IndexVector relax_end(n); if ( m_symmetricmode == true ) Base::heap_relax_snode(n, m_etree, m_perfv.relax, marker, relax_end); else Base::relax_snode(n, m_etree, m_perfv.relax, marker, relax_end); m_perm_r.resize(m); m_perm_r.indices().setConstant(-1); marker.setConstant(-1); m_detPermR = 1; // Record the determinant of the row permutation m_glu.supno(0) = emptyIdxLU; m_glu.xsup.setConstant(0); m_glu.xsup(0) = m_glu.xlsub(0) = m_glu.xusub(0) = m_glu.xlusup(0) = Index(0); // Work on one 'panel' at a time. A panel is one of the following : // (a) a relaxed supernode at the bottom of the etree, or // (b) panel_size contiguous columns, defined by the user Index jcol; Index pivrow; // Pivotal row number in the original row matrix Index nseg1; // Number of segments in U-column above panel row jcol Index nseg; // Number of segments in each U-column Index irep; Index i, k, jj; for (jcol = 0; jcol < n; ) { // Adjust panel size so that a panel won't overlap with the next relaxed snode. 
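    // (relax_end(k) != emptyIdxLU marks column k as the first column of a relaxed
    //  supernode, so the loop below truncates the panel to the columns strictly before k.)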
Index panel_size = m_perfv.panel_size; // upper bound on panel width for (k = jcol + 1; k < (std::min)(jcol+panel_size, n); k++) { if (relax_end(k) != emptyIdxLU) { panel_size = k - jcol; break; } } if (k == n) panel_size = n - jcol; // Symbolic outer factorization on a panel of columns Base::panel_dfs(m, panel_size, jcol, m_mat, m_perm_r.indices(), nseg1, dense, panel_lsub, segrep, repfnz, xprune, marker, parent, xplore, m_glu); // Numeric sup-panel updates in topological order Base::panel_bmod(m, panel_size, jcol, nseg1, dense, tempv, segrep, repfnz, m_glu); // Sparse LU within the panel, and below the panel diagonal for ( jj = jcol; jj< jcol + panel_size; jj++) { k = (jj - jcol) * m; // Column index for w-wide arrays nseg = nseg1; // begin after all the panel segments //Depth-first-search for the current column VectorBlock panel_lsubk(panel_lsub, k, m); VectorBlock repfnz_k(repfnz, k, m); info = Base::column_dfs(m, jj, m_perm_r.indices(), m_perfv.maxsuper, nseg, panel_lsubk, segrep, repfnz_k, xprune, marker, parent, xplore, m_glu); if ( info ) { m_lastError = "UNABLE TO EXPAND MEMORY IN COLUMN_DFS() "; m_info = NumericalIssue; m_factorizationIsOk = false; return; } // Numeric updates to this column VectorBlock dense_k(dense, k, m); VectorBlock segrep_k(segrep, nseg1, m-nseg1); info = Base::column_bmod(jj, (nseg - nseg1), dense_k, tempv, segrep_k, repfnz_k, jcol, m_glu); if ( info ) { m_lastError = "UNABLE TO EXPAND MEMORY IN COLUMN_BMOD() "; m_info = NumericalIssue; m_factorizationIsOk = false; return; } // Copy the U-segments to ucol(*) info = Base::copy_to_ucol(jj, nseg, segrep, repfnz_k ,m_perm_r.indices(), dense_k, m_glu); if ( info ) { m_lastError = "UNABLE TO EXPAND MEMORY IN COPY_TO_UCOL() "; m_info = NumericalIssue; m_factorizationIsOk = false; return; } // Form the L-segment info = Base::pivotL(jj, m_diagpivotthresh, m_perm_r.indices(), iperm_c.indices(), pivrow, m_glu); if ( info ) { m_lastError = "THE MATRIX IS STRUCTURALLY SINGULAR ... ZERO COLUMN AT "; std::ostringstream returnInfo; returnInfo << info; m_lastError += returnInfo.str(); m_info = NumericalIssue; m_factorizationIsOk = false; return; } // Update the determinant of the row permutation matrix // FIXME: the following test is not correct, we should probably take iperm_c into account and pivrow is not directly the row pivot. 
if (pivrow != jj) m_detPermR = -m_detPermR; // Prune columns (0:jj-1) using column jj Base::pruneL(jj, m_perm_r.indices(), pivrow, nseg, segrep, repfnz_k, xprune, m_glu); // Reset repfnz for this column for (i = 0; i < nseg; i++) { irep = segrep(i); repfnz_k(irep) = emptyIdxLU; } } // end SparseLU within the panel jcol += panel_size; // Move to the next panel } // end for -- end elimination m_detPermR = m_perm_r.determinant(); m_detPermC = m_perm_c.determinant(); // Count the number of nonzeros in factors Base::countnz(n, m_nnzL, m_nnzU, m_glu); // Apply permutation to the L subscripts Base::fixupL(n, m_perm_r.indices(), m_glu); // Create supernode matrix L m_Lstore.setInfos(m, n, m_glu.lusup, m_glu.xlusup, m_glu.lsub, m_glu.xlsub, m_glu.supno, m_glu.xsup); // Create the column major upper sparse matrix U; new (&m_Ustore) MappedSparseMatrix ( m, n, m_nnzU, m_glu.xusub.data(), m_glu.usub.data(), m_glu.ucol.data() ); m_info = Success; m_factorizationIsOk = true; } template struct SparseLUMatrixLReturnType : internal::no_assignment_operator { typedef typename MappedSupernodalType::Scalar Scalar; explicit SparseLUMatrixLReturnType(const MappedSupernodalType& mapL) : m_mapL(mapL) { } Index rows() const { return m_mapL.rows(); } Index cols() const { return m_mapL.cols(); } template void solveInPlace( MatrixBase &X) const { m_mapL.solveInPlace(X); } template void solveTransposedInPlace( MatrixBase &X) const { m_mapL.template solveTransposedInPlace(X); } const MappedSupernodalType& m_mapL; }; template struct SparseLUMatrixUReturnType : internal::no_assignment_operator { typedef typename MatrixLType::Scalar Scalar; SparseLUMatrixUReturnType(const MatrixLType& mapL, const MatrixUType& mapU) : m_mapL(mapL),m_mapU(mapU) { } Index rows() const { return m_mapL.rows(); } Index cols() const { return m_mapL.cols(); } template void solveInPlace(MatrixBase &X) const { Index nrhs = X.cols(); Index n = X.rows(); // Backward solve with U for (Index k = m_mapL.nsuper(); k >= 0; k--) { Index fsupc = m_mapL.supToCol()[k]; Index lda = m_mapL.colIndexPtr()[fsupc+1] - m_mapL.colIndexPtr()[fsupc]; // leading dimension Index nsupc = m_mapL.supToCol()[k+1] - fsupc; Index luptr = m_mapL.colIndexPtr()[fsupc]; if (nsupc == 1) { for (Index j = 0; j < nrhs; j++) { X(fsupc, j) /= m_mapL.valuePtr()[luptr]; } } else { // FIXME: the following lines should use Block expressions and not Map! Map, 0, OuterStride<> > A( &(m_mapL.valuePtr()[luptr]), nsupc, nsupc, OuterStride<>(lda) ); Map< Matrix, 0, OuterStride<> > U (&(X.coeffRef(fsupc,0)), nsupc, nrhs, OuterStride<>(n) ); U = A.template triangularView().solve(U); } for (Index j = 0; j < nrhs; ++j) { for (Index jcol = fsupc; jcol < fsupc + nsupc; jcol++) { typename MatrixUType::InnerIterator it(m_mapU, jcol); for ( ; it; ++it) { Index irow = it.index(); X(irow, j) -= X(jcol, j) * it.value(); } } } } // End For U-solve } template void solveTransposedInPlace(MatrixBase &X) const { using numext::conj; Index nrhs = X.cols(); Index n = X.rows(); // Forward solve with U for (Index k = 0; k <= m_mapL.nsuper(); k++) { Index fsupc = m_mapL.supToCol()[k]; Index lda = m_mapL.colIndexPtr()[fsupc+1] - m_mapL.colIndexPtr()[fsupc]; // leading dimension Index nsupc = m_mapL.supToCol()[k+1] - fsupc; Index luptr = m_mapL.colIndexPtr()[fsupc]; for (Index j = 0; j < nrhs; ++j) { for (Index jcol = fsupc; jcol < fsupc + nsupc; jcol++) { typename MatrixUType::InnerIterator it(m_mapU, jcol); for ( ; it; ++it) { Index irow = it.index(); X(jcol, j) -= X(irow, j) * (Conjugate? 
conj(it.value()): it.value()); } } } if (nsupc == 1) { for (Index j = 0; j < nrhs; j++) { X(fsupc, j) /= (Conjugate? conj(m_mapL.valuePtr()[luptr]) : m_mapL.valuePtr()[luptr]); } } else { Map, 0, OuterStride<> > A( &(m_mapL.valuePtr()[luptr]), nsupc, nsupc, OuterStride<>(lda) ); Map< Matrix, 0, OuterStride<> > U (&(X(fsupc,0)), nsupc, nrhs, OuterStride<>(n) ); if(Conjugate) U = A.adjoint().template triangularView().solve(U); else U = A.transpose().template triangularView().solve(U); } }// End For U-solve } const MatrixLType& m_mapL; const MatrixUType& m_mapU; }; } // End namespace Eigen #endif piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_column_bmod.h0000644000175100001770000001507014107270226024276 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // Copyright (C) 2012 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* * NOTE: This file is the modified version of xcolumn_bmod.c file in SuperLU * -- SuperLU routine (version 3.0) -- * Univ. of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * October 15, 2003 * * Copyright (c) 1994 by Xerox Corporation. All rights reserved. * * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY * EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. * * Permission is hereby granted to use or copy this program for any * purpose, provided the above notices are retained on all copies. * Permission to modify the code and to distribute modified code is * granted, provided the above notices are retained, and a notice that * the code was modified is included with the above copyright notice. */ #ifndef SPARSELU_COLUMN_BMOD_H #define SPARSELU_COLUMN_BMOD_H namespace Eigen { namespace internal { /** * \brief Performs numeric block updates (sup-col) in topological order * * \param jcol current column to update * \param nseg Number of segments in the U part * \param dense Store the full representation of the column * \param tempv working array * \param segrep segment representative ... * \param repfnz ??? First nonzero column in each row ??? ... * \param fpanelc First column in the current panel * \param glu Global LU data. 
* \return 0 - successful return * > 0 - number of bytes allocated when run out of space * */ template Index SparseLUImpl::column_bmod(const Index jcol, const Index nseg, BlockScalarVector dense, ScalarVector& tempv, BlockIndexVector segrep, BlockIndexVector repfnz, Index fpanelc, GlobalLU_t& glu) { Index jsupno, k, ksub, krep, ksupno; Index lptr, nrow, isub, irow, nextlu, new_next, ufirst; Index fsupc, nsupc, nsupr, luptr, kfnz, no_zeros; /* krep = representative of current k-th supernode * fsupc = first supernodal column * nsupc = number of columns in a supernode * nsupr = number of rows in a supernode * luptr = location of supernodal LU-block in storage * kfnz = first nonz in the k-th supernodal segment * no_zeros = no lf leading zeros in a supernodal U-segment */ jsupno = glu.supno(jcol); // For each nonzero supernode segment of U[*,j] in topological order k = nseg - 1; Index d_fsupc; // distance between the first column of the current panel and the // first column of the current snode Index fst_col; // First column within small LU update Index segsize; for (ksub = 0; ksub < nseg; ksub++) { krep = segrep(k); k--; ksupno = glu.supno(krep); if (jsupno != ksupno ) { // outside the rectangular supernode fsupc = glu.xsup(ksupno); fst_col = (std::max)(fsupc, fpanelc); // Distance from the current supernode to the current panel; // d_fsupc = 0 if fsupc > fpanelc d_fsupc = fst_col - fsupc; luptr = glu.xlusup(fst_col) + d_fsupc; lptr = glu.xlsub(fsupc) + d_fsupc; kfnz = repfnz(krep); kfnz = (std::max)(kfnz, fpanelc); segsize = krep - kfnz + 1; nsupc = krep - fst_col + 1; nsupr = glu.xlsub(fsupc+1) - glu.xlsub(fsupc); nrow = nsupr - d_fsupc - nsupc; Index lda = glu.xlusup(fst_col+1) - glu.xlusup(fst_col); // Perform a triangular solver and block update, // then scatter the result of sup-col update to dense no_zeros = kfnz - fst_col; if(segsize==1) LU_kernel_bmod<1>::run(segsize, dense, tempv, glu.lusup, luptr, lda, nrow, glu.lsub, lptr, no_zeros); else LU_kernel_bmod::run(segsize, dense, tempv, glu.lusup, luptr, lda, nrow, glu.lsub, lptr, no_zeros); } // end if jsupno } // end for each segment // Process the supernodal portion of L\U[*,j] nextlu = glu.xlusup(jcol); fsupc = glu.xsup(jsupno); // copy the SPA dense into L\U[*,j] Index mem; new_next = nextlu + glu.xlsub(fsupc + 1) - glu.xlsub(fsupc); Index offset = internal::first_multiple(new_next, internal::packet_traits::size) - new_next; if(offset) new_next += offset; while (new_next > glu.nzlumax ) { mem = memXpand(glu.lusup, glu.nzlumax, nextlu, LUSUP, glu.num_expansions); if (mem) return mem; } for (isub = glu.xlsub(fsupc); isub < glu.xlsub(fsupc+1); isub++) { irow = glu.lsub(isub); glu.lusup(nextlu) = dense(irow); dense(irow) = Scalar(0.0); ++nextlu; } if(offset) { glu.lusup.segment(nextlu,offset).setZero(); nextlu += offset; } glu.xlusup(jcol + 1) = StorageIndex(nextlu); // close L\U(*,jcol); /* For more updates within the panel (also within the current supernode), * should start from the first column of the panel, or the first column * of the supernode, whichever is bigger. 
There are two cases: * 1) fsupc < fpanelc, then fst_col <-- fpanelc * 2) fsupc >= fpanelc, then fst_col <-- fsupc */ fst_col = (std::max)(fsupc, fpanelc); if (fst_col < jcol) { // Distance between the current supernode and the current panel // d_fsupc = 0 if fsupc >= fpanelc d_fsupc = fst_col - fsupc; lptr = glu.xlsub(fsupc) + d_fsupc; luptr = glu.xlusup(fst_col) + d_fsupc; nsupr = glu.xlsub(fsupc+1) - glu.xlsub(fsupc); // leading dimension nsupc = jcol - fst_col; // excluding jcol nrow = nsupr - d_fsupc - nsupc; // points to the beginning of jcol in snode L\U(jsupno) ufirst = glu.xlusup(jcol) + d_fsupc; Index lda = glu.xlusup(jcol+1) - glu.xlusup(jcol); MappedMatrixBlock A( &(glu.lusup.data()[luptr]), nsupc, nsupc, OuterStride<>(lda) ); VectorBlock u(glu.lusup, ufirst, nsupc); u = A.template triangularView().solve(u); new (&A) MappedMatrixBlock ( &(glu.lusup.data()[luptr+nsupc]), nrow, nsupc, OuterStride<>(lda) ); VectorBlock l(glu.lusup, ufirst+nsupc, nrow); l.noalias() -= A * u; } // End if fst_col return 0; } } // end namespace internal } // end namespace Eigen #endif // SPARSELU_COLUMN_BMOD_H piqp-octave/src/eigen/Eigen/src/SparseLU/SparseLU_column_dfs.h0000644000175100001770000001467014107270226024136 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* * NOTE: This file is the modified version of [s,d,c,z]column_dfs.c file in SuperLU * -- SuperLU routine (version 2.0) -- * Univ. of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * November 15, 1997 * * Copyright (c) 1994 by Xerox Corporation. All rights reserved. * * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY * EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. * * Permission is hereby granted to use or copy this program for any * purpose, provided the above notices are retained on all copies. * Permission to modify the code and to distribute modified code is * granted, provided the above notices are retained, and a notice that * the code was modified is included with the above copyright notice. */ #ifndef SPARSELU_COLUMN_DFS_H #define SPARSELU_COLUMN_DFS_H template class SparseLUImpl; namespace Eigen { namespace internal { template struct column_dfs_traits : no_assignment_operator { typedef typename ScalarVector::Scalar Scalar; typedef typename IndexVector::Scalar StorageIndex; column_dfs_traits(Index jcol, Index& jsuper, typename SparseLUImpl::GlobalLU_t& glu, SparseLUImpl& luImpl) : m_jcol(jcol), m_jsuper_ref(jsuper), m_glu(glu), m_luImpl(luImpl) {} bool update_segrep(Index /*krep*/, Index /*jj*/) { return true; } void mem_expand(IndexVector& lsub, Index& nextl, Index chmark) { if (nextl >= m_glu.nzlmax) m_luImpl.memXpand(lsub, m_glu.nzlmax, nextl, LSUB, m_glu.num_expansions); if (chmark != (m_jcol-1)) m_jsuper_ref = emptyIdxLU; } enum { ExpandMem = true }; Index m_jcol; Index& m_jsuper_ref; typename SparseLUImpl::GlobalLU_t& m_glu; SparseLUImpl& m_luImpl; }; /** * \brief Performs a symbolic factorization on column jcol and decide the supernode boundary * * A supernode representative is the last column of a supernode. * The nonzeros in U[*,j] are segments that end at supernodes representatives. 
* The routine returns a list of the supernodal representatives * in topological order of the dfs that generates them. * The location of the first nonzero in each supernodal segment * (supernodal entry location) is also returned. * * \param m number of rows in the matrix * \param jcol Current column * \param perm_r Row permutation * \param maxsuper Maximum number of column allowed in a supernode * \param [in,out] nseg Number of segments in current U[*,j] - new segments appended * \param lsub_col defines the rhs vector to start the dfs * \param [in,out] segrep Segment representatives - new segments appended * \param repfnz First nonzero location in each row * \param xprune * \param marker marker[i] == jj, if i was visited during dfs of current column jj; * \param parent * \param xplore working array * \param glu global LU data * \return 0 success * > 0 number of bytes allocated when run out of space * */ template Index SparseLUImpl::column_dfs(const Index m, const Index jcol, IndexVector& perm_r, Index maxsuper, Index& nseg, BlockIndexVector lsub_col, IndexVector& segrep, BlockIndexVector repfnz, IndexVector& xprune, IndexVector& marker, IndexVector& parent, IndexVector& xplore, GlobalLU_t& glu) { Index jsuper = glu.supno(jcol); Index nextl = glu.xlsub(jcol); VectorBlock marker2(marker, 2*m, m); column_dfs_traits traits(jcol, jsuper, glu, *this); // For each nonzero in A(*,jcol) do dfs for (Index k = 0; ((k < m) ? lsub_col[k] != emptyIdxLU : false) ; k++) { Index krow = lsub_col(k); lsub_col(k) = emptyIdxLU; Index kmark = marker2(krow); // krow was visited before, go to the next nonz; if (kmark == jcol) continue; dfs_kernel(StorageIndex(jcol), perm_r, nseg, glu.lsub, segrep, repfnz, xprune, marker2, parent, xplore, glu, nextl, krow, traits); } // for each nonzero ... Index fsupc; StorageIndex nsuper = glu.supno(jcol); StorageIndex jcolp1 = StorageIndex(jcol) + 1; Index jcolm1 = jcol - 1; // check to see if j belongs in the same supernode as j-1 if ( jcol == 0 ) { // Do nothing for column 0 nsuper = glu.supno(0) = 0 ; } else { fsupc = glu.xsup(nsuper); StorageIndex jptr = glu.xlsub(jcol); // Not yet compressed StorageIndex jm1ptr = glu.xlsub(jcolm1); // Use supernodes of type T2 : see SuperLU paper if ( (nextl-jptr != jptr-jm1ptr-1) ) jsuper = emptyIdxLU; // Make sure the number of columns in a supernode doesn't // exceed threshold if ( (jcol - fsupc) >= maxsuper) jsuper = emptyIdxLU; /* If jcol starts a new supernode, reclaim storage space in * glu.lsub from previous supernode. Note we only store * the subscript set of the first and last columns of * a supernode. 
(first for num values, last for pruning) */ if (jsuper == emptyIdxLU) { // starts a new supernode if ( (fsupc < jcolm1-1) ) { // >= 3 columns in nsuper StorageIndex ito = glu.xlsub(fsupc+1); glu.xlsub(jcolm1) = ito; StorageIndex istop = ito + jptr - jm1ptr; xprune(jcolm1) = istop; // initialize xprune(jcol-1) glu.xlsub(jcol) = istop; for (StorageIndex ifrom = jm1ptr; ifrom < nextl; ++ifrom, ++ito) glu.lsub(ito) = glu.lsub(ifrom); nextl = ito; // = istop + length(jcol) } nsuper++; glu.supno(jcol) = nsuper; } // if a new supernode } // end else: jcol > 0 // Tidy up the pointers before exit glu.xsup(nsuper+1) = jcolp1; glu.supno(jcolp1) = nsuper; xprune(jcol) = StorageIndex(nextl); // Initialize upper bound for pruning glu.xlsub(jcolp1) = StorageIndex(nextl); return 0; } } // end namespace internal } // end namespace Eigen #endif piqp-octave/src/eigen/Eigen/src/Householder/0000755000175100001770000000000014107270226020631 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/Householder/BlockHouseholder.h0000644000175100001770000001126014107270226024236 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2010 Vincent Lejeune // Copyright (C) 2010 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_BLOCK_HOUSEHOLDER_H #define EIGEN_BLOCK_HOUSEHOLDER_H // This file contains some helper function to deal with block householder reflectors namespace Eigen { namespace internal { /** \internal */ // template // void make_block_householder_triangular_factor(TriangularFactorType& triFactor, const VectorsType& vectors, const CoeffsType& hCoeffs) // { // typedef typename VectorsType::Scalar Scalar; // const Index nbVecs = vectors.cols(); // eigen_assert(triFactor.rows() == nbVecs && triFactor.cols() == nbVecs && vectors.rows()>=nbVecs); // // for(Index i = 0; i < nbVecs; i++) // { // Index rs = vectors.rows() - i; // // Warning, note that hCoeffs may alias with vectors. // // It is then necessary to copy it before modifying vectors(i,i). // typename CoeffsType::Scalar h = hCoeffs(i); // // This hack permits to pass trough nested Block<> and Transpose<> expressions. 
// Scalar *Vii_ptr = const_cast(vectors.data() + vectors.outerStride()*i + vectors.innerStride()*i); // Scalar Vii = *Vii_ptr; // *Vii_ptr = Scalar(1); // triFactor.col(i).head(i).noalias() = -h * vectors.block(i, 0, rs, i).adjoint() // * vectors.col(i).tail(rs); // *Vii_ptr = Vii; // // FIXME add .noalias() once the triangular product can work inplace // triFactor.col(i).head(i) = triFactor.block(0,0,i,i).template triangularView() // * triFactor.col(i).head(i); // triFactor(i,i) = hCoeffs(i); // } // } /** \internal */ // This variant avoid modifications in vectors template void make_block_householder_triangular_factor(TriangularFactorType& triFactor, const VectorsType& vectors, const CoeffsType& hCoeffs) { const Index nbVecs = vectors.cols(); eigen_assert(triFactor.rows() == nbVecs && triFactor.cols() == nbVecs && vectors.rows()>=nbVecs); for(Index i = nbVecs-1; i >=0 ; --i) { Index rs = vectors.rows() - i - 1; Index rt = nbVecs-i-1; if(rt>0) { triFactor.row(i).tail(rt).noalias() = -hCoeffs(i) * vectors.col(i).tail(rs).adjoint() * vectors.bottomRightCorner(rs, rt).template triangularView(); // FIXME use the following line with .noalias() once the triangular product can work inplace // triFactor.row(i).tail(rt) = triFactor.row(i).tail(rt) * triFactor.bottomRightCorner(rt,rt).template triangularView(); for(Index j=nbVecs-1; j>i; --j) { typename TriangularFactorType::Scalar z = triFactor(i,j); triFactor(i,j) = z * triFactor(j,j); if(nbVecs-j-1>0) triFactor.row(i).tail(nbVecs-j-1) += z * triFactor.row(j).tail(nbVecs-j-1); } } triFactor(i,i) = hCoeffs(i); } } /** \internal * if forward then perform mat = H0 * H1 * H2 * mat * otherwise perform mat = H2 * H1 * H0 * mat */ template void apply_block_householder_on_the_left(MatrixType& mat, const VectorsType& vectors, const CoeffsType& hCoeffs, bool forward) { enum { TFactorSize = MatrixType::ColsAtCompileTime }; Index nbVecs = vectors.cols(); Matrix T(nbVecs,nbVecs); if(forward) make_block_householder_triangular_factor(T, vectors, hCoeffs); else make_block_householder_triangular_factor(T, vectors, hCoeffs.conjugate()); const TriangularView V(vectors); // A -= V T V^* A Matrix tmp = V.adjoint() * mat; // FIXME add .noalias() once the triangular product can work inplace if(forward) tmp = T.template triangularView() * tmp; else tmp = T.template triangularView().adjoint() * tmp; mat.noalias() -= V * tmp; } } // end namespace internal } // end namespace Eigen #endif // EIGEN_BLOCK_HOUSEHOLDER_H piqp-octave/src/eigen/Eigen/src/Householder/HouseholderSequence.h0000644000175100001770000005607314107270226024767 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009 Gael Guennebaud // Copyright (C) 2010 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
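/* For orientation, an illustrative sketch: the block reflector helpers in
 * BlockHouseholder.h above and the HouseholderSequence class defined below
 * cooperate whenever the Q factor of a QR decomposition is applied to a matrix
 * without being formed explicitly.  Assuming only the public dense API
 * (A, qr and R are hypothetical names):
 * \code
 * #include <Eigen/Dense>
 * Eigen::MatrixXd A = Eigen::MatrixXd::Random(6, 4);
 * Eigen::HouseholderQR<Eigen::MatrixXd> qr(A);
 * // Q is stored implicitly as a sequence of Householder vectors; the product
 * // below applies the reflectors to A and reproduces R up to rounding error.
 * Eigen::MatrixXd R = qr.householderQ().adjoint() * A;
 * \endcode
 * For long sequences the application is dispatched blockwise through
 * apply_block_householder_on_the_left(); short sequences are applied one
 * reflector at a time.
 */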
#ifndef EIGEN_HOUSEHOLDER_SEQUENCE_H #define EIGEN_HOUSEHOLDER_SEQUENCE_H namespace Eigen { /** \ingroup Householder_Module * \householder_module * \class HouseholderSequence * \brief Sequence of Householder reflections acting on subspaces with decreasing size * \tparam VectorsType type of matrix containing the Householder vectors * \tparam CoeffsType type of vector containing the Householder coefficients * \tparam Side either OnTheLeft (the default) or OnTheRight * * This class represents a product sequence of Householder reflections where the first Householder reflection * acts on the whole space, the second Householder reflection leaves the one-dimensional subspace spanned by * the first unit vector invariant, the third Householder reflection leaves the two-dimensional subspace * spanned by the first two unit vectors invariant, and so on up to the last reflection which leaves all but * one dimensions invariant and acts only on the last dimension. Such sequences of Householder reflections * are used in several algorithms to zero out certain parts of a matrix. Indeed, the methods * HessenbergDecomposition::matrixQ(), Tridiagonalization::matrixQ(), HouseholderQR::householderQ(), * and ColPivHouseholderQR::householderQ() all return a %HouseholderSequence. * * More precisely, the class %HouseholderSequence represents an \f$ n \times n \f$ matrix \f$ H \f$ of the * form \f$ H = \prod_{i=0}^{n-1} H_i \f$ where the i-th Householder reflection is \f$ H_i = I - h_i v_i * v_i^* \f$. The i-th Householder coefficient \f$ h_i \f$ is a scalar and the i-th Householder vector \f$ * v_i \f$ is a vector of the form * \f[ * v_i = [\underbrace{0, \ldots, 0}_{i-1\mbox{ zeros}}, 1, \underbrace{*, \ldots,*}_{n-i\mbox{ arbitrary entries}} ]. * \f] * The last \f$ n-i \f$ entries of \f$ v_i \f$ are called the essential part of the Householder vector. * * Typical usages are listed below, where H is a HouseholderSequence: * \code * A.applyOnTheRight(H); // A = A * H * A.applyOnTheLeft(H); // A = H * A * A.applyOnTheRight(H.adjoint()); // A = A * H^* * A.applyOnTheLeft(H.adjoint()); // A = H^* * A * MatrixXd Q = H; // conversion to a dense matrix * \endcode * In addition to the adjoint, you can also apply the inverse (=adjoint), the transpose, and the conjugate operators. * * See the documentation for HouseholderSequence(const VectorsType&, const CoeffsType&) for an example. * * \sa MatrixBase::applyOnTheLeft(), MatrixBase::applyOnTheRight() */ namespace internal { template struct traits > { typedef typename VectorsType::Scalar Scalar; typedef typename VectorsType::StorageIndex StorageIndex; typedef typename VectorsType::StorageKind StorageKind; enum { RowsAtCompileTime = Side==OnTheLeft ? traits::RowsAtCompileTime : traits::ColsAtCompileTime, ColsAtCompileTime = RowsAtCompileTime, MaxRowsAtCompileTime = Side==OnTheLeft ? 
traits::MaxRowsAtCompileTime : traits::MaxColsAtCompileTime, MaxColsAtCompileTime = MaxRowsAtCompileTime, Flags = 0 }; }; struct HouseholderSequenceShape {}; template struct evaluator_traits > : public evaluator_traits_base > { typedef HouseholderSequenceShape Shape; }; template struct hseq_side_dependent_impl { typedef Block EssentialVectorType; typedef HouseholderSequence HouseholderSequenceType; static EIGEN_DEVICE_FUNC inline const EssentialVectorType essentialVector(const HouseholderSequenceType& h, Index k) { Index start = k+1+h.m_shift; return Block(h.m_vectors, start, k, h.rows()-start, 1); } }; template struct hseq_side_dependent_impl { typedef Transpose > EssentialVectorType; typedef HouseholderSequence HouseholderSequenceType; static inline const EssentialVectorType essentialVector(const HouseholderSequenceType& h, Index k) { Index start = k+1+h.m_shift; return Block(h.m_vectors, k, start, 1, h.rows()-start).transpose(); } }; template struct matrix_type_times_scalar_type { typedef typename ScalarBinaryOpTraits::ReturnType ResultScalar; typedef Matrix Type; }; } // end namespace internal template class HouseholderSequence : public EigenBase > { typedef typename internal::hseq_side_dependent_impl::EssentialVectorType EssentialVectorType; public: enum { RowsAtCompileTime = internal::traits::RowsAtCompileTime, ColsAtCompileTime = internal::traits::ColsAtCompileTime, MaxRowsAtCompileTime = internal::traits::MaxRowsAtCompileTime, MaxColsAtCompileTime = internal::traits::MaxColsAtCompileTime }; typedef typename internal::traits::Scalar Scalar; typedef HouseholderSequence< typename internal::conditional::IsComplex, typename internal::remove_all::type, VectorsType>::type, typename internal::conditional::IsComplex, typename internal::remove_all::type, CoeffsType>::type, Side > ConjugateReturnType; typedef HouseholderSequence< VectorsType, typename internal::conditional::IsComplex, typename internal::remove_all::type, CoeffsType>::type, Side > AdjointReturnType; typedef HouseholderSequence< typename internal::conditional::IsComplex, typename internal::remove_all::type, VectorsType>::type, CoeffsType, Side > TransposeReturnType; typedef HouseholderSequence< typename internal::add_const::type, typename internal::add_const::type, Side > ConstHouseholderSequence; /** \brief Constructor. * \param[in] v %Matrix containing the essential parts of the Householder vectors * \param[in] h Vector containing the Householder coefficients * * Constructs the Householder sequence with coefficients given by \p h and vectors given by \p v. The * i-th Householder coefficient \f$ h_i \f$ is given by \p h(i) and the essential part of the i-th * Householder vector \f$ v_i \f$ is given by \p v(k,i) with \p k > \p i (the subdiagonal part of the * i-th column). If \p v has fewer columns than rows, then the Householder sequence contains as many * Householder reflections as there are columns. * * \note The %HouseholderSequence object stores \p v and \p h by reference. * * Example: \include HouseholderSequence_HouseholderSequence.cpp * Output: \verbinclude HouseholderSequence_HouseholderSequence.out * * \sa setLength(), setShift() */ EIGEN_DEVICE_FUNC HouseholderSequence(const VectorsType& v, const CoeffsType& h) : m_vectors(v), m_coeffs(h), m_reverse(false), m_length(v.diagonalSize()), m_shift(0) { } /** \brief Copy constructor. 
*/ EIGEN_DEVICE_FUNC HouseholderSequence(const HouseholderSequence& other) : m_vectors(other.m_vectors), m_coeffs(other.m_coeffs), m_reverse(other.m_reverse), m_length(other.m_length), m_shift(other.m_shift) { } /** \brief Number of rows of transformation viewed as a matrix. * \returns Number of rows * \details This equals the dimension of the space that the transformation acts on. */ EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR Index rows() const EIGEN_NOEXCEPT { return Side==OnTheLeft ? m_vectors.rows() : m_vectors.cols(); } /** \brief Number of columns of transformation viewed as a matrix. * \returns Number of columns * \details This equals the dimension of the space that the transformation acts on. */ EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR Index cols() const EIGEN_NOEXCEPT { return rows(); } /** \brief Essential part of a Householder vector. * \param[in] k Index of Householder reflection * \returns Vector containing non-trivial entries of k-th Householder vector * * This function returns the essential part of the Householder vector \f$ v_i \f$. This is a vector of * length \f$ n-i \f$ containing the last \f$ n-i \f$ entries of the vector * \f[ * v_i = [\underbrace{0, \ldots, 0}_{i-1\mbox{ zeros}}, 1, \underbrace{*, \ldots,*}_{n-i\mbox{ arbitrary entries}} ]. * \f] * The index \f$ i \f$ equals \p k + shift(), corresponding to the k-th column of the matrix \p v * passed to the constructor. * * \sa setShift(), shift() */ EIGEN_DEVICE_FUNC const EssentialVectorType essentialVector(Index k) const { eigen_assert(k >= 0 && k < m_length); return internal::hseq_side_dependent_impl::essentialVector(*this, k); } /** \brief %Transpose of the Householder sequence. */ TransposeReturnType transpose() const { return TransposeReturnType(m_vectors.conjugate(), m_coeffs) .setReverseFlag(!m_reverse) .setLength(m_length) .setShift(m_shift); } /** \brief Complex conjugate of the Householder sequence. */ ConjugateReturnType conjugate() const { return ConjugateReturnType(m_vectors.conjugate(), m_coeffs.conjugate()) .setReverseFlag(m_reverse) .setLength(m_length) .setShift(m_shift); } /** \returns an expression of the complex conjugate of \c *this if Cond==true, * returns \c *this otherwise. */ template EIGEN_DEVICE_FUNC inline typename internal::conditional::type conjugateIf() const { typedef typename internal::conditional::type ReturnType; return ReturnType(m_vectors.template conjugateIf(), m_coeffs.template conjugateIf()); } /** \brief Adjoint (conjugate transpose) of the Householder sequence. */ AdjointReturnType adjoint() const { return AdjointReturnType(m_vectors, m_coeffs.conjugate()) .setReverseFlag(!m_reverse) .setLength(m_length) .setShift(m_shift); } /** \brief Inverse of the Householder sequence (equals the adjoint). 
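 *
 * For illustration, assuming a HouseholderSequence \c H (e.g. obtained from
 * HouseholderQR::householderQ()) and a compatible dense matrix \c M:
 * \code
 * Eigen::MatrixXd X = H.inverse() * M;   // identical to H.adjoint() * M
 * \endcode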
*/ AdjointReturnType inverse() const { return adjoint(); } /** \internal */ template inline EIGEN_DEVICE_FUNC void evalTo(DestType& dst) const { Matrix workspace(rows()); evalTo(dst, workspace); } /** \internal */ template EIGEN_DEVICE_FUNC void evalTo(Dest& dst, Workspace& workspace) const { workspace.resize(rows()); Index vecs = m_length; if(internal::is_same_dense(dst,m_vectors)) { // in-place dst.diagonal().setOnes(); dst.template triangularView().setZero(); for(Index k = vecs-1; k >= 0; --k) { Index cornerSize = rows() - k - m_shift; if(m_reverse) dst.bottomRightCorner(cornerSize, cornerSize) .applyHouseholderOnTheRight(essentialVector(k), m_coeffs.coeff(k), workspace.data()); else dst.bottomRightCorner(cornerSize, cornerSize) .applyHouseholderOnTheLeft(essentialVector(k), m_coeffs.coeff(k), workspace.data()); // clear the off diagonal vector dst.col(k).tail(rows()-k-1).setZero(); } // clear the remaining columns if needed for(Index k = 0; kBlockSize) { dst.setIdentity(rows(), rows()); if(m_reverse) applyThisOnTheLeft(dst,workspace,true); else applyThisOnTheLeft(dst,workspace,true); } else { dst.setIdentity(rows(), rows()); for(Index k = vecs-1; k >= 0; --k) { Index cornerSize = rows() - k - m_shift; if(m_reverse) dst.bottomRightCorner(cornerSize, cornerSize) .applyHouseholderOnTheRight(essentialVector(k), m_coeffs.coeff(k), workspace.data()); else dst.bottomRightCorner(cornerSize, cornerSize) .applyHouseholderOnTheLeft(essentialVector(k), m_coeffs.coeff(k), workspace.data()); } } } /** \internal */ template inline void applyThisOnTheRight(Dest& dst) const { Matrix workspace(dst.rows()); applyThisOnTheRight(dst, workspace); } /** \internal */ template inline void applyThisOnTheRight(Dest& dst, Workspace& workspace) const { workspace.resize(dst.rows()); for(Index k = 0; k < m_length; ++k) { Index actual_k = m_reverse ? m_length-k-1 : k; dst.rightCols(rows()-m_shift-actual_k) .applyHouseholderOnTheRight(essentialVector(actual_k), m_coeffs.coeff(actual_k), workspace.data()); } } /** \internal */ template inline void applyThisOnTheLeft(Dest& dst, bool inputIsIdentity = false) const { Matrix workspace; applyThisOnTheLeft(dst, workspace, inputIsIdentity); } /** \internal */ template inline void applyThisOnTheLeft(Dest& dst, Workspace& workspace, bool inputIsIdentity = false) const { if(inputIsIdentity && m_reverse) inputIsIdentity = false; // if the entries are large enough, then apply the reflectors by block if(m_length>=BlockSize && dst.cols()>1) { // Make sure we have at least 2 useful blocks, otherwise it is point-less: Index blockSize = m_length::type,Dynamic,Dynamic> SubVectorsType; SubVectorsType sub_vecs1(m_vectors.const_cast_derived(), Side==OnTheRight ? k : start, Side==OnTheRight ? start : k, Side==OnTheRight ? bs : m_vectors.rows()-start, Side==OnTheRight ? m_vectors.cols()-start : bs); typename internal::conditional, SubVectorsType&>::type sub_vecs(sub_vecs1); Index dstStart = dst.rows()-rows()+m_shift+k; Index dstRows = rows()-m_shift-k; Block sub_dst(dst, dstStart, inputIsIdentity ? dstStart : 0, dstRows, inputIsIdentity ? dstRows : dst.cols()); apply_block_householder_on_the_left(sub_dst, sub_vecs, m_coeffs.segment(k, bs), !m_reverse); } } else { workspace.resize(dst.cols()); for(Index k = 0; k < m_length; ++k) { Index actual_k = m_reverse ? k : m_length-k-1; Index dstStart = rows()-m_shift-actual_k; dst.bottomRightCorner(dstStart, inputIsIdentity ? 
dstStart : dst.cols()) .applyHouseholderOnTheLeft(essentialVector(actual_k), m_coeffs.coeff(actual_k), workspace.data()); } } } /** \brief Computes the product of a Householder sequence with a matrix. * \param[in] other %Matrix being multiplied. * \returns Expression object representing the product. * * This function computes \f$ HM \f$ where \f$ H \f$ is the Householder sequence represented by \p *this * and \f$ M \f$ is the matrix \p other. */ template typename internal::matrix_type_times_scalar_type::Type operator*(const MatrixBase& other) const { typename internal::matrix_type_times_scalar_type::Type res(other.template cast::ResultScalar>()); applyThisOnTheLeft(res, internal::is_identity::value && res.rows()==res.cols()); return res; } template friend struct internal::hseq_side_dependent_impl; /** \brief Sets the length of the Householder sequence. * \param [in] length New value for the length. * * By default, the length \f$ n \f$ of the Householder sequence \f$ H = H_0 H_1 \ldots H_{n-1} \f$ is set * to the number of columns of the matrix \p v passed to the constructor, or the number of rows if that * is smaller. After this function is called, the length equals \p length. * * \sa length() */ EIGEN_DEVICE_FUNC HouseholderSequence& setLength(Index length) { m_length = length; return *this; } /** \brief Sets the shift of the Householder sequence. * \param [in] shift New value for the shift. * * By default, a %HouseholderSequence object represents \f$ H = H_0 H_1 \ldots H_{n-1} \f$ and the i-th * column of the matrix \p v passed to the constructor corresponds to the i-th Householder * reflection. After this function is called, the object represents \f$ H = H_{\mathrm{shift}} * H_{\mathrm{shift}+1} \ldots H_{n-1} \f$ and the i-th column of \p v corresponds to the (shift+i)-th * Householder reflection. * * \sa shift() */ EIGEN_DEVICE_FUNC HouseholderSequence& setShift(Index shift) { m_shift = shift; return *this; } EIGEN_DEVICE_FUNC Index length() const { return m_length; } /**< \brief Returns the length of the Householder sequence. */ EIGEN_DEVICE_FUNC Index shift() const { return m_shift; } /**< \brief Returns the shift of the Householder sequence. */ /* Necessary for .adjoint() and .conjugate() */ template friend class HouseholderSequence; protected: /** \internal * \brief Sets the reverse flag. * \param [in] reverse New value of the reverse flag. * * By default, the reverse flag is not set. If the reverse flag is set, then this object represents * \f$ H^r = H_{n-1} \ldots H_1 H_0 \f$ instead of \f$ H = H_0 H_1 \ldots H_{n-1} \f$. * \note For real valued HouseholderSequence this is equivalent to transposing \f$ H \f$. * * \sa reverseFlag(), transpose(), adjoint() */ HouseholderSequence& setReverseFlag(bool reverse) { m_reverse = reverse; return *this; } bool reverseFlag() const { return m_reverse; } /**< \internal \brief Returns the reverse flag. */ typename VectorsType::Nested m_vectors; typename CoeffsType::Nested m_coeffs; bool m_reverse; Index m_length; Index m_shift; enum { BlockSize = 48 }; }; /** \brief Computes the product of a matrix with a Householder sequence. * \param[in] other %Matrix being multiplied. * \param[in] h %HouseholderSequence being multiplied. * \returns Expression object representing the product. * * This function computes \f$ MH \f$ where \f$ M \f$ is the matrix \p other and \f$ H \f$ is the * Householder sequence represented by \p h. 
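 *
 * For illustration, assuming a computed HouseholderQR object \c qr and a dense
 * matrix \c M with matching dimensions (hypothetical names):
 * \code
 * Eigen::MatrixXd MQ = M * qr.householderQ();   // applies the reflectors; Q is never formed
 * \endcode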
*/ template typename internal::matrix_type_times_scalar_type::Type operator*(const MatrixBase& other, const HouseholderSequence& h) { typename internal::matrix_type_times_scalar_type::Type res(other.template cast::ResultScalar>()); h.applyThisOnTheRight(res); return res; } /** \ingroup Householder_Module \householder_module * \brief Convenience function for constructing a Householder sequence. * \returns A HouseholderSequence constructed from the specified arguments. */ template HouseholderSequence householderSequence(const VectorsType& v, const CoeffsType& h) { return HouseholderSequence(v, h); } /** \ingroup Householder_Module \householder_module * \brief Convenience function for constructing a Householder sequence. * \returns A HouseholderSequence constructed from the specified arguments. * \details This function differs from householderSequence() in that the template argument \p OnTheSide of * the constructed HouseholderSequence is set to OnTheRight, instead of the default OnTheLeft. */ template HouseholderSequence rightHouseholderSequence(const VectorsType& v, const CoeffsType& h) { return HouseholderSequence(v, h); } } // end namespace Eigen #endif // EIGEN_HOUSEHOLDER_SEQUENCE_H piqp-octave/src/eigen/Eigen/src/Householder/Householder.h0000644000175100001770000001236514107270226023272 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2010 Benoit Jacob // Copyright (C) 2009 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_HOUSEHOLDER_H #define EIGEN_HOUSEHOLDER_H namespace Eigen { namespace internal { template struct decrement_size { enum { ret = n==Dynamic ? n : n-1 }; }; } /** Computes the elementary reflector H such that: * \f$ H *this = [ beta 0 ... 0]^T \f$ * where the transformation H is: * \f$ H = I - tau v v^*\f$ * and the vector v is: * \f$ v^T = [1 essential^T] \f$ * * The essential part of the vector \c v is stored in *this. * * On output: * \param tau the scaling factor of the Householder transformation * \param beta the result of H * \c *this * * \sa MatrixBase::makeHouseholder(), MatrixBase::applyHouseholderOnTheLeft(), * MatrixBase::applyHouseholderOnTheRight() */ template EIGEN_DEVICE_FUNC void MatrixBase::makeHouseholderInPlace(Scalar& tau, RealScalar& beta) { VectorBlock::ret> essentialPart(derived(), 1, size()-1); makeHouseholder(essentialPart, tau, beta); } /** Computes the elementary reflector H such that: * \f$ H *this = [ beta 0 ... 0]^T \f$ * where the transformation H is: * \f$ H = I - tau v v^*\f$ * and the vector v is: * \f$ v^T = [1 essential^T] \f$ * * On output: * \param essential the essential part of the vector \c v * \param tau the scaling factor of the Householder transformation * \param beta the result of H * \c *this * * \sa MatrixBase::makeHouseholderInPlace(), MatrixBase::applyHouseholderOnTheLeft(), * MatrixBase::applyHouseholderOnTheRight() */ template template EIGEN_DEVICE_FUNC void MatrixBase::makeHouseholder( EssentialPart& essential, Scalar& tau, RealScalar& beta) const { using std::sqrt; using numext::conj; EIGEN_STATIC_ASSERT_VECTOR_ONLY(EssentialPart) VectorBlock tail(derived(), 1, size()-1); RealScalar tailSqNorm = size()==1 ? 
RealScalar(0) : tail.squaredNorm(); Scalar c0 = coeff(0); const RealScalar tol = (std::numeric_limits::min)(); if(tailSqNorm <= tol && numext::abs2(numext::imag(c0))<=tol) { tau = RealScalar(0); beta = numext::real(c0); essential.setZero(); } else { beta = sqrt(numext::abs2(c0) + tailSqNorm); if (numext::real(c0)>=RealScalar(0)) beta = -beta; essential = tail / (c0 - beta); tau = conj((beta - c0) / beta); } } /** Apply the elementary reflector H given by * \f$ H = I - tau v v^*\f$ * with * \f$ v^T = [1 essential^T] \f$ * from the left to a vector or matrix. * * On input: * \param essential the essential part of the vector \c v * \param tau the scaling factor of the Householder transformation * \param workspace a pointer to working space with at least * this->cols() entries * * \sa MatrixBase::makeHouseholder(), MatrixBase::makeHouseholderInPlace(), * MatrixBase::applyHouseholderOnTheRight() */ template template EIGEN_DEVICE_FUNC void MatrixBase::applyHouseholderOnTheLeft( const EssentialPart& essential, const Scalar& tau, Scalar* workspace) { if(rows() == 1) { *this *= Scalar(1)-tau; } else if(tau!=Scalar(0)) { Map::type> tmp(workspace,cols()); Block bottom(derived(), 1, 0, rows()-1, cols()); tmp.noalias() = essential.adjoint() * bottom; tmp += this->row(0); this->row(0) -= tau * tmp; bottom.noalias() -= tau * essential * tmp; } } /** Apply the elementary reflector H given by * \f$ H = I - tau v v^*\f$ * with * \f$ v^T = [1 essential^T] \f$ * from the right to a vector or matrix. * * On input: * \param essential the essential part of the vector \c v * \param tau the scaling factor of the Householder transformation * \param workspace a pointer to working space with at least * this->rows() entries * * \sa MatrixBase::makeHouseholder(), MatrixBase::makeHouseholderInPlace(), * MatrixBase::applyHouseholderOnTheLeft() */ template template EIGEN_DEVICE_FUNC void MatrixBase::applyHouseholderOnTheRight( const EssentialPart& essential, const Scalar& tau, Scalar* workspace) { if(cols() == 1) { *this *= Scalar(1)-tau; } else if(tau!=Scalar(0)) { Map::type> tmp(workspace,rows()); Block right(derived(), 0, 1, rows(), cols()-1); tmp.noalias() = right * essential; tmp += this->col(0); this->col(0) -= tau * tmp; right.noalias() -= tau * tmp * essential.adjoint(); } } } // end namespace Eigen #endif // EIGEN_HOUSEHOLDER_H piqp-octave/src/eigen/Eigen/src/QR/0000755000175100001770000000000014107270226016672 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/QR/ColPivHouseholderQR.h0000644000175100001770000006163214107270226022714 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2009 Gael Guennebaud // Copyright (C) 2009 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
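/* For orientation, an illustrative, self-contained sketch of the elementary
 * reflector interface defined in Householder.h above; all variable names are
 * hypothetical:
 * \code
 * #include <Eigen/Dense>
 * #include <iostream>
 * int main()
 * {
 *   Eigen::VectorXd x = Eigen::VectorXd::Random(5);
 *   double tau, beta;
 *   Eigen::VectorXd essential(x.size() - 1);
 *   x.makeHouseholder(essential, tau, beta);     // H x = [beta, 0, ..., 0]^T with H = I - tau v v^*
 *   Eigen::VectorXd v(x.size());
 *   v << 1.0, essential;                         // v = [1; essential]
 *   Eigen::VectorXd Hx = x - tau * v * v.dot(x); // apply H explicitly as a check
 *   std::cout << "beta = " << beta << "\nH*x  = " << Hx.transpose() << std::endl;
 * }
 * \endcode
 */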
#ifndef EIGEN_COLPIVOTINGHOUSEHOLDERQR_H #define EIGEN_COLPIVOTINGHOUSEHOLDERQR_H namespace Eigen { namespace internal { template struct traits > : traits<_MatrixType> { typedef MatrixXpr XprKind; typedef SolverStorage StorageKind; typedef int StorageIndex; enum { Flags = 0 }; }; } // end namespace internal /** \ingroup QR_Module * * \class ColPivHouseholderQR * * \brief Householder rank-revealing QR decomposition of a matrix with column-pivoting * * \tparam _MatrixType the type of the matrix of which we are computing the QR decomposition * * This class performs a rank-revealing QR decomposition of a matrix \b A into matrices \b P, \b Q and \b R * such that * \f[ * \mathbf{A} \, \mathbf{P} = \mathbf{Q} \, \mathbf{R} * \f] * by using Householder transformations. Here, \b P is a permutation matrix, \b Q a unitary matrix and \b R an * upper triangular matrix. * * This decomposition performs column pivoting in order to be rank-revealing and improve * numerical stability. It is slower than HouseholderQR, and faster than FullPivHouseholderQR. * * This class supports the \link InplaceDecomposition inplace decomposition \endlink mechanism. * * \sa MatrixBase::colPivHouseholderQr() */ template class ColPivHouseholderQR : public SolverBase > { public: typedef _MatrixType MatrixType; typedef SolverBase Base; friend class SolverBase; EIGEN_GENERIC_PUBLIC_INTERFACE(ColPivHouseholderQR) enum { MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; typedef typename internal::plain_diag_type::type HCoeffsType; typedef PermutationMatrix PermutationType; typedef typename internal::plain_row_type::type IntRowVectorType; typedef typename internal::plain_row_type::type RowVectorType; typedef typename internal::plain_row_type::type RealRowVectorType; typedef HouseholderSequence::type> HouseholderSequenceType; typedef typename MatrixType::PlainObject PlainObject; private: typedef typename PermutationType::StorageIndex PermIndexType; public: /** * \brief Default Constructor. * * The default constructor is useful in cases in which the user intends to * perform decompositions via ColPivHouseholderQR::compute(const MatrixType&). */ ColPivHouseholderQR() : m_qr(), m_hCoeffs(), m_colsPermutation(), m_colsTranspositions(), m_temp(), m_colNormsUpdated(), m_colNormsDirect(), m_isInitialized(false), m_usePrescribedThreshold(false) {} /** \brief Default Constructor with memory preallocation * * Like the default constructor but with preallocation of the internal data * according to the specified problem \a size. * \sa ColPivHouseholderQR() */ ColPivHouseholderQR(Index rows, Index cols) : m_qr(rows, cols), m_hCoeffs((std::min)(rows,cols)), m_colsPermutation(PermIndexType(cols)), m_colsTranspositions(cols), m_temp(cols), m_colNormsUpdated(cols), m_colNormsDirect(cols), m_isInitialized(false), m_usePrescribedThreshold(false) {} /** \brief Constructs a QR factorization from a given matrix * * This constructor computes the QR factorization of the matrix \a matrix by calling * the method compute(). 
It is a short cut for: * * \code * ColPivHouseholderQR qr(matrix.rows(), matrix.cols()); * qr.compute(matrix); * \endcode * * \sa compute() */ template explicit ColPivHouseholderQR(const EigenBase& matrix) : m_qr(matrix.rows(), matrix.cols()), m_hCoeffs((std::min)(matrix.rows(),matrix.cols())), m_colsPermutation(PermIndexType(matrix.cols())), m_colsTranspositions(matrix.cols()), m_temp(matrix.cols()), m_colNormsUpdated(matrix.cols()), m_colNormsDirect(matrix.cols()), m_isInitialized(false), m_usePrescribedThreshold(false) { compute(matrix.derived()); } /** \brief Constructs a QR factorization from a given matrix * * This overloaded constructor is provided for \link InplaceDecomposition inplace decomposition \endlink when \c MatrixType is a Eigen::Ref. * * \sa ColPivHouseholderQR(const EigenBase&) */ template explicit ColPivHouseholderQR(EigenBase& matrix) : m_qr(matrix.derived()), m_hCoeffs((std::min)(matrix.rows(),matrix.cols())), m_colsPermutation(PermIndexType(matrix.cols())), m_colsTranspositions(matrix.cols()), m_temp(matrix.cols()), m_colNormsUpdated(matrix.cols()), m_colNormsDirect(matrix.cols()), m_isInitialized(false), m_usePrescribedThreshold(false) { computeInPlace(); } #ifdef EIGEN_PARSED_BY_DOXYGEN /** This method finds a solution x to the equation Ax=b, where A is the matrix of which * *this is the QR decomposition, if any exists. * * \param b the right-hand-side of the equation to solve. * * \returns a solution. * * \note_about_checking_solutions * * \note_about_arbitrary_choice_of_solution * * Example: \include ColPivHouseholderQR_solve.cpp * Output: \verbinclude ColPivHouseholderQR_solve.out */ template inline const Solve solve(const MatrixBase& b) const; #endif HouseholderSequenceType householderQ() const; HouseholderSequenceType matrixQ() const { return householderQ(); } /** \returns a reference to the matrix where the Householder QR decomposition is stored */ const MatrixType& matrixQR() const { eigen_assert(m_isInitialized && "ColPivHouseholderQR is not initialized."); return m_qr; } /** \returns a reference to the matrix where the result Householder QR is stored * \warning The strict lower part of this matrix contains internal values. * Only the upper triangular part should be referenced. To get it, use * \code matrixR().template triangularView() \endcode * For rank-deficient matrices, use * \code * matrixR().topLeftCorner(rank(), rank()).template triangularView() * \endcode */ const MatrixType& matrixR() const { eigen_assert(m_isInitialized && "ColPivHouseholderQR is not initialized."); return m_qr; } template ColPivHouseholderQR& compute(const EigenBase& matrix); /** \returns a const reference to the column permutation matrix */ const PermutationType& colsPermutation() const { eigen_assert(m_isInitialized && "ColPivHouseholderQR is not initialized."); return m_colsPermutation; } /** \returns the absolute value of the determinant of the matrix of which * *this is the QR decomposition. It has only linear complexity * (that is, O(n) where n is the dimension of the square matrix) * as the QR decomposition has already been computed. * * \note This is only for square matrices. * * \warning a determinant can be very big or small, so for matrices * of large enough dimension, there is a risk of overflow/underflow. * One way to work around that is to use logAbsDeterminant() instead. 
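 *
 * For illustration (assuming a computed decomposition \c qr of a square matrix):
 * \code
 * double logAbsDet = qr.logAbsDeterminant();
 * double absDet    = std::exp(logAbsDet);   // may still overflow/underflow if evaluated directly
 * \endcode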
* * \sa logAbsDeterminant(), MatrixBase::determinant() */ typename MatrixType::RealScalar absDeterminant() const; /** \returns the natural log of the absolute value of the determinant of the matrix of which * *this is the QR decomposition. It has only linear complexity * (that is, O(n) where n is the dimension of the square matrix) * as the QR decomposition has already been computed. * * \note This is only for square matrices. * * \note This method is useful to work around the risk of overflow/underflow that's inherent * to determinant computation. * * \sa absDeterminant(), MatrixBase::determinant() */ typename MatrixType::RealScalar logAbsDeterminant() const; /** \returns the rank of the matrix of which *this is the QR decomposition. * * \note This method has to determine which pivots should be considered nonzero. * For that, it uses the threshold value that you can control by calling * setThreshold(const RealScalar&). */ inline Index rank() const { using std::abs; eigen_assert(m_isInitialized && "ColPivHouseholderQR is not initialized."); RealScalar premultiplied_threshold = abs(m_maxpivot) * threshold(); Index result = 0; for(Index i = 0; i < m_nonzero_pivots; ++i) result += (abs(m_qr.coeff(i,i)) > premultiplied_threshold); return result; } /** \returns the dimension of the kernel of the matrix of which *this is the QR decomposition. * * \note This method has to determine which pivots should be considered nonzero. * For that, it uses the threshold value that you can control by calling * setThreshold(const RealScalar&). */ inline Index dimensionOfKernel() const { eigen_assert(m_isInitialized && "ColPivHouseholderQR is not initialized."); return cols() - rank(); } /** \returns true if the matrix of which *this is the QR decomposition represents an injective * linear map, i.e. has trivial kernel; false otherwise. * * \note This method has to determine which pivots should be considered nonzero. * For that, it uses the threshold value that you can control by calling * setThreshold(const RealScalar&). */ inline bool isInjective() const { eigen_assert(m_isInitialized && "ColPivHouseholderQR is not initialized."); return rank() == cols(); } /** \returns true if the matrix of which *this is the QR decomposition represents a surjective * linear map; false otherwise. * * \note This method has to determine which pivots should be considered nonzero. * For that, it uses the threshold value that you can control by calling * setThreshold(const RealScalar&). */ inline bool isSurjective() const { eigen_assert(m_isInitialized && "ColPivHouseholderQR is not initialized."); return rank() == rows(); } /** \returns true if the matrix of which *this is the QR decomposition is invertible. * * \note This method has to determine which pivots should be considered nonzero. * For that, it uses the threshold value that you can control by calling * setThreshold(const RealScalar&). */ inline bool isInvertible() const { eigen_assert(m_isInitialized && "ColPivHouseholderQR is not initialized."); return isInjective() && isSurjective(); } /** \returns the inverse of the matrix of which *this is the QR decomposition. * * \note If this matrix is not invertible, the returned matrix has undefined coefficients. * Use isInvertible() to first determine whether this matrix is invertible. 
*/ inline const Inverse inverse() const { eigen_assert(m_isInitialized && "ColPivHouseholderQR is not initialized."); return Inverse(*this); } inline Index rows() const { return m_qr.rows(); } inline Index cols() const { return m_qr.cols(); } /** \returns a const reference to the vector of Householder coefficients used to represent the factor \c Q. * * For advanced uses only. */ const HCoeffsType& hCoeffs() const { return m_hCoeffs; } /** Allows to prescribe a threshold to be used by certain methods, such as rank(), * who need to determine when pivots are to be considered nonzero. This is not used for the * QR decomposition itself. * * When it needs to get the threshold value, Eigen calls threshold(). By default, this * uses a formula to automatically determine a reasonable threshold. * Once you have called the present method setThreshold(const RealScalar&), * your value is used instead. * * \param threshold The new value to use as the threshold. * * A pivot will be considered nonzero if its absolute value is strictly greater than * \f$ \vert pivot \vert \leqslant threshold \times \vert maxpivot \vert \f$ * where maxpivot is the biggest pivot. * * If you want to come back to the default behavior, call setThreshold(Default_t) */ ColPivHouseholderQR& setThreshold(const RealScalar& threshold) { m_usePrescribedThreshold = true; m_prescribedThreshold = threshold; return *this; } /** Allows to come back to the default behavior, letting Eigen use its default formula for * determining the threshold. * * You should pass the special object Eigen::Default as parameter here. * \code qr.setThreshold(Eigen::Default); \endcode * * See the documentation of setThreshold(const RealScalar&). */ ColPivHouseholderQR& setThreshold(Default_t) { m_usePrescribedThreshold = false; return *this; } /** Returns the threshold that will be used by certain methods such as rank(). * * See the documentation of setThreshold(const RealScalar&). */ RealScalar threshold() const { eigen_assert(m_isInitialized || m_usePrescribedThreshold); return m_usePrescribedThreshold ? m_prescribedThreshold // this formula comes from experimenting (see "LU precision tuning" thread on the list) // and turns out to be identical to Higham's formula used already in LDLt. : NumTraits::epsilon() * RealScalar(m_qr.diagonalSize()); } /** \returns the number of nonzero pivots in the QR decomposition. * Here nonzero is meant in the exact sense, not in a fuzzy sense. * So that notion isn't really intrinsically interesting, but it is * still useful when implementing algorithms. * * \sa rank() */ inline Index nonzeroPivots() const { eigen_assert(m_isInitialized && "ColPivHouseholderQR is not initialized."); return m_nonzero_pivots; } /** \returns the absolute value of the biggest pivot, i.e. the biggest * diagonal coefficient of R. */ RealScalar maxPivot() const { return m_maxpivot; } /** \brief Reports whether the QR factorization was successful. * * \note This function always returns \c Success. It is provided for compatibility * with other factorization routines. 
* \returns \c Success */ ComputationInfo info() const { eigen_assert(m_isInitialized && "Decomposition is not initialized."); return Success; } #ifndef EIGEN_PARSED_BY_DOXYGEN template void _solve_impl(const RhsType &rhs, DstType &dst) const; template void _solve_impl_transposed(const RhsType &rhs, DstType &dst) const; #endif protected: friend class CompleteOrthogonalDecomposition; static void check_template_parameters() { EIGEN_STATIC_ASSERT_NON_INTEGER(Scalar); } void computeInPlace(); MatrixType m_qr; HCoeffsType m_hCoeffs; PermutationType m_colsPermutation; IntRowVectorType m_colsTranspositions; RowVectorType m_temp; RealRowVectorType m_colNormsUpdated; RealRowVectorType m_colNormsDirect; bool m_isInitialized, m_usePrescribedThreshold; RealScalar m_prescribedThreshold, m_maxpivot; Index m_nonzero_pivots; Index m_det_pq; }; template typename MatrixType::RealScalar ColPivHouseholderQR::absDeterminant() const { using std::abs; eigen_assert(m_isInitialized && "ColPivHouseholderQR is not initialized."); eigen_assert(m_qr.rows() == m_qr.cols() && "You can't take the determinant of a non-square matrix!"); return abs(m_qr.diagonal().prod()); } template typename MatrixType::RealScalar ColPivHouseholderQR::logAbsDeterminant() const { eigen_assert(m_isInitialized && "ColPivHouseholderQR is not initialized."); eigen_assert(m_qr.rows() == m_qr.cols() && "You can't take the determinant of a non-square matrix!"); return m_qr.diagonal().cwiseAbs().array().log().sum(); } /** Performs the QR factorization of the given matrix \a matrix. The result of * the factorization is stored into \c *this, and a reference to \c *this * is returned. * * \sa class ColPivHouseholderQR, ColPivHouseholderQR(const MatrixType&) */ template template ColPivHouseholderQR& ColPivHouseholderQR::compute(const EigenBase& matrix) { m_qr = matrix.derived(); computeInPlace(); return *this; } template void ColPivHouseholderQR::computeInPlace() { check_template_parameters(); // the column permutation is stored as int indices, so just to be sure: eigen_assert(m_qr.cols()<=NumTraits::highest()); using std::abs; Index rows = m_qr.rows(); Index cols = m_qr.cols(); Index size = m_qr.diagonalSize(); m_hCoeffs.resize(size); m_temp.resize(cols); m_colsTranspositions.resize(m_qr.cols()); Index number_of_transpositions = 0; m_colNormsUpdated.resize(cols); m_colNormsDirect.resize(cols); for (Index k = 0; k < cols; ++k) { // colNormsDirect(k) caches the most recent directly computed norm of // column k. m_colNormsDirect.coeffRef(k) = m_qr.col(k).norm(); m_colNormsUpdated.coeffRef(k) = m_colNormsDirect.coeffRef(k); } RealScalar threshold_helper = numext::abs2(m_colNormsUpdated.maxCoeff() * NumTraits::epsilon()) / RealScalar(rows); RealScalar norm_downdate_threshold = numext::sqrt(NumTraits::epsilon()); m_nonzero_pivots = size; // the generic case is that in which all pivots are nonzero (invertible case) m_maxpivot = RealScalar(0); for(Index k = 0; k < size; ++k) { // first, we look up in our table m_colNormsUpdated which column has the biggest norm Index biggest_col_index; RealScalar biggest_col_sq_norm = numext::abs2(m_colNormsUpdated.tail(cols-k).maxCoeff(&biggest_col_index)); biggest_col_index += k; // Track the number of meaningful pivots but do not stop the decomposition to make // sure that the initial matrix is properly reproduced. See bug 941. 
if(m_nonzero_pivots==size && biggest_col_sq_norm < threshold_helper * RealScalar(rows-k)) m_nonzero_pivots = k; // apply the transposition to the columns m_colsTranspositions.coeffRef(k) = biggest_col_index; if(k != biggest_col_index) { m_qr.col(k).swap(m_qr.col(biggest_col_index)); std::swap(m_colNormsUpdated.coeffRef(k), m_colNormsUpdated.coeffRef(biggest_col_index)); std::swap(m_colNormsDirect.coeffRef(k), m_colNormsDirect.coeffRef(biggest_col_index)); ++number_of_transpositions; } // generate the householder vector, store it below the diagonal RealScalar beta; m_qr.col(k).tail(rows-k).makeHouseholderInPlace(m_hCoeffs.coeffRef(k), beta); // apply the householder transformation to the diagonal coefficient m_qr.coeffRef(k,k) = beta; // remember the maximum absolute value of diagonal coefficients if(abs(beta) > m_maxpivot) m_maxpivot = abs(beta); // apply the householder transformation m_qr.bottomRightCorner(rows-k, cols-k-1) .applyHouseholderOnTheLeft(m_qr.col(k).tail(rows-k-1), m_hCoeffs.coeffRef(k), &m_temp.coeffRef(k+1)); // update our table of norms of the columns for (Index j = k + 1; j < cols; ++j) { // The following implements the stable norm downgrade step discussed in // http://www.netlib.org/lapack/lawnspdf/lawn176.pdf // and used in LAPACK routines xGEQPF and xGEQP3. // See lines 278-297 in http://www.netlib.org/lapack/explore-html/dc/df4/sgeqpf_8f_source.html if (m_colNormsUpdated.coeffRef(j) != RealScalar(0)) { RealScalar temp = abs(m_qr.coeffRef(k, j)) / m_colNormsUpdated.coeffRef(j); temp = (RealScalar(1) + temp) * (RealScalar(1) - temp); temp = temp < RealScalar(0) ? RealScalar(0) : temp; RealScalar temp2 = temp * numext::abs2(m_colNormsUpdated.coeffRef(j) / m_colNormsDirect.coeffRef(j)); if (temp2 <= norm_downdate_threshold) { // The updated norm has become too inaccurate so re-compute the column // norm directly. m_colNormsDirect.coeffRef(j) = m_qr.col(j).tail(rows - k - 1).norm(); m_colNormsUpdated.coeffRef(j) = m_colNormsDirect.coeffRef(j); } else { m_colNormsUpdated.coeffRef(j) *= numext::sqrt(temp); } } } } m_colsPermutation.setIdentity(PermIndexType(cols)); for(PermIndexType k = 0; k < size/*m_nonzero_pivots*/; ++k) m_colsPermutation.applyTranspositionOnTheRight(k, PermIndexType(m_colsTranspositions.coeff(k))); m_det_pq = (number_of_transpositions%2) ? 
-1 : 1; m_isInitialized = true; } #ifndef EIGEN_PARSED_BY_DOXYGEN template template void ColPivHouseholderQR<_MatrixType>::_solve_impl(const RhsType &rhs, DstType &dst) const { const Index nonzero_pivots = nonzeroPivots(); if(nonzero_pivots == 0) { dst.setZero(); return; } typename RhsType::PlainObject c(rhs); c.applyOnTheLeft(householderQ().setLength(nonzero_pivots).adjoint() ); m_qr.topLeftCorner(nonzero_pivots, nonzero_pivots) .template triangularView() .solveInPlace(c.topRows(nonzero_pivots)); for(Index i = 0; i < nonzero_pivots; ++i) dst.row(m_colsPermutation.indices().coeff(i)) = c.row(i); for(Index i = nonzero_pivots; i < cols(); ++i) dst.row(m_colsPermutation.indices().coeff(i)).setZero(); } template template void ColPivHouseholderQR<_MatrixType>::_solve_impl_transposed(const RhsType &rhs, DstType &dst) const { const Index nonzero_pivots = nonzeroPivots(); if(nonzero_pivots == 0) { dst.setZero(); return; } typename RhsType::PlainObject c(m_colsPermutation.transpose()*rhs); m_qr.topLeftCorner(nonzero_pivots, nonzero_pivots) .template triangularView() .transpose().template conjugateIf() .solveInPlace(c.topRows(nonzero_pivots)); dst.topRows(nonzero_pivots) = c.topRows(nonzero_pivots); dst.bottomRows(rows()-nonzero_pivots).setZero(); dst.applyOnTheLeft(householderQ().setLength(nonzero_pivots).template conjugateIf() ); } #endif namespace internal { template struct Assignment >, internal::assign_op::Scalar>, Dense2Dense> { typedef ColPivHouseholderQR QrType; typedef Inverse SrcXprType; static void run(DstXprType &dst, const SrcXprType &src, const internal::assign_op &) { dst = src.nestedExpression().solve(MatrixType::Identity(src.rows(), src.cols())); } }; } // end namespace internal /** \returns the matrix Q as a sequence of householder transformations. * You can extract the meaningful part only by using: * \code qr.householderQ().setLength(qr.nonzeroPivots()) \endcode*/ template typename ColPivHouseholderQR::HouseholderSequenceType ColPivHouseholderQR ::householderQ() const { eigen_assert(m_isInitialized && "ColPivHouseholderQR is not initialized."); return HouseholderSequenceType(m_qr, m_hCoeffs.conjugate()); } /** \return the column-pivoting Householder QR decomposition of \c *this. * * \sa class ColPivHouseholderQR */ template const ColPivHouseholderQR::PlainObject> MatrixBase::colPivHouseholderQr() const { return ColPivHouseholderQR(eval()); } } // end namespace Eigen #endif // EIGEN_COLPIVOTINGHOUSEHOLDERQR_H piqp-octave/src/eigen/Eigen/src/QR/HouseholderQR.h0000644000175100001770000003446114107270226021577 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2010 Gael Guennebaud // Copyright (C) 2009 Benoit Jacob // Copyright (C) 2010 Vincent Lejeune // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
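/* For orientation, an illustrative, self-contained sketch of the rank-revealing
 * decomposition implemented in ColPivHouseholderQR.h above (matrix entries and
 * variable names are made up for the example):
 * \code
 * #include <Eigen/Dense>
 * #include <iostream>
 * int main()
 * {
 *   // 4x3 matrix whose third column is the sum of the first two, hence rank 2.
 *   Eigen::MatrixXd A(4, 3);
 *   A << 1, 2,  3,
 *        4, 5,  9,
 *        7, 8, 15,
 *        2, 1,  3;
 *   Eigen::ColPivHouseholderQR<Eigen::MatrixXd> qr(A);
 *   std::cout << "rank(A) = " << qr.rank() << std::endl;      // expected: 2
 *   Eigen::VectorXd b = A * Eigen::Vector3d(1.0, -1.0, 0.5);  // consistent right-hand side
 *   Eigen::VectorXd x = qr.solve(b);
 *   std::cout << "residual = " << (A * x - b).norm() << std::endl;
 * }
 * \endcode
 */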
#ifndef EIGEN_QR_H #define EIGEN_QR_H namespace Eigen { namespace internal { template struct traits > : traits<_MatrixType> { typedef MatrixXpr XprKind; typedef SolverStorage StorageKind; typedef int StorageIndex; enum { Flags = 0 }; }; } // end namespace internal /** \ingroup QR_Module * * * \class HouseholderQR * * \brief Householder QR decomposition of a matrix * * \tparam _MatrixType the type of the matrix of which we are computing the QR decomposition * * This class performs a QR decomposition of a matrix \b A into matrices \b Q and \b R * such that * \f[ * \mathbf{A} = \mathbf{Q} \, \mathbf{R} * \f] * by using Householder transformations. Here, \b Q a unitary matrix and \b R an upper triangular matrix. * The result is stored in a compact way compatible with LAPACK. * * Note that no pivoting is performed. This is \b not a rank-revealing decomposition. * If you want that feature, use FullPivHouseholderQR or ColPivHouseholderQR instead. * * This Householder QR decomposition is faster, but less numerically stable and less feature-full than * FullPivHouseholderQR or ColPivHouseholderQR. * * This class supports the \link InplaceDecomposition inplace decomposition \endlink mechanism. * * \sa MatrixBase::householderQr() */ template class HouseholderQR : public SolverBase > { public: typedef _MatrixType MatrixType; typedef SolverBase Base; friend class SolverBase; EIGEN_GENERIC_PUBLIC_INTERFACE(HouseholderQR) enum { MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; typedef Matrix MatrixQType; typedef typename internal::plain_diag_type::type HCoeffsType; typedef typename internal::plain_row_type::type RowVectorType; typedef HouseholderSequence::type> HouseholderSequenceType; /** * \brief Default Constructor. * * The default constructor is useful in cases in which the user intends to * perform decompositions via HouseholderQR::compute(const MatrixType&). */ HouseholderQR() : m_qr(), m_hCoeffs(), m_temp(), m_isInitialized(false) {} /** \brief Default Constructor with memory preallocation * * Like the default constructor but with preallocation of the internal data * according to the specified problem \a size. * \sa HouseholderQR() */ HouseholderQR(Index rows, Index cols) : m_qr(rows, cols), m_hCoeffs((std::min)(rows,cols)), m_temp(cols), m_isInitialized(false) {} /** \brief Constructs a QR factorization from a given matrix * * This constructor computes the QR factorization of the matrix \a matrix by calling * the method compute(). It is a short cut for: * * \code * HouseholderQR qr(matrix.rows(), matrix.cols()); * qr.compute(matrix); * \endcode * * \sa compute() */ template explicit HouseholderQR(const EigenBase& matrix) : m_qr(matrix.rows(), matrix.cols()), m_hCoeffs((std::min)(matrix.rows(),matrix.cols())), m_temp(matrix.cols()), m_isInitialized(false) { compute(matrix.derived()); } /** \brief Constructs a QR factorization from a given matrix * * This overloaded constructor is provided for \link InplaceDecomposition inplace decomposition \endlink when * \c MatrixType is a Eigen::Ref. * * \sa HouseholderQR(const EigenBase&) */ template explicit HouseholderQR(EigenBase& matrix) : m_qr(matrix.derived()), m_hCoeffs((std::min)(matrix.rows(),matrix.cols())), m_temp(matrix.cols()), m_isInitialized(false) { computeInPlace(); } #ifdef EIGEN_PARSED_BY_DOXYGEN /** This method finds a solution x to the equation Ax=b, where A is the matrix of which * *this is the QR decomposition, if any exists. 
* * \param b the right-hand-side of the equation to solve. * * \returns a solution. * * \note_about_checking_solutions * * \note_about_arbitrary_choice_of_solution * * Example: \include HouseholderQR_solve.cpp * Output: \verbinclude HouseholderQR_solve.out */ template inline const Solve solve(const MatrixBase& b) const; #endif /** This method returns an expression of the unitary matrix Q as a sequence of Householder transformations. * * The returned expression can directly be used to perform matrix products. It can also be assigned to a dense Matrix object. * Here is an example showing how to recover the full or thin matrix Q, as well as how to perform matrix products using operator*: * * Example: \include HouseholderQR_householderQ.cpp * Output: \verbinclude HouseholderQR_householderQ.out */ HouseholderSequenceType householderQ() const { eigen_assert(m_isInitialized && "HouseholderQR is not initialized."); return HouseholderSequenceType(m_qr, m_hCoeffs.conjugate()); } /** \returns a reference to the matrix where the Householder QR decomposition is stored * in a LAPACK-compatible way. */ const MatrixType& matrixQR() const { eigen_assert(m_isInitialized && "HouseholderQR is not initialized."); return m_qr; } template HouseholderQR& compute(const EigenBase& matrix) { m_qr = matrix.derived(); computeInPlace(); return *this; } /** \returns the absolute value of the determinant of the matrix of which * *this is the QR decomposition. It has only linear complexity * (that is, O(n) where n is the dimension of the square matrix) * as the QR decomposition has already been computed. * * \note This is only for square matrices. * * \warning a determinant can be very big or small, so for matrices * of large enough dimension, there is a risk of overflow/underflow. * One way to work around that is to use logAbsDeterminant() instead. * * \sa logAbsDeterminant(), MatrixBase::determinant() */ typename MatrixType::RealScalar absDeterminant() const; /** \returns the natural log of the absolute value of the determinant of the matrix of which * *this is the QR decomposition. It has only linear complexity * (that is, O(n) where n is the dimension of the square matrix) * as the QR decomposition has already been computed. * * \note This is only for square matrices. * * \note This method is useful to work around the risk of overflow/underflow that's inherent * to determinant computation. * * \sa absDeterminant(), MatrixBase::determinant() */ typename MatrixType::RealScalar logAbsDeterminant() const; inline Index rows() const { return m_qr.rows(); } inline Index cols() const { return m_qr.cols(); } /** \returns a const reference to the vector of Householder coefficients used to represent the factor \c Q. * * For advanced uses only. 
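 *
 * For illustration, these coefficients together with the vectors stored in
 * matrixQR() define the factor \c Q, which can be assembled (or applied) through
 * householderQ(); assuming a computed decomposition \c qr:
 * \code
 * Eigen::MatrixXd Q = qr.householderQ();   // dense Q built from matrixQR() and hCoeffs()
 * \endcode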
*/ const HCoeffsType& hCoeffs() const { return m_hCoeffs; } #ifndef EIGEN_PARSED_BY_DOXYGEN template void _solve_impl(const RhsType &rhs, DstType &dst) const; template void _solve_impl_transposed(const RhsType &rhs, DstType &dst) const; #endif protected: static void check_template_parameters() { EIGEN_STATIC_ASSERT_NON_INTEGER(Scalar); } void computeInPlace(); MatrixType m_qr; HCoeffsType m_hCoeffs; RowVectorType m_temp; bool m_isInitialized; }; template typename MatrixType::RealScalar HouseholderQR::absDeterminant() const { using std::abs; eigen_assert(m_isInitialized && "HouseholderQR is not initialized."); eigen_assert(m_qr.rows() == m_qr.cols() && "You can't take the determinant of a non-square matrix!"); return abs(m_qr.diagonal().prod()); } template typename MatrixType::RealScalar HouseholderQR::logAbsDeterminant() const { eigen_assert(m_isInitialized && "HouseholderQR is not initialized."); eigen_assert(m_qr.rows() == m_qr.cols() && "You can't take the determinant of a non-square matrix!"); return m_qr.diagonal().cwiseAbs().array().log().sum(); } namespace internal { /** \internal */ template void householder_qr_inplace_unblocked(MatrixQR& mat, HCoeffs& hCoeffs, typename MatrixQR::Scalar* tempData = 0) { typedef typename MatrixQR::Scalar Scalar; typedef typename MatrixQR::RealScalar RealScalar; Index rows = mat.rows(); Index cols = mat.cols(); Index size = (std::min)(rows,cols); eigen_assert(hCoeffs.size() == size); typedef Matrix TempType; TempType tempVector; if(tempData==0) { tempVector.resize(cols); tempData = tempVector.data(); } for(Index k = 0; k < size; ++k) { Index remainingRows = rows - k; Index remainingCols = cols - k - 1; RealScalar beta; mat.col(k).tail(remainingRows).makeHouseholderInPlace(hCoeffs.coeffRef(k), beta); mat.coeffRef(k,k) = beta; // apply H to remaining part of m_qr from the left mat.bottomRightCorner(remainingRows, remainingCols) .applyHouseholderOnTheLeft(mat.col(k).tail(remainingRows-1), hCoeffs.coeffRef(k), tempData+k+1); } } /** \internal */ template struct householder_qr_inplace_blocked { // This is specialized for LAPACK-supported Scalar types in HouseholderQR_LAPACKE.h static void run(MatrixQR& mat, HCoeffs& hCoeffs, Index maxBlockSize=32, typename MatrixQR::Scalar* tempData = 0) { typedef typename MatrixQR::Scalar Scalar; typedef Block BlockType; Index rows = mat.rows(); Index cols = mat.cols(); Index size = (std::min)(rows, cols); typedef Matrix TempType; TempType tempVector; if(tempData==0) { tempVector.resize(cols); tempData = tempVector.data(); } Index blockSize = (std::min)(maxBlockSize,size); Index k = 0; for (k = 0; k < size; k += blockSize) { Index bs = (std::min)(size-k,blockSize); // actual size of the block Index tcols = cols - k - bs; // trailing columns Index brows = rows-k; // rows of the block // partition the matrix: // A00 | A01 | A02 // mat = A10 | A11 | A12 // A20 | A21 | A22 // and performs the qr dec of [A11^T A12^T]^T // and update [A21^T A22^T]^T using level 3 operations. 
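// The compact storage filled in above, together with the coefficients in hCoeffs, encodes
// Q as a product of Householder reflectors H_k = I - hCoeffs(k) * v_k * v_k^*, where
// v_k = [0, ..., 0, 1, mat(k+1,k), ..., mat(rows-1,k)]^T. A minimal reconstruction sketch
// for real matrices (assuming Eigen/Dense; the function name is illustrative):
#include <Eigen/Dense>
inline Eigen::MatrixXd q_from_packed_qr(const Eigen::MatrixXd& packedQR,
                                        const Eigen::VectorXd& hCoeffs)
{
  // HouseholderSequence evaluates the product H_0 * H_1 * ... * H_{size-1}
  // without forming the individual reflectors explicitly.
  return Eigen::MatrixXd(Eigen::householderSequence(packedQR, hCoeffs));
}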
// Finally, the algorithm continue on A22 BlockType A11_21 = mat.block(k,k,brows,bs); Block hCoeffsSegment = hCoeffs.segment(k,bs); householder_qr_inplace_unblocked(A11_21, hCoeffsSegment, tempData); if(tcols) { BlockType A21_22 = mat.block(k,k+bs,brows,tcols); apply_block_householder_on_the_left(A21_22,A11_21,hCoeffsSegment, false); // false == backward } } } }; } // end namespace internal #ifndef EIGEN_PARSED_BY_DOXYGEN template template void HouseholderQR<_MatrixType>::_solve_impl(const RhsType &rhs, DstType &dst) const { const Index rank = (std::min)(rows(), cols()); typename RhsType::PlainObject c(rhs); c.applyOnTheLeft(householderQ().setLength(rank).adjoint() ); m_qr.topLeftCorner(rank, rank) .template triangularView() .solveInPlace(c.topRows(rank)); dst.topRows(rank) = c.topRows(rank); dst.bottomRows(cols()-rank).setZero(); } template template void HouseholderQR<_MatrixType>::_solve_impl_transposed(const RhsType &rhs, DstType &dst) const { const Index rank = (std::min)(rows(), cols()); typename RhsType::PlainObject c(rhs); m_qr.topLeftCorner(rank, rank) .template triangularView() .transpose().template conjugateIf() .solveInPlace(c.topRows(rank)); dst.topRows(rank) = c.topRows(rank); dst.bottomRows(rows()-rank).setZero(); dst.applyOnTheLeft(householderQ().setLength(rank).template conjugateIf() ); } #endif /** Performs the QR factorization of the given matrix \a matrix. The result of * the factorization is stored into \c *this, and a reference to \c *this * is returned. * * \sa class HouseholderQR, HouseholderQR(const MatrixType&) */ template void HouseholderQR::computeInPlace() { check_template_parameters(); Index rows = m_qr.rows(); Index cols = m_qr.cols(); Index size = (std::min)(rows,cols); m_hCoeffs.resize(size); m_temp.resize(cols); internal::householder_qr_inplace_blocked::run(m_qr, m_hCoeffs, 48, m_temp.data()); m_isInitialized = true; } /** \return the Householder QR decomposition of \c *this. * * \sa class HouseholderQR */ template const HouseholderQR::PlainObject> MatrixBase::householderQr() const { return HouseholderQR(eval()); } } // end namespace Eigen #endif // EIGEN_QR_H piqp-octave/src/eigen/Eigen/src/QR/HouseholderQR_LAPACKE.h0000644000175100001770000000566114107270226022717 0ustar runnerdocker/* Copyright (c) 2011, Intel Corporation. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Intel Corporation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ******************************************************************************** * Content : Eigen bindings to LAPACKe * Householder QR decomposition of a matrix w/o pivoting based on * LAPACKE_?geqrf function. ******************************************************************************** */ #ifndef EIGEN_QR_LAPACKE_H #define EIGEN_QR_LAPACKE_H namespace Eigen { namespace internal { /** \internal Specialization for the data types supported by LAPACKe */ #define EIGEN_LAPACKE_QR_NOPIV(EIGTYPE, LAPACKE_TYPE, LAPACKE_PREFIX) \ template \ struct householder_qr_inplace_blocked \ { \ static void run(MatrixQR& mat, HCoeffs& hCoeffs, Index = 32, \ typename MatrixQR::Scalar* = 0) \ { \ lapack_int m = (lapack_int) mat.rows(); \ lapack_int n = (lapack_int) mat.cols(); \ lapack_int lda = (lapack_int) mat.outerStride(); \ lapack_int matrix_order = (MatrixQR::IsRowMajor) ? LAPACK_ROW_MAJOR : LAPACK_COL_MAJOR; \ LAPACKE_##LAPACKE_PREFIX##geqrf( matrix_order, m, n, (LAPACKE_TYPE*)mat.data(), lda, (LAPACKE_TYPE*)hCoeffs.data()); \ hCoeffs.adjointInPlace(); \ } \ }; EIGEN_LAPACKE_QR_NOPIV(double, double, d) EIGEN_LAPACKE_QR_NOPIV(float, float, s) EIGEN_LAPACKE_QR_NOPIV(dcomplex, lapack_complex_double, z) EIGEN_LAPACKE_QR_NOPIV(scomplex, lapack_complex_float, c) } // end namespace internal } // end namespace Eigen #endif // EIGEN_QR_LAPACKE_H piqp-octave/src/eigen/Eigen/src/QR/CompleteOrthogonalDecomposition.h0000644000175100001770000005560514107270226025420 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2016 Rasmus Munk Larsen // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_COMPLETEORTHOGONALDECOMPOSITION_H #define EIGEN_COMPLETEORTHOGONALDECOMPOSITION_H namespace Eigen { namespace internal { template struct traits > : traits<_MatrixType> { typedef MatrixXpr XprKind; typedef SolverStorage StorageKind; typedef int StorageIndex; enum { Flags = 0 }; }; } // end namespace internal /** \ingroup QR_Module * * \class CompleteOrthogonalDecomposition * * \brief Complete orthogonal decomposition (COD) of a matrix. * * \param MatrixType the type of the matrix of which we are computing the COD. * * This class performs a rank-revealing complete orthogonal decomposition of a * matrix \b A into matrices \b P, \b Q, \b T, and \b Z such that * \f[ * \mathbf{A} \, \mathbf{P} = \mathbf{Q} \, * \begin{bmatrix} \mathbf{T} & \mathbf{0} \\ * \mathbf{0} & \mathbf{0} \end{bmatrix} \, \mathbf{Z} * \f] * by using Householder transformations. Here, \b P is a permutation matrix, * \b Q and \b Z are unitary matrices and \b T an upper triangular matrix of * size rank-by-rank. \b A may be rank deficient. * * This class supports the \link InplaceDecomposition inplace decomposition \endlink mechanism. 
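 * A minimal usage sketch (illustrative sizes and names; assumes \c Eigen/Dense is included):
 * \code
 * Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 6);              // possibly rank deficient
 * Eigen::VectorXd b = Eigen::VectorXd::Random(4);
 * Eigen::CompleteOrthogonalDecomposition<Eigen::MatrixXd> cod(A);
 * Eigen::VectorXd x = cod.solve(b);   // minimum-norm solution of min |A x - b|
 * Eigen::Index r = cod.rank();        // revealed numerical rank
 * \endcode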
* * \sa MatrixBase::completeOrthogonalDecomposition() */ template class CompleteOrthogonalDecomposition : public SolverBase > { public: typedef _MatrixType MatrixType; typedef SolverBase Base; template friend struct internal::solve_assertion; EIGEN_GENERIC_PUBLIC_INTERFACE(CompleteOrthogonalDecomposition) enum { MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; typedef typename internal::plain_diag_type::type HCoeffsType; typedef PermutationMatrix PermutationType; typedef typename internal::plain_row_type::type IntRowVectorType; typedef typename internal::plain_row_type::type RowVectorType; typedef typename internal::plain_row_type::type RealRowVectorType; typedef HouseholderSequence< MatrixType, typename internal::remove_all< typename HCoeffsType::ConjugateReturnType>::type> HouseholderSequenceType; typedef typename MatrixType::PlainObject PlainObject; private: typedef typename PermutationType::Index PermIndexType; public: /** * \brief Default Constructor. * * The default constructor is useful in cases in which the user intends to * perform decompositions via * \c CompleteOrthogonalDecomposition::compute(const* MatrixType&). */ CompleteOrthogonalDecomposition() : m_cpqr(), m_zCoeffs(), m_temp() {} /** \brief Default Constructor with memory preallocation * * Like the default constructor but with preallocation of the internal data * according to the specified problem \a size. * \sa CompleteOrthogonalDecomposition() */ CompleteOrthogonalDecomposition(Index rows, Index cols) : m_cpqr(rows, cols), m_zCoeffs((std::min)(rows, cols)), m_temp(cols) {} /** \brief Constructs a complete orthogonal decomposition from a given * matrix. * * This constructor computes the complete orthogonal decomposition of the * matrix \a matrix by calling the method compute(). The default * threshold for rank determination will be used. It is a short cut for: * * \code * CompleteOrthogonalDecomposition cod(matrix.rows(), * matrix.cols()); * cod.setThreshold(Default); * cod.compute(matrix); * \endcode * * \sa compute() */ template explicit CompleteOrthogonalDecomposition(const EigenBase& matrix) : m_cpqr(matrix.rows(), matrix.cols()), m_zCoeffs((std::min)(matrix.rows(), matrix.cols())), m_temp(matrix.cols()) { compute(matrix.derived()); } /** \brief Constructs a complete orthogonal decomposition from a given matrix * * This overloaded constructor is provided for \link InplaceDecomposition inplace decomposition \endlink when \c MatrixType is a Eigen::Ref. * * \sa CompleteOrthogonalDecomposition(const EigenBase&) */ template explicit CompleteOrthogonalDecomposition(EigenBase& matrix) : m_cpqr(matrix.derived()), m_zCoeffs((std::min)(matrix.rows(), matrix.cols())), m_temp(matrix.cols()) { computeInPlace(); } #ifdef EIGEN_PARSED_BY_DOXYGEN /** This method computes the minimum-norm solution X to a least squares * problem \f[\mathrm{minimize} \|A X - B\|, \f] where \b A is the matrix of * which \c *this is the complete orthogonal decomposition. * * \param b the right-hand sides of the problem to solve. * * \returns a solution. * */ template inline const Solve solve( const MatrixBase& b) const; #endif HouseholderSequenceType householderQ(void) const; HouseholderSequenceType matrixQ(void) const { return m_cpqr.householderQ(); } /** \returns the matrix \b Z. 
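 * The factors returned by matrixQ(), matrixT(), matrixZ() and colsPermutation() can be used
 * to check the factorization numerically, e.g. (illustrative sketch, assuming \c Eigen/Dense):
 * \code
 * Eigen::MatrixXd A = Eigen::MatrixXd::Random(5, 4);
 * Eigen::CompleteOrthogonalDecomposition<Eigen::MatrixXd> cod(A);
 * Eigen::Index r = cod.rank();
 * Eigen::MatrixXd Q = cod.matrixQ();                  // 5x5 unitary
 * Eigen::MatrixXd Z = cod.matrixZ();                  // 4x4 unitary
 * Eigen::MatrixXd T = Eigen::MatrixXd::Zero(5, 4);
 * T.topLeftCorner(r, r) = cod.matrixT().topLeftCorner(r, r)
 *                            .triangularView<Eigen::Upper>();
 * double residual = (A * cod.colsPermutation() - Q * T * Z).norm();  // ~ machine precision
 * \endcode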
*/ MatrixType matrixZ() const { MatrixType Z = MatrixType::Identity(m_cpqr.cols(), m_cpqr.cols()); applyZOnTheLeftInPlace(Z); return Z; } /** \returns a reference to the matrix where the complete orthogonal * decomposition is stored */ const MatrixType& matrixQTZ() const { return m_cpqr.matrixQR(); } /** \returns a reference to the matrix where the complete orthogonal * decomposition is stored. * \warning The strict lower part and \code cols() - rank() \endcode right * columns of this matrix contains internal values. * Only the upper triangular part should be referenced. To get it, use * \code matrixT().template triangularView() \endcode * For rank-deficient matrices, use * \code * matrixR().topLeftCorner(rank(), rank()).template triangularView() * \endcode */ const MatrixType& matrixT() const { return m_cpqr.matrixQR(); } template CompleteOrthogonalDecomposition& compute(const EigenBase& matrix) { // Compute the column pivoted QR factorization A P = Q R. m_cpqr.compute(matrix); computeInPlace(); return *this; } /** \returns a const reference to the column permutation matrix */ const PermutationType& colsPermutation() const { return m_cpqr.colsPermutation(); } /** \returns the absolute value of the determinant of the matrix of which * *this is the complete orthogonal decomposition. It has only linear * complexity (that is, O(n) where n is the dimension of the square matrix) * as the complete orthogonal decomposition has already been computed. * * \note This is only for square matrices. * * \warning a determinant can be very big or small, so for matrices * of large enough dimension, there is a risk of overflow/underflow. * One way to work around that is to use logAbsDeterminant() instead. * * \sa logAbsDeterminant(), MatrixBase::determinant() */ typename MatrixType::RealScalar absDeterminant() const; /** \returns the natural log of the absolute value of the determinant of the * matrix of which *this is the complete orthogonal decomposition. It has * only linear complexity (that is, O(n) where n is the dimension of the * square matrix) as the complete orthogonal decomposition has already been * computed. * * \note This is only for square matrices. * * \note This method is useful to work around the risk of overflow/underflow * that's inherent to determinant computation. * * \sa absDeterminant(), MatrixBase::determinant() */ typename MatrixType::RealScalar logAbsDeterminant() const; /** \returns the rank of the matrix of which *this is the complete orthogonal * decomposition. * * \note This method has to determine which pivots should be considered * nonzero. For that, it uses the threshold value that you can control by * calling setThreshold(const RealScalar&). */ inline Index rank() const { return m_cpqr.rank(); } /** \returns the dimension of the kernel of the matrix of which *this is the * complete orthogonal decomposition. * * \note This method has to determine which pivots should be considered * nonzero. For that, it uses the threshold value that you can control by * calling setThreshold(const RealScalar&). */ inline Index dimensionOfKernel() const { return m_cpqr.dimensionOfKernel(); } /** \returns true if the matrix of which *this is the decomposition represents * an injective linear map, i.e. has trivial kernel; false otherwise. * * \note This method has to determine which pivots should be considered * nonzero. For that, it uses the threshold value that you can control by * calling setThreshold(const RealScalar&). 
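 * For instance (illustrative sketch, assuming \c Eigen/Dense):
 * \code
 * Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 6);
 * Eigen::CompleteOrthogonalDecomposition<Eigen::MatrixXd> cod(A);
 * bool inj = cod.isInjective();                 // false here: rank <= 4 < 6 columns
 * Eigen::MatrixXd pinvA = cod.pseudoInverse();  // 6x4 pseudo-inverse
 * Eigen::VectorXd b = Eigen::VectorXd::Random(4);
 * Eigen::VectorXd x = cod.solve(b);             // prefer this over pinvA * b
 * \endcode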
*/ inline bool isInjective() const { return m_cpqr.isInjective(); } /** \returns true if the matrix of which *this is the decomposition represents * a surjective linear map; false otherwise. * * \note This method has to determine which pivots should be considered * nonzero. For that, it uses the threshold value that you can control by * calling setThreshold(const RealScalar&). */ inline bool isSurjective() const { return m_cpqr.isSurjective(); } /** \returns true if the matrix of which *this is the complete orthogonal * decomposition is invertible. * * \note This method has to determine which pivots should be considered * nonzero. For that, it uses the threshold value that you can control by * calling setThreshold(const RealScalar&). */ inline bool isInvertible() const { return m_cpqr.isInvertible(); } /** \returns the pseudo-inverse of the matrix of which *this is the complete * orthogonal decomposition. * \warning: Do not compute \c this->pseudoInverse()*rhs to solve a linear systems. * It is more efficient and numerically stable to call \c this->solve(rhs). */ inline const Inverse pseudoInverse() const { eigen_assert(m_cpqr.m_isInitialized && "CompleteOrthogonalDecomposition is not initialized."); return Inverse(*this); } inline Index rows() const { return m_cpqr.rows(); } inline Index cols() const { return m_cpqr.cols(); } /** \returns a const reference to the vector of Householder coefficients used * to represent the factor \c Q. * * For advanced uses only. */ inline const HCoeffsType& hCoeffs() const { return m_cpqr.hCoeffs(); } /** \returns a const reference to the vector of Householder coefficients * used to represent the factor \c Z. * * For advanced uses only. */ const HCoeffsType& zCoeffs() const { return m_zCoeffs; } /** Allows to prescribe a threshold to be used by certain methods, such as * rank(), who need to determine when pivots are to be considered nonzero. * Most be called before calling compute(). * * When it needs to get the threshold value, Eigen calls threshold(). By * default, this uses a formula to automatically determine a reasonable * threshold. Once you have called the present method * setThreshold(const RealScalar&), your value is used instead. * * \param threshold The new value to use as the threshold. * * A pivot will be considered nonzero if its absolute value is strictly * greater than * \f$ \vert pivot \vert \leqslant threshold \times \vert maxpivot \vert \f$ * where maxpivot is the biggest pivot. * * If you want to come back to the default behavior, call * setThreshold(Default_t) */ CompleteOrthogonalDecomposition& setThreshold(const RealScalar& threshold) { m_cpqr.setThreshold(threshold); return *this; } /** Allows to come back to the default behavior, letting Eigen use its default * formula for determining the threshold. * * You should pass the special object Eigen::Default as parameter here. * \code qr.setThreshold(Eigen::Default); \endcode * * See the documentation of setThreshold(const RealScalar&). */ CompleteOrthogonalDecomposition& setThreshold(Default_t) { m_cpqr.setThreshold(Default); return *this; } /** Returns the threshold that will be used by certain methods such as rank(). * * See the documentation of setThreshold(const RealScalar&). */ RealScalar threshold() const { return m_cpqr.threshold(); } /** \returns the number of nonzero pivots in the complete orthogonal * decomposition. Here nonzero is meant in the exact sense, not in a * fuzzy sense. 
So that notion isn't really intrinsically interesting, * but it is still useful when implementing algorithms. * * \sa rank() */ inline Index nonzeroPivots() const { return m_cpqr.nonzeroPivots(); } /** \returns the absolute value of the biggest pivot, i.e. the biggest * diagonal coefficient of R. */ inline RealScalar maxPivot() const { return m_cpqr.maxPivot(); } /** \brief Reports whether the complete orthogonal decomposition was * successful. * * \note This function always returns \c Success. It is provided for * compatibility * with other factorization routines. * \returns \c Success */ ComputationInfo info() const { eigen_assert(m_cpqr.m_isInitialized && "Decomposition is not initialized."); return Success; } #ifndef EIGEN_PARSED_BY_DOXYGEN template void _solve_impl(const RhsType& rhs, DstType& dst) const; template void _solve_impl_transposed(const RhsType &rhs, DstType &dst) const; #endif protected: static void check_template_parameters() { EIGEN_STATIC_ASSERT_NON_INTEGER(Scalar); } template void _check_solve_assertion(const Rhs& b) const { EIGEN_ONLY_USED_FOR_DEBUG(b); eigen_assert(m_cpqr.m_isInitialized && "CompleteOrthogonalDecomposition is not initialized."); eigen_assert((Transpose_?derived().cols():derived().rows())==b.rows() && "CompleteOrthogonalDecomposition::solve(): invalid number of rows of the right hand side matrix b"); } void computeInPlace(); /** Overwrites \b rhs with \f$ \mathbf{Z} * \mathbf{rhs} \f$ or * \f$ \mathbf{\overline Z} * \mathbf{rhs} \f$ if \c Conjugate * is set to \c true. */ template void applyZOnTheLeftInPlace(Rhs& rhs) const; /** Overwrites \b rhs with \f$ \mathbf{Z}^* * \mathbf{rhs} \f$. */ template void applyZAdjointOnTheLeftInPlace(Rhs& rhs) const; ColPivHouseholderQR m_cpqr; HCoeffsType m_zCoeffs; RowVectorType m_temp; }; template typename MatrixType::RealScalar CompleteOrthogonalDecomposition::absDeterminant() const { return m_cpqr.absDeterminant(); } template typename MatrixType::RealScalar CompleteOrthogonalDecomposition::logAbsDeterminant() const { return m_cpqr.logAbsDeterminant(); } /** Performs the complete orthogonal decomposition of the given matrix \a * matrix. The result of the factorization is stored into \c *this, and a * reference to \c *this is returned. * * \sa class CompleteOrthogonalDecomposition, * CompleteOrthogonalDecomposition(const MatrixType&) */ template void CompleteOrthogonalDecomposition::computeInPlace() { check_template_parameters(); // the column permutation is stored as int indices, so just to be sure: eigen_assert(m_cpqr.cols() <= NumTraits::highest()); const Index rank = m_cpqr.rank(); const Index cols = m_cpqr.cols(); const Index rows = m_cpqr.rows(); m_zCoeffs.resize((std::min)(rows, cols)); m_temp.resize(cols); if (rank < cols) { // We have reduced the (permuted) matrix to the form // [R11 R12] // [ 0 R22] // where R11 is r-by-r (r = rank) upper triangular, R12 is // r-by-(n-r), and R22 is empty or the norm of R22 is negligible. // We now compute the complete orthogonal decomposition by applying // Householder transformations from the right to the upper trapezoidal // matrix X = [R11 R12] to zero out R12 and obtain the factorization // [R11 R12] = [T11 0] * Z, where T11 is r-by-r upper triangular and // Z = Z(0) * Z(1) ... Z(r-1) is an n-by-n orthogonal matrix. // We store the data representing Z in R12 and m_zCoeffs. 
for (Index k = rank - 1; k >= 0; --k) { if (k != rank - 1) { // Given the API for Householder reflectors, it is more convenient if // we swap the leading parts of columns k and r-1 (zero-based) to form // the matrix X_k = [X(0:k, k), X(0:k, r:n)] m_cpqr.m_qr.col(k).head(k + 1).swap( m_cpqr.m_qr.col(rank - 1).head(k + 1)); } // Construct Householder reflector Z(k) to zero out the last row of X_k, // i.e. choose Z(k) such that // [X(k, k), X(k, r:n)] * Z(k) = [beta, 0, .., 0]. RealScalar beta; m_cpqr.m_qr.row(k) .tail(cols - rank + 1) .makeHouseholderInPlace(m_zCoeffs(k), beta); m_cpqr.m_qr(k, rank - 1) = beta; if (k > 0) { // Apply Z(k) to the first k rows of X_k m_cpqr.m_qr.topRightCorner(k, cols - rank + 1) .applyHouseholderOnTheRight( m_cpqr.m_qr.row(k).tail(cols - rank).adjoint(), m_zCoeffs(k), &m_temp(0)); } if (k != rank - 1) { // Swap X(0:k,k) back to its proper location. m_cpqr.m_qr.col(k).head(k + 1).swap( m_cpqr.m_qr.col(rank - 1).head(k + 1)); } } } } template template void CompleteOrthogonalDecomposition::applyZOnTheLeftInPlace( Rhs& rhs) const { const Index cols = this->cols(); const Index nrhs = rhs.cols(); const Index rank = this->rank(); Matrix temp((std::max)(cols, nrhs)); for (Index k = rank-1; k >= 0; --k) { if (k != rank - 1) { rhs.row(k).swap(rhs.row(rank - 1)); } rhs.middleRows(rank - 1, cols - rank + 1) .applyHouseholderOnTheLeft( matrixQTZ().row(k).tail(cols - rank).transpose().template conjugateIf(), zCoeffs().template conjugateIf()(k), &temp(0)); if (k != rank - 1) { rhs.row(k).swap(rhs.row(rank - 1)); } } } template template void CompleteOrthogonalDecomposition::applyZAdjointOnTheLeftInPlace( Rhs& rhs) const { const Index cols = this->cols(); const Index nrhs = rhs.cols(); const Index rank = this->rank(); Matrix temp((std::max)(cols, nrhs)); for (Index k = 0; k < rank; ++k) { if (k != rank - 1) { rhs.row(k).swap(rhs.row(rank - 1)); } rhs.middleRows(rank - 1, cols - rank + 1) .applyHouseholderOnTheLeft( matrixQTZ().row(k).tail(cols - rank).adjoint(), zCoeffs()(k), &temp(0)); if (k != rank - 1) { rhs.row(k).swap(rhs.row(rank - 1)); } } } #ifndef EIGEN_PARSED_BY_DOXYGEN template template void CompleteOrthogonalDecomposition<_MatrixType>::_solve_impl( const RhsType& rhs, DstType& dst) const { const Index rank = this->rank(); if (rank == 0) { dst.setZero(); return; } // Compute c = Q^* * rhs typename RhsType::PlainObject c(rhs); c.applyOnTheLeft(matrixQ().setLength(rank).adjoint()); // Solve T z = c(1:rank, :) dst.topRows(rank) = matrixT() .topLeftCorner(rank, rank) .template triangularView() .solve(c.topRows(rank)); const Index cols = this->cols(); if (rank < cols) { // Compute y = Z^* * [ z ] // [ 0 ] dst.bottomRows(cols - rank).setZero(); applyZAdjointOnTheLeftInPlace(dst); } // Undo permutation to get x = P^{-1} * y. 
dst = colsPermutation() * dst; } template template void CompleteOrthogonalDecomposition<_MatrixType>::_solve_impl_transposed(const RhsType &rhs, DstType &dst) const { const Index rank = this->rank(); if (rank == 0) { dst.setZero(); return; } typename RhsType::PlainObject c(colsPermutation().transpose()*rhs); if (rank < cols()) { applyZOnTheLeftInPlace(c); } matrixT().topLeftCorner(rank, rank) .template triangularView() .transpose().template conjugateIf() .solveInPlace(c.topRows(rank)); dst.topRows(rank) = c.topRows(rank); dst.bottomRows(rows()-rank).setZero(); dst.applyOnTheLeft(householderQ().setLength(rank).template conjugateIf() ); } #endif namespace internal { template struct traits > > : traits::PlainObject> { enum { Flags = 0 }; }; template struct Assignment >, internal::assign_op::Scalar>, Dense2Dense> { typedef CompleteOrthogonalDecomposition CodType; typedef Inverse SrcXprType; static void run(DstXprType &dst, const SrcXprType &src, const internal::assign_op &) { typedef Matrix IdentityMatrixType; dst = src.nestedExpression().solve(IdentityMatrixType::Identity(src.cols(), src.cols())); } }; } // end namespace internal /** \returns the matrix Q as a sequence of householder transformations */ template typename CompleteOrthogonalDecomposition::HouseholderSequenceType CompleteOrthogonalDecomposition::householderQ() const { return m_cpqr.householderQ(); } /** \return the complete orthogonal decomposition of \c *this. * * \sa class CompleteOrthogonalDecomposition */ template const CompleteOrthogonalDecomposition::PlainObject> MatrixBase::completeOrthogonalDecomposition() const { return CompleteOrthogonalDecomposition(eval()); } } // end namespace Eigen #endif // EIGEN_COMPLETEORTHOGONALDECOMPOSITION_H piqp-octave/src/eigen/Eigen/src/QR/FullPivHouseholderQR.h0000644000175100001770000006422014107270226023075 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2009 Gael Guennebaud // Copyright (C) 2009 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_FULLPIVOTINGHOUSEHOLDERQR_H #define EIGEN_FULLPIVOTINGHOUSEHOLDERQR_H namespace Eigen { namespace internal { template struct traits > : traits<_MatrixType> { typedef MatrixXpr XprKind; typedef SolverStorage StorageKind; typedef int StorageIndex; enum { Flags = 0 }; }; template struct FullPivHouseholderQRMatrixQReturnType; template struct traits > { typedef typename MatrixType::PlainObject ReturnType; }; } // end namespace internal /** \ingroup QR_Module * * \class FullPivHouseholderQR * * \brief Householder rank-revealing QR decomposition of a matrix with full pivoting * * \tparam _MatrixType the type of the matrix of which we are computing the QR decomposition * * This class performs a rank-revealing QR decomposition of a matrix \b A into matrices \b P, \b P', \b Q and \b R * such that * \f[ * \mathbf{P} \, \mathbf{A} \, \mathbf{P}' = \mathbf{Q} \, \mathbf{R} * \f] * by using Householder transformations. Here, \b P and \b P' are permutation matrices, \b Q a unitary matrix * and \b R an upper triangular matrix. * * This decomposition performs a very prudent full pivoting in order to be rank-revealing and achieve optimal * numerical stability. The trade-off is that it is slower than HouseholderQR and ColPivHouseholderQR. 
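 * A minimal usage sketch (illustrative sizes and names; assumes \c Eigen/Dense is included):
 * \code
 * Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 4);
 * Eigen::VectorXd b = Eigen::VectorXd::Random(4);
 * Eigen::FullPivHouseholderQR<Eigen::MatrixXd> qr(A);
 * Eigen::VectorXd x = qr.solve(b);           // solution of A x = b
 * Eigen::Index r = qr.rank();                // threshold-based rank estimate
 * Eigen::MatrixXd Q = qr.matrixQ();          // unitary factor Q
 * Eigen::MatrixXd P2 = qr.colsPermutation(); // column permutation P'
 * \endcode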
* * This class supports the \link InplaceDecomposition inplace decomposition \endlink mechanism. * * \sa MatrixBase::fullPivHouseholderQr() */ template class FullPivHouseholderQR : public SolverBase > { public: typedef _MatrixType MatrixType; typedef SolverBase Base; friend class SolverBase; EIGEN_GENERIC_PUBLIC_INTERFACE(FullPivHouseholderQR) enum { MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; typedef internal::FullPivHouseholderQRMatrixQReturnType MatrixQReturnType; typedef typename internal::plain_diag_type::type HCoeffsType; typedef Matrix IntDiagSizeVectorType; typedef PermutationMatrix PermutationType; typedef typename internal::plain_row_type::type RowVectorType; typedef typename internal::plain_col_type::type ColVectorType; typedef typename MatrixType::PlainObject PlainObject; /** \brief Default Constructor. * * The default constructor is useful in cases in which the user intends to * perform decompositions via FullPivHouseholderQR::compute(const MatrixType&). */ FullPivHouseholderQR() : m_qr(), m_hCoeffs(), m_rows_transpositions(), m_cols_transpositions(), m_cols_permutation(), m_temp(), m_isInitialized(false), m_usePrescribedThreshold(false) {} /** \brief Default Constructor with memory preallocation * * Like the default constructor but with preallocation of the internal data * according to the specified problem \a size. * \sa FullPivHouseholderQR() */ FullPivHouseholderQR(Index rows, Index cols) : m_qr(rows, cols), m_hCoeffs((std::min)(rows,cols)), m_rows_transpositions((std::min)(rows,cols)), m_cols_transpositions((std::min)(rows,cols)), m_cols_permutation(cols), m_temp(cols), m_isInitialized(false), m_usePrescribedThreshold(false) {} /** \brief Constructs a QR factorization from a given matrix * * This constructor computes the QR factorization of the matrix \a matrix by calling * the method compute(). It is a short cut for: * * \code * FullPivHouseholderQR qr(matrix.rows(), matrix.cols()); * qr.compute(matrix); * \endcode * * \sa compute() */ template explicit FullPivHouseholderQR(const EigenBase& matrix) : m_qr(matrix.rows(), matrix.cols()), m_hCoeffs((std::min)(matrix.rows(), matrix.cols())), m_rows_transpositions((std::min)(matrix.rows(), matrix.cols())), m_cols_transpositions((std::min)(matrix.rows(), matrix.cols())), m_cols_permutation(matrix.cols()), m_temp(matrix.cols()), m_isInitialized(false), m_usePrescribedThreshold(false) { compute(matrix.derived()); } /** \brief Constructs a QR factorization from a given matrix * * This overloaded constructor is provided for \link InplaceDecomposition inplace decomposition \endlink when \c MatrixType is a Eigen::Ref. * * \sa FullPivHouseholderQR(const EigenBase&) */ template explicit FullPivHouseholderQR(EigenBase& matrix) : m_qr(matrix.derived()), m_hCoeffs((std::min)(matrix.rows(), matrix.cols())), m_rows_transpositions((std::min)(matrix.rows(), matrix.cols())), m_cols_transpositions((std::min)(matrix.rows(), matrix.cols())), m_cols_permutation(matrix.cols()), m_temp(matrix.cols()), m_isInitialized(false), m_usePrescribedThreshold(false) { computeInPlace(); } #ifdef EIGEN_PARSED_BY_DOXYGEN /** This method finds a solution x to the equation Ax=b, where A is the matrix of which * \c *this is the QR decomposition. * * \param b the right-hand-side of the equation to solve. * * \returns the exact or least-square solution if the rank is greater or equal to the number of columns of A, * and an arbitrary solution otherwise. 
* * \note_about_checking_solutions * * \note_about_arbitrary_choice_of_solution * * Example: \include FullPivHouseholderQR_solve.cpp * Output: \verbinclude FullPivHouseholderQR_solve.out */ template inline const Solve solve(const MatrixBase& b) const; #endif /** \returns Expression object representing the matrix Q */ MatrixQReturnType matrixQ(void) const; /** \returns a reference to the matrix where the Householder QR decomposition is stored */ const MatrixType& matrixQR() const { eigen_assert(m_isInitialized && "FullPivHouseholderQR is not initialized."); return m_qr; } template FullPivHouseholderQR& compute(const EigenBase& matrix); /** \returns a const reference to the column permutation matrix */ const PermutationType& colsPermutation() const { eigen_assert(m_isInitialized && "FullPivHouseholderQR is not initialized."); return m_cols_permutation; } /** \returns a const reference to the vector of indices representing the rows transpositions */ const IntDiagSizeVectorType& rowsTranspositions() const { eigen_assert(m_isInitialized && "FullPivHouseholderQR is not initialized."); return m_rows_transpositions; } /** \returns the absolute value of the determinant of the matrix of which * *this is the QR decomposition. It has only linear complexity * (that is, O(n) where n is the dimension of the square matrix) * as the QR decomposition has already been computed. * * \note This is only for square matrices. * * \warning a determinant can be very big or small, so for matrices * of large enough dimension, there is a risk of overflow/underflow. * One way to work around that is to use logAbsDeterminant() instead. * * \sa logAbsDeterminant(), MatrixBase::determinant() */ typename MatrixType::RealScalar absDeterminant() const; /** \returns the natural log of the absolute value of the determinant of the matrix of which * *this is the QR decomposition. It has only linear complexity * (that is, O(n) where n is the dimension of the square matrix) * as the QR decomposition has already been computed. * * \note This is only for square matrices. * * \note This method is useful to work around the risk of overflow/underflow that's inherent * to determinant computation. * * \sa absDeterminant(), MatrixBase::determinant() */ typename MatrixType::RealScalar logAbsDeterminant() const; /** \returns the rank of the matrix of which *this is the QR decomposition. * * \note This method has to determine which pivots should be considered nonzero. * For that, it uses the threshold value that you can control by calling * setThreshold(const RealScalar&). */ inline Index rank() const { using std::abs; eigen_assert(m_isInitialized && "FullPivHouseholderQR is not initialized."); RealScalar premultiplied_threshold = abs(m_maxpivot) * threshold(); Index result = 0; for(Index i = 0; i < m_nonzero_pivots; ++i) result += (abs(m_qr.coeff(i,i)) > premultiplied_threshold); return result; } /** \returns the dimension of the kernel of the matrix of which *this is the QR decomposition. * * \note This method has to determine which pivots should be considered nonzero. * For that, it uses the threshold value that you can control by calling * setThreshold(const RealScalar&). */ inline Index dimensionOfKernel() const { eigen_assert(m_isInitialized && "FullPivHouseholderQR is not initialized."); return cols() - rank(); } /** \returns true if the matrix of which *this is the QR decomposition represents an injective * linear map, i.e. has trivial kernel; false otherwise. 
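 * For example (illustrative sketch):
 * \code
 * Eigen::MatrixXd A(3, 3);
 * A << 1, 2, 3,
 *      2, 4, 6,   // second row is twice the first, so A has rank 2
 *      0, 1, 1;
 * Eigen::FullPivHouseholderQR<Eigen::MatrixXd> qr(A);
 * // Expected: qr.rank() == 2, qr.dimensionOfKernel() == 1, and
 * // qr.isInjective(), qr.isSurjective(), qr.isInvertible() all return false.
 * \endcode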
* * \note This method has to determine which pivots should be considered nonzero. * For that, it uses the threshold value that you can control by calling * setThreshold(const RealScalar&). */ inline bool isInjective() const { eigen_assert(m_isInitialized && "FullPivHouseholderQR is not initialized."); return rank() == cols(); } /** \returns true if the matrix of which *this is the QR decomposition represents a surjective * linear map; false otherwise. * * \note This method has to determine which pivots should be considered nonzero. * For that, it uses the threshold value that you can control by calling * setThreshold(const RealScalar&). */ inline bool isSurjective() const { eigen_assert(m_isInitialized && "FullPivHouseholderQR is not initialized."); return rank() == rows(); } /** \returns true if the matrix of which *this is the QR decomposition is invertible. * * \note This method has to determine which pivots should be considered nonzero. * For that, it uses the threshold value that you can control by calling * setThreshold(const RealScalar&). */ inline bool isInvertible() const { eigen_assert(m_isInitialized && "FullPivHouseholderQR is not initialized."); return isInjective() && isSurjective(); } /** \returns the inverse of the matrix of which *this is the QR decomposition. * * \note If this matrix is not invertible, the returned matrix has undefined coefficients. * Use isInvertible() to first determine whether this matrix is invertible. */ inline const Inverse inverse() const { eigen_assert(m_isInitialized && "FullPivHouseholderQR is not initialized."); return Inverse(*this); } inline Index rows() const { return m_qr.rows(); } inline Index cols() const { return m_qr.cols(); } /** \returns a const reference to the vector of Householder coefficients used to represent the factor \c Q. * * For advanced uses only. */ const HCoeffsType& hCoeffs() const { return m_hCoeffs; } /** Allows to prescribe a threshold to be used by certain methods, such as rank(), * who need to determine when pivots are to be considered nonzero. This is not used for the * QR decomposition itself. * * When it needs to get the threshold value, Eigen calls threshold(). By default, this * uses a formula to automatically determine a reasonable threshold. * Once you have called the present method setThreshold(const RealScalar&), * your value is used instead. * * \param threshold The new value to use as the threshold. * * A pivot will be considered nonzero if its absolute value is strictly greater than * \f$ \vert pivot \vert \leqslant threshold \times \vert maxpivot \vert \f$ * where maxpivot is the biggest pivot. * * If you want to come back to the default behavior, call setThreshold(Default_t) */ FullPivHouseholderQR& setThreshold(const RealScalar& threshold) { m_usePrescribedThreshold = true; m_prescribedThreshold = threshold; return *this; } /** Allows to come back to the default behavior, letting Eigen use its default formula for * determining the threshold. * * You should pass the special object Eigen::Default as parameter here. * \code qr.setThreshold(Eigen::Default); \endcode * * See the documentation of setThreshold(const RealScalar&). */ FullPivHouseholderQR& setThreshold(Default_t) { m_usePrescribedThreshold = false; return *this; } /** Returns the threshold that will be used by certain methods such as rank(). * * See the documentation of setThreshold(const RealScalar&). */ RealScalar threshold() const { eigen_assert(m_isInitialized || m_usePrescribedThreshold); return m_usePrescribedThreshold ? 
m_prescribedThreshold // this formula comes from experimenting (see "LU precision tuning" thread on the list) // and turns out to be identical to Higham's formula used already in LDLt. : NumTraits::epsilon() * RealScalar(m_qr.diagonalSize()); } /** \returns the number of nonzero pivots in the QR decomposition. * Here nonzero is meant in the exact sense, not in a fuzzy sense. * So that notion isn't really intrinsically interesting, but it is * still useful when implementing algorithms. * * \sa rank() */ inline Index nonzeroPivots() const { eigen_assert(m_isInitialized && "LU is not initialized."); return m_nonzero_pivots; } /** \returns the absolute value of the biggest pivot, i.e. the biggest * diagonal coefficient of U. */ RealScalar maxPivot() const { return m_maxpivot; } #ifndef EIGEN_PARSED_BY_DOXYGEN template void _solve_impl(const RhsType &rhs, DstType &dst) const; template void _solve_impl_transposed(const RhsType &rhs, DstType &dst) const; #endif protected: static void check_template_parameters() { EIGEN_STATIC_ASSERT_NON_INTEGER(Scalar); } void computeInPlace(); MatrixType m_qr; HCoeffsType m_hCoeffs; IntDiagSizeVectorType m_rows_transpositions; IntDiagSizeVectorType m_cols_transpositions; PermutationType m_cols_permutation; RowVectorType m_temp; bool m_isInitialized, m_usePrescribedThreshold; RealScalar m_prescribedThreshold, m_maxpivot; Index m_nonzero_pivots; RealScalar m_precision; Index m_det_pq; }; template typename MatrixType::RealScalar FullPivHouseholderQR::absDeterminant() const { using std::abs; eigen_assert(m_isInitialized && "FullPivHouseholderQR is not initialized."); eigen_assert(m_qr.rows() == m_qr.cols() && "You can't take the determinant of a non-square matrix!"); return abs(m_qr.diagonal().prod()); } template typename MatrixType::RealScalar FullPivHouseholderQR::logAbsDeterminant() const { eigen_assert(m_isInitialized && "FullPivHouseholderQR is not initialized."); eigen_assert(m_qr.rows() == m_qr.cols() && "You can't take the determinant of a non-square matrix!"); return m_qr.diagonal().cwiseAbs().array().log().sum(); } /** Performs the QR factorization of the given matrix \a matrix. The result of * the factorization is stored into \c *this, and a reference to \c *this * is returned. 
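 * The rank-related queries above honour a user-prescribed threshold, e.g. (illustrative sketch):
 * \code
 * Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 4);
 * Eigen::FullPivHouseholderQR<Eigen::MatrixXd> qr;
 * qr.setThreshold(1e-8);           // pivots below 1e-8 * maxPivot count as zero
 * qr.compute(A);
 * Eigen::Index r = qr.rank();
 * qr.setThreshold(Eigen::Default); // restore the default heuristic
 * \endcode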
* * \sa class FullPivHouseholderQR, FullPivHouseholderQR(const MatrixType&) */ template template FullPivHouseholderQR& FullPivHouseholderQR::compute(const EigenBase& matrix) { m_qr = matrix.derived(); computeInPlace(); return *this; } template void FullPivHouseholderQR::computeInPlace() { check_template_parameters(); using std::abs; Index rows = m_qr.rows(); Index cols = m_qr.cols(); Index size = (std::min)(rows,cols); m_hCoeffs.resize(size); m_temp.resize(cols); m_precision = NumTraits::epsilon() * RealScalar(size); m_rows_transpositions.resize(size); m_cols_transpositions.resize(size); Index number_of_transpositions = 0; RealScalar biggest(0); m_nonzero_pivots = size; // the generic case is that in which all pivots are nonzero (invertible case) m_maxpivot = RealScalar(0); for (Index k = 0; k < size; ++k) { Index row_of_biggest_in_corner, col_of_biggest_in_corner; typedef internal::scalar_score_coeff_op Scoring; typedef typename Scoring::result_type Score; Score score = m_qr.bottomRightCorner(rows-k, cols-k) .unaryExpr(Scoring()) .maxCoeff(&row_of_biggest_in_corner, &col_of_biggest_in_corner); row_of_biggest_in_corner += k; col_of_biggest_in_corner += k; RealScalar biggest_in_corner = internal::abs_knowing_score()(m_qr(row_of_biggest_in_corner, col_of_biggest_in_corner), score); if(k==0) biggest = biggest_in_corner; // if the corner is negligible, then we have less than full rank, and we can finish early if(internal::isMuchSmallerThan(biggest_in_corner, biggest, m_precision)) { m_nonzero_pivots = k; for(Index i = k; i < size; i++) { m_rows_transpositions.coeffRef(i) = internal::convert_index(i); m_cols_transpositions.coeffRef(i) = internal::convert_index(i); m_hCoeffs.coeffRef(i) = Scalar(0); } break; } m_rows_transpositions.coeffRef(k) = internal::convert_index(row_of_biggest_in_corner); m_cols_transpositions.coeffRef(k) = internal::convert_index(col_of_biggest_in_corner); if(k != row_of_biggest_in_corner) { m_qr.row(k).tail(cols-k).swap(m_qr.row(row_of_biggest_in_corner).tail(cols-k)); ++number_of_transpositions; } if(k != col_of_biggest_in_corner) { m_qr.col(k).swap(m_qr.col(col_of_biggest_in_corner)); ++number_of_transpositions; } RealScalar beta; m_qr.col(k).tail(rows-k).makeHouseholderInPlace(m_hCoeffs.coeffRef(k), beta); m_qr.coeffRef(k,k) = beta; // remember the maximum absolute value of diagonal coefficients if(abs(beta) > m_maxpivot) m_maxpivot = abs(beta); m_qr.bottomRightCorner(rows-k, cols-k-1) .applyHouseholderOnTheLeft(m_qr.col(k).tail(rows-k-1), m_hCoeffs.coeffRef(k), &m_temp.coeffRef(k+1)); } m_cols_permutation.setIdentity(cols); for(Index k = 0; k < size; ++k) m_cols_permutation.applyTranspositionOnTheRight(k, m_cols_transpositions.coeff(k)); m_det_pq = (number_of_transpositions%2) ? -1 : 1; m_isInitialized = true; } #ifndef EIGEN_PARSED_BY_DOXYGEN template template void FullPivHouseholderQR<_MatrixType>::_solve_impl(const RhsType &rhs, DstType &dst) const { const Index l_rank = rank(); // FIXME introduce nonzeroPivots() and use it here. and more generally, // make the same improvements in this dec as in FullPivLU. 
if(l_rank==0) { dst.setZero(); return; } typename RhsType::PlainObject c(rhs); Matrix temp(rhs.cols()); for (Index k = 0; k < l_rank; ++k) { Index remainingSize = rows()-k; c.row(k).swap(c.row(m_rows_transpositions.coeff(k))); c.bottomRightCorner(remainingSize, rhs.cols()) .applyHouseholderOnTheLeft(m_qr.col(k).tail(remainingSize-1), m_hCoeffs.coeff(k), &temp.coeffRef(0)); } m_qr.topLeftCorner(l_rank, l_rank) .template triangularView() .solveInPlace(c.topRows(l_rank)); for(Index i = 0; i < l_rank; ++i) dst.row(m_cols_permutation.indices().coeff(i)) = c.row(i); for(Index i = l_rank; i < cols(); ++i) dst.row(m_cols_permutation.indices().coeff(i)).setZero(); } template template void FullPivHouseholderQR<_MatrixType>::_solve_impl_transposed(const RhsType &rhs, DstType &dst) const { const Index l_rank = rank(); if(l_rank == 0) { dst.setZero(); return; } typename RhsType::PlainObject c(m_cols_permutation.transpose()*rhs); m_qr.topLeftCorner(l_rank, l_rank) .template triangularView() .transpose().template conjugateIf() .solveInPlace(c.topRows(l_rank)); dst.topRows(l_rank) = c.topRows(l_rank); dst.bottomRows(rows()-l_rank).setZero(); Matrix temp(dst.cols()); const Index size = (std::min)(rows(), cols()); for (Index k = size-1; k >= 0; --k) { Index remainingSize = rows()-k; dst.bottomRightCorner(remainingSize, dst.cols()) .applyHouseholderOnTheLeft(m_qr.col(k).tail(remainingSize-1).template conjugateIf(), m_hCoeffs.template conjugateIf().coeff(k), &temp.coeffRef(0)); dst.row(k).swap(dst.row(m_rows_transpositions.coeff(k))); } } #endif namespace internal { template struct Assignment >, internal::assign_op::Scalar>, Dense2Dense> { typedef FullPivHouseholderQR QrType; typedef Inverse SrcXprType; static void run(DstXprType &dst, const SrcXprType &src, const internal::assign_op &) { dst = src.nestedExpression().solve(MatrixType::Identity(src.rows(), src.cols())); } }; /** \ingroup QR_Module * * \brief Expression type for return value of FullPivHouseholderQR::matrixQ() * * \tparam MatrixType type of underlying dense matrix */ template struct FullPivHouseholderQRMatrixQReturnType : public ReturnByValue > { public: typedef typename FullPivHouseholderQR::IntDiagSizeVectorType IntDiagSizeVectorType; typedef typename internal::plain_diag_type::type HCoeffsType; typedef Matrix WorkVectorType; FullPivHouseholderQRMatrixQReturnType(const MatrixType& qr, const HCoeffsType& hCoeffs, const IntDiagSizeVectorType& rowsTranspositions) : m_qr(qr), m_hCoeffs(hCoeffs), m_rowsTranspositions(rowsTranspositions) {} template void evalTo(ResultType& result) const { const Index rows = m_qr.rows(); WorkVectorType workspace(rows); evalTo(result, workspace); } template void evalTo(ResultType& result, WorkVectorType& workspace) const { using numext::conj; // compute the product H'_0 H'_1 ... H'_n-1, // where H_k is the k-th Householder transformation I - h_k v_k v_k' // and v_k is the k-th Householder vector [1,m_qr(k+1,k), m_qr(k+2,k), ...] 
const Index rows = m_qr.rows(); const Index cols = m_qr.cols(); const Index size = (std::min)(rows, cols); workspace.resize(rows); result.setIdentity(rows, rows); for (Index k = size-1; k >= 0; k--) { result.block(k, k, rows-k, rows-k) .applyHouseholderOnTheLeft(m_qr.col(k).tail(rows-k-1), conj(m_hCoeffs.coeff(k)), &workspace.coeffRef(k)); result.row(k).swap(result.row(m_rowsTranspositions.coeff(k))); } } Index rows() const { return m_qr.rows(); } Index cols() const { return m_qr.rows(); } protected: typename MatrixType::Nested m_qr; typename HCoeffsType::Nested m_hCoeffs; typename IntDiagSizeVectorType::Nested m_rowsTranspositions; }; // template // struct evaluator > // : public evaluator > > // {}; } // end namespace internal template inline typename FullPivHouseholderQR::MatrixQReturnType FullPivHouseholderQR::matrixQ() const { eigen_assert(m_isInitialized && "FullPivHouseholderQR is not initialized."); return MatrixQReturnType(m_qr, m_hCoeffs, m_rows_transpositions); } /** \return the full-pivoting Householder QR decomposition of \c *this. * * \sa class FullPivHouseholderQR */ template const FullPivHouseholderQR::PlainObject> MatrixBase::fullPivHouseholderQr() const { return FullPivHouseholderQR(eval()); } } // end namespace Eigen #endif // EIGEN_FULLPIVOTINGHOUSEHOLDERQR_H piqp-octave/src/eigen/Eigen/src/QR/ColPivHouseholderQR_LAPACKE.h0000644000175100001770000001106614107270226024030 0ustar runnerdocker/* Copyright (c) 2011, Intel Corporation. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Intel Corporation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ******************************************************************************** * Content : Eigen bindings to LAPACKe * Householder QR decomposition of a matrix with column pivoting based on * LAPACKE_?geqp3 function. 
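* These specializations take effect when Eigen is built with EIGEN_USE_LAPACKE defined and
* linked against a LAPACKE/BLAS implementation, e.g. (illustrative sketch):
*
*   #define EIGEN_USE_LAPACKE
*   #include <Eigen/Dense>
*
*   Eigen::MatrixXd A = Eigen::MatrixXd::Random(100, 100);
*   Eigen::ColPivHouseholderQR<Eigen::MatrixXd> qr(A);  // compute() is forwarded to LAPACKE_dgeqp3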
******************************************************************************** */ #ifndef EIGEN_COLPIVOTINGHOUSEHOLDERQR_LAPACKE_H #define EIGEN_COLPIVOTINGHOUSEHOLDERQR_LAPACKE_H namespace Eigen { /** \internal Specialization for the data types supported by LAPACKe */ #define EIGEN_LAPACKE_QR_COLPIV(EIGTYPE, LAPACKE_TYPE, LAPACKE_PREFIX, EIGCOLROW, LAPACKE_COLROW) \ template<> template inline \ ColPivHouseholderQR >& \ ColPivHouseholderQR >::compute( \ const EigenBase& matrix) \ \ { \ using std::abs; \ typedef Matrix MatrixType; \ typedef MatrixType::RealScalar RealScalar; \ Index rows = matrix.rows();\ Index cols = matrix.cols();\ \ m_qr = matrix;\ Index size = m_qr.diagonalSize();\ m_hCoeffs.resize(size);\ \ m_colsTranspositions.resize(cols);\ /*Index number_of_transpositions = 0;*/ \ \ m_nonzero_pivots = 0; \ m_maxpivot = RealScalar(0);\ m_colsPermutation.resize(cols); \ m_colsPermutation.indices().setZero(); \ \ lapack_int lda = internal::convert_index(m_qr.outerStride()); \ lapack_int matrix_order = LAPACKE_COLROW; \ LAPACKE_##LAPACKE_PREFIX##geqp3( matrix_order, internal::convert_index(rows), internal::convert_index(cols), \ (LAPACKE_TYPE*)m_qr.data(), lda, (lapack_int*)m_colsPermutation.indices().data(), (LAPACKE_TYPE*)m_hCoeffs.data()); \ m_isInitialized = true; \ m_maxpivot=m_qr.diagonal().cwiseAbs().maxCoeff(); \ m_hCoeffs.adjointInPlace(); \ RealScalar premultiplied_threshold = abs(m_maxpivot) * threshold(); \ lapack_int *perm = m_colsPermutation.indices().data(); \ for(Index i=0;i premultiplied_threshold);\ } \ for(Index i=0;i // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_UMFPACKSUPPORT_H #define EIGEN_UMFPACKSUPPORT_H // for compatibility with super old version of umfpack, // not sure this is really needed, but this is harmless. #ifndef SuiteSparse_long #ifdef UF_long #define SuiteSparse_long UF_long #else #error neither SuiteSparse_long nor UF_long are defined #endif #endif namespace Eigen { /* TODO extract L, extract U, compute det, etc... 
*/ // generic double/complex wrapper functions: // Defaults inline void umfpack_defaults(double control[UMFPACK_CONTROL], double, int) { umfpack_di_defaults(control); } inline void umfpack_defaults(double control[UMFPACK_CONTROL], std::complex, int) { umfpack_zi_defaults(control); } inline void umfpack_defaults(double control[UMFPACK_CONTROL], double, SuiteSparse_long) { umfpack_dl_defaults(control); } inline void umfpack_defaults(double control[UMFPACK_CONTROL], std::complex, SuiteSparse_long) { umfpack_zl_defaults(control); } // Report info inline void umfpack_report_info(double control[UMFPACK_CONTROL], double info[UMFPACK_INFO], double, int) { umfpack_di_report_info(control, info);} inline void umfpack_report_info(double control[UMFPACK_CONTROL], double info[UMFPACK_INFO], std::complex, int) { umfpack_zi_report_info(control, info);} inline void umfpack_report_info(double control[UMFPACK_CONTROL], double info[UMFPACK_INFO], double, SuiteSparse_long) { umfpack_dl_report_info(control, info);} inline void umfpack_report_info(double control[UMFPACK_CONTROL], double info[UMFPACK_INFO], std::complex, SuiteSparse_long) { umfpack_zl_report_info(control, info);} // Report status inline void umfpack_report_status(double control[UMFPACK_CONTROL], int status, double, int) { umfpack_di_report_status(control, status);} inline void umfpack_report_status(double control[UMFPACK_CONTROL], int status, std::complex, int) { umfpack_zi_report_status(control, status);} inline void umfpack_report_status(double control[UMFPACK_CONTROL], int status, double, SuiteSparse_long) { umfpack_dl_report_status(control, status);} inline void umfpack_report_status(double control[UMFPACK_CONTROL], int status, std::complex, SuiteSparse_long) { umfpack_zl_report_status(control, status);} // report control inline void umfpack_report_control(double control[UMFPACK_CONTROL], double, int) { umfpack_di_report_control(control);} inline void umfpack_report_control(double control[UMFPACK_CONTROL], std::complex, int) { umfpack_zi_report_control(control);} inline void umfpack_report_control(double control[UMFPACK_CONTROL], double, SuiteSparse_long) { umfpack_dl_report_control(control);} inline void umfpack_report_control(double control[UMFPACK_CONTROL], std::complex, SuiteSparse_long) { umfpack_zl_report_control(control);} // Free numeric inline void umfpack_free_numeric(void **Numeric, double, int) { umfpack_di_free_numeric(Numeric); *Numeric = 0; } inline void umfpack_free_numeric(void **Numeric, std::complex, int) { umfpack_zi_free_numeric(Numeric); *Numeric = 0; } inline void umfpack_free_numeric(void **Numeric, double, SuiteSparse_long) { umfpack_dl_free_numeric(Numeric); *Numeric = 0; } inline void umfpack_free_numeric(void **Numeric, std::complex, SuiteSparse_long) { umfpack_zl_free_numeric(Numeric); *Numeric = 0; } // Free symbolic inline void umfpack_free_symbolic(void **Symbolic, double, int) { umfpack_di_free_symbolic(Symbolic); *Symbolic = 0; } inline void umfpack_free_symbolic(void **Symbolic, std::complex, int) { umfpack_zi_free_symbolic(Symbolic); *Symbolic = 0; } inline void umfpack_free_symbolic(void **Symbolic, double, SuiteSparse_long) { umfpack_dl_free_symbolic(Symbolic); *Symbolic = 0; } inline void umfpack_free_symbolic(void **Symbolic, std::complex, SuiteSparse_long) { umfpack_zl_free_symbolic(Symbolic); *Symbolic = 0; } // Symbolic inline int umfpack_symbolic(int n_row,int n_col, const int Ap[], const int Ai[], const double Ax[], void **Symbolic, const double Control [UMFPACK_CONTROL], double Info 
[UMFPACK_INFO]) { return umfpack_di_symbolic(n_row,n_col,Ap,Ai,Ax,Symbolic,Control,Info); } inline int umfpack_symbolic(int n_row,int n_col, const int Ap[], const int Ai[], const std::complex Ax[], void **Symbolic, const double Control [UMFPACK_CONTROL], double Info [UMFPACK_INFO]) { return umfpack_zi_symbolic(n_row,n_col,Ap,Ai,&numext::real_ref(Ax[0]),0,Symbolic,Control,Info); } inline SuiteSparse_long umfpack_symbolic( SuiteSparse_long n_row,SuiteSparse_long n_col, const SuiteSparse_long Ap[], const SuiteSparse_long Ai[], const double Ax[], void **Symbolic, const double Control [UMFPACK_CONTROL], double Info [UMFPACK_INFO]) { return umfpack_dl_symbolic(n_row,n_col,Ap,Ai,Ax,Symbolic,Control,Info); } inline SuiteSparse_long umfpack_symbolic( SuiteSparse_long n_row,SuiteSparse_long n_col, const SuiteSparse_long Ap[], const SuiteSparse_long Ai[], const std::complex Ax[], void **Symbolic, const double Control [UMFPACK_CONTROL], double Info [UMFPACK_INFO]) { return umfpack_zl_symbolic(n_row,n_col,Ap,Ai,&numext::real_ref(Ax[0]),0,Symbolic,Control,Info); } // Numeric inline int umfpack_numeric( const int Ap[], const int Ai[], const double Ax[], void *Symbolic, void **Numeric, const double Control[UMFPACK_CONTROL],double Info [UMFPACK_INFO]) { return umfpack_di_numeric(Ap,Ai,Ax,Symbolic,Numeric,Control,Info); } inline int umfpack_numeric( const int Ap[], const int Ai[], const std::complex Ax[], void *Symbolic, void **Numeric, const double Control[UMFPACK_CONTROL],double Info [UMFPACK_INFO]) { return umfpack_zi_numeric(Ap,Ai,&numext::real_ref(Ax[0]),0,Symbolic,Numeric,Control,Info); } inline SuiteSparse_long umfpack_numeric(const SuiteSparse_long Ap[], const SuiteSparse_long Ai[], const double Ax[], void *Symbolic, void **Numeric, const double Control[UMFPACK_CONTROL],double Info [UMFPACK_INFO]) { return umfpack_dl_numeric(Ap,Ai,Ax,Symbolic,Numeric,Control,Info); } inline SuiteSparse_long umfpack_numeric(const SuiteSparse_long Ap[], const SuiteSparse_long Ai[], const std::complex Ax[], void *Symbolic, void **Numeric, const double Control[UMFPACK_CONTROL],double Info [UMFPACK_INFO]) { return umfpack_zl_numeric(Ap,Ai,&numext::real_ref(Ax[0]),0,Symbolic,Numeric,Control,Info); } // solve inline int umfpack_solve( int sys, const int Ap[], const int Ai[], const double Ax[], double X[], const double B[], void *Numeric, const double Control[UMFPACK_CONTROL], double Info[UMFPACK_INFO]) { return umfpack_di_solve(sys,Ap,Ai,Ax,X,B,Numeric,Control,Info); } inline int umfpack_solve( int sys, const int Ap[], const int Ai[], const std::complex Ax[], std::complex X[], const std::complex B[], void *Numeric, const double Control[UMFPACK_CONTROL], double Info[UMFPACK_INFO]) { return umfpack_zi_solve(sys,Ap,Ai,&numext::real_ref(Ax[0]),0,&numext::real_ref(X[0]),0,&numext::real_ref(B[0]),0,Numeric,Control,Info); } inline SuiteSparse_long umfpack_solve(int sys, const SuiteSparse_long Ap[], const SuiteSparse_long Ai[], const double Ax[], double X[], const double B[], void *Numeric, const double Control[UMFPACK_CONTROL], double Info[UMFPACK_INFO]) { return umfpack_dl_solve(sys,Ap,Ai,Ax,X,B,Numeric,Control,Info); } inline SuiteSparse_long umfpack_solve(int sys, const SuiteSparse_long Ap[], const SuiteSparse_long Ai[], const std::complex Ax[], std::complex X[], const std::complex B[], void *Numeric, const double Control[UMFPACK_CONTROL], double Info[UMFPACK_INFO]) { return umfpack_zl_solve(sys,Ap,Ai,&numext::real_ref(Ax[0]),0,&numext::real_ref(X[0]),0,&numext::real_ref(B[0]),0,Numeric,Control,Info); } // Get Lunz inline int 
umfpack_get_lunz(int *lnz, int *unz, int *n_row, int *n_col, int *nz_udiag, void *Numeric, double) { return umfpack_di_get_lunz(lnz,unz,n_row,n_col,nz_udiag,Numeric); } inline int umfpack_get_lunz(int *lnz, int *unz, int *n_row, int *n_col, int *nz_udiag, void *Numeric, std::complex) { return umfpack_zi_get_lunz(lnz,unz,n_row,n_col,nz_udiag,Numeric); } inline SuiteSparse_long umfpack_get_lunz( SuiteSparse_long *lnz, SuiteSparse_long *unz, SuiteSparse_long *n_row, SuiteSparse_long *n_col, SuiteSparse_long *nz_udiag, void *Numeric, double) { return umfpack_dl_get_lunz(lnz,unz,n_row,n_col,nz_udiag,Numeric); } inline SuiteSparse_long umfpack_get_lunz( SuiteSparse_long *lnz, SuiteSparse_long *unz, SuiteSparse_long *n_row, SuiteSparse_long *n_col, SuiteSparse_long *nz_udiag, void *Numeric, std::complex) { return umfpack_zl_get_lunz(lnz,unz,n_row,n_col,nz_udiag,Numeric); } // Get Numeric inline int umfpack_get_numeric(int Lp[], int Lj[], double Lx[], int Up[], int Ui[], double Ux[], int P[], int Q[], double Dx[], int *do_recip, double Rs[], void *Numeric) { return umfpack_di_get_numeric(Lp,Lj,Lx,Up,Ui,Ux,P,Q,Dx,do_recip,Rs,Numeric); } inline int umfpack_get_numeric(int Lp[], int Lj[], std::complex Lx[], int Up[], int Ui[], std::complex Ux[], int P[], int Q[], std::complex Dx[], int *do_recip, double Rs[], void *Numeric) { double& lx0_real = numext::real_ref(Lx[0]); double& ux0_real = numext::real_ref(Ux[0]); double& dx0_real = numext::real_ref(Dx[0]); return umfpack_zi_get_numeric(Lp,Lj,Lx?&lx0_real:0,0,Up,Ui,Ux?&ux0_real:0,0,P,Q, Dx?&dx0_real:0,0,do_recip,Rs,Numeric); } inline SuiteSparse_long umfpack_get_numeric(SuiteSparse_long Lp[], SuiteSparse_long Lj[], double Lx[], SuiteSparse_long Up[], SuiteSparse_long Ui[], double Ux[], SuiteSparse_long P[], SuiteSparse_long Q[], double Dx[], SuiteSparse_long *do_recip, double Rs[], void *Numeric) { return umfpack_dl_get_numeric(Lp,Lj,Lx,Up,Ui,Ux,P,Q,Dx,do_recip,Rs,Numeric); } inline SuiteSparse_long umfpack_get_numeric(SuiteSparse_long Lp[], SuiteSparse_long Lj[], std::complex Lx[], SuiteSparse_long Up[], SuiteSparse_long Ui[], std::complex Ux[], SuiteSparse_long P[], SuiteSparse_long Q[], std::complex Dx[], SuiteSparse_long *do_recip, double Rs[], void *Numeric) { double& lx0_real = numext::real_ref(Lx[0]); double& ux0_real = numext::real_ref(Ux[0]); double& dx0_real = numext::real_ref(Dx[0]); return umfpack_zl_get_numeric(Lp,Lj,Lx?&lx0_real:0,0,Up,Ui,Ux?&ux0_real:0,0,P,Q, Dx?&dx0_real:0,0,do_recip,Rs,Numeric); } // Get Determinant inline int umfpack_get_determinant(double *Mx, double *Ex, void *NumericHandle, double User_Info [UMFPACK_INFO], int) { return umfpack_di_get_determinant(Mx,Ex,NumericHandle,User_Info); } inline int umfpack_get_determinant(std::complex *Mx, double *Ex, void *NumericHandle, double User_Info [UMFPACK_INFO], int) { double& mx_real = numext::real_ref(*Mx); return umfpack_zi_get_determinant(&mx_real,0,Ex,NumericHandle,User_Info); } inline SuiteSparse_long umfpack_get_determinant(double *Mx, double *Ex, void *NumericHandle, double User_Info [UMFPACK_INFO], SuiteSparse_long) { return umfpack_dl_get_determinant(Mx,Ex,NumericHandle,User_Info); } inline SuiteSparse_long umfpack_get_determinant(std::complex *Mx, double *Ex, void *NumericHandle, double User_Info [UMFPACK_INFO], SuiteSparse_long) { double& mx_real = numext::real_ref(*Mx); return umfpack_zl_get_determinant(&mx_real,0,Ex,NumericHandle,User_Info); } /** \ingroup UmfPackSupport_Module * \brief A sparse LU factorization and solver based on UmfPack * * This class allows to 
solve for A.X = B sparse linear problems via a LU factorization * using the UmfPack library. The sparse matrix A must be squared and full rank. * The vectors or matrices X and B can be either dense or sparse. * * \warning The input matrix A should be in a \b compressed and \b column-major form. * Otherwise an expensive copy will be made. You can call the inexpensive makeCompressed() to get a compressed matrix. * \tparam _MatrixType the type of the sparse matrix A, it must be a SparseMatrix<> * * \implsparsesolverconcept * * \sa \ref TutorialSparseSolverConcept, class SparseLU */ template class UmfPackLU : public SparseSolverBase > { protected: typedef SparseSolverBase > Base; using Base::m_isInitialized; public: using Base::_solve_impl; typedef _MatrixType MatrixType; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef typename MatrixType::StorageIndex StorageIndex; typedef Matrix Vector; typedef Matrix IntRowVectorType; typedef Matrix IntColVectorType; typedef SparseMatrix LUMatrixType; typedef SparseMatrix UmfpackMatrixType; typedef Ref UmfpackMatrixRef; enum { ColsAtCompileTime = MatrixType::ColsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; public: typedef Array UmfpackControl; typedef Array UmfpackInfo; UmfPackLU() : m_dummy(0,0), mp_matrix(m_dummy) { init(); } template explicit UmfPackLU(const InputMatrixType& matrix) : mp_matrix(matrix) { init(); compute(matrix); } ~UmfPackLU() { if(m_symbolic) umfpack_free_symbolic(&m_symbolic,Scalar(), StorageIndex()); if(m_numeric) umfpack_free_numeric(&m_numeric,Scalar(), StorageIndex()); } inline Index rows() const { return mp_matrix.rows(); } inline Index cols() const { return mp_matrix.cols(); } /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful, * \c NumericalIssue if the matrix.appears to be negative. */ ComputationInfo info() const { eigen_assert(m_isInitialized && "Decomposition is not initialized."); return m_info; } inline const LUMatrixType& matrixL() const { if (m_extractedDataAreDirty) extractData(); return m_l; } inline const LUMatrixType& matrixU() const { if (m_extractedDataAreDirty) extractData(); return m_u; } inline const IntColVectorType& permutationP() const { if (m_extractedDataAreDirty) extractData(); return m_p; } inline const IntRowVectorType& permutationQ() const { if (m_extractedDataAreDirty) extractData(); return m_q; } /** Computes the sparse Cholesky decomposition of \a matrix * Note that the matrix should be column-major, and in compressed format for best performance. * \sa SparseMatrix::makeCompressed(). */ template void compute(const InputMatrixType& matrix) { if(m_symbolic) umfpack_free_symbolic(&m_symbolic,Scalar(),StorageIndex()); if(m_numeric) umfpack_free_numeric(&m_numeric,Scalar(),StorageIndex()); grab(matrix.derived()); analyzePattern_impl(); factorize_impl(); } /** Performs a symbolic decomposition on the sparcity of \a matrix. * * This function is particularly useful when solving for several problems having the same structure. * * \sa factorize(), compute() */ template void analyzePattern(const InputMatrixType& matrix) { if(m_symbolic) umfpack_free_symbolic(&m_symbolic,Scalar(),StorageIndex()); if(m_numeric) umfpack_free_numeric(&m_numeric,Scalar(),StorageIndex()); grab(matrix.derived()); analyzePattern_impl(); } /** Provides the return status code returned by UmfPack during the numeric * factorization. 
* * \sa factorize(), compute() */ inline int umfpackFactorizeReturncode() const { eigen_assert(m_numeric && "UmfPackLU: you must first call factorize()"); return m_fact_errorCode; } /** Provides access to the control settings array used by UmfPack. * * If this array contains NaN's, the default values are used. * * See UMFPACK documentation for details. */ inline const UmfpackControl& umfpackControl() const { return m_control; } /** Provides access to the control settings array used by UmfPack. * * If this array contains NaN's, the default values are used. * * See UMFPACK documentation for details. */ inline UmfpackControl& umfpackControl() { return m_control; } /** Performs a numeric decomposition of \a matrix * * The given matrix must has the same sparcity than the matrix on which the pattern anylysis has been performed. * * \sa analyzePattern(), compute() */ template void factorize(const InputMatrixType& matrix) { eigen_assert(m_analysisIsOk && "UmfPackLU: you must first call analyzePattern()"); if(m_numeric) umfpack_free_numeric(&m_numeric,Scalar(),StorageIndex()); grab(matrix.derived()); factorize_impl(); } /** Prints the current UmfPack control settings. * * \sa umfpackControl() */ void printUmfpackControl() { umfpack_report_control(m_control.data(), Scalar(),StorageIndex()); } /** Prints statistics collected by UmfPack. * * \sa analyzePattern(), compute() */ void printUmfpackInfo() { eigen_assert(m_analysisIsOk && "UmfPackLU: you must first call analyzePattern()"); umfpack_report_info(m_control.data(), m_umfpackInfo.data(), Scalar(),StorageIndex()); } /** Prints the status of the previous factorization operation performed by UmfPack (symbolic or numerical factorization). * * \sa analyzePattern(), compute() */ void printUmfpackStatus() { eigen_assert(m_analysisIsOk && "UmfPackLU: you must first call analyzePattern()"); umfpack_report_status(m_control.data(), m_fact_errorCode, Scalar(),StorageIndex()); } /** \internal */ template bool _solve_impl(const MatrixBase &b, MatrixBase &x) const; Scalar determinant() const; void extractData() const; protected: void init() { m_info = InvalidInput; m_isInitialized = false; m_numeric = 0; m_symbolic = 0; m_extractedDataAreDirty = true; umfpack_defaults(m_control.data(), Scalar(),StorageIndex()); } void analyzePattern_impl() { m_fact_errorCode = umfpack_symbolic(internal::convert_index(mp_matrix.rows()), internal::convert_index(mp_matrix.cols()), mp_matrix.outerIndexPtr(), mp_matrix.innerIndexPtr(), mp_matrix.valuePtr(), &m_symbolic, m_control.data(), m_umfpackInfo.data()); m_isInitialized = true; m_info = m_fact_errorCode ? InvalidInput : Success; m_analysisIsOk = true; m_factorizationIsOk = false; m_extractedDataAreDirty = true; } void factorize_impl() { m_fact_errorCode = umfpack_numeric(mp_matrix.outerIndexPtr(), mp_matrix.innerIndexPtr(), mp_matrix.valuePtr(), m_symbolic, &m_numeric, m_control.data(), m_umfpackInfo.data()); m_info = m_fact_errorCode == UMFPACK_OK ? Success : NumericalIssue; m_factorizationIsOk = true; m_extractedDataAreDirty = true; } template void grab(const EigenBase &A) { mp_matrix.~UmfpackMatrixRef(); ::new (&mp_matrix) UmfpackMatrixRef(A.derived()); } void grab(const UmfpackMatrixRef &A) { if(&(A.derived()) != &mp_matrix) { mp_matrix.~UmfpackMatrixRef(); ::new (&mp_matrix) UmfpackMatrixRef(A); } } // cached data to reduce reallocation, etc. 
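  // Illustrative usage sketch (not part of the original header; it assumes
  // UMFPACK and the Eigen/UmfPackSupport module are available). The statements
  // would live inside a caller's function body:
  //
  // \code
  // #include <Eigen/SparseCore>
  // #include <Eigen/UmfPackSupport>
  //
  // Eigen::SparseMatrix<double> A(3,3);            // square, full rank
  // A.insert(0,0) = 4; A.insert(1,1) = 2;
  // A.insert(2,2) = 1; A.insert(0,2) = 1;
  // A.makeCompressed();                            // compressed column-major storage
  // Eigen::VectorXd b = Eigen::VectorXd::Ones(3);
  //
  // Eigen::UmfPackLU<Eigen::SparseMatrix<double> > lu(A);  // analyze + factorize
  // if(lu.info() == Eigen::Success)
  //   Eigen::VectorXd x = lu.solve(b);             // solves A x = b
  // \endcode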
mutable LUMatrixType m_l; StorageIndex m_fact_errorCode; UmfpackControl m_control; mutable UmfpackInfo m_umfpackInfo; mutable LUMatrixType m_u; mutable IntColVectorType m_p; mutable IntRowVectorType m_q; UmfpackMatrixType m_dummy; UmfpackMatrixRef mp_matrix; void* m_numeric; void* m_symbolic; mutable ComputationInfo m_info; int m_factorizationIsOk; int m_analysisIsOk; mutable bool m_extractedDataAreDirty; private: UmfPackLU(const UmfPackLU& ) { } }; template void UmfPackLU::extractData() const { if (m_extractedDataAreDirty) { // get size of the data StorageIndex lnz, unz, rows, cols, nz_udiag; umfpack_get_lunz(&lnz, &unz, &rows, &cols, &nz_udiag, m_numeric, Scalar()); // allocate data m_l.resize(rows,(std::min)(rows,cols)); m_l.resizeNonZeros(lnz); m_u.resize((std::min)(rows,cols),cols); m_u.resizeNonZeros(unz); m_p.resize(rows); m_q.resize(cols); // extract umfpack_get_numeric(m_l.outerIndexPtr(), m_l.innerIndexPtr(), m_l.valuePtr(), m_u.outerIndexPtr(), m_u.innerIndexPtr(), m_u.valuePtr(), m_p.data(), m_q.data(), 0, 0, 0, m_numeric); m_extractedDataAreDirty = false; } } template typename UmfPackLU::Scalar UmfPackLU::determinant() const { Scalar det; umfpack_get_determinant(&det, 0, m_numeric, 0, StorageIndex()); return det; } template template bool UmfPackLU::_solve_impl(const MatrixBase &b, MatrixBase &x) const { Index rhsCols = b.cols(); eigen_assert((BDerived::Flags&RowMajorBit)==0 && "UmfPackLU backend does not support non col-major rhs yet"); eigen_assert((XDerived::Flags&RowMajorBit)==0 && "UmfPackLU backend does not support non col-major result yet"); eigen_assert(b.derived().data() != x.derived().data() && " Umfpack does not support inplace solve"); Scalar* x_ptr = 0; Matrix x_tmp; if(x.innerStride()!=1) { x_tmp.resize(x.rows()); x_ptr = x_tmp.data(); } for (int j=0; j // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_KLUSUPPORT_H #define EIGEN_KLUSUPPORT_H namespace Eigen { /* TODO extract L, extract U, compute det, etc... */ /** \ingroup KLUSupport_Module * \brief A sparse LU factorization and solver based on KLU * * This class allows to solve for A.X = B sparse linear problems via a LU factorization * using the KLU library. The sparse matrix A must be squared and full rank. * The vectors or matrices X and B can be either dense or sparse. * * \warning The input matrix A should be in a \b compressed and \b column-major form. * Otherwise an expensive copy will be made. You can call the inexpensive makeCompressed() to get a compressed matrix. 
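  * For instance (an illustrative sketch added here for clarity, where the
  * matrix size, inserted values and the solver object `solver` are assumed to
  * be provided by the caller):
  * \code
  * Eigen::SparseMatrix<double> A(2,2);
  * A.insert(0,0) = 1.0;             // insert() leaves A in uncompressed mode
  * A.insert(1,1) = 2.0;
  * A.makeCompressed();              // avoids the expensive copy mentioned above
  * solver.compute(A);               // 'solver' is any solver accepting A, e.g. KLU
  * \endcode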
* \tparam _MatrixType the type of the sparse matrix A, it must be a SparseMatrix<> * * \implsparsesolverconcept * * \sa \ref TutorialSparseSolverConcept, class UmfPackLU, class SparseLU */ inline int klu_solve(klu_symbolic *Symbolic, klu_numeric *Numeric, Index ldim, Index nrhs, double B [ ], klu_common *Common, double) { return klu_solve(Symbolic, Numeric, internal::convert_index(ldim), internal::convert_index(nrhs), B, Common); } inline int klu_solve(klu_symbolic *Symbolic, klu_numeric *Numeric, Index ldim, Index nrhs, std::complexB[], klu_common *Common, std::complex) { return klu_z_solve(Symbolic, Numeric, internal::convert_index(ldim), internal::convert_index(nrhs), &numext::real_ref(B[0]), Common); } inline int klu_tsolve(klu_symbolic *Symbolic, klu_numeric *Numeric, Index ldim, Index nrhs, double B[], klu_common *Common, double) { return klu_tsolve(Symbolic, Numeric, internal::convert_index(ldim), internal::convert_index(nrhs), B, Common); } inline int klu_tsolve(klu_symbolic *Symbolic, klu_numeric *Numeric, Index ldim, Index nrhs, std::complexB[], klu_common *Common, std::complex) { return klu_z_tsolve(Symbolic, Numeric, internal::convert_index(ldim), internal::convert_index(nrhs), &numext::real_ref(B[0]), 0, Common); } inline klu_numeric* klu_factor(int Ap [ ], int Ai [ ], double Ax [ ], klu_symbolic *Symbolic, klu_common *Common, double) { return klu_factor(Ap, Ai, Ax, Symbolic, Common); } inline klu_numeric* klu_factor(int Ap[], int Ai[], std::complex Ax[], klu_symbolic *Symbolic, klu_common *Common, std::complex) { return klu_z_factor(Ap, Ai, &numext::real_ref(Ax[0]), Symbolic, Common); } template class KLU : public SparseSolverBase > { protected: typedef SparseSolverBase > Base; using Base::m_isInitialized; public: using Base::_solve_impl; typedef _MatrixType MatrixType; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef typename MatrixType::StorageIndex StorageIndex; typedef Matrix Vector; typedef Matrix IntRowVectorType; typedef Matrix IntColVectorType; typedef SparseMatrix LUMatrixType; typedef SparseMatrix KLUMatrixType; typedef Ref KLUMatrixRef; enum { ColsAtCompileTime = MatrixType::ColsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; public: KLU() : m_dummy(0,0), mp_matrix(m_dummy) { init(); } template explicit KLU(const InputMatrixType& matrix) : mp_matrix(matrix) { init(); compute(matrix); } ~KLU() { if(m_symbolic) klu_free_symbolic(&m_symbolic,&m_common); if(m_numeric) klu_free_numeric(&m_numeric,&m_common); } EIGEN_CONSTEXPR inline Index rows() const EIGEN_NOEXCEPT { return mp_matrix.rows(); } EIGEN_CONSTEXPR inline Index cols() const EIGEN_NOEXCEPT { return mp_matrix.cols(); } /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful, * \c NumericalIssue if the matrix.appears to be negative. 
*/ ComputationInfo info() const { eigen_assert(m_isInitialized && "Decomposition is not initialized."); return m_info; } #if 0 // not implemented yet inline const LUMatrixType& matrixL() const { if (m_extractedDataAreDirty) extractData(); return m_l; } inline const LUMatrixType& matrixU() const { if (m_extractedDataAreDirty) extractData(); return m_u; } inline const IntColVectorType& permutationP() const { if (m_extractedDataAreDirty) extractData(); return m_p; } inline const IntRowVectorType& permutationQ() const { if (m_extractedDataAreDirty) extractData(); return m_q; } #endif /** Computes the sparse Cholesky decomposition of \a matrix * Note that the matrix should be column-major, and in compressed format for best performance. * \sa SparseMatrix::makeCompressed(). */ template void compute(const InputMatrixType& matrix) { if(m_symbolic) klu_free_symbolic(&m_symbolic, &m_common); if(m_numeric) klu_free_numeric(&m_numeric, &m_common); grab(matrix.derived()); analyzePattern_impl(); factorize_impl(); } /** Performs a symbolic decomposition on the sparcity of \a matrix. * * This function is particularly useful when solving for several problems having the same structure. * * \sa factorize(), compute() */ template void analyzePattern(const InputMatrixType& matrix) { if(m_symbolic) klu_free_symbolic(&m_symbolic, &m_common); if(m_numeric) klu_free_numeric(&m_numeric, &m_common); grab(matrix.derived()); analyzePattern_impl(); } /** Provides access to the control settings array used by KLU. * * See KLU documentation for details. */ inline const klu_common& kluCommon() const { return m_common; } /** Provides access to the control settings array used by UmfPack. * * If this array contains NaN's, the default values are used. * * See KLU documentation for details. */ inline klu_common& kluCommon() { return m_common; } /** Performs a numeric decomposition of \a matrix * * The given matrix must has the same sparcity than the matrix on which the pattern anylysis has been performed. * * \sa analyzePattern(), compute() */ template void factorize(const InputMatrixType& matrix) { eigen_assert(m_analysisIsOk && "KLU: you must first call analyzePattern()"); if(m_numeric) klu_free_numeric(&m_numeric,&m_common); grab(matrix.derived()); factorize_impl(); } /** \internal */ template bool _solve_impl(const MatrixBase &b, MatrixBase &x) const; #if 0 // not implemented yet Scalar determinant() const; void extractData() const; #endif protected: void init() { m_info = InvalidInput; m_isInitialized = false; m_numeric = 0; m_symbolic = 0; m_extractedDataAreDirty = true; klu_defaults(&m_common); } void analyzePattern_impl() { m_info = InvalidInput; m_analysisIsOk = false; m_factorizationIsOk = false; m_symbolic = klu_analyze(internal::convert_index(mp_matrix.rows()), const_cast(mp_matrix.outerIndexPtr()), const_cast(mp_matrix.innerIndexPtr()), &m_common); if (m_symbolic) { m_isInitialized = true; m_info = Success; m_analysisIsOk = true; m_extractedDataAreDirty = true; } } void factorize_impl() { m_numeric = klu_factor(const_cast(mp_matrix.outerIndexPtr()), const_cast(mp_matrix.innerIndexPtr()), const_cast(mp_matrix.valuePtr()), m_symbolic, &m_common, Scalar()); m_info = m_numeric ? Success : NumericalIssue; m_factorizationIsOk = m_numeric ? 
1 : 0; m_extractedDataAreDirty = true; } template void grab(const EigenBase &A) { mp_matrix.~KLUMatrixRef(); ::new (&mp_matrix) KLUMatrixRef(A.derived()); } void grab(const KLUMatrixRef &A) { if(&(A.derived()) != &mp_matrix) { mp_matrix.~KLUMatrixRef(); ::new (&mp_matrix) KLUMatrixRef(A); } } // cached data to reduce reallocation, etc. #if 0 // not implemented yet mutable LUMatrixType m_l; mutable LUMatrixType m_u; mutable IntColVectorType m_p; mutable IntRowVectorType m_q; #endif KLUMatrixType m_dummy; KLUMatrixRef mp_matrix; klu_numeric* m_numeric; klu_symbolic* m_symbolic; klu_common m_common; mutable ComputationInfo m_info; int m_factorizationIsOk; int m_analysisIsOk; mutable bool m_extractedDataAreDirty; private: KLU(const KLU& ) { } }; #if 0 // not implemented yet template void KLU::extractData() const { if (m_extractedDataAreDirty) { eigen_assert(false && "KLU: extractData Not Yet Implemented"); // get size of the data int lnz, unz, rows, cols, nz_udiag; umfpack_get_lunz(&lnz, &unz, &rows, &cols, &nz_udiag, m_numeric, Scalar()); // allocate data m_l.resize(rows,(std::min)(rows,cols)); m_l.resizeNonZeros(lnz); m_u.resize((std::min)(rows,cols),cols); m_u.resizeNonZeros(unz); m_p.resize(rows); m_q.resize(cols); // extract umfpack_get_numeric(m_l.outerIndexPtr(), m_l.innerIndexPtr(), m_l.valuePtr(), m_u.outerIndexPtr(), m_u.innerIndexPtr(), m_u.valuePtr(), m_p.data(), m_q.data(), 0, 0, 0, m_numeric); m_extractedDataAreDirty = false; } } template typename KLU::Scalar KLU::determinant() const { eigen_assert(false && "KLU: extractData Not Yet Implemented"); return Scalar(); } #endif template template bool KLU::_solve_impl(const MatrixBase &b, MatrixBase &x) const { Index rhsCols = b.cols(); EIGEN_STATIC_ASSERT((XDerived::Flags&RowMajorBit)==0, THIS_METHOD_IS_ONLY_FOR_COLUMN_MAJOR_MATRICES); eigen_assert(m_factorizationIsOk && "The decomposition is not in a valid state for solving, you must first call either compute() or analyzePattern()/factorize()"); x = b; int info = klu_solve(m_symbolic, m_numeric, b.rows(), rhsCols, x.const_cast_derived().data(), const_cast(&m_common), Scalar()); m_info = info!=0 ? Success : NumericalIssue; return true; } } // end namespace Eigen #endif // EIGEN_KLUSUPPORT_H piqp-octave/src/eigen/Eigen/src/SPQRSupport/0000755000175100001770000000000014107270226020532 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/SPQRSupport/SuiteSparseQRSupport.h0000644000175100001770000002706214107270226025021 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Desire Nuentsa // Copyright (C) 2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
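// Illustrative note for the KLU solver defined above (not part of the original
// sources): when several matrices share the same sparsity pattern, the symbolic
// analysis can be reused by splitting compute() into analyzePattern() and
// factorize(). The sketch below assumes A1 and A2 are compressed, column-major
// SparseMatrix<double> objects with identical patterns and b is a VectorXd:
//
// \code
// Eigen::KLU<Eigen::SparseMatrix<double> > klu;
// klu.analyzePattern(A1);            // symbolic analysis, done once
// klu.factorize(A1);                 // numeric factorization of A1
// Eigen::VectorXd x1 = klu.solve(b);
// klu.factorize(A2);                 // reuse the same pattern for A2
// Eigen::VectorXd x2 = klu.solve(b);
// \endcode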
#ifndef EIGEN_SUITESPARSEQRSUPPORT_H #define EIGEN_SUITESPARSEQRSUPPORT_H namespace Eigen { template class SPQR; template struct SPQRMatrixQReturnType; template struct SPQRMatrixQTransposeReturnType; template struct SPQR_QProduct; namespace internal { template struct traits > { typedef typename SPQRType::MatrixType ReturnType; }; template struct traits > { typedef typename SPQRType::MatrixType ReturnType; }; template struct traits > { typedef typename Derived::PlainObject ReturnType; }; } // End namespace internal /** * \ingroup SPQRSupport_Module * \class SPQR * \brief Sparse QR factorization based on SuiteSparseQR library * * This class is used to perform a multithreaded and multifrontal rank-revealing QR decomposition * of sparse matrices. The result is then used to solve linear leasts_square systems. * Clearly, a QR factorization is returned such that A*P = Q*R where : * * P is the column permutation. Use colsPermutation() to get it. * * Q is the orthogonal matrix represented as Householder reflectors. * Use matrixQ() to get an expression and matrixQ().transpose() to get the transpose. * You can then apply it to a vector. * * R is the sparse triangular factor. Use matrixQR() to get it as SparseMatrix. * NOTE : The Index type of R is always SuiteSparse_long. You can get it with SPQR::Index * * \tparam _MatrixType The type of the sparse matrix A, must be a column-major SparseMatrix<> * * \implsparsesolverconcept * * */ template class SPQR : public SparseSolverBase > { protected: typedef SparseSolverBase > Base; using Base::m_isInitialized; public: typedef typename _MatrixType::Scalar Scalar; typedef typename _MatrixType::RealScalar RealScalar; typedef SuiteSparse_long StorageIndex ; typedef SparseMatrix MatrixType; typedef Map > PermutationType; enum { ColsAtCompileTime = Dynamic, MaxColsAtCompileTime = Dynamic }; public: SPQR() : m_analysisIsOk(false), m_factorizationIsOk(false), m_isRUpToDate(false), m_ordering(SPQR_ORDERING_DEFAULT), m_allow_tol(SPQR_DEFAULT_TOL), m_tolerance (NumTraits::epsilon()), m_cR(0), m_E(0), m_H(0), m_HPinv(0), m_HTau(0), m_useDefaultThreshold(true) { cholmod_l_start(&m_cc); } explicit SPQR(const _MatrixType& matrix) : m_analysisIsOk(false), m_factorizationIsOk(false), m_isRUpToDate(false), m_ordering(SPQR_ORDERING_DEFAULT), m_allow_tol(SPQR_DEFAULT_TOL), m_tolerance (NumTraits::epsilon()), m_cR(0), m_E(0), m_H(0), m_HPinv(0), m_HTau(0), m_useDefaultThreshold(true) { cholmod_l_start(&m_cc); compute(matrix); } ~SPQR() { SPQR_free(); cholmod_l_finish(&m_cc); } void SPQR_free() { cholmod_l_free_sparse(&m_H, &m_cc); cholmod_l_free_sparse(&m_cR, &m_cc); cholmod_l_free_dense(&m_HTau, &m_cc); std::free(m_E); std::free(m_HPinv); } void compute(const _MatrixType& matrix) { if(m_isInitialized) SPQR_free(); MatrixType mat(matrix); /* Compute the default threshold as in MatLab, see: * Tim Davis, "Algorithm 915, SuiteSparseQR: Multifrontal Multithreaded Rank-Revealing * Sparse QR Factorization, ACM Trans. on Math. Soft. 
38(1), 2011, Page 8:3 */ RealScalar pivotThreshold = m_tolerance; if(m_useDefaultThreshold) { RealScalar max2Norm = 0.0; for (int j = 0; j < mat.cols(); j++) max2Norm = numext::maxi(max2Norm, mat.col(j).norm()); if(max2Norm==RealScalar(0)) max2Norm = RealScalar(1); pivotThreshold = 20 * (mat.rows() + mat.cols()) * max2Norm * NumTraits::epsilon(); } cholmod_sparse A; A = viewAsCholmod(mat); m_rows = matrix.rows(); Index col = matrix.cols(); m_rank = SuiteSparseQR(m_ordering, pivotThreshold, col, &A, &m_cR, &m_E, &m_H, &m_HPinv, &m_HTau, &m_cc); if (!m_cR) { m_info = NumericalIssue; m_isInitialized = false; return; } m_info = Success; m_isInitialized = true; m_isRUpToDate = false; } /** * Get the number of rows of the input matrix and the Q matrix */ inline Index rows() const {return m_rows; } /** * Get the number of columns of the input matrix. */ inline Index cols() const { return m_cR->ncol; } template void _solve_impl(const MatrixBase &b, MatrixBase &dest) const { eigen_assert(m_isInitialized && " The QR factorization should be computed first, call compute()"); eigen_assert(b.cols()==1 && "This method is for vectors only"); //Compute Q^T * b typename Dest::PlainObject y, y2; y = matrixQ().transpose() * b; // Solves with the triangular matrix R Index rk = this->rank(); y2 = y; y.resize((std::max)(cols(),Index(y.rows())),y.cols()); y.topRows(rk) = this->matrixR().topLeftCorner(rk, rk).template triangularView().solve(y2.topRows(rk)); // Apply the column permutation // colsPermutation() performs a copy of the permutation, // so let's apply it manually: for(Index i = 0; i < rk; ++i) dest.row(m_E[i]) = y.row(i); for(Index i = rk; i < cols(); ++i) dest.row(m_E[i]).setZero(); // y.bottomRows(y.rows()-rk).setZero(); // dest = colsPermutation() * y.topRows(cols()); m_info = Success; } /** \returns the sparse triangular factor R. It is a sparse matrix */ const MatrixType matrixR() const { eigen_assert(m_isInitialized && " The QR factorization should be computed first, call compute()"); if(!m_isRUpToDate) { m_R = viewAsEigen(*m_cR); m_isRUpToDate = true; } return m_R; } /// Get an expression of the matrix Q SPQRMatrixQReturnType matrixQ() const { return SPQRMatrixQReturnType(*this); } /// Get the permutation that was applied to columns of A PermutationType colsPermutation() const { eigen_assert(m_isInitialized && "Decomposition is not initialized."); return PermutationType(m_E, m_cR->ncol); } /** * Gets the rank of the matrix. * It should be equal to matrixQR().cols if the matrix is full-rank */ Index rank() const { eigen_assert(m_isInitialized && "Decomposition is not initialized."); return m_cc.SPQR_istat[4]; } /// Set the fill-reducing ordering method to be used void setSPQROrdering(int ord) { m_ordering = ord;} /// Set the tolerance tol to treat columns with 2-norm < =tol as zero void setPivotThreshold(const RealScalar& tol) { m_useDefaultThreshold = false; m_tolerance = tol; } /** \returns a pointer to the SPQR workspace */ cholmod_common *cholmodCommon() const { return &m_cc; } /** \brief Reports whether previous computation was successful. 
* * \returns \c Success if computation was successful, * \c NumericalIssue if the sparse QR can not be computed */ ComputationInfo info() const { eigen_assert(m_isInitialized && "Decomposition is not initialized."); return m_info; } protected: bool m_analysisIsOk; bool m_factorizationIsOk; mutable bool m_isRUpToDate; mutable ComputationInfo m_info; int m_ordering; // Ordering method to use, see SPQR's manual int m_allow_tol; // Allow to use some tolerance during numerical factorization. RealScalar m_tolerance; // treat columns with 2-norm below this tolerance as zero mutable cholmod_sparse *m_cR; // The sparse R factor in cholmod format mutable MatrixType m_R; // The sparse matrix R in Eigen format mutable StorageIndex *m_E; // The permutation applied to columns mutable cholmod_sparse *m_H; //The householder vectors mutable StorageIndex *m_HPinv; // The row permutation of H mutable cholmod_dense *m_HTau; // The Householder coefficients mutable Index m_rank; // The rank of the matrix mutable cholmod_common m_cc; // Workspace and parameters bool m_useDefaultThreshold; // Use default threshold Index m_rows; template friend struct SPQR_QProduct; }; template struct SPQR_QProduct : ReturnByValue > { typedef typename SPQRType::Scalar Scalar; typedef typename SPQRType::StorageIndex StorageIndex; //Define the constructor to get reference to argument types SPQR_QProduct(const SPQRType& spqr, const Derived& other, bool transpose) : m_spqr(spqr),m_other(other),m_transpose(transpose) {} inline Index rows() const { return m_transpose ? m_spqr.rows() : m_spqr.cols(); } inline Index cols() const { return m_other.cols(); } // Assign to a vector template void evalTo(ResType& res) const { cholmod_dense y_cd; cholmod_dense *x_cd; int method = m_transpose ? SPQR_QTX : SPQR_QX; cholmod_common *cc = m_spqr.cholmodCommon(); y_cd = viewAsCholmod(m_other.const_cast_derived()); x_cd = SuiteSparseQR_qmult(method, m_spqr.m_H, m_spqr.m_HTau, m_spqr.m_HPinv, &y_cd, cc); res = Matrix::Map(reinterpret_cast(x_cd->x), x_cd->nrow, x_cd->ncol); cholmod_l_free_dense(&x_cd, cc); } const SPQRType& m_spqr; const Derived& m_other; bool m_transpose; }; template struct SPQRMatrixQReturnType{ SPQRMatrixQReturnType(const SPQRType& spqr) : m_spqr(spqr) {} template SPQR_QProduct operator*(const MatrixBase& other) { return SPQR_QProduct(m_spqr,other.derived(),false); } SPQRMatrixQTransposeReturnType adjoint() const { return SPQRMatrixQTransposeReturnType(m_spqr); } // To use for operations with the transpose of Q SPQRMatrixQTransposeReturnType transpose() const { return SPQRMatrixQTransposeReturnType(m_spqr); } const SPQRType& m_spqr; }; template struct SPQRMatrixQTransposeReturnType{ SPQRMatrixQTransposeReturnType(const SPQRType& spqr) : m_spqr(spqr) {} template SPQR_QProduct operator*(const MatrixBase& other) { return SPQR_QProduct(m_spqr,other.derived(), true); } const SPQRType& m_spqr; }; }// End namespace Eigen #endif piqp-octave/src/eigen/Eigen/src/SparseCore/0000755000175100001770000000000014107270226020416 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/SparseCore/ConservativeSparseSparseProduct.h0000644000175100001770000003155614107270226027146 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2015 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
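// Illustrative sketch for the SPQR solver defined above (not part of the
// original sources; it assumes SuiteSparseQR and the Eigen/SPQRSupport module
// are available). SPQR factors A*P = Q*R and can solve sparse least-squares
// problems with a rectangular A:
//
// \code
// #include <Eigen/SparseCore>
// #include <Eigen/SPQRSupport>
//
// Eigen::SparseMatrix<double> A(4,2);              // 4x2, column-major
// A.insert(0,0) = 1; A.insert(1,0) = 1;
// A.insert(2,1) = 1; A.insert(3,1) = 2;
// A.makeCompressed();
// Eigen::VectorXd b = Eigen::VectorXd::Ones(4);
//
// Eigen::SPQR<Eigen::SparseMatrix<double> > qr(A);
// if(qr.info() == Eigen::Success) {
//   Eigen::VectorXd x = qr.solve(b);               // least-squares solution
//   Eigen::Index r = qr.rank();                    // revealed numerical rank
// }
// \endcode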
#ifndef EIGEN_CONSERVATIVESPARSESPARSEPRODUCT_H #define EIGEN_CONSERVATIVESPARSESPARSEPRODUCT_H namespace Eigen { namespace internal { template static void conservative_sparse_sparse_product_impl(const Lhs& lhs, const Rhs& rhs, ResultType& res, bool sortedInsertion = false) { typedef typename remove_all::type::Scalar LhsScalar; typedef typename remove_all::type::Scalar RhsScalar; typedef typename remove_all::type::Scalar ResScalar; // make sure to call innerSize/outerSize since we fake the storage order. Index rows = lhs.innerSize(); Index cols = rhs.outerSize(); eigen_assert(lhs.outerSize() == rhs.innerSize()); ei_declare_aligned_stack_constructed_variable(bool, mask, rows, 0); ei_declare_aligned_stack_constructed_variable(ResScalar, values, rows, 0); ei_declare_aligned_stack_constructed_variable(Index, indices, rows, 0); std::memset(mask,0,sizeof(bool)*rows); evaluator lhsEval(lhs); evaluator rhsEval(rhs); // estimate the number of non zero entries // given a rhs column containing Y non zeros, we assume that the respective Y columns // of the lhs differs in average of one non zeros, thus the number of non zeros for // the product of a rhs column with the lhs is X+Y where X is the average number of non zero // per column of the lhs. // Therefore, we have nnz(lhs*rhs) = nnz(lhs) + nnz(rhs) Index estimated_nnz_prod = lhsEval.nonZerosEstimate() + rhsEval.nonZerosEstimate(); res.setZero(); res.reserve(Index(estimated_nnz_prod)); // we compute each column of the result, one after the other for (Index j=0; j::InnerIterator rhsIt(rhsEval, j); rhsIt; ++rhsIt) { RhsScalar y = rhsIt.value(); Index k = rhsIt.index(); for (typename evaluator::InnerIterator lhsIt(lhsEval, k); lhsIt; ++lhsIt) { Index i = lhsIt.index(); LhsScalar x = lhsIt.value(); if(!mask[i]) { mask[i] = true; values[i] = x * y; indices[nnz] = i; ++nnz; } else values[i] += x * y; } } if(!sortedInsertion) { // unordered insertion for(Index k=0; k use a quick sort // otherwise => loop through the entire vector // In order to avoid to perform an expensive log2 when the // result is clearly very sparse we use a linear bound up to 200. if((nnz<200 && nnz1) std::sort(indices,indices+nnz); for(Index k=0; k::Flags&RowMajorBit) ? RowMajor : ColMajor, int RhsStorageOrder = (traits::Flags&RowMajorBit) ? RowMajor : ColMajor, int ResStorageOrder = (traits::Flags&RowMajorBit) ? RowMajor : ColMajor> struct conservative_sparse_sparse_product_selector; template struct conservative_sparse_sparse_product_selector { typedef typename remove_all::type LhsCleaned; typedef typename LhsCleaned::Scalar Scalar; static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res) { typedef SparseMatrix RowMajorMatrix; typedef SparseMatrix ColMajorMatrixAux; typedef typename sparse_eval::type ColMajorMatrix; // If the result is tall and thin (in the extreme case a column vector) // then it is faster to sort the coefficients inplace instead of transposing twice. // FIXME, the following heuristic is probably not very good. 
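    // Illustrative note (not part of the original sources): with this heuristic,
    // a product of a 1000x50 lhs with a 50x3 rhs (tall, thin result) takes the
    // sorted-insertion branch below, whereas a 50x1000 by 1000x800 product takes
    // the transpose-twice branch. From the user side both paths are reached
    // through the ordinary sparse product, sketched here assuming A and B are
    // SparseMatrix<double> of compatible sizes:
    //
    // \code
    // Eigen::SparseMatrix<double> C = A * B;   // conservative sparse-sparse product
    // \endcode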
if(lhs.rows()>rhs.cols()) { ColMajorMatrix resCol(lhs.rows(),rhs.cols()); // perform sorted insertion internal::conservative_sparse_sparse_product_impl(lhs, rhs, resCol, true); res = resCol.markAsRValue(); } else { ColMajorMatrixAux resCol(lhs.rows(),rhs.cols()); // resort to transpose to sort the entries internal::conservative_sparse_sparse_product_impl(lhs, rhs, resCol, false); RowMajorMatrix resRow(resCol); res = resRow.markAsRValue(); } } }; template struct conservative_sparse_sparse_product_selector { static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res) { typedef SparseMatrix RowMajorRhs; typedef SparseMatrix RowMajorRes; RowMajorRhs rhsRow = rhs; RowMajorRes resRow(lhs.rows(), rhs.cols()); internal::conservative_sparse_sparse_product_impl(rhsRow, lhs, resRow); res = resRow; } }; template struct conservative_sparse_sparse_product_selector { static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res) { typedef SparseMatrix RowMajorLhs; typedef SparseMatrix RowMajorRes; RowMajorLhs lhsRow = lhs; RowMajorRes resRow(lhs.rows(), rhs.cols()); internal::conservative_sparse_sparse_product_impl(rhs, lhsRow, resRow); res = resRow; } }; template struct conservative_sparse_sparse_product_selector { static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res) { typedef SparseMatrix RowMajorMatrix; RowMajorMatrix resRow(lhs.rows(), rhs.cols()); internal::conservative_sparse_sparse_product_impl(rhs, lhs, resRow); res = resRow; } }; template struct conservative_sparse_sparse_product_selector { typedef typename traits::type>::Scalar Scalar; static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res) { typedef SparseMatrix ColMajorMatrix; ColMajorMatrix resCol(lhs.rows(), rhs.cols()); internal::conservative_sparse_sparse_product_impl(lhs, rhs, resCol); res = resCol; } }; template struct conservative_sparse_sparse_product_selector { static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res) { typedef SparseMatrix ColMajorLhs; typedef SparseMatrix ColMajorRes; ColMajorLhs lhsCol = lhs; ColMajorRes resCol(lhs.rows(), rhs.cols()); internal::conservative_sparse_sparse_product_impl(lhsCol, rhs, resCol); res = resCol; } }; template struct conservative_sparse_sparse_product_selector { static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res) { typedef SparseMatrix ColMajorRhs; typedef SparseMatrix ColMajorRes; ColMajorRhs rhsCol = rhs; ColMajorRes resCol(lhs.rows(), rhs.cols()); internal::conservative_sparse_sparse_product_impl(lhs, rhsCol, resCol); res = resCol; } }; template struct conservative_sparse_sparse_product_selector { static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res) { typedef SparseMatrix RowMajorMatrix; typedef SparseMatrix ColMajorMatrix; RowMajorMatrix resRow(lhs.rows(),rhs.cols()); internal::conservative_sparse_sparse_product_impl(rhs, lhs, resRow); // sort the non zeros: ColMajorMatrix resCol(resRow); res = resCol; } }; } // end namespace internal namespace internal { template static void sparse_sparse_to_dense_product_impl(const Lhs& lhs, const Rhs& rhs, ResultType& res) { typedef typename remove_all::type::Scalar LhsScalar; typedef typename remove_all::type::Scalar RhsScalar; Index cols = rhs.outerSize(); eigen_assert(lhs.outerSize() == rhs.innerSize()); evaluator lhsEval(lhs); evaluator rhsEval(rhs); for (Index j=0; j::InnerIterator rhsIt(rhsEval, j); rhsIt; ++rhsIt) { RhsScalar y = rhsIt.value(); Index k = rhsIt.index(); for (typename evaluator::InnerIterator lhsIt(lhsEval, k); lhsIt; ++lhsIt) { Index i = lhsIt.index(); LhsScalar x = 
lhsIt.value(); res.coeffRef(i,j) += x * y; } } } } } // end namespace internal namespace internal { template::Flags&RowMajorBit) ? RowMajor : ColMajor, int RhsStorageOrder = (traits::Flags&RowMajorBit) ? RowMajor : ColMajor> struct sparse_sparse_to_dense_product_selector; template struct sparse_sparse_to_dense_product_selector { static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res) { internal::sparse_sparse_to_dense_product_impl(lhs, rhs, res); } }; template struct sparse_sparse_to_dense_product_selector { static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res) { typedef SparseMatrix ColMajorLhs; ColMajorLhs lhsCol(lhs); internal::sparse_sparse_to_dense_product_impl(lhsCol, rhs, res); } }; template struct sparse_sparse_to_dense_product_selector { static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res) { typedef SparseMatrix ColMajorRhs; ColMajorRhs rhsCol(rhs); internal::sparse_sparse_to_dense_product_impl(lhs, rhsCol, res); } }; template struct sparse_sparse_to_dense_product_selector { static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res) { Transpose trRes(res); internal::sparse_sparse_to_dense_product_impl >(rhs, lhs, trRes); } }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_CONSERVATIVESPARSESPARSEPRODUCT_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseSolverBase.h0000644000175100001770000001051014107270226024007 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSESOLVERBASE_H #define EIGEN_SPARSESOLVERBASE_H namespace Eigen { namespace internal { /** \internal * Helper functions to solve with a sparse right-hand-side and result. * The rhs is decomposed into small vertical panels which are solved through dense temporaries. */ template typename enable_if::type solve_sparse_through_dense_panels(const Decomposition &dec, const Rhs& rhs, Dest &dest) { EIGEN_STATIC_ASSERT((Dest::Flags&RowMajorBit)==0,THIS_METHOD_IS_ONLY_FOR_COLUMN_MAJOR_MATRICES); typedef typename Dest::Scalar DestScalar; // we process the sparse rhs per block of NbColsAtOnce columns temporarily stored into a dense matrix. static const Index NbColsAtOnce = 4; Index rhsCols = rhs.cols(); Index size = rhs.rows(); // the temporary matrices do not need more columns than NbColsAtOnce: Index tmpCols = (std::min)(rhsCols, NbColsAtOnce); Eigen::Matrix tmp(size,tmpCols); Eigen::Matrix tmpX(size,tmpCols); for(Index k=0; k(rhsCols-k, NbColsAtOnce); tmp.leftCols(actualCols) = rhs.middleCols(k,actualCols); tmpX.leftCols(actualCols) = dec.solve(tmp.leftCols(actualCols)); dest.middleCols(k,actualCols) = tmpX.leftCols(actualCols).sparseView(); } } // Overload for vector as rhs template typename enable_if::type solve_sparse_through_dense_panels(const Decomposition &dec, const Rhs& rhs, Dest &dest) { typedef typename Dest::Scalar DestScalar; Index size = rhs.rows(); Eigen::Matrix rhs_dense(rhs); Eigen::Matrix dest_dense(size); dest_dense = dec.solve(rhs_dense); dest = dest_dense.sparseView(); } } // end namespace internal /** \class SparseSolverBase * \ingroup SparseCore_Module * \brief A base class for sparse solvers * * \tparam Derived the actual type of the solver. 
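 *
 * An illustrative sketch (added for documentation purposes only): every solver
 * deriving from this base exposes the same solve() interface for dense and
 * sparse right-hand sides; a sparse rhs is processed through dense panels as
 * implemented above. Here SimplicialLDLT stands in for any concrete solver, and
 * A, b, B are assumed to be a compressed symmetric positive definite
 * SparseMatrix<double>, a VectorXd and a SparseMatrix<double> respectively:
 * \code
 * Eigen::SimplicialLDLT<Eigen::SparseMatrix<double> > solver(A);
 * Eigen::VectorXd x = solver.solve(b);                  // dense rhs
 * Eigen::SparseMatrix<double> X = solver.solve(B);      // sparse rhs, panel-wise
 * \endcode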
* */ template class SparseSolverBase : internal::noncopyable { public: /** Default constructor */ SparseSolverBase() : m_isInitialized(false) {} ~SparseSolverBase() {} Derived& derived() { return *static_cast(this); } const Derived& derived() const { return *static_cast(this); } /** \returns an expression of the solution x of \f$ A x = b \f$ using the current decomposition of A. * * \sa compute() */ template inline const Solve solve(const MatrixBase& b) const { eigen_assert(m_isInitialized && "Solver is not initialized."); eigen_assert(derived().rows()==b.rows() && "solve(): invalid number of rows of the right hand side matrix b"); return Solve(derived(), b.derived()); } /** \returns an expression of the solution x of \f$ A x = b \f$ using the current decomposition of A. * * \sa compute() */ template inline const Solve solve(const SparseMatrixBase& b) const { eigen_assert(m_isInitialized && "Solver is not initialized."); eigen_assert(derived().rows()==b.rows() && "solve(): invalid number of rows of the right hand side matrix b"); return Solve(derived(), b.derived()); } #ifndef EIGEN_PARSED_BY_DOXYGEN /** \internal default implementation of solving with a sparse rhs */ template void _solve_impl(const SparseMatrixBase &b, SparseMatrixBase &dest) const { internal::solve_sparse_through_dense_panels(derived(), b.derived(), dest.derived()); } #endif // EIGEN_PARSED_BY_DOXYGEN protected: mutable bool m_isInitialized; }; } // end namespace Eigen #endif // EIGEN_SPARSESOLVERBASE_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseCwiseBinaryOp.h0000644000175100001770000006166414107270226024500 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSE_CWISE_BINARY_OP_H #define EIGEN_SPARSE_CWISE_BINARY_OP_H namespace Eigen { // Here we have to handle 3 cases: // 1 - sparse op dense // 2 - dense op sparse // 3 - sparse op sparse // We also need to implement a 4th iterator for: // 4 - dense op dense // Finally, we also need to distinguish between the product and other operations : // configuration returned mode // 1 - sparse op dense product sparse // generic dense // 2 - dense op sparse product sparse // generic dense // 3 - sparse op sparse product sparse // generic sparse // 4 - dense op dense product dense // generic dense // // TODO to ease compiler job, we could specialize product/quotient with a scalar // and fallback to cwise-unary evaluator using bind1st_op and bind2nd_op. 
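// Illustrative sketch (not part of the original sources) of the configurations
// listed above, assuming S1 and S2 are SparseMatrix<double> and D is a MatrixXd
// of matching sizes:
//
// \code
// Eigen::SparseMatrix<double> S3 = S1 + S2;              // sparse op sparse -> sparse
// Eigen::MatrixXd D2 = S1 + D;                           // sparse op dense  -> dense
// Eigen::SparseMatrix<double> S4 = S1.cwiseProduct(D);   // product stays sparse
// \endcode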
template class CwiseBinaryOpImpl : public SparseMatrixBase > { public: typedef CwiseBinaryOp Derived; typedef SparseMatrixBase Base; EIGEN_SPARSE_PUBLIC_INTERFACE(Derived) CwiseBinaryOpImpl() { EIGEN_STATIC_ASSERT(( (!internal::is_same::StorageKind, typename internal::traits::StorageKind>::value) || ((internal::evaluator::Flags&RowMajorBit) == (internal::evaluator::Flags&RowMajorBit))), THE_STORAGE_ORDER_OF_BOTH_SIDES_MUST_MATCH); } }; namespace internal { // Generic "sparse OP sparse" template struct binary_sparse_evaluator; template struct binary_evaluator, IteratorBased, IteratorBased> : evaluator_base > { protected: typedef typename evaluator::InnerIterator LhsIterator; typedef typename evaluator::InnerIterator RhsIterator; typedef CwiseBinaryOp XprType; typedef typename traits::Scalar Scalar; typedef typename XprType::StorageIndex StorageIndex; public: class InnerIterator { public: EIGEN_STRONG_INLINE InnerIterator(const binary_evaluator& aEval, Index outer) : m_lhsIter(aEval.m_lhsImpl,outer), m_rhsIter(aEval.m_rhsImpl,outer), m_functor(aEval.m_functor) { this->operator++(); } EIGEN_STRONG_INLINE InnerIterator& operator++() { if (m_lhsIter && m_rhsIter && (m_lhsIter.index() == m_rhsIter.index())) { m_id = m_lhsIter.index(); m_value = m_functor(m_lhsIter.value(), m_rhsIter.value()); ++m_lhsIter; ++m_rhsIter; } else if (m_lhsIter && (!m_rhsIter || (m_lhsIter.index() < m_rhsIter.index()))) { m_id = m_lhsIter.index(); m_value = m_functor(m_lhsIter.value(), Scalar(0)); ++m_lhsIter; } else if (m_rhsIter && (!m_lhsIter || (m_lhsIter.index() > m_rhsIter.index()))) { m_id = m_rhsIter.index(); m_value = m_functor(Scalar(0), m_rhsIter.value()); ++m_rhsIter; } else { m_value = Scalar(0); // this is to avoid a compilation warning m_id = -1; } return *this; } EIGEN_STRONG_INLINE Scalar value() const { return m_value; } EIGEN_STRONG_INLINE StorageIndex index() const { return m_id; } EIGEN_STRONG_INLINE Index outer() const { return m_lhsIter.outer(); } EIGEN_STRONG_INLINE Index row() const { return Lhs::IsRowMajor ? m_lhsIter.row() : index(); } EIGEN_STRONG_INLINE Index col() const { return Lhs::IsRowMajor ? 
index() : m_lhsIter.col(); } EIGEN_STRONG_INLINE operator bool() const { return m_id>=0; } protected: LhsIterator m_lhsIter; RhsIterator m_rhsIter; const BinaryOp& m_functor; Scalar m_value; StorageIndex m_id; }; enum { CoeffReadCost = int(evaluator::CoeffReadCost) + int(evaluator::CoeffReadCost) + int(functor_traits::Cost), Flags = XprType::Flags }; explicit binary_evaluator(const XprType& xpr) : m_functor(xpr.functor()), m_lhsImpl(xpr.lhs()), m_rhsImpl(xpr.rhs()) { EIGEN_INTERNAL_CHECK_COST_VALUE(functor_traits::Cost); EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } inline Index nonZerosEstimate() const { return m_lhsImpl.nonZerosEstimate() + m_rhsImpl.nonZerosEstimate(); } protected: const BinaryOp m_functor; evaluator m_lhsImpl; evaluator m_rhsImpl; }; // dense op sparse template struct binary_evaluator, IndexBased, IteratorBased> : evaluator_base > { protected: typedef typename evaluator::InnerIterator RhsIterator; typedef CwiseBinaryOp XprType; typedef typename traits::Scalar Scalar; typedef typename XprType::StorageIndex StorageIndex; public: class InnerIterator { enum { IsRowMajor = (int(Rhs::Flags)&RowMajorBit)==RowMajorBit }; public: EIGEN_STRONG_INLINE InnerIterator(const binary_evaluator& aEval, Index outer) : m_lhsEval(aEval.m_lhsImpl), m_rhsIter(aEval.m_rhsImpl,outer), m_functor(aEval.m_functor), m_value(0), m_id(-1), m_innerSize(aEval.m_expr.rhs().innerSize()) { this->operator++(); } EIGEN_STRONG_INLINE InnerIterator& operator++() { ++m_id; if(m_id &m_lhsEval; RhsIterator m_rhsIter; const BinaryOp& m_functor; Scalar m_value; StorageIndex m_id; StorageIndex m_innerSize; }; enum { CoeffReadCost = int(evaluator::CoeffReadCost) + int(evaluator::CoeffReadCost) + int(functor_traits::Cost), Flags = XprType::Flags }; explicit binary_evaluator(const XprType& xpr) : m_functor(xpr.functor()), m_lhsImpl(xpr.lhs()), m_rhsImpl(xpr.rhs()), m_expr(xpr) { EIGEN_INTERNAL_CHECK_COST_VALUE(functor_traits::Cost); EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } inline Index nonZerosEstimate() const { return m_expr.size(); } protected: const BinaryOp m_functor; evaluator m_lhsImpl; evaluator m_rhsImpl; const XprType &m_expr; }; // sparse op dense template struct binary_evaluator, IteratorBased, IndexBased> : evaluator_base > { protected: typedef typename evaluator::InnerIterator LhsIterator; typedef CwiseBinaryOp XprType; typedef typename traits::Scalar Scalar; typedef typename XprType::StorageIndex StorageIndex; public: class InnerIterator { enum { IsRowMajor = (int(Lhs::Flags)&RowMajorBit)==RowMajorBit }; public: EIGEN_STRONG_INLINE InnerIterator(const binary_evaluator& aEval, Index outer) : m_lhsIter(aEval.m_lhsImpl,outer), m_rhsEval(aEval.m_rhsImpl), m_functor(aEval.m_functor), m_value(0), m_id(-1), m_innerSize(aEval.m_expr.lhs().innerSize()) { this->operator++(); } EIGEN_STRONG_INLINE InnerIterator& operator++() { ++m_id; if(m_id &m_rhsEval; const BinaryOp& m_functor; Scalar m_value; StorageIndex m_id; StorageIndex m_innerSize; }; enum { CoeffReadCost = int(evaluator::CoeffReadCost) + int(evaluator::CoeffReadCost) + int(functor_traits::Cost), Flags = XprType::Flags }; explicit binary_evaluator(const XprType& xpr) : m_functor(xpr.functor()), m_lhsImpl(xpr.lhs()), m_rhsImpl(xpr.rhs()), m_expr(xpr) { EIGEN_INTERNAL_CHECK_COST_VALUE(functor_traits::Cost); EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } inline Index nonZerosEstimate() const { return m_expr.size(); } protected: const BinaryOp m_functor; evaluator m_lhsImpl; evaluator m_rhsImpl; const XprType &m_expr; }; template::Kind, 
typename RhsKind = typename evaluator_traits::Kind, typename LhsScalar = typename traits::Scalar, typename RhsScalar = typename traits::Scalar> struct sparse_conjunction_evaluator; // "sparse .* sparse" template struct binary_evaluator, Lhs, Rhs>, IteratorBased, IteratorBased> : sparse_conjunction_evaluator, Lhs, Rhs> > { typedef CwiseBinaryOp, Lhs, Rhs> XprType; typedef sparse_conjunction_evaluator Base; explicit binary_evaluator(const XprType& xpr) : Base(xpr) {} }; // "dense .* sparse" template struct binary_evaluator, Lhs, Rhs>, IndexBased, IteratorBased> : sparse_conjunction_evaluator, Lhs, Rhs> > { typedef CwiseBinaryOp, Lhs, Rhs> XprType; typedef sparse_conjunction_evaluator Base; explicit binary_evaluator(const XprType& xpr) : Base(xpr) {} }; // "sparse .* dense" template struct binary_evaluator, Lhs, Rhs>, IteratorBased, IndexBased> : sparse_conjunction_evaluator, Lhs, Rhs> > { typedef CwiseBinaryOp, Lhs, Rhs> XprType; typedef sparse_conjunction_evaluator Base; explicit binary_evaluator(const XprType& xpr) : Base(xpr) {} }; // "sparse ./ dense" template struct binary_evaluator, Lhs, Rhs>, IteratorBased, IndexBased> : sparse_conjunction_evaluator, Lhs, Rhs> > { typedef CwiseBinaryOp, Lhs, Rhs> XprType; typedef sparse_conjunction_evaluator Base; explicit binary_evaluator(const XprType& xpr) : Base(xpr) {} }; // "sparse && sparse" template struct binary_evaluator, IteratorBased, IteratorBased> : sparse_conjunction_evaluator > { typedef CwiseBinaryOp XprType; typedef sparse_conjunction_evaluator Base; explicit binary_evaluator(const XprType& xpr) : Base(xpr) {} }; // "dense && sparse" template struct binary_evaluator, IndexBased, IteratorBased> : sparse_conjunction_evaluator > { typedef CwiseBinaryOp XprType; typedef sparse_conjunction_evaluator Base; explicit binary_evaluator(const XprType& xpr) : Base(xpr) {} }; // "sparse && dense" template struct binary_evaluator, IteratorBased, IndexBased> : sparse_conjunction_evaluator > { typedef CwiseBinaryOp XprType; typedef sparse_conjunction_evaluator Base; explicit binary_evaluator(const XprType& xpr) : Base(xpr) {} }; // "sparse ^ sparse" template struct sparse_conjunction_evaluator : evaluator_base { protected: typedef typename XprType::Functor BinaryOp; typedef typename XprType::Lhs LhsArg; typedef typename XprType::Rhs RhsArg; typedef typename evaluator::InnerIterator LhsIterator; typedef typename evaluator::InnerIterator RhsIterator; typedef typename XprType::StorageIndex StorageIndex; typedef typename traits::Scalar Scalar; public: class InnerIterator { public: EIGEN_STRONG_INLINE InnerIterator(const sparse_conjunction_evaluator& aEval, Index outer) : m_lhsIter(aEval.m_lhsImpl,outer), m_rhsIter(aEval.m_rhsImpl,outer), m_functor(aEval.m_functor) { while (m_lhsIter && m_rhsIter && (m_lhsIter.index() != m_rhsIter.index())) { if (m_lhsIter.index() < m_rhsIter.index()) ++m_lhsIter; else ++m_rhsIter; } } EIGEN_STRONG_INLINE InnerIterator& operator++() { ++m_lhsIter; ++m_rhsIter; while (m_lhsIter && m_rhsIter && (m_lhsIter.index() != m_rhsIter.index())) { if (m_lhsIter.index() < m_rhsIter.index()) ++m_lhsIter; else ++m_rhsIter; } return *this; } EIGEN_STRONG_INLINE Scalar value() const { return m_functor(m_lhsIter.value(), m_rhsIter.value()); } EIGEN_STRONG_INLINE StorageIndex index() const { return m_lhsIter.index(); } EIGEN_STRONG_INLINE Index outer() const { return m_lhsIter.outer(); } EIGEN_STRONG_INLINE Index row() const { return m_lhsIter.row(); } EIGEN_STRONG_INLINE Index col() const { return m_lhsIter.col(); } EIGEN_STRONG_INLINE 
operator bool() const { return (m_lhsIter && m_rhsIter); } protected: LhsIterator m_lhsIter; RhsIterator m_rhsIter; const BinaryOp& m_functor; }; enum { CoeffReadCost = int(evaluator::CoeffReadCost) + int(evaluator::CoeffReadCost) + int(functor_traits::Cost), Flags = XprType::Flags }; explicit sparse_conjunction_evaluator(const XprType& xpr) : m_functor(xpr.functor()), m_lhsImpl(xpr.lhs()), m_rhsImpl(xpr.rhs()) { EIGEN_INTERNAL_CHECK_COST_VALUE(functor_traits::Cost); EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } inline Index nonZerosEstimate() const { return (std::min)(m_lhsImpl.nonZerosEstimate(), m_rhsImpl.nonZerosEstimate()); } protected: const BinaryOp m_functor; evaluator m_lhsImpl; evaluator m_rhsImpl; }; // "dense ^ sparse" template struct sparse_conjunction_evaluator : evaluator_base { protected: typedef typename XprType::Functor BinaryOp; typedef typename XprType::Lhs LhsArg; typedef typename XprType::Rhs RhsArg; typedef evaluator LhsEvaluator; typedef typename evaluator::InnerIterator RhsIterator; typedef typename XprType::StorageIndex StorageIndex; typedef typename traits::Scalar Scalar; public: class InnerIterator { enum { IsRowMajor = (int(RhsArg::Flags)&RowMajorBit)==RowMajorBit }; public: EIGEN_STRONG_INLINE InnerIterator(const sparse_conjunction_evaluator& aEval, Index outer) : m_lhsEval(aEval.m_lhsImpl), m_rhsIter(aEval.m_rhsImpl,outer), m_functor(aEval.m_functor), m_outer(outer) {} EIGEN_STRONG_INLINE InnerIterator& operator++() { ++m_rhsIter; return *this; } EIGEN_STRONG_INLINE Scalar value() const { return m_functor(m_lhsEval.coeff(IsRowMajor?m_outer:m_rhsIter.index(),IsRowMajor?m_rhsIter.index():m_outer), m_rhsIter.value()); } EIGEN_STRONG_INLINE StorageIndex index() const { return m_rhsIter.index(); } EIGEN_STRONG_INLINE Index outer() const { return m_rhsIter.outer(); } EIGEN_STRONG_INLINE Index row() const { return m_rhsIter.row(); } EIGEN_STRONG_INLINE Index col() const { return m_rhsIter.col(); } EIGEN_STRONG_INLINE operator bool() const { return m_rhsIter; } protected: const LhsEvaluator &m_lhsEval; RhsIterator m_rhsIter; const BinaryOp& m_functor; const Index m_outer; }; enum { CoeffReadCost = int(evaluator::CoeffReadCost) + int(evaluator::CoeffReadCost) + int(functor_traits::Cost), Flags = XprType::Flags }; explicit sparse_conjunction_evaluator(const XprType& xpr) : m_functor(xpr.functor()), m_lhsImpl(xpr.lhs()), m_rhsImpl(xpr.rhs()) { EIGEN_INTERNAL_CHECK_COST_VALUE(functor_traits::Cost); EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } inline Index nonZerosEstimate() const { return m_rhsImpl.nonZerosEstimate(); } protected: const BinaryOp m_functor; evaluator m_lhsImpl; evaluator m_rhsImpl; }; // "sparse ^ dense" template struct sparse_conjunction_evaluator : evaluator_base { protected: typedef typename XprType::Functor BinaryOp; typedef typename XprType::Lhs LhsArg; typedef typename XprType::Rhs RhsArg; typedef typename evaluator::InnerIterator LhsIterator; typedef evaluator RhsEvaluator; typedef typename XprType::StorageIndex StorageIndex; typedef typename traits::Scalar Scalar; public: class InnerIterator { enum { IsRowMajor = (int(LhsArg::Flags)&RowMajorBit)==RowMajorBit }; public: EIGEN_STRONG_INLINE InnerIterator(const sparse_conjunction_evaluator& aEval, Index outer) : m_lhsIter(aEval.m_lhsImpl,outer), m_rhsEval(aEval.m_rhsImpl), m_functor(aEval.m_functor), m_outer(outer) {} EIGEN_STRONG_INLINE InnerIterator& operator++() { ++m_lhsIter; return *this; } EIGEN_STRONG_INLINE Scalar value() const { return m_functor(m_lhsIter.value(), 
m_rhsEval.coeff(IsRowMajor?m_outer:m_lhsIter.index(),IsRowMajor?m_lhsIter.index():m_outer)); } EIGEN_STRONG_INLINE StorageIndex index() const { return m_lhsIter.index(); } EIGEN_STRONG_INLINE Index outer() const { return m_lhsIter.outer(); } EIGEN_STRONG_INLINE Index row() const { return m_lhsIter.row(); } EIGEN_STRONG_INLINE Index col() const { return m_lhsIter.col(); } EIGEN_STRONG_INLINE operator bool() const { return m_lhsIter; } protected: LhsIterator m_lhsIter; const evaluator &m_rhsEval; const BinaryOp& m_functor; const Index m_outer; }; enum { CoeffReadCost = int(evaluator::CoeffReadCost) + int(evaluator::CoeffReadCost) + int(functor_traits::Cost), Flags = XprType::Flags }; explicit sparse_conjunction_evaluator(const XprType& xpr) : m_functor(xpr.functor()), m_lhsImpl(xpr.lhs()), m_rhsImpl(xpr.rhs()) { EIGEN_INTERNAL_CHECK_COST_VALUE(functor_traits::Cost); EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } inline Index nonZerosEstimate() const { return m_lhsImpl.nonZerosEstimate(); } protected: const BinaryOp m_functor; evaluator m_lhsImpl; evaluator m_rhsImpl; }; } /*************************************************************************** * Implementation of SparseMatrixBase and SparseCwise functions/operators ***************************************************************************/ template template Derived& SparseMatrixBase::operator+=(const EigenBase &other) { call_assignment(derived(), other.derived(), internal::add_assign_op()); return derived(); } template template Derived& SparseMatrixBase::operator-=(const EigenBase &other) { call_assignment(derived(), other.derived(), internal::assign_op()); return derived(); } template template EIGEN_STRONG_INLINE Derived & SparseMatrixBase::operator-=(const SparseMatrixBase &other) { return derived() = derived() - other.derived(); } template template EIGEN_STRONG_INLINE Derived & SparseMatrixBase::operator+=(const SparseMatrixBase& other) { return derived() = derived() + other.derived(); } template template Derived& SparseMatrixBase::operator+=(const DiagonalBase& other) { call_assignment_no_alias(derived(), other.derived(), internal::add_assign_op()); return derived(); } template template Derived& SparseMatrixBase::operator-=(const DiagonalBase& other) { call_assignment_no_alias(derived(), other.derived(), internal::sub_assign_op()); return derived(); } template template EIGEN_STRONG_INLINE const typename SparseMatrixBase::template CwiseProductDenseReturnType::Type SparseMatrixBase::cwiseProduct(const MatrixBase &other) const { return typename CwiseProductDenseReturnType::Type(derived(), other.derived()); } template EIGEN_STRONG_INLINE const CwiseBinaryOp, const DenseDerived, const SparseDerived> operator+(const MatrixBase &a, const SparseMatrixBase &b) { return CwiseBinaryOp, const DenseDerived, const SparseDerived>(a.derived(), b.derived()); } template EIGEN_STRONG_INLINE const CwiseBinaryOp, const SparseDerived, const DenseDerived> operator+(const SparseMatrixBase &a, const MatrixBase &b) { return CwiseBinaryOp, const SparseDerived, const DenseDerived>(a.derived(), b.derived()); } template EIGEN_STRONG_INLINE const CwiseBinaryOp, const DenseDerived, const SparseDerived> operator-(const MatrixBase &a, const SparseMatrixBase &b) { return CwiseBinaryOp, const DenseDerived, const SparseDerived>(a.derived(), b.derived()); } template EIGEN_STRONG_INLINE const CwiseBinaryOp, const SparseDerived, const DenseDerived> operator-(const SparseMatrixBase &a, const MatrixBase &b) { return CwiseBinaryOp, const SparseDerived, const 
DenseDerived>(a.derived(), b.derived()); } } // end namespace Eigen #endif // EIGEN_SPARSE_CWISE_BINARY_OP_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseMatrix.h0000644000175100001770000016020314107270226023213 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSEMATRIX_H #define EIGEN_SPARSEMATRIX_H namespace Eigen { /** \ingroup SparseCore_Module * * \class SparseMatrix * * \brief A versatile sparse matrix representation * * This class implements a more versatile variant of the common \em compressed row/column storage format. * Each column's (resp. row's) non-zeros are stored as pairs of value and associated row (resp. column) index. * All the non-zeros are stored in a single large buffer. Unlike the \em compressed format, there might be extra * space in between the non-zeros of two successive columns (resp. rows) such that insertion of a new non-zero * can be done with limited memory reallocation and copies. * * A call to the function makeCompressed() turns the matrix into the standard \em compressed format * compatible with many libraries. * * More details on this storage scheme are given in the \ref TutorialSparse "manual pages". * * \tparam _Scalar the scalar type, i.e. the type of the coefficients * \tparam _Options Union of bit flags controlling the storage scheme. Currently the only possibility * is ColMajor or RowMajor. The default is 0 which means column-major. * \tparam _StorageIndex the type of the indices. It has to be a \b signed type (e.g., short, int, std::ptrdiff_t). Default is \c int. * * \warning In %Eigen 3.2, the undocumented type \c SparseMatrix::Index was improperly defined as the storage index type (e.g., int), * whereas it is now (starting from %Eigen 3.3) deprecated and always defined as Eigen::Index. * Code making use of \c SparseMatrix::Index will thus likely have to be changed to use \c SparseMatrix::StorageIndex instead. * * This class can be extended with the help of the plugin mechanism described on the page * \ref TopicCustomizing_Plugins by defining the preprocessor symbol \c EIGEN_SPARSEMATRIX_PLUGIN.
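 *
 * As an illustrative sketch (not part of the upstream documentation; the sizes and
 * values below are arbitrary), a typical fill/compress/use cycle looks like:
 * \code
 * Eigen::SparseMatrix<double> A(1000, 1000);        // column-major by default
 * A.reserve(Eigen::VectorXi::Constant(1000, 4));    // expect roughly 4 non-zeros per column
 * A.insert(3, 7) = 1.5;                             // random insertion of a new element
 * A.coeffRef(3, 7) += 0.5;                          // update (or create) an element
 * A.makeCompressed();                               // switch to the standard compressed format
 * \endcode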
*/ namespace internal { template struct traits > { typedef _Scalar Scalar; typedef _StorageIndex StorageIndex; typedef Sparse StorageKind; typedef MatrixXpr XprKind; enum { RowsAtCompileTime = Dynamic, ColsAtCompileTime = Dynamic, MaxRowsAtCompileTime = Dynamic, MaxColsAtCompileTime = Dynamic, Flags = _Options | NestByRefBit | LvalueBit | CompressedAccessBit, SupportedAccessPatterns = InnerRandomAccessPattern }; }; template struct traits, DiagIndex> > { typedef SparseMatrix<_Scalar, _Options, _StorageIndex> MatrixType; typedef typename ref_selector::type MatrixTypeNested; typedef typename remove_reference::type _MatrixTypeNested; typedef _Scalar Scalar; typedef Dense StorageKind; typedef _StorageIndex StorageIndex; typedef MatrixXpr XprKind; enum { RowsAtCompileTime = Dynamic, ColsAtCompileTime = 1, MaxRowsAtCompileTime = Dynamic, MaxColsAtCompileTime = 1, Flags = LvalueBit }; }; template struct traits, DiagIndex> > : public traits, DiagIndex> > { enum { Flags = 0 }; }; } // end namespace internal template class SparseMatrix : public SparseCompressedBase > { typedef SparseCompressedBase Base; using Base::convert_index; friend class SparseVector<_Scalar,0,_StorageIndex>; template friend struct internal::Assignment; public: using Base::isCompressed; using Base::nonZeros; EIGEN_SPARSE_PUBLIC_INTERFACE(SparseMatrix) using Base::operator+=; using Base::operator-=; typedef MappedSparseMatrix Map; typedef Diagonal DiagonalReturnType; typedef Diagonal ConstDiagonalReturnType; typedef typename Base::InnerIterator InnerIterator; typedef typename Base::ReverseInnerIterator ReverseInnerIterator; using Base::IsRowMajor; typedef internal::CompressedStorage Storage; enum { Options = _Options }; typedef typename Base::IndexVector IndexVector; typedef typename Base::ScalarVector ScalarVector; protected: typedef SparseMatrix TransposedSparseMatrix; Index m_outerSize; Index m_innerSize; StorageIndex* m_outerIndex; StorageIndex* m_innerNonZeros; // optional, if null then the data is compressed Storage m_data; public: /** \returns the number of rows of the matrix */ inline Index rows() const { return IsRowMajor ? m_outerSize : m_innerSize; } /** \returns the number of columns of the matrix */ inline Index cols() const { return IsRowMajor ? m_innerSize : m_outerSize; } /** \returns the number of rows (resp. columns) of the matrix if the storage order column major (resp. row major) */ inline Index innerSize() const { return m_innerSize; } /** \returns the number of columns (resp. rows) of the matrix if the storage order column major (resp. row major) */ inline Index outerSize() const { return m_outerSize; } /** \returns a const pointer to the array of values. * This function is aimed at interoperability with other libraries. * \sa innerIndexPtr(), outerIndexPtr() */ inline const Scalar* valuePtr() const { return m_data.valuePtr(); } /** \returns a non-const pointer to the array of values. * This function is aimed at interoperability with other libraries. * \sa innerIndexPtr(), outerIndexPtr() */ inline Scalar* valuePtr() { return m_data.valuePtr(); } /** \returns a const pointer to the array of inner indices. * This function is aimed at interoperability with other libraries. * \sa valuePtr(), outerIndexPtr() */ inline const StorageIndex* innerIndexPtr() const { return m_data.indexPtr(); } /** \returns a non-const pointer to the array of inner indices. * This function is aimed at interoperability with other libraries. 
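 *
 * As an illustrative sketch (not from the upstream docs; \c dumpCSC is a hypothetical helper
 * and assumes \c <iostream> is included), these raw pointers allow walking a compressed,
 * column-major matrix in plain CSC fashion:
 * \code
 * void dumpCSC(const Eigen::SparseMatrix<double>& A) {
 *   // assumes A.makeCompressed() has been called
 *   for(int j = 0; j < A.outerSize(); ++j)
 *     for(int p = A.outerIndexPtr()[j]; p < A.outerIndexPtr()[j+1]; ++p)
 *       std::cout << A.innerIndexPtr()[p] << ", " << j << " : " << A.valuePtr()[p] << "\n";
 * }
 * \endcode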
* \sa valuePtr(), outerIndexPtr() */ inline StorageIndex* innerIndexPtr() { return m_data.indexPtr(); } /** \returns a const pointer to the array of the starting positions of the inner vectors. * This function is aimed at interoperability with other libraries. * \sa valuePtr(), innerIndexPtr() */ inline const StorageIndex* outerIndexPtr() const { return m_outerIndex; } /** \returns a non-const pointer to the array of the starting positions of the inner vectors. * This function is aimed at interoperability with other libraries. * \sa valuePtr(), innerIndexPtr() */ inline StorageIndex* outerIndexPtr() { return m_outerIndex; } /** \returns a const pointer to the array of the number of non zeros of the inner vectors. * This function is aimed at interoperability with other libraries. * \warning it returns the null pointer 0 in compressed mode */ inline const StorageIndex* innerNonZeroPtr() const { return m_innerNonZeros; } /** \returns a non-const pointer to the array of the number of non zeros of the inner vectors. * This function is aimed at interoperability with other libraries. * \warning it returns the null pointer 0 in compressed mode */ inline StorageIndex* innerNonZeroPtr() { return m_innerNonZeros; } /** \internal */ inline Storage& data() { return m_data; } /** \internal */ inline const Storage& data() const { return m_data; } /** \returns the value of the matrix at position \a i, \a j * This function returns Scalar(0) if the element is an explicit \em zero */ inline Scalar coeff(Index row, Index col) const { eigen_assert(row>=0 && row=0 && col=0 && row=0 && col=start && "you probably called coeffRef on a non finalized matrix"); if(end<=start) return insert(row,col); const Index p = m_data.searchLowerIndex(start,end-1,StorageIndex(inner)); if((pinnerSize() non zeros if reserve(Index) has not been called earlier. * In this case, the insertion procedure is optimized for a \e sequential insertion mode where elements are assumed to be * inserted by increasing outer-indices. * * If that's not the case, then it is strongly recommended to either use a triplet-list to assemble the matrix, or to first * call reserve(const SizesType &) to reserve the appropriate number of non-zero elements per inner vector. * * Assuming memory has been appropriately reserved, this function performs a sorted insertion in O(1) * if the elements of each inner vector are inserted in increasing inner index order, and in O(nnz_j) for a random insertion. * */ Scalar& insert(Index row, Index col); public: /** Removes all non zeros but keep allocated memory * * This function does not free the currently allocated memory. To release as much as memory as possible, * call \code mat.data().squeeze(); \endcode after resizing it. * * \sa resize(Index,Index), data() */ inline void setZero() { m_data.clear(); memset(m_outerIndex, 0, (m_outerSize+1)*sizeof(StorageIndex)); if(m_innerNonZeros) memset(m_innerNonZeros, 0, (m_outerSize)*sizeof(StorageIndex)); } /** Preallocates \a reserveSize non zeros. * * Precondition: the matrix must be in compressed mode. */ inline void reserve(Index reserveSize) { eigen_assert(isCompressed() && "This function does not make sense in non compressed mode."); m_data.reserve(reserveSize); } #ifdef EIGEN_PARSED_BY_DOXYGEN /** Preallocates \a reserveSize[\c j] non zeros for each column (resp. row) \c j. * * This function turns the matrix in non-compressed mode. 
* * The type \c SizesType must expose the following interface: \code typedef value_type; const value_type& operator[](i) const; \endcode * for \c i in the [0,this->outerSize()[ range. * Typical choices include std::vector, Eigen::VectorXi, Eigen::VectorXi::Constant, etc. */ template inline void reserve(const SizesType& reserveSizes); #else template inline void reserve(const SizesType& reserveSizes, const typename SizesType::value_type& enableif = #if (!EIGEN_COMP_MSVC) || (EIGEN_COMP_MSVC>=1500) // MSVC 2005 fails to compile with this typename typename #endif SizesType::value_type()) { EIGEN_UNUSED_VARIABLE(enableif); reserveInnerVectors(reserveSizes); } #endif // EIGEN_PARSED_BY_DOXYGEN protected: template inline void reserveInnerVectors(const SizesType& reserveSizes) { if(isCompressed()) { Index totalReserveSize = 0; // turn the matrix into non-compressed mode m_innerNonZeros = static_cast(std::malloc(m_outerSize * sizeof(StorageIndex))); if (!m_innerNonZeros) internal::throw_std_bad_alloc(); // temporarily use m_innerSizes to hold the new starting points. StorageIndex* newOuterIndex = m_innerNonZeros; StorageIndex count = 0; for(Index j=0; j=0; --j) { StorageIndex innerNNZ = previousOuterIndex - m_outerIndex[j]; for(Index i=innerNNZ-1; i>=0; --i) { m_data.index(newOuterIndex[j]+i) = m_data.index(m_outerIndex[j]+i); m_data.value(newOuterIndex[j]+i) = m_data.value(m_outerIndex[j]+i); } previousOuterIndex = m_outerIndex[j]; m_outerIndex[j] = newOuterIndex[j]; m_innerNonZeros[j] = innerNNZ; } if(m_outerSize>0) m_outerIndex[m_outerSize] = m_outerIndex[m_outerSize-1] + m_innerNonZeros[m_outerSize-1] + reserveSizes[m_outerSize-1]; m_data.resize(m_outerIndex[m_outerSize]); } else { StorageIndex* newOuterIndex = static_cast(std::malloc((m_outerSize+1)*sizeof(StorageIndex))); if (!newOuterIndex) internal::throw_std_bad_alloc(); StorageIndex count = 0; for(Index j=0; j(reserveSizes[j], alreadyReserved); count += toReserve + m_innerNonZeros[j]; } newOuterIndex[m_outerSize] = count; m_data.resize(count); for(Index j=m_outerSize-1; j>=0; --j) { Index offset = newOuterIndex[j] - m_outerIndex[j]; if(offset>0) { StorageIndex innerNNZ = m_innerNonZeros[j]; for(Index i=innerNNZ-1; i>=0; --i) { m_data.index(newOuterIndex[j]+i) = m_data.index(m_outerIndex[j]+i); m_data.value(newOuterIndex[j]+i) = m_data.value(m_outerIndex[j]+i); } } } std::swap(m_outerIndex, newOuterIndex); std::free(newOuterIndex); } } public: //--- low level purely coherent filling --- /** \internal * \returns a reference to the non zero coefficient at position \a row, \a col assuming that: * - the nonzero does not already exist * - the new coefficient is the last one according to the storage order * * Before filling a given inner vector you must call the statVec(Index) function. * * After an insertion session, you should call the finalize() function. 
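 *
 * As an illustrative sketch of the sequential filling protocol (not from the upstream docs;
 * a small diagonal fill is used purely as an example):
 * \code
 * Eigen::SparseMatrix<double> A(3, 3);
 * A.reserve(3);
 * for(int j = 0; j < 3; ++j) {
 *   A.startVec(j);               // open inner vector (column) j
 *   A.insertBack(j, j) = 1.0;    // inner indices must be pushed in increasing order
 * }
 * A.finalize();                  // close the insertion session
 * \endcode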
* * \sa insert, insertBackByOuterInner, startVec */ inline Scalar& insertBack(Index row, Index col) { return insertBackByOuterInner(IsRowMajor?row:col, IsRowMajor?col:row); } /** \internal * \sa insertBack, startVec */ inline Scalar& insertBackByOuterInner(Index outer, Index inner) { eigen_assert(Index(m_outerIndex[outer+1]) == m_data.size() && "Invalid ordered insertion (invalid outer index)"); eigen_assert( (m_outerIndex[outer+1]-m_outerIndex[outer]==0 || m_data.index(m_data.size()-1)(m_data.size()); Index i = m_outerSize; // find the last filled column while (i>=0 && m_outerIndex[i]==0) --i; ++i; while (i<=m_outerSize) { m_outerIndex[i] = size; ++i; } } } //--- template void setFromTriplets(const InputIterators& begin, const InputIterators& end); template void setFromTriplets(const InputIterators& begin, const InputIterators& end, DupFunctor dup_func); void sumupDuplicates() { collapseDuplicates(internal::scalar_sum_op()); } template void collapseDuplicates(DupFunctor dup_func = DupFunctor()); //--- /** \internal * same as insert(Index,Index) except that the indices are given relative to the storage order */ Scalar& insertByOuterInner(Index j, Index i) { return insert(IsRowMajor ? j : i, IsRowMajor ? i : j); } /** Turns the matrix into the \em compressed format. */ void makeCompressed() { if(isCompressed()) return; eigen_internal_assert(m_outerIndex!=0 && m_outerSize>0); Index oldStart = m_outerIndex[1]; m_outerIndex[1] = m_innerNonZeros[0]; for(Index j=1; j0) { for(Index k=0; k(std::malloc(m_outerSize * sizeof(StorageIndex))); for (Index i = 0; i < m_outerSize; i++) { m_innerNonZeros[i] = m_outerIndex[i+1] - m_outerIndex[i]; } } /** Suppresses all nonzeros which are \b much \b smaller \b than \a reference under the tolerance \a epsilon */ void prune(const Scalar& reference, const RealScalar& epsilon = NumTraits::dummy_precision()) { prune(default_prunning_func(reference,epsilon)); } /** Turns the matrix into compressed format, and suppresses all nonzeros which do not satisfy the predicate \a keep. * The functor type \a KeepFunc must implement the following function: * \code * bool operator() (const Index& row, const Index& col, const Scalar& value) const; * \endcode * \sa prune(Scalar,RealScalar) */ template void prune(const KeepFunc& keep = KeepFunc()) { // TODO optimize the uncompressed mode to avoid moving and allocating the data twice makeCompressed(); StorageIndex k = 0; for(Index j=0; jrows() == rows && this->cols() == cols) return; // If one dimension is null, then there is nothing to be preserved if(rows==0 || cols==0) return resize(rows,cols); Index innerChange = IsRowMajor ? cols - this->cols() : rows - this->rows(); Index outerChange = IsRowMajor ? rows - this->rows() : cols - this->cols(); StorageIndex newInnerSize = convert_index(IsRowMajor ? 
cols : rows); // Deals with inner non zeros if (m_innerNonZeros) { // Resize m_innerNonZeros StorageIndex *newInnerNonZeros = static_cast(std::realloc(m_innerNonZeros, (m_outerSize + outerChange) * sizeof(StorageIndex))); if (!newInnerNonZeros) internal::throw_std_bad_alloc(); m_innerNonZeros = newInnerNonZeros; for(Index i=m_outerSize; i(std::malloc((m_outerSize + outerChange) * sizeof(StorageIndex))); if (!m_innerNonZeros) internal::throw_std_bad_alloc(); for(Index i = 0; i < m_outerSize + (std::min)(outerChange, Index(0)); i++) m_innerNonZeros[i] = m_outerIndex[i+1] - m_outerIndex[i]; for(Index i = m_outerSize; i < m_outerSize + outerChange; i++) m_innerNonZeros[i] = 0; } // Change the m_innerNonZeros in case of a decrease of inner size if (m_innerNonZeros && innerChange < 0) { for(Index i = 0; i < m_outerSize + (std::min)(outerChange, Index(0)); i++) { StorageIndex &n = m_innerNonZeros[i]; StorageIndex start = m_outerIndex[i]; while (n > 0 && m_data.index(start+n-1) >= newInnerSize) --n; } } m_innerSize = newInnerSize; // Re-allocate outer index structure if necessary if (outerChange == 0) return; StorageIndex *newOuterIndex = static_cast(std::realloc(m_outerIndex, (m_outerSize + outerChange + 1) * sizeof(StorageIndex))); if (!newOuterIndex) internal::throw_std_bad_alloc(); m_outerIndex = newOuterIndex; if (outerChange > 0) { StorageIndex lastIdx = m_outerSize == 0 ? 0 : m_outerIndex[m_outerSize]; for(Index i=m_outerSize; i(std::malloc((outerSize + 1) * sizeof(StorageIndex))); if (!m_outerIndex) internal::throw_std_bad_alloc(); m_outerSize = outerSize; } if(m_innerNonZeros) { std::free(m_innerNonZeros); m_innerNonZeros = 0; } memset(m_outerIndex, 0, (m_outerSize+1)*sizeof(StorageIndex)); } /** \internal * Resize the nonzero vector to \a size */ void resizeNonZeros(Index size) { m_data.resize(size); } /** \returns a const expression of the diagonal coefficients. */ const ConstDiagonalReturnType diagonal() const { return ConstDiagonalReturnType(*this); } /** \returns a read-write expression of the diagonal coefficients. * \warning If the diagonal entries are written, then all diagonal * entries \b must already exist, otherwise an assertion will be raised. 
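 *
 * As an illustrative sketch (not from the upstream docs; it assumes \c A is a square
 * \c SparseMatrix<double> whose diagonal entries all already exist, as required by the
 * warning above):
 * \code
 * A.diagonal() += Eigen::VectorXd::Ones(A.rows());   // shift each stored diagonal entry by 1
 * \endcode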
*/ DiagonalReturnType diagonal() { return DiagonalReturnType(*this); } /** Default constructor yielding an empty \c 0 \c x \c 0 matrix */ inline SparseMatrix() : m_outerSize(-1), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0) { check_template_parameters(); resize(0, 0); } /** Constructs a \a rows \c x \a cols empty matrix */ inline SparseMatrix(Index rows, Index cols) : m_outerSize(0), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0) { check_template_parameters(); resize(rows, cols); } /** Constructs a sparse matrix from the sparse expression \a other */ template inline SparseMatrix(const SparseMatrixBase& other) : m_outerSize(0), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0) { EIGEN_STATIC_ASSERT((internal::is_same::value), YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY) check_template_parameters(); const bool needToTranspose = (Flags & RowMajorBit) != (internal::evaluator::Flags & RowMajorBit); if (needToTranspose) *this = other.derived(); else { #ifdef EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN #endif internal::call_assignment_no_alias(*this, other.derived()); } } /** Constructs a sparse matrix from the sparse selfadjoint view \a other */ template inline SparseMatrix(const SparseSelfAdjointView& other) : m_outerSize(0), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0) { check_template_parameters(); Base::operator=(other); } /** Copy constructor (it performs a deep copy) */ inline SparseMatrix(const SparseMatrix& other) : Base(), m_outerSize(0), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0) { check_template_parameters(); *this = other.derived(); } /** \brief Copy constructor with in-place evaluation */ template SparseMatrix(const ReturnByValue& other) : Base(), m_outerSize(0), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0) { check_template_parameters(); initAssignment(other); other.evalTo(*this); } /** \brief Copy constructor with in-place evaluation */ template explicit SparseMatrix(const DiagonalBase& other) : Base(), m_outerSize(0), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0) { check_template_parameters(); *this = other.derived(); } /** Swaps the content of two sparse matrices of the same type. * This is a fast operation that simply swaps the underlying pointers and parameters. */ inline void swap(SparseMatrix& other) { //EIGEN_DBG_SPARSE(std::cout << "SparseMatrix:: swap\n"); std::swap(m_outerIndex, other.m_outerIndex); std::swap(m_innerSize, other.m_innerSize); std::swap(m_outerSize, other.m_outerSize); std::swap(m_innerNonZeros, other.m_innerNonZeros); m_data.swap(other.m_data); } /** Sets *this to the identity matrix. * This function also turns the matrix into compressed mode, and drop any reserved memory. 
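 *
 * As an illustrative sketch (not from the upstream docs; \c n is a placeholder size):
 * \code
 * Eigen::SparseMatrix<double> I(n, n);
 * I.setIdentity();   // I now holds the n x n identity in compressed storage
 * \endcode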
*/ inline void setIdentity() { eigen_assert(rows() == cols() && "ONLY FOR SQUARED MATRICES"); this->m_data.resize(rows()); Eigen::Map(this->m_data.indexPtr(), rows()).setLinSpaced(0, StorageIndex(rows()-1)); Eigen::Map(this->m_data.valuePtr(), rows()).setOnes(); Eigen::Map(this->m_outerIndex, rows()+1).setLinSpaced(0, StorageIndex(rows())); std::free(m_innerNonZeros); m_innerNonZeros = 0; } inline SparseMatrix& operator=(const SparseMatrix& other) { if (other.isRValue()) { swap(other.const_cast_derived()); } else if(this!=&other) { #ifdef EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN #endif initAssignment(other); if(other.isCompressed()) { internal::smart_copy(other.m_outerIndex, other.m_outerIndex + m_outerSize + 1, m_outerIndex); m_data = other.m_data; } else { Base::operator=(other); } } return *this; } #ifndef EIGEN_PARSED_BY_DOXYGEN template inline SparseMatrix& operator=(const EigenBase& other) { return Base::operator=(other.derived()); } template inline SparseMatrix& operator=(const Product& other); #endif // EIGEN_PARSED_BY_DOXYGEN template EIGEN_DONT_INLINE SparseMatrix& operator=(const SparseMatrixBase& other); friend std::ostream & operator << (std::ostream & s, const SparseMatrix& m) { EIGEN_DBG_SPARSE( s << "Nonzero entries:\n"; if(m.isCompressed()) { for (Index i=0; i&>(m); return s; } /** Destructor */ inline ~SparseMatrix() { std::free(m_outerIndex); std::free(m_innerNonZeros); } /** Overloaded for performance */ Scalar sum() const; # ifdef EIGEN_SPARSEMATRIX_PLUGIN # include EIGEN_SPARSEMATRIX_PLUGIN # endif protected: template void initAssignment(const Other& other) { resize(other.rows(), other.cols()); if(m_innerNonZeros) { std::free(m_innerNonZeros); m_innerNonZeros = 0; } } /** \internal * \sa insert(Index,Index) */ EIGEN_DONT_INLINE Scalar& insertCompressed(Index row, Index col); /** \internal * A vector object that is equal to 0 everywhere but v at the position i */ class SingletonVector { StorageIndex m_index; StorageIndex m_value; public: typedef StorageIndex value_type; SingletonVector(Index i, Index v) : m_index(convert_index(i)), m_value(convert_index(v)) {} StorageIndex operator[](Index i) const { return i==m_index ? m_value : 0; } }; /** \internal * \sa insert(Index,Index) */ EIGEN_DONT_INLINE Scalar& insertUncompressed(Index row, Index col); public: /** \internal * \sa insert(Index,Index) */ EIGEN_STRONG_INLINE Scalar& insertBackUncompressed(Index row, Index col) { const Index outer = IsRowMajor ? row : col; const Index inner = IsRowMajor ? col : row; eigen_assert(!isCompressed()); eigen_assert(m_innerNonZeros[outer]<=(m_outerIndex[outer+1] - m_outerIndex[outer])); Index p = m_outerIndex[outer] + m_innerNonZeros[outer]++; m_data.index(p) = convert_index(inner); return (m_data.value(p) = Scalar(0)); } protected: struct IndexPosPair { IndexPosPair(Index a_i, Index a_p) : i(a_i), p(a_p) {} Index i; Index p; }; /** \internal assign \a diagXpr to the diagonal of \c *this * There are different strategies: * 1 - if *this is overwritten (Func==assign_op) or *this is empty, then we can work treat *this as a dense vector expression. * 2 - otherwise, for each diagonal coeff, * 2.a - if it already exists, then we update it, * 2.b - otherwise, if *this is uncompressed and that the current inner-vector has empty room for at least 1 element, then we perform an in-place insertion. * 2.c - otherwise, we'll have to reallocate and copy everything, so instead of doing so for each new element, it is recorded in a std::vector. 
* 3 - at the end, if some entries failed to be inserted in-place, then we alloc a new buffer, copy each chunk at the right position, and insert the new elements. * * TODO: some piece of code could be isolated and reused for a general in-place update strategy. * TODO: if we start to defer the insertion of some elements (i.e., case 2.c executed once), * then it *might* be better to disable case 2.b since they will have to be copied anyway. */ template void assignDiagonal(const DiagXpr diagXpr, const Func& assignFunc) { Index n = diagXpr.size(); const bool overwrite = internal::is_same >::value; if(overwrite) { if((this->rows()!=n) || (this->cols()!=n)) this->resize(n, n); } if(m_data.size()==0 || overwrite) { typedef Array ArrayXI; this->makeCompressed(); this->resizeNonZeros(n); Eigen::Map(this->innerIndexPtr(), n).setLinSpaced(0,StorageIndex(n)-1); Eigen::Map(this->outerIndexPtr(), n+1).setLinSpaced(0,StorageIndex(n)); Eigen::Map > values = this->coeffs(); values.setZero(); internal::call_assignment_no_alias(values, diagXpr, assignFunc); } else { bool isComp = isCompressed(); internal::evaluator diaEval(diagXpr); std::vector newEntries; // 1 - try in-place update and record insertion failures for(Index i = 0; ilower_bound(i,i); Index p = lb.value; if(lb.found) { // the coeff already exists assignFunc.assignCoeff(m_data.value(p), diaEval.coeff(i)); } else if((!isComp) && m_innerNonZeros[i] < (m_outerIndex[i+1]-m_outerIndex[i])) { // non compressed mode with local room for inserting one element m_data.moveChunk(p, p+1, m_outerIndex[i]+m_innerNonZeros[i]-p); m_innerNonZeros[i]++; m_data.value(p) = Scalar(0); m_data.index(p) = StorageIndex(i); assignFunc.assignCoeff(m_data.value(p), diaEval.coeff(i)); } else { // defer insertion newEntries.push_back(IndexPosPair(i,p)); } } // 2 - insert deferred entries Index n_entries = Index(newEntries.size()); if(n_entries>0) { Storage newData(m_data.size()+n_entries); Index prev_p = 0; Index prev_i = 0; for(Index k=0; k::IsSigned,THE_INDEX_TYPE_MUST_BE_A_SIGNED_TYPE); EIGEN_STATIC_ASSERT((Options&(ColMajor|RowMajor))==Options,INVALID_MATRIX_TEMPLATE_PARAMETERS); } struct default_prunning_func { default_prunning_func(const Scalar& ref, const RealScalar& eps) : reference(ref), epsilon(eps) {} inline bool operator() (const Index&, const Index&, const Scalar& value) const { return !internal::isMuchSmallerThan(value, reference, epsilon); } Scalar reference; RealScalar epsilon; }; }; namespace internal { template void set_from_triplets(const InputIterator& begin, const InputIterator& end, SparseMatrixType& mat, DupFunctor dup_func) { enum { IsRowMajor = SparseMatrixType::IsRowMajor }; typedef typename SparseMatrixType::Scalar Scalar; typedef typename SparseMatrixType::StorageIndex StorageIndex; SparseMatrix trMat(mat.rows(),mat.cols()); if(begin!=end) { // pass 1: count the nnz per inner-vector typename SparseMatrixType::IndexVector wi(trMat.outerSize()); wi.setZero(); for(InputIterator it(begin); it!=end; ++it) { eigen_assert(it->row()>=0 && it->row()col()>=0 && it->col()col() : it->row())++; } // pass 2: insert all the elements into trMat trMat.reserve(wi); for(InputIterator it(begin); it!=end; ++it) trMat.insertBackUncompressed(it->row(),it->col()) = it->value(); // pass 3: trMat.collapseDuplicates(dup_func); } // pass 4: transposed copy -> implicit sorting mat = trMat; } } /** Fill the matrix \c *this with the list of \em triplets defined by the iterator range \a begin - \a end. * * A \em triplet is a tuple (i,j,value) defining a non-zero element. 
* The input list of triplets does not have to be sorted, and can contains duplicated elements. * In any case, the result is a \b sorted and \b compressed sparse matrix where the duplicates have been summed up. * This is a \em O(n) operation, with \em n the number of triplet elements. * The initial contents of \c *this is destroyed. * The matrix \c *this must be properly resized beforehand using the SparseMatrix(Index,Index) constructor, * or the resize(Index,Index) method. The sizes are not extracted from the triplet list. * * The \a InputIterators value_type must provide the following interface: * \code * Scalar value() const; // the value * Scalar row() const; // the row index i * Scalar col() const; // the column index j * \endcode * See for instance the Eigen::Triplet template class. * * Here is a typical usage example: * \code typedef Triplet T; std::vector tripletList; tripletList.reserve(estimation_of_entries); for(...) { // ... tripletList.push_back(T(i,j,v_ij)); } SparseMatrixType m(rows,cols); m.setFromTriplets(tripletList.begin(), tripletList.end()); // m is ready to go! * \endcode * * \warning The list of triplets is read multiple times (at least twice). Therefore, it is not recommended to define * an abstract iterator over a complex data-structure that would be expensive to evaluate. The triplets should rather * be explicitly stored into a std::vector for instance. */ template template void SparseMatrix::setFromTriplets(const InputIterators& begin, const InputIterators& end) { internal::set_from_triplets >(begin, end, *this, internal::scalar_sum_op()); } /** The same as setFromTriplets but when duplicates are met the functor \a dup_func is applied: * \code * value = dup_func(OldValue, NewValue) * \endcode * Here is a C++11 example keeping the latest entry only: * \code * mat.setFromTriplets(triplets.begin(), triplets.end(), [] (const Scalar&,const Scalar &b) { return b; }); * \endcode */ template template void SparseMatrix::setFromTriplets(const InputIterators& begin, const InputIterators& end, DupFunctor dup_func) { internal::set_from_triplets, DupFunctor>(begin, end, *this, dup_func); } /** \internal */ template template void SparseMatrix::collapseDuplicates(DupFunctor dup_func) { eigen_assert(!isCompressed()); // TODO, in practice we should be able to use m_innerNonZeros for that task IndexVector wi(innerSize()); wi.fill(-1); StorageIndex count = 0; // for each inner-vector, wi[inner_index] will hold the position of first element into the index/value buffers for(Index j=0; j=start) { // we already meet this entry => accumulate it m_data.value(wi(i)) = dup_func(m_data.value(wi(i)), m_data.value(k)); } else { m_data.value(count) = m_data.value(k); m_data.index(count) = m_data.index(k); wi(i) = count; ++count; } } m_outerIndex[j] = start; } m_outerIndex[m_outerSize] = count; // turn the matrix into compressed form std::free(m_innerNonZeros); m_innerNonZeros = 0; m_data.resize(m_outerIndex[m_outerSize]); } template template EIGEN_DONT_INLINE SparseMatrix& SparseMatrix::operator=(const SparseMatrixBase& other) { EIGEN_STATIC_ASSERT((internal::is_same::value), YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY) #ifdef EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN #endif const bool needToTranspose = (Flags & RowMajorBit) != (internal::evaluator::Flags & RowMajorBit); if (needToTranspose) { #ifdef EIGEN_SPARSE_TRANSPOSED_COPY_PLUGIN EIGEN_SPARSE_TRANSPOSED_COPY_PLUGIN #endif // two passes 
algorithm: // 1 - compute the number of coeffs per dest inner vector // 2 - do the actual copy/eval // Since each coeff of the rhs has to be evaluated twice, let's evaluate it if needed typedef typename internal::nested_eval::type >::type OtherCopy; typedef typename internal::remove_all::type _OtherCopy; typedef internal::evaluator<_OtherCopy> OtherCopyEval; OtherCopy otherCopy(other.derived()); OtherCopyEval otherCopyEval(otherCopy); SparseMatrix dest(other.rows(),other.cols()); Eigen::Map (dest.m_outerIndex,dest.outerSize()).setZero(); // pass 1 // FIXME the above copy could be merged with that pass for (Index j=0; jswap(dest); return *this; } else { if(other.isRValue()) { initAssignment(other.derived()); } // there is no special optimization return Base::operator=(other.derived()); } } template typename SparseMatrix<_Scalar,_Options,_StorageIndex>::Scalar& SparseMatrix<_Scalar,_Options,_StorageIndex>::insert(Index row, Index col) { eigen_assert(row>=0 && row=0 && col(std::malloc(m_outerSize * sizeof(StorageIndex))); if(!m_innerNonZeros) internal::throw_std_bad_alloc(); memset(m_innerNonZeros, 0, (m_outerSize)*sizeof(StorageIndex)); // pack all inner-vectors to the end of the pre-allocated space // and allocate the entire free-space to the first inner-vector StorageIndex end = convert_index(m_data.allocatedSize()); for(Index j=1; j<=m_outerSize; ++j) m_outerIndex[j] = end; } else { // turn the matrix into non-compressed mode m_innerNonZeros = static_cast(std::malloc(m_outerSize * sizeof(StorageIndex))); if(!m_innerNonZeros) internal::throw_std_bad_alloc(); for(Index j=0; j=0 && m_innerNonZeros[j]==0) m_outerIndex[j--] = p; // push back the new element ++m_innerNonZeros[outer]; m_data.append(Scalar(0), inner); // check for reallocation if(data_end != m_data.allocatedSize()) { // m_data has been reallocated // -> move remaining inner-vectors back to the end of the free-space // so that the entire free-space is allocated to the current inner-vector. eigen_internal_assert(data_end < m_data.allocatedSize()); StorageIndex new_end = convert_index(m_data.allocatedSize()); for(Index k=outer+1; k<=m_outerSize; ++k) if(m_outerIndex[k]==data_end) m_outerIndex[k] = new_end; } return m_data.value(p); } // Second case: the next inner-vector is packed to the end // and the current inner-vector end match the used-space. if(m_outerIndex[outer+1]==data_end && m_outerIndex[outer]+m_innerNonZeros[outer]==m_data.size()) { eigen_internal_assert(outer+1==m_outerSize || m_innerNonZeros[outer+1]==0); // add space for the new element ++m_innerNonZeros[outer]; m_data.resize(m_data.size()+1); // check for reallocation if(data_end != m_data.allocatedSize()) { // m_data has been reallocated // -> move remaining inner-vectors back to the end of the free-space // so that the entire free-space is allocated to the current inner-vector. 
eigen_internal_assert(data_end < m_data.allocatedSize()); StorageIndex new_end = convert_index(m_data.allocatedSize()); for(Index k=outer+1; k<=m_outerSize; ++k) if(m_outerIndex[k]==data_end) m_outerIndex[k] = new_end; } // and insert it at the right position (sorted insertion) Index startId = m_outerIndex[outer]; Index p = m_outerIndex[outer]+m_innerNonZeros[outer]-1; while ( (p > startId) && (m_data.index(p-1) > inner) ) { m_data.index(p) = m_data.index(p-1); m_data.value(p) = m_data.value(p-1); --p; } m_data.index(p) = convert_index(inner); return (m_data.value(p) = Scalar(0)); } if(m_data.size() != m_data.allocatedSize()) { // make sure the matrix is compatible to random un-compressed insertion: m_data.resize(m_data.allocatedSize()); this->reserveInnerVectors(Array::Constant(m_outerSize, 2)); } return insertUncompressed(row,col); } template EIGEN_DONT_INLINE typename SparseMatrix<_Scalar,_Options,_StorageIndex>::Scalar& SparseMatrix<_Scalar,_Options,_StorageIndex>::insertUncompressed(Index row, Index col) { eigen_assert(!isCompressed()); const Index outer = IsRowMajor ? row : col; const StorageIndex inner = convert_index(IsRowMajor ? col : row); Index room = m_outerIndex[outer+1] - m_outerIndex[outer]; StorageIndex innerNNZ = m_innerNonZeros[outer]; if(innerNNZ>=room) { // this inner vector is full, we need to reallocate the whole buffer :( reserve(SingletonVector(outer,std::max(2,innerNNZ))); } Index startId = m_outerIndex[outer]; Index p = startId + m_innerNonZeros[outer]; while ( (p > startId) && (m_data.index(p-1) > inner) ) { m_data.index(p) = m_data.index(p-1); m_data.value(p) = m_data.value(p-1); --p; } eigen_assert((p<=startId || m_data.index(p-1)!=inner) && "you cannot insert an element that already exists, you must call coeffRef to this end"); m_innerNonZeros[outer]++; m_data.index(p) = inner; return (m_data.value(p) = Scalar(0)); } template EIGEN_DONT_INLINE typename SparseMatrix<_Scalar,_Options,_StorageIndex>::Scalar& SparseMatrix<_Scalar,_Options,_StorageIndex>::insertCompressed(Index row, Index col) { eigen_assert(isCompressed()); const Index outer = IsRowMajor ? row : col; const Index inner = IsRowMajor ? col : row; Index previousOuter = outer; if (m_outerIndex[outer+1]==0) { // we start a new inner vector while (previousOuter>=0 && m_outerIndex[previousOuter]==0) { m_outerIndex[previousOuter] = convert_index(m_data.size()); --previousOuter; } m_outerIndex[outer+1] = m_outerIndex[outer]; } // here we have to handle the tricky case where the outerIndex array // starts with: [ 0 0 0 0 0 1 ...] and we are inserted in, e.g., // the 2nd inner vector... 
bool isLastVec = (!(previousOuter==-1 && m_data.size()!=0)) && (std::size_t(m_outerIndex[outer+1]) == m_data.size()); std::size_t startId = m_outerIndex[outer]; // FIXME let's make sure sizeof(long int) == sizeof(std::size_t) std::size_t p = m_outerIndex[outer+1]; ++m_outerIndex[outer+1]; double reallocRatio = 1; if (m_data.allocatedSize()<=m_data.size()) { // if there is no preallocated memory, let's reserve a minimum of 32 elements if (m_data.size()==0) { m_data.reserve(32); } else { // we need to reallocate the data, to reduce multiple reallocations // we use a smart resize algorithm based on the current filling ratio // in addition, we use double to avoid integers overflows double nnzEstimate = double(m_outerIndex[outer])*double(m_outerSize)/double(outer+1); reallocRatio = (nnzEstimate-double(m_data.size()))/double(m_data.size()); // furthermore we bound the realloc ratio to: // 1) reduce multiple minor realloc when the matrix is almost filled // 2) avoid to allocate too much memory when the matrix is almost empty reallocRatio = (std::min)((std::max)(reallocRatio,1.5),8.); } } m_data.resize(m_data.size()+1,reallocRatio); if (!isLastVec) { if (previousOuter==-1) { // oops wrong guess. // let's correct the outer offsets for (Index k=0; k<=(outer+1); ++k) m_outerIndex[k] = 0; Index k=outer+1; while(m_outerIndex[k]==0) m_outerIndex[k++] = 1; while (k<=m_outerSize && m_outerIndex[k]!=0) m_outerIndex[k++]++; p = 0; --k; k = m_outerIndex[k]-1; while (k>0) { m_data.index(k) = m_data.index(k-1); m_data.value(k) = m_data.value(k-1); k--; } } else { // we are not inserting into the last inner vec // update outer indices: Index j = outer+2; while (j<=m_outerSize && m_outerIndex[j]!=0) m_outerIndex[j++]++; --j; // shift data of last vecs: Index k = m_outerIndex[j]-1; while (k>=Index(p)) { m_data.index(k) = m_data.index(k-1); m_data.value(k) = m_data.value(k-1); k--; } } } while ( (p > startId) && (m_data.index(p-1) > inner) ) { m_data.index(p) = m_data.index(p-1); m_data.value(p) = m_data.value(p-1); --p; } m_data.index(p) = inner; return (m_data.value(p) = Scalar(0)); } namespace internal { template struct evaluator > : evaluator > > { typedef evaluator > > Base; typedef SparseMatrix<_Scalar,_Options,_StorageIndex> SparseMatrixType; evaluator() : Base() {} explicit evaluator(const SparseMatrixType &mat) : Base(mat) {} }; } } // end namespace Eigen #endif // EIGEN_SPARSEMATRIX_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseUtil.h0000644000175100001770000001525314107270226022670 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
#ifndef EIGEN_SPARSEUTIL_H #define EIGEN_SPARSEUTIL_H namespace Eigen { #ifdef NDEBUG #define EIGEN_DBG_SPARSE(X) #else #define EIGEN_DBG_SPARSE(X) X #endif #define EIGEN_SPARSE_INHERIT_ASSIGNMENT_OPERATOR(Derived, Op) \ template \ EIGEN_STRONG_INLINE Derived& operator Op(const Eigen::SparseMatrixBase& other) \ { \ return Base::operator Op(other.derived()); \ } \ EIGEN_STRONG_INLINE Derived& operator Op(const Derived& other) \ { \ return Base::operator Op(other); \ } #define EIGEN_SPARSE_INHERIT_SCALAR_ASSIGNMENT_OPERATOR(Derived, Op) \ template \ EIGEN_STRONG_INLINE Derived& operator Op(const Other& scalar) \ { \ return Base::operator Op(scalar); \ } #define EIGEN_SPARSE_INHERIT_ASSIGNMENT_OPERATORS(Derived) \ EIGEN_SPARSE_INHERIT_ASSIGNMENT_OPERATOR(Derived, =) #define EIGEN_SPARSE_PUBLIC_INTERFACE(Derived) \ EIGEN_GENERIC_PUBLIC_INTERFACE(Derived) const int CoherentAccessPattern = 0x1; const int InnerRandomAccessPattern = 0x2 | CoherentAccessPattern; const int OuterRandomAccessPattern = 0x4 | CoherentAccessPattern; const int RandomAccessPattern = 0x8 | OuterRandomAccessPattern | InnerRandomAccessPattern; template class SparseMatrix; template class DynamicSparseMatrix; template class SparseVector; template class MappedSparseMatrix; template class SparseSelfAdjointView; template class SparseDiagonalProduct; template class SparseView; template class SparseSparseProduct; template class SparseTimeDenseProduct; template class DenseTimeSparseProduct; template class SparseDenseOuterProduct; template struct SparseSparseProductReturnType; template::ColsAtCompileTime,internal::traits::RowsAtCompileTime)> struct DenseSparseProductReturnType; template::ColsAtCompileTime,internal::traits::RowsAtCompileTime)> struct SparseDenseProductReturnType; template class SparseSymmetricPermutationProduct; namespace internal { template struct sparse_eval; template struct eval : sparse_eval::RowsAtCompileTime,traits::ColsAtCompileTime,traits::Flags> {}; template struct sparse_eval { typedef typename traits::Scalar _Scalar; typedef typename traits::StorageIndex _StorageIndex; public: typedef SparseVector<_Scalar, RowMajor, _StorageIndex> type; }; template struct sparse_eval { typedef typename traits::Scalar _Scalar; typedef typename traits::StorageIndex _StorageIndex; public: typedef SparseVector<_Scalar, ColMajor, _StorageIndex> type; }; // TODO this seems almost identical to plain_matrix_type template struct sparse_eval { typedef typename traits::Scalar _Scalar; typedef typename traits::StorageIndex _StorageIndex; enum { _Options = ((Flags&RowMajorBit)==RowMajorBit) ? RowMajor : ColMajor }; public: typedef SparseMatrix<_Scalar, _Options, _StorageIndex> type; }; template struct sparse_eval { typedef typename traits::Scalar _Scalar; public: typedef Matrix<_Scalar, 1, 1> type; }; template struct plain_matrix_type { typedef typename traits::Scalar _Scalar; typedef typename traits::StorageIndex _StorageIndex; enum { _Options = ((evaluator::Flags&RowMajorBit)==RowMajorBit) ? 
RowMajor : ColMajor }; public: typedef SparseMatrix<_Scalar, _Options, _StorageIndex> type; }; template struct plain_object_eval : sparse_eval::RowsAtCompileTime,traits::ColsAtCompileTime, evaluator::Flags> {}; template struct solve_traits { typedef typename sparse_eval::Flags>::type PlainObject; }; template struct generic_xpr_base { typedef SparseMatrixBase type; }; struct SparseTriangularShape { static std::string debugName() { return "SparseTriangularShape"; } }; struct SparseSelfAdjointShape { static std::string debugName() { return "SparseSelfAdjointShape"; } }; template<> struct glue_shapes { typedef SparseSelfAdjointShape type; }; template<> struct glue_shapes { typedef SparseTriangularShape type; }; // return type of SparseCompressedBase::lower_bound; struct LowerBoundIndex { LowerBoundIndex() : value(-1), found(false) {} LowerBoundIndex(Index val, bool ok) : value(val), found(ok) {} Index value; bool found; }; } // end namespace internal /** \ingroup SparseCore_Module * * \class Triplet * * \brief A small structure to hold a non zero as a triplet (i,j,value). * * \sa SparseMatrix::setFromTriplets() */ template::StorageIndex > class Triplet { public: Triplet() : m_row(0), m_col(0), m_value(0) {} Triplet(const StorageIndex& i, const StorageIndex& j, const Scalar& v = Scalar(0)) : m_row(i), m_col(j), m_value(v) {} /** \returns the row index of the element */ const StorageIndex& row() const { return m_row; } /** \returns the column index of the element */ const StorageIndex& col() const { return m_col; } /** \returns the value of the element */ const Scalar& value() const { return m_value; } protected: StorageIndex m_row, m_col; Scalar m_value; }; } // end namespace Eigen #endif // EIGEN_SPARSEUTIL_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseBlock.h0000644000175100001770000005745014107270226023012 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSE_BLOCK_H #define EIGEN_SPARSE_BLOCK_H namespace Eigen { // Subset of columns or rows template class BlockImpl : public SparseMatrixBase > { typedef typename internal::remove_all::type _MatrixTypeNested; typedef Block BlockType; public: enum { IsRowMajor = internal::traits::IsRowMajor }; protected: enum { OuterSize = IsRowMajor ? BlockRows : BlockCols }; typedef SparseMatrixBase Base; using Base::convert_index; public: EIGEN_SPARSE_PUBLIC_INTERFACE(BlockType) inline BlockImpl(XprType& xpr, Index i) : m_matrix(xpr), m_outerStart(convert_index(i)), m_outerSize(OuterSize) {} inline BlockImpl(XprType& xpr, Index startRow, Index startCol, Index blockRows, Index blockCols) : m_matrix(xpr), m_outerStart(convert_index(IsRowMajor ? startRow : startCol)), m_outerSize(convert_index(IsRowMajor ? blockRows : blockCols)) {} EIGEN_STRONG_INLINE Index rows() const { return IsRowMajor ? m_outerSize.value() : m_matrix.rows(); } EIGEN_STRONG_INLINE Index cols() const { return IsRowMajor ? 
m_matrix.cols() : m_outerSize.value(); } Index nonZeros() const { typedef internal::evaluator EvaluatorType; EvaluatorType matEval(m_matrix); Index nnz = 0; Index end = m_outerStart + m_outerSize.value(); for(Index j=m_outerStart; j::non_const_type m_matrix; Index m_outerStart; const internal::variable_if_dynamic m_outerSize; protected: // Disable assignment with clear error message. // Note that simply removing operator= yields compilation errors with ICC+MSVC template BlockImpl& operator=(const T&) { EIGEN_STATIC_ASSERT(sizeof(T)==0, THIS_SPARSE_BLOCK_SUBEXPRESSION_IS_READ_ONLY); return *this; } }; /*************************************************************************** * specialization for SparseMatrix ***************************************************************************/ namespace internal { template class sparse_matrix_block_impl : public SparseCompressedBase > { typedef typename internal::remove_all::type _MatrixTypeNested; typedef Block BlockType; typedef SparseCompressedBase > Base; using Base::convert_index; public: enum { IsRowMajor = internal::traits::IsRowMajor }; EIGEN_SPARSE_PUBLIC_INTERFACE(BlockType) protected: typedef typename Base::IndexVector IndexVector; enum { OuterSize = IsRowMajor ? BlockRows : BlockCols }; public: inline sparse_matrix_block_impl(SparseMatrixType& xpr, Index i) : m_matrix(xpr), m_outerStart(convert_index(i)), m_outerSize(OuterSize) {} inline sparse_matrix_block_impl(SparseMatrixType& xpr, Index startRow, Index startCol, Index blockRows, Index blockCols) : m_matrix(xpr), m_outerStart(convert_index(IsRowMajor ? startRow : startCol)), m_outerSize(convert_index(IsRowMajor ? blockRows : blockCols)) {} template inline BlockType& operator=(const SparseMatrixBase& other) { typedef typename internal::remove_all::type _NestedMatrixType; _NestedMatrixType& matrix = m_matrix; // This assignment is slow if this vector set is not empty // and/or it is not at the end of the nonzeros of the underlying matrix. // 1 - eval to a temporary to avoid transposition and/or aliasing issues Ref > tmp(other.derived()); eigen_internal_assert(tmp.outerSize()==m_outerSize.value()); // 2 - let's check whether there is enough allocated memory Index nnz = tmp.nonZeros(); Index start = m_outerStart==0 ? 0 : m_matrix.outerIndexPtr()[m_outerStart]; // starting position of the current block Index end = m_matrix.outerIndexPtr()[m_outerStart+m_outerSize.value()]; // ending position of the current block Index block_size = end - start; // available room in the current block Index tail_size = m_matrix.outerIndexPtr()[m_matrix.outerSize()] - end; Index free_size = m_matrix.isCompressed() ? 
Index(matrix.data().allocatedSize()) + block_size : block_size; Index tmp_start = tmp.outerIndexPtr()[0]; bool update_trailing_pointers = false; if(nnz>free_size) { // realloc manually to reduce copies typename SparseMatrixType::Storage newdata(m_matrix.data().allocatedSize() - block_size + nnz); internal::smart_copy(m_matrix.valuePtr(), m_matrix.valuePtr() + start, newdata.valuePtr()); internal::smart_copy(m_matrix.innerIndexPtr(), m_matrix.innerIndexPtr() + start, newdata.indexPtr()); internal::smart_copy(tmp.valuePtr() + tmp_start, tmp.valuePtr() + tmp_start + nnz, newdata.valuePtr() + start); internal::smart_copy(tmp.innerIndexPtr() + tmp_start, tmp.innerIndexPtr() + tmp_start + nnz, newdata.indexPtr() + start); internal::smart_copy(matrix.valuePtr()+end, matrix.valuePtr()+end + tail_size, newdata.valuePtr()+start+nnz); internal::smart_copy(matrix.innerIndexPtr()+end, matrix.innerIndexPtr()+end + tail_size, newdata.indexPtr()+start+nnz); newdata.resize(m_matrix.outerIndexPtr()[m_matrix.outerSize()] - block_size + nnz); matrix.data().swap(newdata); update_trailing_pointers = true; } else { if(m_matrix.isCompressed() && nnz!=block_size) { // no need to realloc, simply copy the tail at its respective position and insert tmp matrix.data().resize(start + nnz + tail_size); internal::smart_memmove(matrix.valuePtr()+end, matrix.valuePtr() + end+tail_size, matrix.valuePtr() + start+nnz); internal::smart_memmove(matrix.innerIndexPtr()+end, matrix.innerIndexPtr() + end+tail_size, matrix.innerIndexPtr() + start+nnz); update_trailing_pointers = true; } internal::smart_copy(tmp.valuePtr() + tmp_start, tmp.valuePtr() + tmp_start + nnz, matrix.valuePtr() + start); internal::smart_copy(tmp.innerIndexPtr() + tmp_start, tmp.innerIndexPtr() + tmp_start + nnz, matrix.innerIndexPtr() + start); } // update outer index pointers and innerNonZeros if(IsVectorAtCompileTime) { if(!m_matrix.isCompressed()) matrix.innerNonZeroPtr()[m_outerStart] = StorageIndex(nnz); matrix.outerIndexPtr()[m_outerStart] = StorageIndex(start); } else { StorageIndex p = StorageIndex(start); for(Index k=0; k(tmp.innerVector(k).nonZeros()); if(!m_matrix.isCompressed()) matrix.innerNonZeroPtr()[m_outerStart+k] = nnz_k; matrix.outerIndexPtr()[m_outerStart+k] = p; p += nnz_k; } } if(update_trailing_pointers) { StorageIndex offset = internal::convert_index(nnz - block_size); for(Index k = m_outerStart + m_outerSize.value(); k<=matrix.outerSize(); ++k) { matrix.outerIndexPtr()[k] += offset; } } return derived(); } inline BlockType& operator=(const BlockType& other) { return operator=(other); } inline const Scalar* valuePtr() const { return m_matrix.valuePtr(); } inline Scalar* valuePtr() { return m_matrix.valuePtr(); } inline const StorageIndex* innerIndexPtr() const { return m_matrix.innerIndexPtr(); } inline StorageIndex* innerIndexPtr() { return m_matrix.innerIndexPtr(); } inline const StorageIndex* outerIndexPtr() const { return m_matrix.outerIndexPtr() + m_outerStart; } inline StorageIndex* outerIndexPtr() { return m_matrix.outerIndexPtr() + m_outerStart; } inline const StorageIndex* innerNonZeroPtr() const { return isCompressed() ? 0 : (m_matrix.innerNonZeroPtr()+m_outerStart); } inline StorageIndex* innerNonZeroPtr() { return isCompressed() ? 0 : (m_matrix.innerNonZeroPtr()+m_outerStart); } bool isCompressed() const { return m_matrix.innerNonZeroPtr()==0; } inline Scalar& coeffRef(Index row, Index col) { return m_matrix.coeffRef(row + (IsRowMajor ? m_outerStart : 0), col + (IsRowMajor ? 
0 : m_outerStart)); } inline const Scalar coeff(Index row, Index col) const { return m_matrix.coeff(row + (IsRowMajor ? m_outerStart : 0), col + (IsRowMajor ? 0 : m_outerStart)); } inline const Scalar coeff(Index index) const { return m_matrix.coeff(IsRowMajor ? m_outerStart : index, IsRowMajor ? index : m_outerStart); } const Scalar& lastCoeff() const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(sparse_matrix_block_impl); eigen_assert(Base::nonZeros()>0); if(m_matrix.isCompressed()) return m_matrix.valuePtr()[m_matrix.outerIndexPtr()[m_outerStart+1]-1]; else return m_matrix.valuePtr()[m_matrix.outerIndexPtr()[m_outerStart]+m_matrix.innerNonZeroPtr()[m_outerStart]-1]; } EIGEN_STRONG_INLINE Index rows() const { return IsRowMajor ? m_outerSize.value() : m_matrix.rows(); } EIGEN_STRONG_INLINE Index cols() const { return IsRowMajor ? m_matrix.cols() : m_outerSize.value(); } inline const SparseMatrixType& nestedExpression() const { return m_matrix; } inline SparseMatrixType& nestedExpression() { return m_matrix; } Index startRow() const { return IsRowMajor ? m_outerStart : 0; } Index startCol() const { return IsRowMajor ? 0 : m_outerStart; } Index blockRows() const { return IsRowMajor ? m_outerSize.value() : m_matrix.rows(); } Index blockCols() const { return IsRowMajor ? m_matrix.cols() : m_outerSize.value(); } protected: typename internal::ref_selector::non_const_type m_matrix; Index m_outerStart; const internal::variable_if_dynamic m_outerSize; }; } // namespace internal template class BlockImpl,BlockRows,BlockCols,true,Sparse> : public internal::sparse_matrix_block_impl,BlockRows,BlockCols> { public: typedef _StorageIndex StorageIndex; typedef SparseMatrix<_Scalar, _Options, _StorageIndex> SparseMatrixType; typedef internal::sparse_matrix_block_impl Base; inline BlockImpl(SparseMatrixType& xpr, Index i) : Base(xpr, i) {} inline BlockImpl(SparseMatrixType& xpr, Index startRow, Index startCol, Index blockRows, Index blockCols) : Base(xpr, startRow, startCol, blockRows, blockCols) {} using Base::operator=; }; template class BlockImpl,BlockRows,BlockCols,true,Sparse> : public internal::sparse_matrix_block_impl,BlockRows,BlockCols> { public: typedef _StorageIndex StorageIndex; typedef const SparseMatrix<_Scalar, _Options, _StorageIndex> SparseMatrixType; typedef internal::sparse_matrix_block_impl Base; inline BlockImpl(SparseMatrixType& xpr, Index i) : Base(xpr, i) {} inline BlockImpl(SparseMatrixType& xpr, Index startRow, Index startCol, Index blockRows, Index blockCols) : Base(xpr, startRow, startCol, blockRows, blockCols) {} using Base::operator=; private: template BlockImpl(const SparseMatrixBase& xpr, Index i); template BlockImpl(const SparseMatrixBase& xpr); }; //---------- /** Generic implementation of sparse Block expression. * Real-only. */ template class BlockImpl : public SparseMatrixBase >, internal::no_assignment_operator { typedef Block BlockType; typedef SparseMatrixBase Base; using Base::convert_index; public: enum { IsRowMajor = internal::traits::IsRowMajor }; EIGEN_SPARSE_PUBLIC_INTERFACE(BlockType) typedef typename internal::remove_all::type _MatrixTypeNested; /** Column or Row constructor */ inline BlockImpl(XprType& xpr, Index i) : m_matrix(xpr), m_startRow( (BlockRows==1) && (BlockCols==XprType::ColsAtCompileTime) ? convert_index(i) : 0), m_startCol( (BlockRows==XprType::RowsAtCompileTime) && (BlockCols==1) ? convert_index(i) : 0), m_blockRows(BlockRows==1 ? 1 : xpr.rows()), m_blockCols(BlockCols==1 ? 
1 : xpr.cols()) {} /** Dynamic-size constructor */ inline BlockImpl(XprType& xpr, Index startRow, Index startCol, Index blockRows, Index blockCols) : m_matrix(xpr), m_startRow(convert_index(startRow)), m_startCol(convert_index(startCol)), m_blockRows(convert_index(blockRows)), m_blockCols(convert_index(blockCols)) {} inline Index rows() const { return m_blockRows.value(); } inline Index cols() const { return m_blockCols.value(); } inline Scalar& coeffRef(Index row, Index col) { return m_matrix.coeffRef(row + m_startRow.value(), col + m_startCol.value()); } inline const Scalar coeff(Index row, Index col) const { return m_matrix.coeff(row + m_startRow.value(), col + m_startCol.value()); } inline Scalar& coeffRef(Index index) { return m_matrix.coeffRef(m_startRow.value() + (RowsAtCompileTime == 1 ? 0 : index), m_startCol.value() + (RowsAtCompileTime == 1 ? index : 0)); } inline const Scalar coeff(Index index) const { return m_matrix.coeff(m_startRow.value() + (RowsAtCompileTime == 1 ? 0 : index), m_startCol.value() + (RowsAtCompileTime == 1 ? index : 0)); } inline const XprType& nestedExpression() const { return m_matrix; } inline XprType& nestedExpression() { return m_matrix; } Index startRow() const { return m_startRow.value(); } Index startCol() const { return m_startCol.value(); } Index blockRows() const { return m_blockRows.value(); } Index blockCols() const { return m_blockCols.value(); } protected: // friend class internal::GenericSparseBlockInnerIteratorImpl; friend struct internal::unary_evaluator, internal::IteratorBased, Scalar >; Index nonZeros() const { return Dynamic; } typename internal::ref_selector::non_const_type m_matrix; const internal::variable_if_dynamic m_startRow; const internal::variable_if_dynamic m_startCol; const internal::variable_if_dynamic m_blockRows; const internal::variable_if_dynamic m_blockCols; protected: // Disable assignment with clear error message. // Note that simply removing operator= yields compilation errors with ICC+MSVC template BlockImpl& operator=(const T&) { EIGEN_STATIC_ASSERT(sizeof(T)==0, THIS_SPARSE_BLOCK_SUBEXPRESSION_IS_READ_ONLY); return *this; } }; namespace internal { template struct unary_evaluator, IteratorBased > : public evaluator_base > { class InnerVectorInnerIterator; class OuterVectorInnerIterator; public: typedef Block XprType; typedef typename XprType::StorageIndex StorageIndex; typedef typename XprType::Scalar Scalar; enum { IsRowMajor = XprType::IsRowMajor, OuterVector = (BlockCols==1 && ArgType::IsRowMajor) | // FIXME | instead of || to please GCC 4.4.0 stupid warning "suggest parentheses around &&". // revert to || as soon as not needed anymore. (BlockRows==1 && !ArgType::IsRowMajor), CoeffReadCost = evaluator::CoeffReadCost, Flags = XprType::Flags }; typedef typename internal::conditional::type InnerIterator; explicit unary_evaluator(const XprType& op) : m_argImpl(op.nestedExpression()), m_block(op) {} inline Index nonZerosEstimate() const { const Index nnz = m_block.nonZeros(); if(nnz < 0) { // Scale the non-zero estimate for the underlying expression linearly with block size. // Return zero if the underlying block is empty. const Index nested_sz = m_block.nestedExpression().size(); return nested_sz == 0 ? 
0 : m_argImpl.nonZerosEstimate() * m_block.size() / nested_sz; } return nnz; } protected: typedef typename evaluator::InnerIterator EvalIterator; evaluator m_argImpl; const XprType &m_block; }; template class unary_evaluator, IteratorBased>::InnerVectorInnerIterator : public EvalIterator { // NOTE MSVC fails to compile if we don't explicitely "import" IsRowMajor from unary_evaluator // because the base class EvalIterator has a private IsRowMajor enum too. (bug #1786) // NOTE We cannot call it IsRowMajor because it would shadow unary_evaluator::IsRowMajor enum { XprIsRowMajor = unary_evaluator::IsRowMajor }; const XprType& m_block; Index m_end; public: EIGEN_STRONG_INLINE InnerVectorInnerIterator(const unary_evaluator& aEval, Index outer) : EvalIterator(aEval.m_argImpl, outer + (XprIsRowMajor ? aEval.m_block.startRow() : aEval.m_block.startCol())), m_block(aEval.m_block), m_end(XprIsRowMajor ? aEval.m_block.startCol()+aEval.m_block.blockCols() : aEval.m_block.startRow()+aEval.m_block.blockRows()) { while( (EvalIterator::operator bool()) && (EvalIterator::index() < (XprIsRowMajor ? m_block.startCol() : m_block.startRow())) ) EvalIterator::operator++(); } inline StorageIndex index() const { return EvalIterator::index() - convert_index(XprIsRowMajor ? m_block.startCol() : m_block.startRow()); } inline Index outer() const { return EvalIterator::outer() - (XprIsRowMajor ? m_block.startRow() : m_block.startCol()); } inline Index row() const { return EvalIterator::row() - m_block.startRow(); } inline Index col() const { return EvalIterator::col() - m_block.startCol(); } inline operator bool() const { return EvalIterator::operator bool() && EvalIterator::index() < m_end; } }; template class unary_evaluator, IteratorBased>::OuterVectorInnerIterator { // NOTE see above enum { XprIsRowMajor = unary_evaluator::IsRowMajor }; const unary_evaluator& m_eval; Index m_outerPos; const Index m_innerIndex; Index m_end; EvalIterator m_it; public: EIGEN_STRONG_INLINE OuterVectorInnerIterator(const unary_evaluator& aEval, Index outer) : m_eval(aEval), m_outerPos( (XprIsRowMajor ? aEval.m_block.startCol() : aEval.m_block.startRow()) ), m_innerIndex(XprIsRowMajor ? aEval.m_block.startRow() : aEval.m_block.startCol()), m_end(XprIsRowMajor ? aEval.m_block.startCol()+aEval.m_block.blockCols() : aEval.m_block.startRow()+aEval.m_block.blockRows()), m_it(m_eval.m_argImpl, m_outerPos) { EIGEN_UNUSED_VARIABLE(outer); eigen_assert(outer==0); while(m_it && m_it.index() < m_innerIndex) ++m_it; if((!m_it) || (m_it.index()!=m_innerIndex)) ++(*this); } inline StorageIndex index() const { return convert_index(m_outerPos - (XprIsRowMajor ? m_eval.m_block.startCol() : m_eval.m_block.startRow())); } inline Index outer() const { return 0; } inline Index row() const { return XprIsRowMajor ? 0 : index(); } inline Index col() const { return XprIsRowMajor ? 
index() : 0; } inline Scalar value() const { return m_it.value(); } inline Scalar& valueRef() { return m_it.valueRef(); } inline OuterVectorInnerIterator& operator++() { // search next non-zero entry while(++m_outerPos struct unary_evaluator,BlockRows,BlockCols,true>, IteratorBased> : evaluator,BlockRows,BlockCols,true> > > { typedef Block,BlockRows,BlockCols,true> XprType; typedef evaluator > Base; explicit unary_evaluator(const XprType &xpr) : Base(xpr) {} }; template struct unary_evaluator,BlockRows,BlockCols,true>, IteratorBased> : evaluator,BlockRows,BlockCols,true> > > { typedef Block,BlockRows,BlockCols,true> XprType; typedef evaluator > Base; explicit unary_evaluator(const XprType &xpr) : Base(xpr) {} }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_SPARSE_BLOCK_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseVector.h0000644000175100001770000003476014107270226023221 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2015 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSEVECTOR_H #define EIGEN_SPARSEVECTOR_H namespace Eigen { /** \ingroup SparseCore_Module * \class SparseVector * * \brief a sparse vector class * * \tparam _Scalar the scalar type, i.e. the type of the coefficients * * See http://www.netlib.org/linalg/html_templates/node91.html for details on the storage scheme. * * This class can be extended with the help of the plugin mechanism described on the page * \ref TopicCustomizing_Plugins by defining the preprocessor symbol \c EIGEN_SPARSEVECTOR_PLUGIN. */ namespace internal { template struct traits > { typedef _Scalar Scalar; typedef _StorageIndex StorageIndex; typedef Sparse StorageKind; typedef MatrixXpr XprKind; enum { IsColVector = (_Options & RowMajorBit) ? 0 : 1, RowsAtCompileTime = IsColVector ? Dynamic : 1, ColsAtCompileTime = IsColVector ? 1 : Dynamic, MaxRowsAtCompileTime = RowsAtCompileTime, MaxColsAtCompileTime = ColsAtCompileTime, Flags = _Options | NestByRefBit | LvalueBit | (IsColVector ? 0 : RowMajorBit) | CompressedAccessBit, SupportedAccessPatterns = InnerRandomAccessPattern }; }; // Sparse-Vector-Assignment kinds: enum { SVA_RuntimeSwitch, SVA_Inner, SVA_Outer }; template< typename Dest, typename Src, int AssignmentKind = !bool(Src::IsVectorAtCompileTime) ? SVA_RuntimeSwitch : Src::InnerSizeAtCompileTime==1 ? SVA_Outer : SVA_Inner> struct sparse_vector_assign_selector; } template class SparseVector : public SparseCompressedBase > { typedef SparseCompressedBase Base; using Base::convert_index; public: EIGEN_SPARSE_PUBLIC_INTERFACE(SparseVector) EIGEN_SPARSE_INHERIT_ASSIGNMENT_OPERATOR(SparseVector, +=) EIGEN_SPARSE_INHERIT_ASSIGNMENT_OPERATOR(SparseVector, -=) typedef internal::CompressedStorage Storage; enum { IsColVector = internal::traits::IsColVector }; enum { Options = _Options }; EIGEN_STRONG_INLINE Index rows() const { return IsColVector ? m_size : 1; } EIGEN_STRONG_INLINE Index cols() const { return IsColVector ? 
1 : m_size; } EIGEN_STRONG_INLINE Index innerSize() const { return m_size; } EIGEN_STRONG_INLINE Index outerSize() const { return 1; } EIGEN_STRONG_INLINE const Scalar* valuePtr() const { return m_data.valuePtr(); } EIGEN_STRONG_INLINE Scalar* valuePtr() { return m_data.valuePtr(); } EIGEN_STRONG_INLINE const StorageIndex* innerIndexPtr() const { return m_data.indexPtr(); } EIGEN_STRONG_INLINE StorageIndex* innerIndexPtr() { return m_data.indexPtr(); } inline const StorageIndex* outerIndexPtr() const { return 0; } inline StorageIndex* outerIndexPtr() { return 0; } inline const StorageIndex* innerNonZeroPtr() const { return 0; } inline StorageIndex* innerNonZeroPtr() { return 0; } /** \internal */ inline Storage& data() { return m_data; } /** \internal */ inline const Storage& data() const { return m_data; } inline Scalar coeff(Index row, Index col) const { eigen_assert(IsColVector ? (col==0 && row>=0 && row=0 && col=0 && i=0 && row=0 && col=0 && i=0 && row=0 && col=0 && i= startId) && (m_data.index(p) > i) ) { m_data.index(p+1) = m_data.index(p); m_data.value(p+1) = m_data.value(p); --p; } m_data.index(p+1) = convert_index(i); m_data.value(p+1) = 0; return m_data.value(p+1); } /** */ inline void reserve(Index reserveSize) { m_data.reserve(reserveSize); } inline void finalize() {} /** \copydoc SparseMatrix::prune(const Scalar&,const RealScalar&) */ void prune(const Scalar& reference, const RealScalar& epsilon = NumTraits::dummy_precision()) { m_data.prune(reference,epsilon); } /** Resizes the sparse vector to \a rows x \a cols * * This method is provided for compatibility with matrices. * For a column vector, \a cols must be equal to 1. * For a row vector, \a rows must be equal to 1. * * \sa resize(Index) */ void resize(Index rows, Index cols) { eigen_assert((IsColVector ? cols : rows)==1 && "Outer dimension must equal 1"); resize(IsColVector ? rows : cols); } /** Resizes the sparse vector to \a newSize * This method deletes all entries, thus leaving an empty sparse vector * * \sa conservativeResize(), setZero() */ void resize(Index newSize) { m_size = newSize; m_data.clear(); } /** Resizes the sparse vector to \a newSize, while leaving old values untouched. * * If the size of the vector is decreased, then the storage of the out-of bounds coefficients is kept and reserved. * Call .data().squeeze() to free extra memory. * * \sa reserve(), setZero() */ void conservativeResize(Index newSize) { if (newSize < m_size) { Index i = 0; while (i inline SparseVector(const SparseMatrixBase& other) : m_size(0) { #ifdef EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN #endif check_template_parameters(); *this = other.derived(); } inline SparseVector(const SparseVector& other) : Base(other), m_size(0) { check_template_parameters(); *this = other.derived(); } /** Swaps the values of \c *this and \a other. * Overloaded for performance: this version performs a \em shallow swap by swapping pointers and attributes only. 
* \sa SparseMatrixBase::swap() */ inline void swap(SparseVector& other) { std::swap(m_size, other.m_size); m_data.swap(other.m_data); } template inline void swap(SparseMatrix& other) { eigen_assert(other.outerSize()==1); std::swap(m_size, other.m_innerSize); m_data.swap(other.m_data); } inline SparseVector& operator=(const SparseVector& other) { if (other.isRValue()) { swap(other.const_cast_derived()); } else { resize(other.size()); m_data = other.m_data; } return *this; } template inline SparseVector& operator=(const SparseMatrixBase& other) { SparseVector tmp(other.size()); internal::sparse_vector_assign_selector::run(tmp,other.derived()); this->swap(tmp); return *this; } #ifndef EIGEN_PARSED_BY_DOXYGEN template inline SparseVector& operator=(const SparseSparseProduct& product) { return Base::operator=(product); } #endif friend std::ostream & operator << (std::ostream & s, const SparseVector& m) { for (Index i=0; i::IsSigned,THE_INDEX_TYPE_MUST_BE_A_SIGNED_TYPE); EIGEN_STATIC_ASSERT((_Options&(ColMajor|RowMajor))==Options,INVALID_MATRIX_TEMPLATE_PARAMETERS); } Storage m_data; Index m_size; }; namespace internal { template struct evaluator > : evaluator_base > { typedef SparseVector<_Scalar,_Options,_Index> SparseVectorType; typedef evaluator_base Base; typedef typename SparseVectorType::InnerIterator InnerIterator; typedef typename SparseVectorType::ReverseInnerIterator ReverseInnerIterator; enum { CoeffReadCost = NumTraits<_Scalar>::ReadCost, Flags = SparseVectorType::Flags }; evaluator() : Base() {} explicit evaluator(const SparseVectorType &mat) : m_matrix(&mat) { EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } inline Index nonZerosEstimate() const { return m_matrix->nonZeros(); } operator SparseVectorType&() { return m_matrix->const_cast_derived(); } operator const SparseVectorType&() const { return *m_matrix; } const SparseVectorType *m_matrix; }; template< typename Dest, typename Src> struct sparse_vector_assign_selector { static void run(Dest& dst, const Src& src) { eigen_internal_assert(src.innerSize()==src.size()); typedef internal::evaluator SrcEvaluatorType; SrcEvaluatorType srcEval(src); for(typename SrcEvaluatorType::InnerIterator it(srcEval, 0); it; ++it) dst.insert(it.index()) = it.value(); } }; template< typename Dest, typename Src> struct sparse_vector_assign_selector { static void run(Dest& dst, const Src& src) { eigen_internal_assert(src.outerSize()==src.size()); typedef internal::evaluator SrcEvaluatorType; SrcEvaluatorType srcEval(src); for(Index i=0; i struct sparse_vector_assign_selector { static void run(Dest& dst, const Src& src) { if(src.outerSize()==1) sparse_vector_assign_selector::run(dst, src); else sparse_vector_assign_selector::run(dst, src); } }; } } // end namespace Eigen #endif // EIGEN_SPARSEVECTOR_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseDenseProduct.h0000644000175100001770000003171014107270226024346 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2015 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
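// ---------------------------------------------------------------------------
// Usage sketch (editor's illustration, not part of the original Eigen sources):
// a minimal, standalone example of the SparseVector class defined in the
// preceding file, assuming only the public Eigen 3.x API.  The function name
// is hypothetical.
// ---------------------------------------------------------------------------
#include <Eigen/SparseCore>

inline Eigen::SparseVector<double> sparse_vector_demo()
{
  Eigen::SparseVector<double> v(1000);   // logical size 1000, no nonzeros yet
  v.reserve(3);                          // pre-allocate room for three entries
  v.insert(10)  = 1.0;                   // insert() assumes the entry is new
  v.insert(500) = -2.5;
  v.coeffRef(999) = 4.0;                 // coeffRef() inserts when missing
  double total = v.sum();                // 2.5, computed in SparseRedux.h
  (void)total;
  v.prune(0.0, 1e-12);                   // drop entries that compare to zero
  return v;
}
// ---------------------------------------------------------------------------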
#ifndef EIGEN_SPARSEDENSEPRODUCT_H #define EIGEN_SPARSEDENSEPRODUCT_H namespace Eigen { namespace internal { template <> struct product_promote_storage_type { typedef Sparse ret; }; template <> struct product_promote_storage_type { typedef Sparse ret; }; template struct sparse_time_dense_product_impl; template struct sparse_time_dense_product_impl { typedef typename internal::remove_all::type Lhs; typedef typename internal::remove_all::type Rhs; typedef typename internal::remove_all::type Res; typedef typename evaluator::InnerIterator LhsInnerIterator; typedef evaluator LhsEval; static void run(const SparseLhsType& lhs, const DenseRhsType& rhs, DenseResType& res, const typename Res::Scalar& alpha) { LhsEval lhsEval(lhs); Index n = lhs.outerSize(); #ifdef EIGEN_HAS_OPENMP Eigen::initParallel(); Index threads = Eigen::nbThreads(); #endif for(Index c=0; c1 && lhsEval.nonZerosEstimate() > 20000) { #pragma omp parallel for schedule(dynamic,(n+threads*4-1)/(threads*4)) num_threads(threads) for(Index i=0; i let's disable it for now as it is conflicting with generic scalar*matrix and matrix*scalar operators // template // struct ScalarBinaryOpTraits > // { // enum { // Defined = 1 // }; // typedef typename CwiseUnaryOp, T2>::PlainObject ReturnType; // }; template struct sparse_time_dense_product_impl { typedef typename internal::remove_all::type Lhs; typedef typename internal::remove_all::type Rhs; typedef typename internal::remove_all::type Res; typedef evaluator LhsEval; typedef typename LhsEval::InnerIterator LhsInnerIterator; static void run(const SparseLhsType& lhs, const DenseRhsType& rhs, DenseResType& res, const AlphaType& alpha) { LhsEval lhsEval(lhs); for(Index c=0; c::ReturnType rhs_j(alpha * rhs.coeff(j,c)); for(LhsInnerIterator it(lhsEval,j); it ;++it) res.coeffRef(it.index(),c) += it.value() * rhs_j; } } } }; template struct sparse_time_dense_product_impl { typedef typename internal::remove_all::type Lhs; typedef typename internal::remove_all::type Rhs; typedef typename internal::remove_all::type Res; typedef evaluator LhsEval; typedef typename LhsEval::InnerIterator LhsInnerIterator; static void run(const SparseLhsType& lhs, const DenseRhsType& rhs, DenseResType& res, const typename Res::Scalar& alpha) { Index n = lhs.rows(); LhsEval lhsEval(lhs); #ifdef EIGEN_HAS_OPENMP Eigen::initParallel(); Index threads = Eigen::nbThreads(); // This 20000 threshold has been found experimentally on 2D and 3D Poisson problems. // It basically represents the minimal amount of work to be done to be worth it. 
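// ---------------------------------------------------------------------------
// Editor's note (not part of the original file): the kernel below computes
// res += alpha * lhs * rhs for a row-major sparse lhs, building one row of the
// result at a time (res.row(i) += alpha * lhs(i,j) * rhs.row(j) over the
// stored entries of row i) and parallelizing over those rows with OpenMP once
// the estimated work exceeds the heuristic described just above.  It is never
// called directly; expressions such as the following sketch (public Eigen API)
// are dispatched to one of the run() kernels in this file according to the
// storage orders involved:
//
//   Eigen::SparseMatrix<double, Eigen::RowMajor> A(m, k);   // filled elsewhere
//   Eigen::MatrixXd B(k, n), C(m, n);
//   C.setZero();
//   C.noalias() += 2.0 * (A * B);   // the scalar 2.0 arrives here as 'alpha'
// ---------------------------------------------------------------------------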
if(threads>1 && lhsEval.nonZerosEstimate()*rhs.cols() > 20000) { #pragma omp parallel for schedule(dynamic,(n+threads*4-1)/(threads*4)) num_threads(threads) for(Index i=0; i struct sparse_time_dense_product_impl { typedef typename internal::remove_all::type Lhs; typedef typename internal::remove_all::type Rhs; typedef typename internal::remove_all::type Res; typedef typename evaluator::InnerIterator LhsInnerIterator; static void run(const SparseLhsType& lhs, const DenseRhsType& rhs, DenseResType& res, const typename Res::Scalar& alpha) { evaluator lhsEval(lhs); for(Index j=0; j inline void sparse_time_dense_product(const SparseLhsType& lhs, const DenseRhsType& rhs, DenseResType& res, const AlphaType& alpha) { sparse_time_dense_product_impl::run(lhs, rhs, res, alpha); } } // end namespace internal namespace internal { template struct generic_product_impl : generic_product_impl_base > { typedef typename Product::Scalar Scalar; template static void scaleAndAddTo(Dest& dst, const Lhs& lhs, const Rhs& rhs, const Scalar& alpha) { typedef typename nested_eval::type LhsNested; typedef typename nested_eval::type RhsNested; LhsNested lhsNested(lhs); RhsNested rhsNested(rhs); internal::sparse_time_dense_product(lhsNested, rhsNested, dst, alpha); } }; template struct generic_product_impl : generic_product_impl {}; template struct generic_product_impl : generic_product_impl_base > { typedef typename Product::Scalar Scalar; template static void scaleAndAddTo(Dst& dst, const Lhs& lhs, const Rhs& rhs, const Scalar& alpha) { typedef typename nested_eval::type LhsNested; typedef typename nested_eval::type RhsNested; LhsNested lhsNested(lhs); RhsNested rhsNested(rhs); // transpose everything Transpose dstT(dst); internal::sparse_time_dense_product(rhsNested.transpose(), lhsNested.transpose(), dstT, alpha); } }; template struct generic_product_impl : generic_product_impl {}; template struct sparse_dense_outer_product_evaluator { protected: typedef typename conditional::type Lhs1; typedef typename conditional::type ActualRhs; typedef Product ProdXprType; // if the actual left-hand side is a dense vector, // then build a sparse-view so that we can seamlessly iterate over it. typedef typename conditional::StorageKind,Sparse>::value, Lhs1, SparseView >::type ActualLhs; typedef typename conditional::StorageKind,Sparse>::value, Lhs1 const&, SparseView >::type LhsArg; typedef evaluator LhsEval; typedef evaluator RhsEval; typedef typename evaluator::InnerIterator LhsIterator; typedef typename ProdXprType::Scalar Scalar; public: enum { Flags = NeedToTranspose ? RowMajorBit : 0, CoeffReadCost = HugeCost }; class InnerIterator : public LhsIterator { public: InnerIterator(const sparse_dense_outer_product_evaluator &xprEval, Index outer) : LhsIterator(xprEval.m_lhsXprImpl, 0), m_outer(outer), m_empty(false), m_factor(get(xprEval.m_rhsXprImpl, outer, typename internal::traits::StorageKind() )) {} EIGEN_STRONG_INLINE Index outer() const { return m_outer; } EIGEN_STRONG_INLINE Index row() const { return NeedToTranspose ? m_outer : LhsIterator::index(); } EIGEN_STRONG_INLINE Index col() const { return NeedToTranspose ? 
LhsIterator::index() : m_outer; } EIGEN_STRONG_INLINE Scalar value() const { return LhsIterator::value() * m_factor; } EIGEN_STRONG_INLINE operator bool() const { return LhsIterator::operator bool() && (!m_empty); } protected: Scalar get(const RhsEval &rhs, Index outer, Dense = Dense()) const { return rhs.coeff(outer); } Scalar get(const RhsEval &rhs, Index outer, Sparse = Sparse()) { typename RhsEval::InnerIterator it(rhs, outer); if (it && it.index()==0 && it.value()!=Scalar(0)) return it.value(); m_empty = true; return Scalar(0); } Index m_outer; bool m_empty; Scalar m_factor; }; sparse_dense_outer_product_evaluator(const Lhs1 &lhs, const ActualRhs &rhs) : m_lhs(lhs), m_lhsXprImpl(m_lhs), m_rhsXprImpl(rhs) { EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } // transpose case sparse_dense_outer_product_evaluator(const ActualRhs &rhs, const Lhs1 &lhs) : m_lhs(lhs), m_lhsXprImpl(m_lhs), m_rhsXprImpl(rhs) { EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } protected: const LhsArg m_lhs; evaluator m_lhsXprImpl; evaluator m_rhsXprImpl; }; // sparse * dense outer product template struct product_evaluator, OuterProduct, SparseShape, DenseShape> : sparse_dense_outer_product_evaluator { typedef sparse_dense_outer_product_evaluator Base; typedef Product XprType; typedef typename XprType::PlainObject PlainObject; explicit product_evaluator(const XprType& xpr) : Base(xpr.lhs(), xpr.rhs()) {} }; template struct product_evaluator, OuterProduct, DenseShape, SparseShape> : sparse_dense_outer_product_evaluator { typedef sparse_dense_outer_product_evaluator Base; typedef Product XprType; typedef typename XprType::PlainObject PlainObject; explicit product_evaluator(const XprType& xpr) : Base(xpr.lhs(), xpr.rhs()) {} }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_SPARSEDENSEPRODUCT_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseTriangularView.h0000644000175100001770000001444514107270226024720 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009-2015 Gael Guennebaud // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSE_TRIANGULARVIEW_H #define EIGEN_SPARSE_TRIANGULARVIEW_H namespace Eigen { /** \ingroup SparseCore_Module * * \brief Base class for a triangular part in a \b sparse matrix * * This class is an abstract base class of class TriangularView, and objects of type TriangularViewImpl cannot be instantiated. * It extends class TriangularView with additional methods which are available for sparse expressions only. * * \sa class TriangularView, SparseMatrixBase::triangularView() */ template class TriangularViewImpl : public SparseMatrixBase > { enum { SkipFirst = ((Mode&Lower) && !(MatrixType::Flags&RowMajorBit)) || ((Mode&Upper) && (MatrixType::Flags&RowMajorBit)), SkipLast = !SkipFirst, SkipDiag = (Mode&ZeroDiag) ? 1 : 0, HasUnitDiag = (Mode&UnitDiag) ? 1 : 0 }; typedef TriangularView TriangularViewType; protected: // dummy solve function to make TriangularView happy. 
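// ---------------------------------------------------------------------------
// Editor's note (not part of the original file): typical use of this sparse
// triangular view.  The solveInPlace() members declared below perform forward
// or backward substitution on a dense or sparse right-hand side.  A minimal
// sketch, assuming a lower-triangular sparse matrix with nonzero diagonal and
// the public Eigen 3.x API:
//
//   Eigen::SparseMatrix<double> L(n, n);                // filled elsewhere
//   Eigen::VectorXd b = Eigen::VectorXd::Ones(n);
//   Eigen::VectorXd x = b;
//   L.triangularView<Eigen::Lower>().solveInPlace(x);   // now L * x == b
// ---------------------------------------------------------------------------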
void solve() const; typedef SparseMatrixBase Base; public: EIGEN_SPARSE_PUBLIC_INTERFACE(TriangularViewType) typedef typename MatrixType::Nested MatrixTypeNested; typedef typename internal::remove_reference::type MatrixTypeNestedNonRef; typedef typename internal::remove_all::type MatrixTypeNestedCleaned; template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void _solve_impl(const RhsType &rhs, DstType &dst) const { if(!(internal::is_same::value && internal::extract_data(dst) == internal::extract_data(rhs))) dst = rhs; this->solveInPlace(dst); } /** Applies the inverse of \c *this to the dense vector or matrix \a other, "in-place" */ template void solveInPlace(MatrixBase& other) const; /** Applies the inverse of \c *this to the sparse vector or matrix \a other, "in-place" */ template void solveInPlace(SparseMatrixBase& other) const; }; namespace internal { template struct unary_evaluator, IteratorBased> : evaluator_base > { typedef TriangularView XprType; protected: typedef typename XprType::Scalar Scalar; typedef typename XprType::StorageIndex StorageIndex; typedef typename evaluator::InnerIterator EvalIterator; enum { SkipFirst = ((Mode&Lower) && !(ArgType::Flags&RowMajorBit)) || ((Mode&Upper) && (ArgType::Flags&RowMajorBit)), SkipLast = !SkipFirst, SkipDiag = (Mode&ZeroDiag) ? 1 : 0, HasUnitDiag = (Mode&UnitDiag) ? 1 : 0 }; public: enum { CoeffReadCost = evaluator::CoeffReadCost, Flags = XprType::Flags }; explicit unary_evaluator(const XprType &xpr) : m_argImpl(xpr.nestedExpression()), m_arg(xpr.nestedExpression()) {} inline Index nonZerosEstimate() const { return m_argImpl.nonZerosEstimate(); } class InnerIterator : public EvalIterator { typedef EvalIterator Base; public: EIGEN_STRONG_INLINE InnerIterator(const unary_evaluator& xprEval, Index outer) : Base(xprEval.m_argImpl,outer), m_returnOne(false), m_containsDiag(Base::outer()index()<=outer : this->index()=Base::outer())) { if((!SkipFirst) && Base::operator bool()) Base::operator++(); m_returnOne = m_containsDiag; } } EIGEN_STRONG_INLINE InnerIterator& operator++() { if(HasUnitDiag && m_returnOne) m_returnOne = false; else { Base::operator++(); if(HasUnitDiag && (!SkipFirst) && ((!Base::operator bool()) || Base::index()>=Base::outer())) { if((!SkipFirst) && Base::operator bool()) Base::operator++(); m_returnOne = m_containsDiag; } } return *this; } EIGEN_STRONG_INLINE operator bool() const { if(HasUnitDiag && m_returnOne) return true; if(SkipFirst) return Base::operator bool(); else { if (SkipDiag) return (Base::operator bool() && this->index() < this->outer()); else return (Base::operator bool() && this->index() <= this->outer()); } } // inline Index row() const { return (ArgType::Flags&RowMajorBit ? Base::outer() : this->index()); } // inline Index col() const { return (ArgType::Flags&RowMajorBit ? 
this->index() : Base::outer()); } inline StorageIndex index() const { if(HasUnitDiag && m_returnOne) return internal::convert_index(Base::outer()); else return Base::index(); } inline Scalar value() const { if(HasUnitDiag && m_returnOne) return Scalar(1); else return Base::value(); } protected: bool m_returnOne; bool m_containsDiag; private: Scalar& valueRef(); }; protected: evaluator m_argImpl; const ArgType& m_arg; }; } // end namespace internal template template inline const TriangularView SparseMatrixBase::triangularView() const { return TriangularView(derived()); } } // end namespace Eigen #endif // EIGEN_SPARSE_TRIANGULARVIEW_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparsePermutation.h0000644000175100001770000001624114107270226024260 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSE_PERMUTATION_H #define EIGEN_SPARSE_PERMUTATION_H // This file implements sparse * permutation products namespace Eigen { namespace internal { template struct permutation_matrix_product { typedef typename nested_eval::type MatrixType; typedef typename remove_all::type MatrixTypeCleaned; typedef typename MatrixTypeCleaned::Scalar Scalar; typedef typename MatrixTypeCleaned::StorageIndex StorageIndex; enum { SrcStorageOrder = MatrixTypeCleaned::Flags&RowMajorBit ? RowMajor : ColMajor, MoveOuter = SrcStorageOrder==RowMajor ? Side==OnTheLeft : Side==OnTheRight }; typedef typename internal::conditional, SparseMatrix >::type ReturnType; template static inline void run(Dest& dst, const PermutationType& perm, const ExpressionType& xpr) { MatrixType mat(xpr); if(MoveOuter) { SparseMatrix tmp(mat.rows(), mat.cols()); Matrix sizes(mat.outerSize()); for(Index j=0; j tmp(mat.rows(), mat.cols()); Matrix sizes(tmp.outerSize()); sizes.setZero(); PermutationMatrix perm_cpy; if((Side==OnTheLeft) ^ Transposed) perm_cpy = perm; else perm_cpy = perm.transpose(); for(Index j=0; j struct product_promote_storage_type { typedef Sparse ret; }; template struct product_promote_storage_type { typedef Sparse ret; }; // TODO, the following two overloads are only needed to define the right temporary type through // typename traits >::ReturnType // whereas it should be correctly handled by traits >::PlainObject template struct product_evaluator, ProductTag, PermutationShape, SparseShape> : public evaluator::ReturnType> { typedef Product XprType; typedef typename permutation_matrix_product::ReturnType PlainObject; typedef evaluator Base; enum { Flags = Base::Flags | EvalBeforeNestingBit }; explicit product_evaluator(const XprType& xpr) : m_result(xpr.rows(), xpr.cols()) { ::new (static_cast(this)) Base(m_result); generic_product_impl::evalTo(m_result, xpr.lhs(), xpr.rhs()); } protected: PlainObject m_result; }; template struct product_evaluator, ProductTag, SparseShape, PermutationShape > : public evaluator::ReturnType> { typedef Product XprType; typedef typename permutation_matrix_product::ReturnType PlainObject; typedef evaluator Base; enum { Flags = Base::Flags | EvalBeforeNestingBit }; explicit product_evaluator(const XprType& xpr) : m_result(xpr.rows(), xpr.cols()) { ::new (static_cast(this)) Base(m_result); generic_product_impl::evalTo(m_result, xpr.lhs(), xpr.rhs()); } protected: PlainObject m_result; }; } // 
end namespace internal /** \returns the matrix with the permutation applied to the columns */ template inline const Product operator*(const SparseMatrixBase& matrix, const PermutationBase& perm) { return Product(matrix.derived(), perm.derived()); } /** \returns the matrix with the permutation applied to the rows */ template inline const Product operator*( const PermutationBase& perm, const SparseMatrixBase& matrix) { return Product(perm.derived(), matrix.derived()); } /** \returns the matrix with the inverse permutation applied to the columns. */ template inline const Product, AliasFreeProduct> operator*(const SparseMatrixBase& matrix, const InverseImpl& tperm) { return Product, AliasFreeProduct>(matrix.derived(), tperm.derived()); } /** \returns the matrix with the inverse permutation applied to the rows. */ template inline const Product, SparseDerived, AliasFreeProduct> operator*(const InverseImpl& tperm, const SparseMatrixBase& matrix) { return Product, SparseDerived, AliasFreeProduct>(tperm.derived(), matrix.derived()); } } // end namespace Eigen #endif // EIGEN_SPARSE_SELFADJOINTVIEW_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseDiagonalProduct.h0000644000175100001770000001326014107270226025026 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009-2015 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSE_DIAGONAL_PRODUCT_H #define EIGEN_SPARSE_DIAGONAL_PRODUCT_H namespace Eigen { // The product of a diagonal matrix with a sparse matrix can be easily // implemented using expression template. // We have two consider very different cases: // 1 - diag * row-major sparse // => each inner vector <=> scalar * sparse vector product // => so we can reuse CwiseUnaryOp::InnerIterator // 2 - diag * col-major sparse // => each inner vector <=> densevector * sparse vector cwise product // => again, we can reuse specialization of CwiseBinaryOp::InnerIterator // for that particular case // The two other cases are symmetric. 
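// ---------------------------------------------------------------------------
// Editor's note (not part of the original file): the evaluators below implement
// diagonal-times-sparse expressions; multiplying from the left scales the rows
// and multiplying from the right scales the columns.  Together with the
// permutation products defined in the preceding file, typical usage looks like
// this sketch (public Eigen 3.x API):
//
//   Eigen::SparseMatrix<double> A(n, n);                       // filled elsewhere
//   Eigen::VectorXd d = Eigen::VectorXd::LinSpaced(n, 1.0, double(n));
//   Eigen::SparseMatrix<double> R  = d.asDiagonal() * A;       // scale rows
//   Eigen::SparseMatrix<double> C  = A * d.asDiagonal();       // scale columns
//
//   Eigen::PermutationMatrix<Eigen::Dynamic, Eigen::Dynamic> P(n);
//   P.setIdentity();                                           // or any permutation
//   Eigen::SparseMatrix<double> PA  = P * A;                   // permute rows
//   Eigen::SparseMatrix<double> APi = A * P.inverse();         // inverse permutation on columns
// ---------------------------------------------------------------------------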
namespace internal { enum { SDP_AsScalarProduct, SDP_AsCwiseProduct }; template struct sparse_diagonal_product_evaluator; template struct product_evaluator, ProductTag, DiagonalShape, SparseShape> : public sparse_diagonal_product_evaluator { typedef Product XprType; enum { CoeffReadCost = HugeCost, Flags = Rhs::Flags&RowMajorBit, Alignment = 0 }; // FIXME CoeffReadCost & Flags typedef sparse_diagonal_product_evaluator Base; explicit product_evaluator(const XprType& xpr) : Base(xpr.rhs(), xpr.lhs().diagonal()) {} }; template struct product_evaluator, ProductTag, SparseShape, DiagonalShape> : public sparse_diagonal_product_evaluator, Lhs::Flags&RowMajorBit?SDP_AsCwiseProduct:SDP_AsScalarProduct> { typedef Product XprType; enum { CoeffReadCost = HugeCost, Flags = Lhs::Flags&RowMajorBit, Alignment = 0 }; // FIXME CoeffReadCost & Flags typedef sparse_diagonal_product_evaluator, Lhs::Flags&RowMajorBit?SDP_AsCwiseProduct:SDP_AsScalarProduct> Base; explicit product_evaluator(const XprType& xpr) : Base(xpr.lhs(), xpr.rhs().diagonal().transpose()) {} }; template struct sparse_diagonal_product_evaluator { protected: typedef typename evaluator::InnerIterator SparseXprInnerIterator; typedef typename SparseXprType::Scalar Scalar; public: class InnerIterator : public SparseXprInnerIterator { public: InnerIterator(const sparse_diagonal_product_evaluator &xprEval, Index outer) : SparseXprInnerIterator(xprEval.m_sparseXprImpl, outer), m_coeff(xprEval.m_diagCoeffImpl.coeff(outer)) {} EIGEN_STRONG_INLINE Scalar value() const { return m_coeff * SparseXprInnerIterator::value(); } protected: typename DiagonalCoeffType::Scalar m_coeff; }; sparse_diagonal_product_evaluator(const SparseXprType &sparseXpr, const DiagonalCoeffType &diagCoeff) : m_sparseXprImpl(sparseXpr), m_diagCoeffImpl(diagCoeff) {} Index nonZerosEstimate() const { return m_sparseXprImpl.nonZerosEstimate(); } protected: evaluator m_sparseXprImpl; evaluator m_diagCoeffImpl; }; template struct sparse_diagonal_product_evaluator { typedef typename SparseXprType::Scalar Scalar; typedef typename SparseXprType::StorageIndex StorageIndex; typedef typename nested_eval::type DiagCoeffNested; class InnerIterator { typedef typename evaluator::InnerIterator SparseXprIter; public: InnerIterator(const sparse_diagonal_product_evaluator &xprEval, Index outer) : m_sparseIter(xprEval.m_sparseXprEval, outer), m_diagCoeffNested(xprEval.m_diagCoeffNested) {} inline Scalar value() const { return m_sparseIter.value() * m_diagCoeffNested.coeff(index()); } inline StorageIndex index() const { return m_sparseIter.index(); } inline Index outer() const { return m_sparseIter.outer(); } inline Index col() const { return SparseXprType::IsRowMajor ? m_sparseIter.index() : m_sparseIter.outer(); } inline Index row() const { return SparseXprType::IsRowMajor ? 
m_sparseIter.outer() : m_sparseIter.index(); } EIGEN_STRONG_INLINE InnerIterator& operator++() { ++m_sparseIter; return *this; } inline operator bool() const { return m_sparseIter; } protected: SparseXprIter m_sparseIter; DiagCoeffNested m_diagCoeffNested; }; sparse_diagonal_product_evaluator(const SparseXprType &sparseXpr, const DiagCoeffType &diagCoeff) : m_sparseXprEval(sparseXpr), m_diagCoeffNested(diagCoeff) {} Index nonZerosEstimate() const { return m_sparseXprEval.nonZerosEstimate(); } protected: evaluator m_sparseXprEval; DiagCoeffNested m_diagCoeffNested; }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_SPARSE_DIAGONAL_PRODUCT_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseView.h0000644000175100001770000001767714107270226022701 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2011-2014 Gael Guennebaud // Copyright (C) 2010 Daniel Lowengrub // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSEVIEW_H #define EIGEN_SPARSEVIEW_H namespace Eigen { namespace internal { template struct traits > : traits { typedef typename MatrixType::StorageIndex StorageIndex; typedef Sparse StorageKind; enum { Flags = int(traits::Flags) & (RowMajorBit) }; }; } // end namespace internal /** \ingroup SparseCore_Module * \class SparseView * * \brief Expression of a dense or sparse matrix with zero or too small values removed * * \tparam MatrixType the type of the object of which we are removing the small entries * * This class represents an expression of a given dense or sparse matrix with * entries smaller than \c reference * \c epsilon are removed. * It is the return type of MatrixBase::sparseView() and SparseMatrixBase::pruned() * and most of the time this is the only way it is used. * * \sa MatrixBase::sparseView(), SparseMatrixBase::pruned() */ template class SparseView : public SparseMatrixBase > { typedef typename MatrixType::Nested MatrixTypeNested; typedef typename internal::remove_all::type _MatrixTypeNested; typedef SparseMatrixBase Base; public: EIGEN_SPARSE_PUBLIC_INTERFACE(SparseView) typedef typename internal::remove_all::type NestedExpression; explicit SparseView(const MatrixType& mat, const Scalar& reference = Scalar(0), const RealScalar &epsilon = NumTraits::dummy_precision()) : m_matrix(mat), m_reference(reference), m_epsilon(epsilon) {} inline Index rows() const { return m_matrix.rows(); } inline Index cols() const { return m_matrix.cols(); } inline Index innerSize() const { return m_matrix.innerSize(); } inline Index outerSize() const { return m_matrix.outerSize(); } /** \returns the nested expression */ const typename internal::remove_all::type& nestedExpression() const { return m_matrix; } Scalar reference() const { return m_reference; } RealScalar epsilon() const { return m_epsilon; } protected: MatrixTypeNested m_matrix; Scalar m_reference; RealScalar m_epsilon; }; namespace internal { // TODO find a way to unify the two following variants // This is tricky because implementing an inner iterator on top of an IndexBased evaluator is // not easy because the evaluators do not expose the sizes of the underlying expression. 
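// ---------------------------------------------------------------------------
// Editor's note (not part of the original file): the two evaluators below let
// generic sparse code iterate over the entries a SparseView keeps, whether the
// nested expression is itself sparse (IteratorBased) or dense (IndexBased).
// From user code the view is usually consumed by assigning it to a sparse
// matrix, as in this sketch (public Eigen 3.x API):
//
//   Eigen::MatrixXd D(3, 3);
//   D << 1, 1e-14, 0,
//        0, 2,     0,
//        0, 0,     3;
//   Eigen::SparseMatrix<double> S = D.sparseView(1.0, 1e-12);  // keeps 1, 2, 3
//   for (int j = 0; j < S.outerSize(); ++j)
//     for (Eigen::SparseMatrix<double>::InnerIterator it(S, j); it; ++it)
//       { /* it.row(), it.col(), it.value() */ }
// ---------------------------------------------------------------------------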
template struct unary_evaluator, IteratorBased> : public evaluator_base > { typedef typename evaluator::InnerIterator EvalIterator; public: typedef SparseView XprType; class InnerIterator : public EvalIterator { protected: typedef typename XprType::Scalar Scalar; public: EIGEN_STRONG_INLINE InnerIterator(const unary_evaluator& sve, Index outer) : EvalIterator(sve.m_argImpl,outer), m_view(sve.m_view) { incrementToNonZero(); } EIGEN_STRONG_INLINE InnerIterator& operator++() { EvalIterator::operator++(); incrementToNonZero(); return *this; } using EvalIterator::value; protected: const XprType &m_view; private: void incrementToNonZero() { while((bool(*this)) && internal::isMuchSmallerThan(value(), m_view.reference(), m_view.epsilon())) { EvalIterator::operator++(); } } }; enum { CoeffReadCost = evaluator::CoeffReadCost, Flags = XprType::Flags }; explicit unary_evaluator(const XprType& xpr) : m_argImpl(xpr.nestedExpression()), m_view(xpr) {} protected: evaluator m_argImpl; const XprType &m_view; }; template struct unary_evaluator, IndexBased> : public evaluator_base > { public: typedef SparseView XprType; protected: enum { IsRowMajor = (XprType::Flags&RowMajorBit)==RowMajorBit }; typedef typename XprType::Scalar Scalar; typedef typename XprType::StorageIndex StorageIndex; public: class InnerIterator { public: EIGEN_STRONG_INLINE InnerIterator(const unary_evaluator& sve, Index outer) : m_sve(sve), m_inner(0), m_outer(outer), m_end(sve.m_view.innerSize()) { incrementToNonZero(); } EIGEN_STRONG_INLINE InnerIterator& operator++() { m_inner++; incrementToNonZero(); return *this; } EIGEN_STRONG_INLINE Scalar value() const { return (IsRowMajor) ? m_sve.m_argImpl.coeff(m_outer, m_inner) : m_sve.m_argImpl.coeff(m_inner, m_outer); } EIGEN_STRONG_INLINE StorageIndex index() const { return m_inner; } inline Index row() const { return IsRowMajor ? m_outer : index(); } inline Index col() const { return IsRowMajor ? index() : m_outer; } EIGEN_STRONG_INLINE operator bool() const { return m_inner < m_end && m_inner>=0; } protected: const unary_evaluator &m_sve; Index m_inner; const Index m_outer; const Index m_end; private: void incrementToNonZero() { while((bool(*this)) && internal::isMuchSmallerThan(value(), m_sve.m_view.reference(), m_sve.m_view.epsilon())) { m_inner++; } } }; enum { CoeffReadCost = evaluator::CoeffReadCost, Flags = XprType::Flags }; explicit unary_evaluator(const XprType& xpr) : m_argImpl(xpr.nestedExpression()), m_view(xpr) {} protected: evaluator m_argImpl; const XprType &m_view; }; } // end namespace internal /** \ingroup SparseCore_Module * * \returns a sparse expression of the dense expression \c *this with values smaller than * \a reference * \a epsilon removed. * * This method is typically used when prototyping to convert a quickly assembled dense Matrix \c D to a SparseMatrix \c S: * \code * MatrixXd D(n,m); * SparseMatrix S; * S = D.sparseView(); // suppress numerical zeros (exact) * S = D.sparseView(reference); * S = D.sparseView(reference,epsilon); * \endcode * where \a reference is a meaningful non zero reference value, * and \a epsilon is a tolerance factor defaulting to NumTraits::dummy_precision(). * * \sa SparseMatrixBase::pruned(), class SparseView */ template const SparseView MatrixBase::sparseView(const Scalar& reference, const typename NumTraits::Real& epsilon) const { return SparseView(derived(), reference, epsilon); } /** \returns an expression of \c *this with values smaller than * \a reference * \a epsilon removed. 
* * This method is typically used in conjunction with the product of two sparse matrices * to automatically prune the smallest values as follows: * \code * C = (A*B).pruned(); // suppress numerical zeros (exact) * C = (A*B).pruned(ref); * C = (A*B).pruned(ref,epsilon); * \endcode * where \c ref is a meaningful non zero reference value. * */ template const SparseView SparseMatrixBase::pruned(const Scalar& reference, const RealScalar& epsilon) const { return SparseView(derived(), reference, epsilon); } } // end namespace Eigen #endif piqp-octave/src/eigen/Eigen/src/SparseCore/SparseRedux.h0000644000175100001770000000324314107270226023036 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSEREDUX_H #define EIGEN_SPARSEREDUX_H namespace Eigen { template typename internal::traits::Scalar SparseMatrixBase::sum() const { eigen_assert(rows()>0 && cols()>0 && "you are using a non initialized matrix"); Scalar res(0); internal::evaluator thisEval(derived()); for (Index j=0; j::InnerIterator iter(thisEval,j); iter; ++iter) res += iter.value(); return res; } template typename internal::traits >::Scalar SparseMatrix<_Scalar,_Options,_Index>::sum() const { eigen_assert(rows()>0 && cols()>0 && "you are using a non initialized matrix"); if(this->isCompressed()) return Matrix::Map(m_data.valuePtr(), m_data.size()).sum(); else return Base::sum(); } template typename internal::traits >::Scalar SparseVector<_Scalar,_Options,_Index>::sum() const { eigen_assert(rows()>0 && cols()>0 && "you are using a non initialized matrix"); return Matrix::Map(m_data.valuePtr(), m_data.size()).sum(); } } // end namespace Eigen #endif // EIGEN_SPARSEREDUX_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseAssign.h0000644000175100001770000002615014107270226023175 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
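// ---------------------------------------------------------------------------
// Usage sketch (editor's illustration, not part of the original Eigen sources):
// the assignment machinery defined below is what runs behind ordinary mixed
// sparse/dense statements.  Standalone code assuming only the public Eigen 3.x
// API; the function name is hypothetical.
// ---------------------------------------------------------------------------
#include <Eigen/SparseCore>

inline void sparse_assignment_demo()
{
  Eigen::SparseMatrix<double> S(3, 3);
  S.insert(0, 0) = 1.0;
  S.insert(2, 1) = 2.0;
  S.makeCompressed();

  Eigen::MatrixXd D = Eigen::MatrixXd::Zero(3, 3);
  D += S;                                              // dense += sparse
  Eigen::MatrixXd E = D - S;                           // dense = dense - sparse
  Eigen::SparseMatrix<double, Eigen::RowMajor> R = S;  // storage-order change
  double total = S.sum();                              // 3.0, via SparseRedux.h above
  (void)E; (void)R; (void)total;
}
// ---------------------------------------------------------------------------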
#ifndef EIGEN_SPARSEASSIGN_H #define EIGEN_SPARSEASSIGN_H namespace Eigen { template template Derived& SparseMatrixBase::operator=(const EigenBase &other) { internal::call_assignment_no_alias(derived(), other.derived()); return derived(); } template template Derived& SparseMatrixBase::operator=(const ReturnByValue& other) { // TODO use the evaluator mechanism other.evalTo(derived()); return derived(); } template template inline Derived& SparseMatrixBase::operator=(const SparseMatrixBase& other) { // by default sparse evaluation do not alias, so we can safely bypass the generic call_assignment routine internal::Assignment > ::run(derived(), other.derived(), internal::assign_op()); return derived(); } template inline Derived& SparseMatrixBase::operator=(const Derived& other) { internal::call_assignment_no_alias(derived(), other.derived()); return derived(); } namespace internal { template<> struct storage_kind_to_evaluator_kind { typedef IteratorBased Kind; }; template<> struct storage_kind_to_shape { typedef SparseShape Shape; }; struct Sparse2Sparse {}; struct Sparse2Dense {}; template<> struct AssignmentKind { typedef Sparse2Sparse Kind; }; template<> struct AssignmentKind { typedef Sparse2Sparse Kind; }; template<> struct AssignmentKind { typedef Sparse2Dense Kind; }; template<> struct AssignmentKind { typedef Sparse2Dense Kind; }; template void assign_sparse_to_sparse(DstXprType &dst, const SrcXprType &src) { typedef typename DstXprType::Scalar Scalar; typedef internal::evaluator DstEvaluatorType; typedef internal::evaluator SrcEvaluatorType; SrcEvaluatorType srcEvaluator(src); const bool transpose = (DstEvaluatorType::Flags & RowMajorBit) != (SrcEvaluatorType::Flags & RowMajorBit); const Index outerEvaluationSize = (SrcEvaluatorType::Flags&RowMajorBit) ? src.rows() : src.cols(); if ((!transpose) && src.isRValue()) { // eval without temporary dst.resize(src.rows(), src.cols()); dst.setZero(); dst.reserve((std::min)(src.rows()*src.cols(), (std::max)(src.rows(),src.cols())*2)); for (Index j=0; j::SupportedAccessPatterns & OuterRandomAccessPattern)==OuterRandomAccessPattern) || (!((DstEvaluatorType::Flags & RowMajorBit) != (SrcEvaluatorType::Flags & RowMajorBit)))) && "the transpose operation is supposed to be handled in SparseMatrix::operator="); enum { Flip = (DstEvaluatorType::Flags & RowMajorBit) != (SrcEvaluatorType::Flags & RowMajorBit) }; DstXprType temp(src.rows(), src.cols()); temp.reserve((std::min)(src.rows()*src.cols(), (std::max)(src.rows(),src.cols())*2)); for (Index j=0; j struct Assignment { static void run(DstXprType &dst, const SrcXprType &src, const internal::assign_op &/*func*/) { assign_sparse_to_sparse(dst.derived(), src.derived()); } }; // Generic Sparse to Dense assignment template< typename DstXprType, typename SrcXprType, typename Functor, typename Weak> struct Assignment { static void run(DstXprType &dst, const SrcXprType &src, const Functor &func) { if(internal::is_same >::value) dst.setZero(); internal::evaluator srcEval(src); resize_if_allowed(dst, src, func); internal::evaluator dstEval(dst); const Index outerEvaluationSize = (internal::evaluator::Flags&RowMajorBit) ? 
src.rows() : src.cols(); for (Index j=0; j::InnerIterator i(srcEval,j); i; ++i) func.assignCoeff(dstEval.coeffRef(i.row(),i.col()), i.value()); } }; // Specialization for dense ?= dense +/- sparse and dense ?= sparse +/- dense template struct assignment_from_dense_op_sparse { template static EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void run(DstXprType &dst, const SrcXprType &src, const InitialFunc& /*func*/) { #ifdef EIGEN_SPARSE_ASSIGNMENT_FROM_DENSE_OP_SPARSE_PLUGIN EIGEN_SPARSE_ASSIGNMENT_FROM_DENSE_OP_SPARSE_PLUGIN #endif call_assignment_no_alias(dst, src.lhs(), Func1()); call_assignment_no_alias(dst, src.rhs(), Func2()); } // Specialization for dense1 = sparse + dense2; -> dense1 = dense2; dense1 += sparse; template static EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename internal::enable_if::Shape,DenseShape>::value>::type run(DstXprType &dst, const CwiseBinaryOp, const Lhs, const Rhs> &src, const internal::assign_op& /*func*/) { #ifdef EIGEN_SPARSE_ASSIGNMENT_FROM_SPARSE_ADD_DENSE_PLUGIN EIGEN_SPARSE_ASSIGNMENT_FROM_SPARSE_ADD_DENSE_PLUGIN #endif // Apply the dense matrix first, then the sparse one. call_assignment_no_alias(dst, src.rhs(), Func1()); call_assignment_no_alias(dst, src.lhs(), Func2()); } // Specialization for dense1 = sparse - dense2; -> dense1 = -dense2; dense1 += sparse; template static EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename internal::enable_if::Shape,DenseShape>::value>::type run(DstXprType &dst, const CwiseBinaryOp, const Lhs, const Rhs> &src, const internal::assign_op& /*func*/) { #ifdef EIGEN_SPARSE_ASSIGNMENT_FROM_SPARSE_SUB_DENSE_PLUGIN EIGEN_SPARSE_ASSIGNMENT_FROM_SPARSE_SUB_DENSE_PLUGIN #endif // Apply the dense matrix first, then the sparse one. call_assignment_no_alias(dst, -src.rhs(), Func1()); call_assignment_no_alias(dst, src.lhs(), add_assign_op()); } }; #define EIGEN_CATCH_ASSIGN_DENSE_OP_SPARSE(ASSIGN_OP,BINOP,ASSIGN_OP2) \ template< typename DstXprType, typename Lhs, typename Rhs, typename Scalar> \ struct Assignment, const Lhs, const Rhs>, internal::ASSIGN_OP, \ Sparse2Dense, \ typename internal::enable_if< internal::is_same::Shape,DenseShape>::value \ || internal::is_same::Shape,DenseShape>::value>::type> \ : assignment_from_dense_op_sparse, internal::ASSIGN_OP2 > \ {} EIGEN_CATCH_ASSIGN_DENSE_OP_SPARSE(assign_op, scalar_sum_op,add_assign_op); EIGEN_CATCH_ASSIGN_DENSE_OP_SPARSE(add_assign_op,scalar_sum_op,add_assign_op); EIGEN_CATCH_ASSIGN_DENSE_OP_SPARSE(sub_assign_op,scalar_sum_op,sub_assign_op); EIGEN_CATCH_ASSIGN_DENSE_OP_SPARSE(assign_op, scalar_difference_op,sub_assign_op); EIGEN_CATCH_ASSIGN_DENSE_OP_SPARSE(add_assign_op,scalar_difference_op,sub_assign_op); EIGEN_CATCH_ASSIGN_DENSE_OP_SPARSE(sub_assign_op,scalar_difference_op,add_assign_op); // Specialization for "dst = dec.solve(rhs)" // NOTE we need to specialize it for Sparse2Sparse to avoid ambiguous specialization error template struct Assignment, internal::assign_op, Sparse2Sparse> { typedef Solve SrcXprType; static void run(DstXprType &dst, const SrcXprType &src, const internal::assign_op &) { Index dstRows = src.rows(); Index dstCols = src.cols(); if((dst.rows()!=dstRows) || (dst.cols()!=dstCols)) dst.resize(dstRows, dstCols); src.dec()._solve_impl(src.rhs(), dst); } }; struct Diagonal2Sparse {}; template<> struct AssignmentKind { typedef Diagonal2Sparse Kind; }; template< typename DstXprType, typename SrcXprType, typename Functor> struct Assignment { typedef typename DstXprType::StorageIndex StorageIndex; typedef typename DstXprType::Scalar Scalar; template static void 
run(SparseMatrix &dst, const SrcXprType &src, const AssignFunc &func) { dst.assignDiagonal(src.diagonal(), func); } template static void run(SparseMatrixBase &dst, const SrcXprType &src, const internal::assign_op &/*func*/) { dst.derived().diagonal() = src.diagonal(); } template static void run(SparseMatrixBase &dst, const SrcXprType &src, const internal::add_assign_op &/*func*/) { dst.derived().diagonal() += src.diagonal(); } template static void run(SparseMatrixBase &dst, const SrcXprType &src, const internal::sub_assign_op &/*func*/) { dst.derived().diagonal() -= src.diagonal(); } }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_SPARSEASSIGN_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseProduct.h0000644000175100001770000001665114107270226023376 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2015 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSEPRODUCT_H #define EIGEN_SPARSEPRODUCT_H namespace Eigen { /** \returns an expression of the product of two sparse matrices. * By default a conservative product preserving the symbolic non zeros is performed. * The automatic pruning of the small values can be achieved by calling the pruned() function * in which case a totally different product algorithm is employed: * \code * C = (A*B).pruned(); // suppress numerical zeros (exact) * C = (A*B).pruned(ref); * C = (A*B).pruned(ref,epsilon); * \endcode * where \c ref is a meaningful non zero reference value. * */ template template inline const Product SparseMatrixBase::operator*(const SparseMatrixBase &other) const { return Product(derived(), other.derived()); } namespace internal { // sparse * sparse template struct generic_product_impl { template static void evalTo(Dest& dst, const Lhs& lhs, const Rhs& rhs) { evalTo(dst, lhs, rhs, typename evaluator_traits::Shape()); } // dense += sparse * sparse template static void addTo(Dest& dst, const ActualLhs& lhs, const Rhs& rhs, typename enable_if::Shape,DenseShape>::value,int*>::type* = 0) { typedef typename nested_eval::type LhsNested; typedef typename nested_eval::type RhsNested; LhsNested lhsNested(lhs); RhsNested rhsNested(rhs); internal::sparse_sparse_to_dense_product_selector::type, typename remove_all::type, Dest>::run(lhsNested,rhsNested,dst); } // dense -= sparse * sparse template static void subTo(Dest& dst, const Lhs& lhs, const Rhs& rhs, typename enable_if::Shape,DenseShape>::value,int*>::type* = 0) { addTo(dst, -lhs, rhs); } protected: // sparse = sparse * sparse template static void evalTo(Dest& dst, const Lhs& lhs, const Rhs& rhs, SparseShape) { typedef typename nested_eval::type LhsNested; typedef typename nested_eval::type RhsNested; LhsNested lhsNested(lhs); RhsNested rhsNested(rhs); internal::conservative_sparse_sparse_product_selector::type, typename remove_all::type, Dest>::run(lhsNested,rhsNested,dst); } // dense = sparse * sparse template static void evalTo(Dest& dst, const Lhs& lhs, const Rhs& rhs, DenseShape) { dst.setZero(); addTo(dst, lhs, rhs); } }; // sparse * sparse-triangular template struct generic_product_impl : public generic_product_impl {}; // sparse-triangular * sparse template struct generic_product_impl : public generic_product_impl {}; // dense = sparse-product (can be sparse*sparse, sparse*perm, etc.) 
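// ---------------------------------------------------------------------------
// Editor's note (not part of the original file): the three Assignment
// specializations that follow are what run behind statements mixing a dense
// destination with a sparse product, for example (public Eigen 3.x API):
//
//   Eigen::SparseMatrix<double> A(m, k), B(k, n);   // filled elsewhere
//   Eigen::MatrixXd D(m, n);
//   D  = A * B;    // dense  = sparse-product
//   D += A * B;    // dense += sparse-product
//   D -= A * B;    // dense -= sparse-product
// ---------------------------------------------------------------------------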
template< typename DstXprType, typename Lhs, typename Rhs> struct Assignment, internal::assign_op::Scalar>, Sparse2Dense> { typedef Product SrcXprType; static void run(DstXprType &dst, const SrcXprType &src, const internal::assign_op &) { Index dstRows = src.rows(); Index dstCols = src.cols(); if((dst.rows()!=dstRows) || (dst.cols()!=dstCols)) dst.resize(dstRows, dstCols); generic_product_impl::evalTo(dst,src.lhs(),src.rhs()); } }; // dense += sparse-product (can be sparse*sparse, sparse*perm, etc.) template< typename DstXprType, typename Lhs, typename Rhs> struct Assignment, internal::add_assign_op::Scalar>, Sparse2Dense> { typedef Product SrcXprType; static void run(DstXprType &dst, const SrcXprType &src, const internal::add_assign_op &) { generic_product_impl::addTo(dst,src.lhs(),src.rhs()); } }; // dense -= sparse-product (can be sparse*sparse, sparse*perm, etc.) template< typename DstXprType, typename Lhs, typename Rhs> struct Assignment, internal::sub_assign_op::Scalar>, Sparse2Dense> { typedef Product SrcXprType; static void run(DstXprType &dst, const SrcXprType &src, const internal::sub_assign_op &) { generic_product_impl::subTo(dst,src.lhs(),src.rhs()); } }; template struct unary_evaluator >, IteratorBased> : public evaluator::PlainObject> { typedef SparseView > XprType; typedef typename XprType::PlainObject PlainObject; typedef evaluator Base; explicit unary_evaluator(const XprType& xpr) : m_result(xpr.rows(), xpr.cols()) { using std::abs; ::new (static_cast(this)) Base(m_result); typedef typename nested_eval::type LhsNested; typedef typename nested_eval::type RhsNested; LhsNested lhsNested(xpr.nestedExpression().lhs()); RhsNested rhsNested(xpr.nestedExpression().rhs()); internal::sparse_sparse_product_with_pruning_selector::type, typename remove_all::type, PlainObject>::run(lhsNested,rhsNested,m_result, abs(xpr.reference())*xpr.epsilon()); } protected: PlainObject m_result; }; } // end namespace internal // sparse matrix = sparse-product (can be sparse*sparse, sparse*perm, etc.) template template SparseMatrix& SparseMatrix::operator=(const Product& src) { // std::cout << "in Assignment : " << DstOptions << "\n"; SparseMatrix dst(src.rows(),src.cols()); internal::generic_product_impl::evalTo(dst,src.lhs(),src.rhs()); this->swap(dst); return *this; } } // end namespace Eigen #endif // EIGEN_SPARSEPRODUCT_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseRef.h0000644000175100001770000003636014107270226022471 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2015 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
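// Illustrative sketch of what the Ref<> specializations defined in this file provide
// (hypothetical user code, assuming <Eigen/SparseCore> is included and using namespace Eigen):
// a function can accept any compatible sparse expression by reference, without copying, e.g.
//
//   Index countNonZeros(const Ref<const SparseMatrix<double> >& mat) { return mat.nonZeros(); }
//
//   SparseMatrix<double> A(10,10);
//   countNonZeros(A);   // binds directly to A's arrays, no copy
//
//   SparseMatrix<double,RowMajor> B(10,10);
//   countNonZeros(B);   // storage orders differ: a temporary copy is created internally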
#ifndef EIGEN_SPARSE_REF_H #define EIGEN_SPARSE_REF_H namespace Eigen { enum { StandardCompressedFormat = 2 /**< used by Ref to specify whether the input storage must be in standard compressed form */ }; namespace internal { template class SparseRefBase; template struct traits, _Options, _StrideType> > : public traits > { typedef SparseMatrix PlainObjectType; enum { Options = _Options, Flags = traits::Flags | CompressedAccessBit | NestByRefBit }; template struct match { enum { StorageOrderMatch = PlainObjectType::IsVectorAtCompileTime || Derived::IsVectorAtCompileTime || ((PlainObjectType::Flags&RowMajorBit)==(Derived::Flags&RowMajorBit)), MatchAtCompileTime = (Derived::Flags&CompressedAccessBit) && StorageOrderMatch }; typedef typename internal::conditional::type type; }; }; template struct traits, _Options, _StrideType> > : public traits, _Options, _StrideType> > { enum { Flags = (traits >::Flags | CompressedAccessBit | NestByRefBit) & ~LvalueBit }; }; template struct traits, _Options, _StrideType> > : public traits > { typedef SparseVector PlainObjectType; enum { Options = _Options, Flags = traits::Flags | CompressedAccessBit | NestByRefBit }; template struct match { enum { MatchAtCompileTime = (Derived::Flags&CompressedAccessBit) && Derived::IsVectorAtCompileTime }; typedef typename internal::conditional::type type; }; }; template struct traits, _Options, _StrideType> > : public traits, _Options, _StrideType> > { enum { Flags = (traits >::Flags | CompressedAccessBit | NestByRefBit) & ~LvalueBit }; }; template struct traits > : public traits {}; template class SparseRefBase : public SparseMapBase { public: typedef SparseMapBase Base; EIGEN_SPARSE_PUBLIC_INTERFACE(SparseRefBase) SparseRefBase() : Base(RowsAtCompileTime==Dynamic?0:RowsAtCompileTime,ColsAtCompileTime==Dynamic?0:ColsAtCompileTime, 0, 0, 0, 0, 0) {} protected: template void construct(Expression& expr) { if(expr.outerIndexPtr()==0) ::new (static_cast(this)) Base(expr.size(), expr.nonZeros(), expr.innerIndexPtr(), expr.valuePtr()); else ::new (static_cast(this)) Base(expr.rows(), expr.cols(), expr.nonZeros(), expr.outerIndexPtr(), expr.innerIndexPtr(), expr.valuePtr(), expr.innerNonZeroPtr()); } }; } // namespace internal /** * \ingroup SparseCore_Module * * \brief A sparse matrix expression referencing an existing sparse expression * * \tparam SparseMatrixType the equivalent sparse matrix type of the referenced data, it must be a template instance of class SparseMatrix. * \tparam Options specifies whether the a standard compressed format is required \c Options is \c #StandardCompressedFormat, or \c 0. * The default is \c 0. * * \sa class Ref */ #ifndef EIGEN_PARSED_BY_DOXYGEN template class Ref, Options, StrideType > : public internal::SparseRefBase, Options, StrideType > > #else template class Ref : public SparseMapBase // yes, that's weird to use Derived here, but that works! 
#endif { typedef SparseMatrix PlainObjectType; typedef internal::traits Traits; template inline Ref(const SparseMatrix& expr); template inline Ref(const MappedSparseMatrix& expr); public: typedef internal::SparseRefBase Base; EIGEN_SPARSE_PUBLIC_INTERFACE(Ref) #ifndef EIGEN_PARSED_BY_DOXYGEN template inline Ref(SparseMatrix& expr) { EIGEN_STATIC_ASSERT(bool(Traits::template match >::MatchAtCompileTime), STORAGE_LAYOUT_DOES_NOT_MATCH); eigen_assert( ((Options & int(StandardCompressedFormat))==0) || (expr.isCompressed()) ); Base::construct(expr.derived()); } template inline Ref(MappedSparseMatrix& expr) { EIGEN_STATIC_ASSERT(bool(Traits::template match >::MatchAtCompileTime), STORAGE_LAYOUT_DOES_NOT_MATCH); eigen_assert( ((Options & int(StandardCompressedFormat))==0) || (expr.isCompressed()) ); Base::construct(expr.derived()); } template inline Ref(const SparseCompressedBase& expr) #else /** Implicit constructor from any sparse expression (2D matrix or 1D vector) */ template inline Ref(SparseCompressedBase& expr) #endif { EIGEN_STATIC_ASSERT(bool(internal::is_lvalue::value), THIS_EXPRESSION_IS_NOT_A_LVALUE__IT_IS_READ_ONLY); EIGEN_STATIC_ASSERT(bool(Traits::template match::MatchAtCompileTime), STORAGE_LAYOUT_DOES_NOT_MATCH); eigen_assert( ((Options & int(StandardCompressedFormat))==0) || (expr.isCompressed()) ); Base::construct(expr.const_cast_derived()); } }; // this is the const ref version template class Ref, Options, StrideType> : public internal::SparseRefBase, Options, StrideType> > { typedef SparseMatrix TPlainObjectType; typedef internal::traits Traits; public: typedef internal::SparseRefBase Base; EIGEN_SPARSE_PUBLIC_INTERFACE(Ref) template inline Ref(const SparseMatrixBase& expr) : m_hasCopy(false) { construct(expr.derived(), typename Traits::template match::type()); } inline Ref(const Ref& other) : Base(other), m_hasCopy(false) { // copy constructor shall not copy the m_object, to avoid unnecessary malloc and copy } template inline Ref(const RefBase& other) : m_hasCopy(false) { construct(other.derived(), typename Traits::template match::type()); } ~Ref() { if(m_hasCopy) { TPlainObjectType* obj = reinterpret_cast(&m_storage); obj->~TPlainObjectType(); } } protected: template void construct(const Expression& expr,internal::true_type) { if((Options & int(StandardCompressedFormat)) && (!expr.isCompressed())) { TPlainObjectType* obj = reinterpret_cast(&m_storage); ::new (obj) TPlainObjectType(expr); m_hasCopy = true; Base::construct(*obj); } else { Base::construct(expr); } } template void construct(const Expression& expr, internal::false_type) { TPlainObjectType* obj = reinterpret_cast(&m_storage); ::new (obj) TPlainObjectType(expr); m_hasCopy = true; Base::construct(*obj); } protected: typename internal::aligned_storage::type m_storage; bool m_hasCopy; }; /** * \ingroup SparseCore_Module * * \brief A sparse vector expression referencing an existing sparse vector expression * * \tparam SparseVectorType the equivalent sparse vector type of the referenced data, it must be a template instance of class SparseVector. 
* * \sa class Ref */ #ifndef EIGEN_PARSED_BY_DOXYGEN template class Ref, Options, StrideType > : public internal::SparseRefBase, Options, StrideType > > #else template class Ref : public SparseMapBase #endif { typedef SparseVector PlainObjectType; typedef internal::traits Traits; template inline Ref(const SparseVector& expr); public: typedef internal::SparseRefBase Base; EIGEN_SPARSE_PUBLIC_INTERFACE(Ref) #ifndef EIGEN_PARSED_BY_DOXYGEN template inline Ref(SparseVector& expr) { EIGEN_STATIC_ASSERT(bool(Traits::template match >::MatchAtCompileTime), STORAGE_LAYOUT_DOES_NOT_MATCH); Base::construct(expr.derived()); } template inline Ref(const SparseCompressedBase& expr) #else /** Implicit constructor from any 1D sparse vector expression */ template inline Ref(SparseCompressedBase& expr) #endif { EIGEN_STATIC_ASSERT(bool(internal::is_lvalue::value), THIS_EXPRESSION_IS_NOT_A_LVALUE__IT_IS_READ_ONLY); EIGEN_STATIC_ASSERT(bool(Traits::template match::MatchAtCompileTime), STORAGE_LAYOUT_DOES_NOT_MATCH); Base::construct(expr.const_cast_derived()); } }; // this is the const ref version template class Ref, Options, StrideType> : public internal::SparseRefBase, Options, StrideType> > { typedef SparseVector TPlainObjectType; typedef internal::traits Traits; public: typedef internal::SparseRefBase Base; EIGEN_SPARSE_PUBLIC_INTERFACE(Ref) template inline Ref(const SparseMatrixBase& expr) : m_hasCopy(false) { construct(expr.derived(), typename Traits::template match::type()); } inline Ref(const Ref& other) : Base(other), m_hasCopy(false) { // copy constructor shall not copy the m_object, to avoid unnecessary malloc and copy } template inline Ref(const RefBase& other) : m_hasCopy(false) { construct(other.derived(), typename Traits::template match::type()); } ~Ref() { if(m_hasCopy) { TPlainObjectType* obj = reinterpret_cast(&m_storage); obj->~TPlainObjectType(); } } protected: template void construct(const Expression& expr,internal::true_type) { Base::construct(expr); } template void construct(const Expression& expr, internal::false_type) { TPlainObjectType* obj = reinterpret_cast(&m_storage); ::new (obj) TPlainObjectType(expr); m_hasCopy = true; Base::construct(*obj); } protected: typename internal::aligned_storage::type m_storage; bool m_hasCopy; }; namespace internal { // FIXME shall we introduce a general evaluatior_ref that we can specialize for any sparse object once, and thus remove this copy-pasta thing... 
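// Note on the const Ref<> variants above: when the incoming expression already matches the
// requested layout (and is compressed if StandardCompressedFormat is requested), the Ref binds
// directly to its arrays; otherwise a compressed temporary is built in m_storage and m_hasCopy
// is set. Hedged sketch of the user-visible behaviour (hypothetical example):
//
//   void f(const Ref<const SparseMatrix<double>, StandardCompressedFormat>& m);
//
//   SparseMatrix<double> A(5,5);
//   A.insert(1,2) = 3.0;   // A is now in uncompressed mode
//   f(A);                  // StandardCompressedFormat requested -> a compressed copy is made
//   A.makeCompressed();
//   f(A);                  // already compressed -> binds directly, no copy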
template struct evaluator, Options, StrideType> > : evaluator, Options, StrideType> > > { typedef evaluator, Options, StrideType> > > Base; typedef Ref, Options, StrideType> XprType; evaluator() : Base() {} explicit evaluator(const XprType &mat) : Base(mat) {} }; template struct evaluator, Options, StrideType> > : evaluator, Options, StrideType> > > { typedef evaluator, Options, StrideType> > > Base; typedef Ref, Options, StrideType> XprType; evaluator() : Base() {} explicit evaluator(const XprType &mat) : Base(mat) {} }; template struct evaluator, Options, StrideType> > : evaluator, Options, StrideType> > > { typedef evaluator, Options, StrideType> > > Base; typedef Ref, Options, StrideType> XprType; evaluator() : Base() {} explicit evaluator(const XprType &mat) : Base(mat) {} }; template struct evaluator, Options, StrideType> > : evaluator, Options, StrideType> > > { typedef evaluator, Options, StrideType> > > Base; typedef Ref, Options, StrideType> XprType; evaluator() : Base() {} explicit evaluator(const XprType &mat) : Base(mat) {} }; } } // end namespace Eigen #endif // EIGEN_SPARSE_REF_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseColEtree.h0000644000175100001770000001452514107270226023456 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* * NOTE: This file is the modified version of sp_coletree.c file in SuperLU * -- SuperLU routine (version 3.1) -- * Univ. of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * August 1, 2008 * * Copyright (c) 1994 by Xerox Corporation. All rights reserved. * * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY * EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. * * Permission is hereby granted to use or copy this program for any * purpose, provided the above notices are retained on all copies. * Permission to modify the code and to distribute modified code is * granted, provided the above notices are retained, and a notice that * the code was modified is included with the above copyright notice. */ #ifndef SPARSE_COLETREE_H #define SPARSE_COLETREE_H namespace Eigen { namespace internal { /** Find the root of the tree/set containing the vertex i : Use Path halving */ template Index etree_find (Index i, IndexVector& pp) { Index p = pp(i); // Parent Index gp = pp(p); // Grand parent while (gp != p) { pp(i) = gp; // Parent pointer on find path is changed to former grand parent i = gp; p = pp(i); gp = pp(p); } return p; } /** Compute the column elimination tree of a sparse matrix * \param mat The matrix in column-major format. 
* \param parent The elimination tree * \param firstRowElt The column index of the first element in each row * \param perm The permutation to apply to the column of \b mat */ template int coletree(const MatrixType& mat, IndexVector& parent, IndexVector& firstRowElt, typename MatrixType::StorageIndex *perm=0) { typedef typename MatrixType::StorageIndex StorageIndex; StorageIndex nc = convert_index(mat.cols()); // Number of columns StorageIndex m = convert_index(mat.rows()); StorageIndex diagSize = (std::min)(nc,m); IndexVector root(nc); // root of subtree of etree root.setZero(); IndexVector pp(nc); // disjoint sets pp.setZero(); // Initialize disjoint sets parent.resize(mat.cols()); //Compute first nonzero column in each row firstRowElt.resize(m); firstRowElt.setConstant(nc); firstRowElt.segment(0, diagSize).setLinSpaced(diagSize, 0, diagSize-1); bool found_diag; for (StorageIndex col = 0; col < nc; col++) { StorageIndex pcol = col; if(perm) pcol = perm[col]; for (typename MatrixType::InnerIterator it(mat, pcol); it; ++it) { Index row = it.row(); firstRowElt(row) = (std::min)(firstRowElt(row), col); } } /* Compute etree by Liu's algorithm for symmetric matrices, except use (firstRowElt[r],c) in place of an edge (r,c) of A. Thus each row clique in A'*A is replaced by a star centered at its first vertex, which has the same fill. */ StorageIndex rset, cset, rroot; for (StorageIndex col = 0; col < nc; col++) { found_diag = col>=m; pp(col) = col; cset = col; root(cset) = col; parent(col) = nc; /* The diagonal element is treated here even if it does not exist in the matrix * hence the loop is executed once more */ StorageIndex pcol = col; if(perm) pcol = perm[col]; for (typename MatrixType::InnerIterator it(mat, pcol); it||!found_diag; ++it) { // A sequence of interleaved find and union is performed Index i = col; if(it) i = it.index(); if (i == col) found_diag = true; StorageIndex row = firstRowElt(i); if (row >= col) continue; rset = internal::etree_find(row, pp); // Find the name of the set containing row rroot = root(rset); if (rroot != col) { parent(rroot) = col; pp(cset) = rset; cset = rset; root(cset) = col; } } } return 0; } /** * Depth-first search from vertex n. No recursion. * This routine was contributed by Cédric Doucet, CEDRAT Group, Meylan, France. 
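 * (For reference: the loop below walks the tree iteratively through the first_kid/next_kid
 * sibling lists, numbering a node only once all of its children have been numbered, so no
 * recursion stack is required.)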
*/ template void nr_etdfs (typename IndexVector::Scalar n, IndexVector& parent, IndexVector& first_kid, IndexVector& next_kid, IndexVector& post, typename IndexVector::Scalar postnum) { typedef typename IndexVector::Scalar StorageIndex; StorageIndex current = n, first, next; while (postnum != n) { // No kid for the current node first = first_kid(current); // no kid for the current node if (first == -1) { // Numbering this node because it has no kid post(current) = postnum++; // looking for the next kid next = next_kid(current); while (next == -1) { // No more kids : back to the parent node current = parent(current); // numbering the parent node post(current) = postnum++; // Get the next kid next = next_kid(current); } // stopping criterion if (postnum == n+1) return; // Updating current node current = next; } else { current = first; } } } /** * \brief Post order a tree * \param n the number of nodes * \param parent Input tree * \param post postordered tree */ template void treePostorder(typename IndexVector::Scalar n, IndexVector& parent, IndexVector& post) { typedef typename IndexVector::Scalar StorageIndex; IndexVector first_kid, next_kid; // Linked list of children StorageIndex postnum; // Allocate storage for working arrays and results first_kid.resize(n+1); next_kid.setZero(n+1); post.setZero(n+1); // Set up structure describing children first_kid.setConstant(-1); for (StorageIndex v = n-1; v >= 0; v--) { StorageIndex dad = parent(v); next_kid(v) = first_kid(dad); first_kid(dad) = v; } // Depth-first search from dummy root vertex #n postnum = 0; internal::nr_etdfs(n, parent, first_kid, next_kid, post, postnum); } } // end namespace internal } // end namespace Eigen #endif // SPARSE_COLETREE_H piqp-octave/src/eigen/Eigen/src/SparseCore/CompressedStorage.h0000644000175100001770000002104714107270226024224 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_COMPRESSED_STORAGE_H #define EIGEN_COMPRESSED_STORAGE_H namespace Eigen { namespace internal { /** \internal * Stores a sparse set of values as a list of values and a list of indices. 
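 * (This is the internal storage used by SparseMatrix and SparseVector: two parallel arrays,
 * m_values and m_indices, together with the used size and the allocated capacity.)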
* */ template class CompressedStorage { public: typedef _Scalar Scalar; typedef _StorageIndex StorageIndex; protected: typedef typename NumTraits::Real RealScalar; public: CompressedStorage() : m_values(0), m_indices(0), m_size(0), m_allocatedSize(0) {} explicit CompressedStorage(Index size) : m_values(0), m_indices(0), m_size(0), m_allocatedSize(0) { resize(size); } CompressedStorage(const CompressedStorage& other) : m_values(0), m_indices(0), m_size(0), m_allocatedSize(0) { *this = other; } CompressedStorage& operator=(const CompressedStorage& other) { resize(other.size()); if(other.size()>0) { internal::smart_copy(other.m_values, other.m_values + m_size, m_values); internal::smart_copy(other.m_indices, other.m_indices + m_size, m_indices); } return *this; } void swap(CompressedStorage& other) { std::swap(m_values, other.m_values); std::swap(m_indices, other.m_indices); std::swap(m_size, other.m_size); std::swap(m_allocatedSize, other.m_allocatedSize); } ~CompressedStorage() { delete[] m_values; delete[] m_indices; } void reserve(Index size) { Index newAllocatedSize = m_size + size; if (newAllocatedSize > m_allocatedSize) reallocate(newAllocatedSize); } void squeeze() { if (m_allocatedSize>m_size) reallocate(m_size); } void resize(Index size, double reserveSizeFactor = 0) { if (m_allocatedSize)(NumTraits::highest(), size + Index(reserveSizeFactor*double(size))); if(realloc_size(i); } inline Index size() const { return m_size; } inline Index allocatedSize() const { return m_allocatedSize; } inline void clear() { m_size = 0; } const Scalar* valuePtr() const { return m_values; } Scalar* valuePtr() { return m_values; } const StorageIndex* indexPtr() const { return m_indices; } StorageIndex* indexPtr() { return m_indices; } inline Scalar& value(Index i) { eigen_internal_assert(m_values!=0); return m_values[i]; } inline const Scalar& value(Index i) const { eigen_internal_assert(m_values!=0); return m_values[i]; } inline StorageIndex& index(Index i) { eigen_internal_assert(m_indices!=0); return m_indices[i]; } inline const StorageIndex& index(Index i) const { eigen_internal_assert(m_indices!=0); return m_indices[i]; } /** \returns the largest \c k such that for all \c j in [0,k) index[\c j]\<\a key */ inline Index searchLowerIndex(Index key) const { return searchLowerIndex(0, m_size, key); } /** \returns the largest \c k in [start,end) such that for all \c j in [start,k) index[\c j]\<\a key */ inline Index searchLowerIndex(Index start, Index end, Index key) const { while(end>start) { Index mid = (end+start)>>1; if (m_indices[mid]=end) return defaultValue; else if (end>start && key==m_indices[end-1]) return m_values[end-1]; // ^^ optimization: let's first check if it is the last coefficient // (very common in high level algorithms) const Index id = searchLowerIndex(start,end-1,key); return ((id=m_size || m_indices[id]!=key) { if (m_allocatedSize newValues(m_allocatedSize); internal::scoped_array newIndices(m_allocatedSize); // copy first chunk internal::smart_copy(m_values, m_values +id, newValues.ptr()); internal::smart_copy(m_indices, m_indices+id, newIndices.ptr()); // copy the rest if(m_size>id) { internal::smart_copy(m_values +id, m_values +m_size, newValues.ptr() +id+1); internal::smart_copy(m_indices+id, m_indices+m_size, newIndices.ptr()+id+1); } std::swap(m_values,newValues.ptr()); std::swap(m_indices,newIndices.ptr()); } else if(m_size>id) { internal::smart_memmove(m_values +id, m_values +m_size, m_values +id+1); internal::smart_memmove(m_indices+id, m_indices+m_size, m_indices+id+1); 
} m_size++; m_indices[id] = internal::convert_index(key); m_values[id] = defaultValue; } return m_values[id]; } void moveChunk(Index from, Index to, Index chunkSize) { eigen_internal_assert(to+chunkSize <= m_size); if(to>from && from+chunkSize>to) { // move backward internal::smart_memmove(m_values+from, m_values+from+chunkSize, m_values+to); internal::smart_memmove(m_indices+from, m_indices+from+chunkSize, m_indices+to); } else { internal::smart_copy(m_values+from, m_values+from+chunkSize, m_values+to); internal::smart_copy(m_indices+from, m_indices+from+chunkSize, m_indices+to); } } void prune(const Scalar& reference, const RealScalar& epsilon = NumTraits::dummy_precision()) { Index k = 0; Index n = size(); for (Index i=0; i newValues(size); internal::scoped_array newIndices(size); Index copySize = (std::min)(size, m_size); if (copySize>0) { internal::smart_copy(m_values, m_values+copySize, newValues.ptr()); internal::smart_copy(m_indices, m_indices+copySize, newIndices.ptr()); } std::swap(m_values,newValues.ptr()); std::swap(m_indices,newIndices.ptr()); m_allocatedSize = size; } protected: Scalar* m_values; StorageIndex* m_indices; Index m_size; Index m_allocatedSize; }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_COMPRESSED_STORAGE_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseSparseProductWithPruning.h0000644000175100001770000002100014107270226026733 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSESPARSEPRODUCTWITHPRUNING_H #define EIGEN_SPARSESPARSEPRODUCTWITHPRUNING_H namespace Eigen { namespace internal { // perform a pseudo in-place sparse * sparse product assuming all matrices are col major template static void sparse_sparse_product_with_pruning_impl(const Lhs& lhs, const Rhs& rhs, ResultType& res, const typename ResultType::RealScalar& tolerance) { // return sparse_sparse_product_with_pruning_impl2(lhs,rhs,res); typedef typename remove_all::type::Scalar RhsScalar; typedef typename remove_all::type::Scalar ResScalar; typedef typename remove_all::type::StorageIndex StorageIndex; // make sure to call innerSize/outerSize since we fake the storage order. Index rows = lhs.innerSize(); Index cols = rhs.outerSize(); //Index size = lhs.outerSize(); eigen_assert(lhs.outerSize() == rhs.innerSize()); // allocate a temporary buffer AmbiVector tempVector(rows); // mimics a resizeByInnerOuter: if(ResultType::IsRowMajor) res.resize(cols, rows); else res.resize(rows, cols); evaluator lhsEval(lhs); evaluator rhsEval(rhs); // estimate the number of non zero entries // given a rhs column containing Y non zeros, we assume that the respective Y columns // of the lhs differs in average of one non zeros, thus the number of non zeros for // the product of a rhs column with the lhs is X+Y where X is the average number of non zero // per column of the lhs. 
// Therefore, we have nnz(lhs*rhs) = nnz(lhs) + nnz(rhs) Index estimated_nnz_prod = lhsEval.nonZerosEstimate() + rhsEval.nonZerosEstimate(); res.reserve(estimated_nnz_prod); double ratioColRes = double(estimated_nnz_prod)/(double(lhs.rows())*double(rhs.cols())); for (Index j=0; j::InnerIterator rhsIt(rhsEval, j); rhsIt; ++rhsIt) { // FIXME should be written like this: tmp += rhsIt.value() * lhs.col(rhsIt.index()) tempVector.restart(); RhsScalar x = rhsIt.value(); for (typename evaluator::InnerIterator lhsIt(lhsEval, rhsIt.index()); lhsIt; ++lhsIt) { tempVector.coeffRef(lhsIt.index()) += lhsIt.value() * x; } } res.startVec(j); for (typename AmbiVector::Iterator it(tempVector,tolerance); it; ++it) res.insertBackByOuterInner(j,it.index()) = it.value(); } res.finalize(); } template::Flags&RowMajorBit, int RhsStorageOrder = traits::Flags&RowMajorBit, int ResStorageOrder = traits::Flags&RowMajorBit> struct sparse_sparse_product_with_pruning_selector; template struct sparse_sparse_product_with_pruning_selector { typedef typename ResultType::RealScalar RealScalar; static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res, const RealScalar& tolerance) { typename remove_all::type _res(res.rows(), res.cols()); internal::sparse_sparse_product_with_pruning_impl(lhs, rhs, _res, tolerance); res.swap(_res); } }; template struct sparse_sparse_product_with_pruning_selector { typedef typename ResultType::RealScalar RealScalar; static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res, const RealScalar& tolerance) { // we need a col-major matrix to hold the result typedef SparseMatrix SparseTemporaryType; SparseTemporaryType _res(res.rows(), res.cols()); internal::sparse_sparse_product_with_pruning_impl(lhs, rhs, _res, tolerance); res = _res; } }; template struct sparse_sparse_product_with_pruning_selector { typedef typename ResultType::RealScalar RealScalar; static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res, const RealScalar& tolerance) { // let's transpose the product to get a column x column product typename remove_all::type _res(res.rows(), res.cols()); internal::sparse_sparse_product_with_pruning_impl(rhs, lhs, _res, tolerance); res.swap(_res); } }; template struct sparse_sparse_product_with_pruning_selector { typedef typename ResultType::RealScalar RealScalar; static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res, const RealScalar& tolerance) { typedef SparseMatrix ColMajorMatrixLhs; typedef SparseMatrix ColMajorMatrixRhs; ColMajorMatrixLhs colLhs(lhs); ColMajorMatrixRhs colRhs(rhs); internal::sparse_sparse_product_with_pruning_impl(colLhs, colRhs, res, tolerance); // let's transpose the product to get a column x column product // typedef SparseMatrix SparseTemporaryType; // SparseTemporaryType _res(res.cols(), res.rows()); // sparse_sparse_product_with_pruning_impl(rhs, lhs, _res); // res = _res.transpose(); } }; template struct sparse_sparse_product_with_pruning_selector { typedef typename ResultType::RealScalar RealScalar; static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res, const RealScalar& tolerance) { typedef SparseMatrix RowMajorMatrixLhs; RowMajorMatrixLhs rowLhs(lhs); sparse_sparse_product_with_pruning_selector(rowLhs,rhs,res,tolerance); } }; template struct sparse_sparse_product_with_pruning_selector { typedef typename ResultType::RealScalar RealScalar; static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res, const RealScalar& tolerance) { typedef SparseMatrix RowMajorMatrixRhs; RowMajorMatrixRhs rowRhs(rhs); 
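// recurse on the row-major copy of rhs so that both operands now share the same storage order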
sparse_sparse_product_with_pruning_selector(lhs,rowRhs,res,tolerance); } }; template struct sparse_sparse_product_with_pruning_selector { typedef typename ResultType::RealScalar RealScalar; static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res, const RealScalar& tolerance) { typedef SparseMatrix ColMajorMatrixRhs; ColMajorMatrixRhs colRhs(rhs); internal::sparse_sparse_product_with_pruning_impl(lhs, colRhs, res, tolerance); } }; template struct sparse_sparse_product_with_pruning_selector { typedef typename ResultType::RealScalar RealScalar; static void run(const Lhs& lhs, const Rhs& rhs, ResultType& res, const RealScalar& tolerance) { typedef SparseMatrix ColMajorMatrixLhs; ColMajorMatrixLhs colLhs(lhs); internal::sparse_sparse_product_with_pruning_impl(colLhs, rhs, res, tolerance); } }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_SPARSESPARSEPRODUCTWITHPRUNING_H piqp-octave/src/eigen/Eigen/src/SparseCore/AmbiVector.h0000644000175100001770000002465614107270226022637 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_AMBIVECTOR_H #define EIGEN_AMBIVECTOR_H namespace Eigen { namespace internal { /** \internal * Hybrid sparse/dense vector class designed for intensive read-write operations. * * See BasicSparseLLT and SparseProduct for usage examples. */ template class AmbiVector { public: typedef _Scalar Scalar; typedef _StorageIndex StorageIndex; typedef typename NumTraits::Real RealScalar; explicit AmbiVector(Index size) : m_buffer(0), m_zero(0), m_size(0), m_end(0), m_allocatedSize(0), m_allocatedElements(0), m_mode(-1) { resize(size); } void init(double estimatedDensity); void init(int mode); Index nonZeros() const; /** Specifies a sub-vector to work on */ void setBounds(Index start, Index end) { m_start = convert_index(start); m_end = convert_index(end); } void setZero(); void restart(); Scalar& coeffRef(Index i); Scalar& coeff(Index i); class Iterator; ~AmbiVector() { delete[] m_buffer; } void resize(Index size) { if (m_allocatedSize < size) reallocate(size); m_size = convert_index(size); } StorageIndex size() const { return m_size; } protected: StorageIndex convert_index(Index idx) { return internal::convert_index(idx); } void reallocate(Index size) { // if the size of the matrix is not too large, let's allocate a bit more than needed such // that we can handle dense vector even in sparse mode. 
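// (below 1000 entries the buffer is sized so that 'size' linked-list elements fit, i.e. sparse
// mode can hold a full dense-worth of coefficients; above that, 'size' scalars are allocated
// and the linked-list capacity is derived from that)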
delete[] m_buffer; if (size<1000) { Index allocSize = (size * sizeof(ListEl) + sizeof(Scalar) - 1)/sizeof(Scalar); m_allocatedElements = convert_index((allocSize*sizeof(Scalar))/sizeof(ListEl)); m_buffer = new Scalar[allocSize]; } else { m_allocatedElements = convert_index((size*sizeof(Scalar))/sizeof(ListEl)); m_buffer = new Scalar[size]; } m_size = convert_index(size); m_start = 0; m_end = m_size; } void reallocateSparse() { Index copyElements = m_allocatedElements; m_allocatedElements = (std::min)(StorageIndex(m_allocatedElements*1.5),m_size); Index allocSize = m_allocatedElements * sizeof(ListEl); allocSize = (allocSize + sizeof(Scalar) - 1)/sizeof(Scalar); Scalar* newBuffer = new Scalar[allocSize]; std::memcpy(newBuffer, m_buffer, copyElements * sizeof(ListEl)); delete[] m_buffer; m_buffer = newBuffer; } protected: // element type of the linked list struct ListEl { StorageIndex next; StorageIndex index; Scalar value; }; // used to store data in both mode Scalar* m_buffer; Scalar m_zero; StorageIndex m_size; StorageIndex m_start; StorageIndex m_end; StorageIndex m_allocatedSize; StorageIndex m_allocatedElements; StorageIndex m_mode; // linked list mode StorageIndex m_llStart; StorageIndex m_llCurrent; StorageIndex m_llSize; }; /** \returns the number of non zeros in the current sub vector */ template Index AmbiVector<_Scalar,_StorageIndex>::nonZeros() const { if (m_mode==IsSparse) return m_llSize; else return m_end - m_start; } template void AmbiVector<_Scalar,_StorageIndex>::init(double estimatedDensity) { if (estimatedDensity>0.1) init(IsDense); else init(IsSparse); } template void AmbiVector<_Scalar,_StorageIndex>::init(int mode) { m_mode = mode; // This is only necessary in sparse mode, but we set these unconditionally to avoid some maybe-uninitialized warnings // if (m_mode==IsSparse) { m_llSize = 0; m_llStart = -1; } } /** Must be called whenever we might perform a write access * with an index smaller than the previous one. * * Don't worry, this function is extremely cheap. */ template void AmbiVector<_Scalar,_StorageIndex>::restart() { m_llCurrent = m_llStart; } /** Set all coefficients of current subvector to zero */ template void AmbiVector<_Scalar,_StorageIndex>::setZero() { if (m_mode==IsDense) { for (Index i=m_start; i _Scalar& AmbiVector<_Scalar,_StorageIndex>::coeffRef(Index i) { if (m_mode==IsDense) return m_buffer[i]; else { ListEl* EIGEN_RESTRICT llElements = reinterpret_cast(m_buffer); // TODO factorize the following code to reduce code generation eigen_assert(m_mode==IsSparse); if (m_llSize==0) { // this is the first element m_llStart = 0; m_llCurrent = 0; ++m_llSize; llElements[0].value = Scalar(0); llElements[0].index = convert_index(i); llElements[0].next = -1; return llElements[0].value; } else if (i=llElements[m_llCurrent].index && "you must call restart() before inserting an element with lower or equal index"); while (nextel >= 0 && llElements[nextel].index<=i) { m_llCurrent = nextel; nextel = llElements[nextel].next; } if (llElements[m_llCurrent].index==i) { // the coefficient already exists and we found it ! 
return llElements[m_llCurrent].value; } else { if (m_llSize>=m_allocatedElements) { reallocateSparse(); llElements = reinterpret_cast(m_buffer); } eigen_internal_assert(m_llSize _Scalar& AmbiVector<_Scalar,_StorageIndex>::coeff(Index i) { if (m_mode==IsDense) return m_buffer[i]; else { ListEl* EIGEN_RESTRICT llElements = reinterpret_cast(m_buffer); eigen_assert(m_mode==IsSparse); if ((m_llSize==0) || (i= 0 && llElements[elid].index class AmbiVector<_Scalar,_StorageIndex>::Iterator { public: typedef _Scalar Scalar; typedef typename NumTraits::Real RealScalar; /** Default constructor * \param vec the vector on which we iterate * \param epsilon the minimal value used to prune zero coefficients. * In practice, all coefficients having a magnitude smaller than \a epsilon * are skipped. */ explicit Iterator(const AmbiVector& vec, const RealScalar& epsilon = 0) : m_vector(vec) { using std::abs; m_epsilon = epsilon; m_isDense = m_vector.m_mode==IsDense; if (m_isDense) { m_currentEl = 0; // this is to avoid a compilation warning m_cachedValue = 0; // this is to avoid a compilation warning m_cachedIndex = m_vector.m_start-1; ++(*this); } else { ListEl* EIGEN_RESTRICT llElements = reinterpret_cast(m_vector.m_buffer); m_currentEl = m_vector.m_llStart; while (m_currentEl>=0 && abs(llElements[m_currentEl].value)<=m_epsilon) m_currentEl = llElements[m_currentEl].next; if (m_currentEl<0) { m_cachedValue = 0; // this is to avoid a compilation warning m_cachedIndex = -1; } else { m_cachedIndex = llElements[m_currentEl].index; m_cachedValue = llElements[m_currentEl].value; } } } StorageIndex index() const { return m_cachedIndex; } Scalar value() const { return m_cachedValue; } operator bool() const { return m_cachedIndex>=0; } Iterator& operator++() { using std::abs; if (m_isDense) { do { ++m_cachedIndex; } while (m_cachedIndex(m_vector.m_buffer); do { m_currentEl = llElements[m_currentEl].next; } while (m_currentEl>=0 && abs(llElements[m_currentEl].value)<=m_epsilon); if (m_currentEl<0) { m_cachedIndex = -1; } else { m_cachedIndex = llElements[m_currentEl].index; m_cachedValue = llElements[m_currentEl].value; } } return *this; } protected: const AmbiVector& m_vector; // the target vector StorageIndex m_currentEl; // the current element in sparse/linked-list mode RealScalar m_epsilon; // epsilon used to prune zero coefficients StorageIndex m_cachedIndex; // current coordinate Scalar m_cachedValue; // current value bool m_isDense; // mode of the vector }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_AMBIVECTOR_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseCompressedBase.h0000644000175100001770000003244614107270226024655 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2015 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSE_COMPRESSED_BASE_H #define EIGEN_SPARSE_COMPRESSED_BASE_H namespace Eigen { template class SparseCompressedBase; namespace internal { template struct traits > : traits {}; } // end namespace internal /** \ingroup SparseCore_Module * \class SparseCompressedBase * \brief Common base class for sparse [compressed]-{row|column}-storage format. 
* * This class defines the common interface for all derived classes implementing the compressed sparse storage format, such as: * - SparseMatrix * - Ref * - Map * */ template class SparseCompressedBase : public SparseMatrixBase { public: typedef SparseMatrixBase Base; EIGEN_SPARSE_PUBLIC_INTERFACE(SparseCompressedBase) using Base::operator=; using Base::IsRowMajor; class InnerIterator; class ReverseInnerIterator; protected: typedef typename Base::IndexVector IndexVector; Eigen::Map innerNonZeros() { return Eigen::Map(innerNonZeroPtr(), isCompressed()?0:derived().outerSize()); } const Eigen::Map innerNonZeros() const { return Eigen::Map(innerNonZeroPtr(), isCompressed()?0:derived().outerSize()); } public: /** \returns the number of non zero coefficients */ inline Index nonZeros() const { if(Derived::IsVectorAtCompileTime && outerIndexPtr()==0) return derived().nonZeros(); else if(isCompressed()) return outerIndexPtr()[derived().outerSize()]-outerIndexPtr()[0]; else if(derived().outerSize()==0) return 0; else return innerNonZeros().sum(); } /** \returns a const pointer to the array of values. * This function is aimed at interoperability with other libraries. * \sa innerIndexPtr(), outerIndexPtr() */ inline const Scalar* valuePtr() const { return derived().valuePtr(); } /** \returns a non-const pointer to the array of values. * This function is aimed at interoperability with other libraries. * \sa innerIndexPtr(), outerIndexPtr() */ inline Scalar* valuePtr() { return derived().valuePtr(); } /** \returns a const pointer to the array of inner indices. * This function is aimed at interoperability with other libraries. * \sa valuePtr(), outerIndexPtr() */ inline const StorageIndex* innerIndexPtr() const { return derived().innerIndexPtr(); } /** \returns a non-const pointer to the array of inner indices. * This function is aimed at interoperability with other libraries. * \sa valuePtr(), outerIndexPtr() */ inline StorageIndex* innerIndexPtr() { return derived().innerIndexPtr(); } /** \returns a const pointer to the array of the starting positions of the inner vectors. * This function is aimed at interoperability with other libraries. * \warning it returns the null pointer 0 for SparseVector * \sa valuePtr(), innerIndexPtr() */ inline const StorageIndex* outerIndexPtr() const { return derived().outerIndexPtr(); } /** \returns a non-const pointer to the array of the starting positions of the inner vectors. * This function is aimed at interoperability with other libraries. * \warning it returns the null pointer 0 for SparseVector * \sa valuePtr(), innerIndexPtr() */ inline StorageIndex* outerIndexPtr() { return derived().outerIndexPtr(); } /** \returns a const pointer to the array of the number of non zeros of the inner vectors. * This function is aimed at interoperability with other libraries. * \warning it returns the null pointer 0 in compressed mode */ inline const StorageIndex* innerNonZeroPtr() const { return derived().innerNonZeroPtr(); } /** \returns a non-const pointer to the array of the number of non zeros of the inner vectors. * This function is aimed at interoperability with other libraries. * \warning it returns the null pointer 0 in compressed mode */ inline StorageIndex* innerNonZeroPtr() { return derived().innerNonZeroPtr(); } /** \returns whether \c *this is in compressed form. */ inline bool isCompressed() const { return innerNonZeroPtr()==0; } /** \returns a read-only view of the stored coefficients as a 1D array expression. 
* * \warning this method is for \b compressed \b storage \b only, and it will trigger an assertion otherwise. * * \sa valuePtr(), isCompressed() */ const Map > coeffs() const { eigen_assert(isCompressed()); return Array::Map(valuePtr(),nonZeros()); } /** \returns a read-write view of the stored coefficients as a 1D array expression * * \warning this method is for \b compressed \b storage \b only, and it will trigger an assertion otherwise. * * Here is an example: * \include SparseMatrix_coeffs.cpp * and the output is: * \include SparseMatrix_coeffs.out * * \sa valuePtr(), isCompressed() */ Map > coeffs() { eigen_assert(isCompressed()); return Array::Map(valuePtr(),nonZeros()); } protected: /** Default constructor. Do nothing. */ SparseCompressedBase() {} /** \internal return the index of the coeff at (row,col) or just before if it does not exist. * This is an analogue of std::lower_bound. */ internal::LowerBoundIndex lower_bound(Index row, Index col) const { eigen_internal_assert(row>=0 && rowrows() && col>=0 && colcols()); const Index outer = Derived::IsRowMajor ? row : col; const Index inner = Derived::IsRowMajor ? col : row; Index start = this->outerIndexPtr()[outer]; Index end = this->isCompressed() ? this->outerIndexPtr()[outer+1] : this->outerIndexPtr()[outer] + this->innerNonZeroPtr()[outer]; eigen_assert(end>=start && "you are using a non finalized sparse matrix or written coefficient does not exist"); internal::LowerBoundIndex p; p.value = std::lower_bound(this->innerIndexPtr()+start, this->innerIndexPtr()+end,inner) - this->innerIndexPtr(); p.found = (p.valueinnerIndexPtr()[p.value]==inner); return p; } friend struct internal::evaluator >; private: template explicit SparseCompressedBase(const SparseCompressedBase&); }; template class SparseCompressedBase::InnerIterator { public: InnerIterator() : m_values(0), m_indices(0), m_outer(0), m_id(0), m_end(0) {} InnerIterator(const InnerIterator& other) : m_values(other.m_values), m_indices(other.m_indices), m_outer(other.m_outer), m_id(other.m_id), m_end(other.m_end) {} InnerIterator& operator=(const InnerIterator& other) { m_values = other.m_values; m_indices = other.m_indices; const_cast(m_outer).setValue(other.m_outer.value()); m_id = other.m_id; m_end = other.m_end; return *this; } InnerIterator(const SparseCompressedBase& mat, Index outer) : m_values(mat.valuePtr()), m_indices(mat.innerIndexPtr()), m_outer(outer) { if(Derived::IsVectorAtCompileTime && mat.outerIndexPtr()==0) { m_id = 0; m_end = mat.nonZeros(); } else { m_id = mat.outerIndexPtr()[outer]; if(mat.isCompressed()) m_end = mat.outerIndexPtr()[outer+1]; else m_end = m_id + mat.innerNonZeroPtr()[outer]; } } explicit InnerIterator(const SparseCompressedBase& mat) : m_values(mat.valuePtr()), m_indices(mat.innerIndexPtr()), m_outer(0), m_id(0), m_end(mat.nonZeros()) { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived); } explicit InnerIterator(const internal::CompressedStorage& data) : m_values(data.valuePtr()), m_indices(data.indexPtr()), m_outer(0), m_id(0), m_end(data.size()) { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived); } inline InnerIterator& operator++() { m_id++; return *this; } inline InnerIterator& operator+=(Index i) { m_id += i ; return *this; } inline InnerIterator operator+(Index i) { InnerIterator result = *this; result += i; return result; } inline const Scalar& value() const { return m_values[m_id]; } inline Scalar& valueRef() { return const_cast(m_values[m_id]); } inline StorageIndex index() const { return m_indices[m_id]; } inline Index outer() const { return 
m_outer.value(); } inline Index row() const { return IsRowMajor ? m_outer.value() : index(); } inline Index col() const { return IsRowMajor ? index() : m_outer.value(); } inline operator bool() const { return (m_id < m_end); } protected: const Scalar* m_values; const StorageIndex* m_indices; typedef internal::variable_if_dynamic OuterType; const OuterType m_outer; Index m_id; Index m_end; private: // If you get here, then you're not using the right InnerIterator type, e.g.: // SparseMatrix A; // SparseMatrix::InnerIterator it(A,0); template InnerIterator(const SparseMatrixBase&, Index outer); }; template class SparseCompressedBase::ReverseInnerIterator { public: ReverseInnerIterator(const SparseCompressedBase& mat, Index outer) : m_values(mat.valuePtr()), m_indices(mat.innerIndexPtr()), m_outer(outer) { if(Derived::IsVectorAtCompileTime && mat.outerIndexPtr()==0) { m_start = 0; m_id = mat.nonZeros(); } else { m_start = mat.outerIndexPtr()[outer]; if(mat.isCompressed()) m_id = mat.outerIndexPtr()[outer+1]; else m_id = m_start + mat.innerNonZeroPtr()[outer]; } } explicit ReverseInnerIterator(const SparseCompressedBase& mat) : m_values(mat.valuePtr()), m_indices(mat.innerIndexPtr()), m_outer(0), m_start(0), m_id(mat.nonZeros()) { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived); } explicit ReverseInnerIterator(const internal::CompressedStorage& data) : m_values(data.valuePtr()), m_indices(data.indexPtr()), m_outer(0), m_start(0), m_id(data.size()) { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived); } inline ReverseInnerIterator& operator--() { --m_id; return *this; } inline ReverseInnerIterator& operator-=(Index i) { m_id -= i; return *this; } inline ReverseInnerIterator operator-(Index i) { ReverseInnerIterator result = *this; result -= i; return result; } inline const Scalar& value() const { return m_values[m_id-1]; } inline Scalar& valueRef() { return const_cast(m_values[m_id-1]); } inline StorageIndex index() const { return m_indices[m_id-1]; } inline Index outer() const { return m_outer.value(); } inline Index row() const { return IsRowMajor ? m_outer.value() : index(); } inline Index col() const { return IsRowMajor ? 
index() : m_outer.value(); } inline operator bool() const { return (m_id > m_start); } protected: const Scalar* m_values; const StorageIndex* m_indices; typedef internal::variable_if_dynamic OuterType; const OuterType m_outer; Index m_start; Index m_id; }; namespace internal { template struct evaluator > : evaluator_base { typedef typename Derived::Scalar Scalar; typedef typename Derived::InnerIterator InnerIterator; enum { CoeffReadCost = NumTraits::ReadCost, Flags = Derived::Flags }; evaluator() : m_matrix(0), m_zero(0) { EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } explicit evaluator(const Derived &mat) : m_matrix(&mat), m_zero(0) { EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } inline Index nonZerosEstimate() const { return m_matrix->nonZeros(); } operator Derived&() { return m_matrix->const_cast_derived(); } operator const Derived&() const { return *m_matrix; } typedef typename DenseCoeffsBase::CoeffReturnType CoeffReturnType; const Scalar& coeff(Index row, Index col) const { Index p = find(row,col); if(p==Dynamic) return m_zero; else return m_matrix->const_cast_derived().valuePtr()[p]; } Scalar& coeffRef(Index row, Index col) { Index p = find(row,col); eigen_assert(p!=Dynamic && "written coefficient does not exist"); return m_matrix->const_cast_derived().valuePtr()[p]; } protected: Index find(Index row, Index col) const { internal::LowerBoundIndex p = m_matrix->lower_bound(row,col); return p.found ? p.value : Dynamic; } const Derived *m_matrix; const Scalar m_zero; }; } } // end namespace Eigen #endif // EIGEN_SPARSE_COMPRESSED_BASE_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseDot.h0000644000175100001770000000601014107270226022470 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
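// Illustrative sketch of the dot/norm functions implemented below
// (hypothetical example, n is a placeholder size, using namespace Eigen assumed):
//
//   SparseVector<double> a(n), b(n);
//   double d   = a.dot(b);                  // sparse-sparse dot product
//   double d2  = a.dot(VectorXd::Ones(n));  // sparse-dense dot product
//   double nrm = a.norm();                  // equals sqrt(a.squaredNorm())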
#ifndef EIGEN_SPARSE_DOT_H #define EIGEN_SPARSE_DOT_H namespace Eigen { template template typename internal::traits::Scalar SparseMatrixBase::dot(const MatrixBase& other) const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) EIGEN_STATIC_ASSERT_VECTOR_ONLY(OtherDerived) EIGEN_STATIC_ASSERT_SAME_VECTOR_SIZE(Derived,OtherDerived) EIGEN_STATIC_ASSERT((internal::is_same::value), YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY) eigen_assert(size() == other.size()); eigen_assert(other.size()>0 && "you are using a non initialized vector"); internal::evaluator thisEval(derived()); typename internal::evaluator::InnerIterator i(thisEval, 0); Scalar res(0); while (i) { res += numext::conj(i.value()) * other.coeff(i.index()); ++i; } return res; } template template typename internal::traits::Scalar SparseMatrixBase::dot(const SparseMatrixBase& other) const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) EIGEN_STATIC_ASSERT_VECTOR_ONLY(OtherDerived) EIGEN_STATIC_ASSERT_SAME_VECTOR_SIZE(Derived,OtherDerived) EIGEN_STATIC_ASSERT((internal::is_same::value), YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY) eigen_assert(size() == other.size()); internal::evaluator thisEval(derived()); typename internal::evaluator::InnerIterator i(thisEval, 0); internal::evaluator otherEval(other.derived()); typename internal::evaluator::InnerIterator j(otherEval, 0); Scalar res(0); while (i && j) { if (i.index()==j.index()) { res += numext::conj(i.value()) * j.value(); ++i; ++j; } else if (i.index() inline typename NumTraits::Scalar>::Real SparseMatrixBase::squaredNorm() const { return numext::real((*this).cwiseAbs2().sum()); } template inline typename NumTraits::Scalar>::Real SparseMatrixBase::norm() const { using std::sqrt; return sqrt(squaredNorm()); } template inline typename NumTraits::Scalar>::Real SparseMatrixBase::blueNorm() const { return internal::blueNorm_impl(*this); } } // end namespace Eigen #endif // EIGEN_SPARSE_DOT_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseFuzzy.h0000644000175100001770000000212314107270226023072 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSE_FUZZY_H #define EIGEN_SPARSE_FUZZY_H namespace Eigen { template template bool SparseMatrixBase::isApprox(const SparseMatrixBase& other, const RealScalar &prec) const { const typename internal::nested_eval::type actualA(derived()); typename internal::conditional::type, const PlainObject>::type actualB(other.derived()); return (actualA - actualB).squaredNorm() <= prec * prec * numext::mini(actualA.squaredNorm(), actualB.squaredNorm()); } } // end namespace Eigen #endif // EIGEN_SPARSE_FUZZY_H piqp-octave/src/eigen/Eigen/src/SparseCore/MappedSparseMatrix.h0000644000175100001770000000421714107270226024344 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
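// Illustrative sketch of the (deprecated) MappedSparseMatrix defined below, which wraps
// user-owned compressed-column arrays without copying (hypothetical example; the arrays
// outer/inner/values and the sizes rows/cols/nnz are assumptions):
//
//   // double values[nnz]; int inner[nnz]; int outer[cols+1];  (filled by external code)
//   MappedSparseMatrix<double,ColMajor,int> M(rows, cols, nnz, outer, inner, values);
//   // the modern equivalent is Map<SparseMatrix<double,ColMajor,int> > with the same arguments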
#ifndef EIGEN_MAPPED_SPARSEMATRIX_H #define EIGEN_MAPPED_SPARSEMATRIX_H namespace Eigen { /** \deprecated Use Map > * \class MappedSparseMatrix * * \brief Sparse matrix * * \param _Scalar the scalar type, i.e. the type of the coefficients * * See http://www.netlib.org/linalg/html_templates/node91.html for details on the storage scheme. * */ namespace internal { template struct traits > : traits > {}; } // end namespace internal template class MappedSparseMatrix : public Map > { typedef Map > Base; public: typedef typename Base::StorageIndex StorageIndex; typedef typename Base::Scalar Scalar; inline MappedSparseMatrix(Index rows, Index cols, Index nnz, StorageIndex* outerIndexPtr, StorageIndex* innerIndexPtr, Scalar* valuePtr, StorageIndex* innerNonZeroPtr = 0) : Base(rows, cols, nnz, outerIndexPtr, innerIndexPtr, valuePtr, innerNonZeroPtr) {} /** Empty destructor */ inline ~MappedSparseMatrix() {} }; namespace internal { template struct evaluator > : evaluator > > { typedef MappedSparseMatrix<_Scalar,_Options,_StorageIndex> XprType; typedef evaluator > Base; evaluator() : Base() {} explicit evaluator(const XprType &mat) : Base(mat) {} }; } } // end namespace Eigen #endif // EIGEN_MAPPED_SPARSEMATRIX_H piqp-octave/src/eigen/Eigen/src/SparseCore/TriangularSolver.h0000644000175100001770000002267114107270226024102 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSETRIANGULARSOLVER_H #define EIGEN_SPARSETRIANGULARSOLVER_H namespace Eigen { namespace internal { template::Flags) & RowMajorBit> struct sparse_solve_triangular_selector; // forward substitution, row-major template struct sparse_solve_triangular_selector { typedef typename Rhs::Scalar Scalar; typedef evaluator LhsEval; typedef typename evaluator::InnerIterator LhsIterator; static void run(const Lhs& lhs, Rhs& other) { LhsEval lhsEval(lhs); for(Index col=0 ; col struct sparse_solve_triangular_selector { typedef typename Rhs::Scalar Scalar; typedef evaluator LhsEval; typedef typename evaluator::InnerIterator LhsIterator; static void run(const Lhs& lhs, Rhs& other) { LhsEval lhsEval(lhs); for(Index col=0 ; col=0 ; --i) { Scalar tmp = other.coeff(i,col); Scalar l_ii(0); LhsIterator it(lhsEval, i); while(it && it.index() struct sparse_solve_triangular_selector { typedef typename Rhs::Scalar Scalar; typedef evaluator LhsEval; typedef typename evaluator::InnerIterator LhsIterator; static void run(const Lhs& lhs, Rhs& other) { LhsEval lhsEval(lhs); for(Index col=0 ; col struct sparse_solve_triangular_selector { typedef typename Rhs::Scalar Scalar; typedef evaluator LhsEval; typedef typename evaluator::InnerIterator LhsIterator; static void run(const Lhs& lhs, Rhs& other) { LhsEval lhsEval(lhs); for(Index col=0 ; col=0; --i) { Scalar& tmp = other.coeffRef(i,col); if (tmp!=Scalar(0)) // optimization when other is actually sparse { if(!(Mode & UnitDiag)) { // TODO replace this by a binary search. 
make sure the binary search is safe for partially sorted elements LhsIterator it(lhsEval, i); while(it && it.index()!=i) ++it; eigen_assert(it && it.index()==i); other.coeffRef(i,col) /= it.value(); } LhsIterator it(lhsEval, i); for(; it && it.index() template void TriangularViewImpl::solveInPlace(MatrixBase& other) const { eigen_assert(derived().cols() == derived().rows() && derived().cols() == other.rows()); eigen_assert((!(Mode & ZeroDiag)) && bool(Mode & (Upper|Lower))); enum { copy = internal::traits::Flags & RowMajorBit }; typedef typename internal::conditional::type, OtherDerived&>::type OtherCopy; OtherCopy otherCopy(other.derived()); internal::sparse_solve_triangular_selector::type, Mode>::run(derived().nestedExpression(), otherCopy); if (copy) other = otherCopy; } #endif // pure sparse path namespace internal { template struct sparse_solve_triangular_sparse_selector; // forward substitution, col-major template struct sparse_solve_triangular_sparse_selector { typedef typename Rhs::Scalar Scalar; typedef typename promote_index_type::StorageIndex, typename traits::StorageIndex>::type StorageIndex; static void run(const Lhs& lhs, Rhs& other) { const bool IsLower = (UpLo==Lower); AmbiVector tempVector(other.rows()*2); tempVector.setBounds(0,other.rows()); Rhs res(other.rows(), other.cols()); res.reserve(other.nonZeros()); for(Index col=0 ; col=0; i+=IsLower?1:-1) { tempVector.restart(); Scalar& ci = tempVector.coeffRef(i); if (ci!=Scalar(0)) { // find typename Lhs::InnerIterator it(lhs, i); if(!(Mode & UnitDiag)) { if (IsLower) { eigen_assert(it.index()==i); ci /= it.value(); } else ci /= lhs.coeff(i,i); } tempVector.restart(); if (IsLower) { if (it.index()==i) ++it; for(; it; ++it) tempVector.coeffRef(it.index()) -= ci * it.value(); } else { for(; it && it.index()::Iterator it(tempVector/*,1e-12*/); it; ++it) { ++ count; // std::cerr << "fill " << it.index() << ", " << col << "\n"; // std::cout << it.value() << " "; // FIXME use insertBack res.insert(it.index(), col) = it.value(); } // std::cout << "tempVector.nonZeros() == " << int(count) << " / " << (other.rows()) << "\n"; } res.finalize(); other = res.markAsRValue(); } }; } // end namespace internal #ifndef EIGEN_PARSED_BY_DOXYGEN template template void TriangularViewImpl::solveInPlace(SparseMatrixBase& other) const { eigen_assert(derived().cols() == derived().rows() && derived().cols() == other.rows()); eigen_assert( (!(Mode & ZeroDiag)) && bool(Mode & (Upper|Lower))); // enum { copy = internal::traits::Flags & RowMajorBit }; // typedef typename internal::conditional::type, OtherDerived&>::type OtherCopy; // OtherCopy otherCopy(other.derived()); internal::sparse_solve_triangular_sparse_selector::run(derived().nestedExpression(), other.derived()); // if (copy) // other = otherCopy; } #endif } // end namespace Eigen #endif // EIGEN_SPARSETRIANGULARSOLVER_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseSelfAdjointView.h0000644000175100001770000006244114107270226025011 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
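// Illustrative sketch of the self-adjoint view defined below
// (hypothetical example, n is a placeholder size, using namespace Eigen assumed):
//
//   SparseMatrix<double> A(n,n);          // only one triangle (here Lower) needs to be stored
//   VectorXd x(n), y(n);
//   y = A.selfadjointView<Lower>() * x;   // symmetric sparse matrix * dense vector
//   SparseMatrix<double> S;
//   S = A.selfadjointView<Lower>();       // expand into a full symmetric sparse matrix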
#ifndef EIGEN_SPARSE_SELFADJOINTVIEW_H #define EIGEN_SPARSE_SELFADJOINTVIEW_H namespace Eigen { /** \ingroup SparseCore_Module * \class SparseSelfAdjointView * * \brief Pseudo expression to manipulate a triangular sparse matrix as a selfadjoint matrix. * * \param MatrixType the type of the dense matrix storing the coefficients * \param Mode can be either \c #Lower or \c #Upper * * This class is an expression of a sefladjoint matrix from a triangular part of a matrix * with given dense storage of the coefficients. It is the return type of MatrixBase::selfadjointView() * and most of the time this is the only way that it is used. * * \sa SparseMatrixBase::selfadjointView() */ namespace internal { template struct traits > : traits { }; template void permute_symm_to_symm(const MatrixType& mat, SparseMatrix& _dest, const typename MatrixType::StorageIndex* perm = 0); template void permute_symm_to_fullsymm(const MatrixType& mat, SparseMatrix& _dest, const typename MatrixType::StorageIndex* perm = 0); } template class SparseSelfAdjointView : public EigenBase > { public: enum { Mode = _Mode, TransposeMode = ((Mode & Upper) ? Lower : 0) | ((Mode & Lower) ? Upper : 0), RowsAtCompileTime = internal::traits::RowsAtCompileTime, ColsAtCompileTime = internal::traits::ColsAtCompileTime }; typedef EigenBase Base; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::StorageIndex StorageIndex; typedef Matrix VectorI; typedef typename internal::ref_selector::non_const_type MatrixTypeNested; typedef typename internal::remove_all::type _MatrixTypeNested; explicit inline SparseSelfAdjointView(MatrixType& matrix) : m_matrix(matrix) { eigen_assert(rows()==cols() && "SelfAdjointView is only for squared matrices"); } inline Index rows() const { return m_matrix.rows(); } inline Index cols() const { return m_matrix.cols(); } /** \internal \returns a reference to the nested matrix */ const _MatrixTypeNested& matrix() const { return m_matrix; } typename internal::remove_reference::type& matrix() { return m_matrix; } /** \returns an expression of the matrix product between a sparse self-adjoint matrix \c *this and a sparse matrix \a rhs. * * Note that there is no algorithmic advantage of performing such a product compared to a general sparse-sparse matrix product. * Indeed, the SparseSelfadjointView operand is first copied into a temporary SparseMatrix before computing the product. */ template Product operator*(const SparseMatrixBase& rhs) const { return Product(*this, rhs.derived()); } /** \returns an expression of the matrix product between a sparse matrix \a lhs and a sparse self-adjoint matrix \a rhs. * * Note that there is no algorithmic advantage of performing such a product compared to a general sparse-sparse matrix product. * Indeed, the SparseSelfadjointView operand is first copied into a temporary SparseMatrix before computing the product. 
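 *
 * A minimal usage sketch (matrix names are illustrative; A is assumed selfadjoint with its lower triangle stored):
 * \code
 * Eigen::SparseMatrix<double> C = B * A.selfadjointView<Eigen::Lower>();
 * \endcode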
*/ template friend Product operator*(const SparseMatrixBase& lhs, const SparseSelfAdjointView& rhs) { return Product(lhs.derived(), rhs); } /** Efficient sparse self-adjoint matrix times dense vector/matrix product */ template Product operator*(const MatrixBase& rhs) const { return Product(*this, rhs.derived()); } /** Efficient dense vector/matrix times sparse self-adjoint matrix product */ template friend Product operator*(const MatrixBase& lhs, const SparseSelfAdjointView& rhs) { return Product(lhs.derived(), rhs); } /** Perform a symmetric rank K update of the selfadjoint matrix \c *this: * \f$ this = this + \alpha ( u u^* ) \f$ where \a u is a vector or matrix. * * \returns a reference to \c *this * * To perform \f$ this = this + \alpha ( u^* u ) \f$ you can simply * call this function with u.adjoint(). */ template SparseSelfAdjointView& rankUpdate(const SparseMatrixBase& u, const Scalar& alpha = Scalar(1)); /** \returns an expression of P H P^-1 */ // TODO implement twists in a more evaluator friendly fashion SparseSymmetricPermutationProduct<_MatrixTypeNested,Mode> twistedBy(const PermutationMatrix& perm) const { return SparseSymmetricPermutationProduct<_MatrixTypeNested,Mode>(m_matrix, perm); } template SparseSelfAdjointView& operator=(const SparseSymmetricPermutationProduct& permutedMatrix) { internal::call_assignment_no_alias_no_transpose(*this, permutedMatrix); return *this; } SparseSelfAdjointView& operator=(const SparseSelfAdjointView& src) { PermutationMatrix pnull; return *this = src.twistedBy(pnull); } // Since we override the copy-assignment operator, we need to explicitly re-declare the copy-constructor EIGEN_DEFAULT_COPY_CONSTRUCTOR(SparseSelfAdjointView) template SparseSelfAdjointView& operator=(const SparseSelfAdjointView& src) { PermutationMatrix pnull; return *this = src.twistedBy(pnull); } void resize(Index rows, Index cols) { EIGEN_ONLY_USED_FOR_DEBUG(rows); EIGEN_ONLY_USED_FOR_DEBUG(cols); eigen_assert(rows == this->rows() && cols == this->cols() && "SparseSelfadjointView::resize() does not actually allow to resize."); } protected: MatrixTypeNested m_matrix; //mutable VectorI m_countPerRow; //mutable VectorI m_countPerCol; private: template void evalTo(Dest &) const; }; /*************************************************************************** * Implementation of SparseMatrixBase methods ***************************************************************************/ template template typename SparseMatrixBase::template ConstSelfAdjointViewReturnType::Type SparseMatrixBase::selfadjointView() const { return SparseSelfAdjointView(derived()); } template template typename SparseMatrixBase::template SelfAdjointViewReturnType::Type SparseMatrixBase::selfadjointView() { return SparseSelfAdjointView(derived()); } /*************************************************************************** * Implementation of SparseSelfAdjointView methods ***************************************************************************/ template template SparseSelfAdjointView& SparseSelfAdjointView::rankUpdate(const SparseMatrixBase& u, const Scalar& alpha) { SparseMatrix tmp = u * u.adjoint(); if(alpha==Scalar(0)) m_matrix = tmp.template triangularView(); else m_matrix += alpha * tmp.template triangularView(); return *this; } namespace internal { // TODO currently a selfadjoint expression has the form SelfAdjointView<.,.> // in the future selfadjoint-ness should be defined by the expression traits // such that Transpose > is valid. 
(currently TriangularBase::transpose() is overloaded to make it work) template struct evaluator_traits > { typedef typename storage_kind_to_evaluator_kind::Kind Kind; typedef SparseSelfAdjointShape Shape; }; struct SparseSelfAdjoint2Sparse {}; template<> struct AssignmentKind { typedef SparseSelfAdjoint2Sparse Kind; }; template<> struct AssignmentKind { typedef Sparse2Sparse Kind; }; template< typename DstXprType, typename SrcXprType, typename Functor> struct Assignment { typedef typename DstXprType::StorageIndex StorageIndex; typedef internal::assign_op AssignOpType; template static void run(SparseMatrix &dst, const SrcXprType &src, const AssignOpType&/*func*/) { internal::permute_symm_to_fullsymm(src.matrix(), dst); } // FIXME: the handling of += and -= in sparse matrices should be cleanup so that next two overloads could be reduced to: template static void run(SparseMatrix &dst, const SrcXprType &src, const AssignFunc& func) { SparseMatrix tmp(src.rows(),src.cols()); run(tmp, src, AssignOpType()); call_assignment_no_alias_no_transpose(dst, tmp, func); } template static void run(SparseMatrix &dst, const SrcXprType &src, const internal::add_assign_op& /* func */) { SparseMatrix tmp(src.rows(),src.cols()); run(tmp, src, AssignOpType()); dst += tmp; } template static void run(SparseMatrix &dst, const SrcXprType &src, const internal::sub_assign_op& /* func */) { SparseMatrix tmp(src.rows(),src.cols()); run(tmp, src, AssignOpType()); dst -= tmp; } template static void run(DynamicSparseMatrix& dst, const SrcXprType &src, const AssignOpType&/*func*/) { // TODO directly evaluate into dst; SparseMatrix tmp(dst.rows(),dst.cols()); internal::permute_symm_to_fullsymm(src.matrix(), tmp); dst = tmp; } }; } // end namespace internal /*************************************************************************** * Implementation of sparse self-adjoint time dense matrix ***************************************************************************/ namespace internal { template inline void sparse_selfadjoint_time_dense_product(const SparseLhsType& lhs, const DenseRhsType& rhs, DenseResType& res, const AlphaType& alpha) { EIGEN_ONLY_USED_FOR_DEBUG(alpha); typedef typename internal::nested_eval::type SparseLhsTypeNested; typedef typename internal::remove_all::type SparseLhsTypeNestedCleaned; typedef evaluator LhsEval; typedef typename LhsEval::InnerIterator LhsIterator; typedef typename SparseLhsType::Scalar LhsScalar; enum { LhsIsRowMajor = (LhsEval::Flags&RowMajorBit)==RowMajorBit, ProcessFirstHalf = ((Mode&(Upper|Lower))==(Upper|Lower)) || ( (Mode&Upper) && !LhsIsRowMajor) || ( (Mode&Lower) && LhsIsRowMajor), ProcessSecondHalf = !ProcessFirstHalf }; SparseLhsTypeNested lhs_nested(lhs); LhsEval lhsEval(lhs_nested); // work on one column at once for (Index k=0; k::ReturnType rhs_j(alpha*rhs(j,k)); // accumulator for partial scalar product typename DenseResType::Scalar res_j(0); for(; (ProcessFirstHalf ? 
i && i.index() < j : i) ; ++i) { LhsScalar lhs_ij = i.value(); if(!LhsIsRowMajor) lhs_ij = numext::conj(lhs_ij); res_j += lhs_ij * rhs.coeff(i.index(),k); res(i.index(),k) += numext::conj(lhs_ij) * rhs_j; } res.coeffRef(j,k) += alpha * res_j; // handle diagonal coeff if (ProcessFirstHalf && i && (i.index()==j)) res.coeffRef(j,k) += alpha * i.value() * rhs.coeff(j,k); } } } template struct generic_product_impl : generic_product_impl_base > { template static void scaleAndAddTo(Dest& dst, const LhsView& lhsView, const Rhs& rhs, const typename Dest::Scalar& alpha) { typedef typename LhsView::_MatrixTypeNested Lhs; typedef typename nested_eval::type LhsNested; typedef typename nested_eval::type RhsNested; LhsNested lhsNested(lhsView.matrix()); RhsNested rhsNested(rhs); internal::sparse_selfadjoint_time_dense_product(lhsNested, rhsNested, dst, alpha); } }; template struct generic_product_impl : generic_product_impl_base > { template static void scaleAndAddTo(Dest& dst, const Lhs& lhs, const RhsView& rhsView, const typename Dest::Scalar& alpha) { typedef typename RhsView::_MatrixTypeNested Rhs; typedef typename nested_eval::type LhsNested; typedef typename nested_eval::type RhsNested; LhsNested lhsNested(lhs); RhsNested rhsNested(rhsView.matrix()); // transpose everything Transpose dstT(dst); internal::sparse_selfadjoint_time_dense_product(rhsNested.transpose(), lhsNested.transpose(), dstT, alpha); } }; // NOTE: these two overloads are needed to evaluate the sparse selfadjoint view into a full sparse matrix // TODO: maybe the copy could be handled by generic_product_impl so that these overloads would not be needed anymore template struct product_evaluator, ProductTag, SparseSelfAdjointShape, SparseShape> : public evaluator::PlainObject> { typedef Product XprType; typedef typename XprType::PlainObject PlainObject; typedef evaluator Base; product_evaluator(const XprType& xpr) : m_lhs(xpr.lhs()), m_result(xpr.rows(), xpr.cols()) { ::new (static_cast(this)) Base(m_result); generic_product_impl::evalTo(m_result, m_lhs, xpr.rhs()); } protected: typename Rhs::PlainObject m_lhs; PlainObject m_result; }; template struct product_evaluator, ProductTag, SparseShape, SparseSelfAdjointShape> : public evaluator::PlainObject> { typedef Product XprType; typedef typename XprType::PlainObject PlainObject; typedef evaluator Base; product_evaluator(const XprType& xpr) : m_rhs(xpr.rhs()), m_result(xpr.rows(), xpr.cols()) { ::new (static_cast(this)) Base(m_result); generic_product_impl::evalTo(m_result, xpr.lhs(), m_rhs); } protected: typename Lhs::PlainObject m_rhs; PlainObject m_result; }; } // namespace internal /*************************************************************************** * Implementation of symmetric copies and permutations ***************************************************************************/ namespace internal { template void permute_symm_to_fullsymm(const MatrixType& mat, SparseMatrix& _dest, const typename MatrixType::StorageIndex* perm) { typedef typename MatrixType::StorageIndex StorageIndex; typedef typename MatrixType::Scalar Scalar; typedef SparseMatrix Dest; typedef Matrix VectorI; typedef evaluator MatEval; typedef typename evaluator::InnerIterator MatIterator; MatEval matEval(mat); Dest& dest(_dest.derived()); enum { StorageOrderMatch = int(Dest::IsRowMajor) == int(MatrixType::IsRowMajor) }; Index size = mat.rows(); VectorI count; count.resize(size); count.setZero(); dest.resize(size,size); for(Index j = 0; jc) || ( Mode==Upper && r(it.index()); Index r = it.row(); Index c = 
it.col(); StorageIndex jp = perm ? perm[j] : j; StorageIndex ip = perm ? perm[i] : i; if(Mode==int(Upper|Lower)) { Index k = count[StorageOrderMatch ? jp : ip]++; dest.innerIndexPtr()[k] = StorageOrderMatch ? ip : jp; dest.valuePtr()[k] = it.value(); } else if(r==c) { Index k = count[ip]++; dest.innerIndexPtr()[k] = ip; dest.valuePtr()[k] = it.value(); } else if(( (Mode&Lower)==Lower && r>c) || ( (Mode&Upper)==Upper && r void permute_symm_to_symm(const MatrixType& mat, SparseMatrix& _dest, const typename MatrixType::StorageIndex* perm) { typedef typename MatrixType::StorageIndex StorageIndex; typedef typename MatrixType::Scalar Scalar; SparseMatrix& dest(_dest.derived()); typedef Matrix VectorI; typedef evaluator MatEval; typedef typename evaluator::InnerIterator MatIterator; enum { SrcOrder = MatrixType::IsRowMajor ? RowMajor : ColMajor, StorageOrderMatch = int(SrcOrder) == int(DstOrder), DstMode = DstOrder==RowMajor ? (_DstMode==Upper ? Lower : Upper) : _DstMode, SrcMode = SrcOrder==RowMajor ? (_SrcMode==Upper ? Lower : Upper) : _SrcMode }; MatEval matEval(mat); Index size = mat.rows(); VectorI count(size); count.setZero(); dest.resize(size,size); for(StorageIndex j = 0; jj)) continue; StorageIndex ip = perm ? perm[i] : i; count[int(DstMode)==int(Lower) ? (std::min)(ip,jp) : (std::max)(ip,jp)]++; } } dest.outerIndexPtr()[0] = 0; for(Index j=0; jj)) continue; StorageIndex jp = perm ? perm[j] : j; StorageIndex ip = perm? perm[i] : i; Index k = count[int(DstMode)==int(Lower) ? (std::min)(ip,jp) : (std::max)(ip,jp)]++; dest.innerIndexPtr()[k] = int(DstMode)==int(Lower) ? (std::max)(ip,jp) : (std::min)(ip,jp); if(!StorageOrderMatch) std::swap(ip,jp); if( ((int(DstMode)==int(Lower) && ipjp))) dest.valuePtr()[k] = numext::conj(it.value()); else dest.valuePtr()[k] = it.value(); } } } } // TODO implement twists in a more evaluator friendly fashion namespace internal { template struct traits > : traits { }; } template class SparseSymmetricPermutationProduct : public EigenBase > { public: typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::StorageIndex StorageIndex; enum { RowsAtCompileTime = internal::traits::RowsAtCompileTime, ColsAtCompileTime = internal::traits::ColsAtCompileTime }; protected: typedef PermutationMatrix Perm; public: typedef Matrix VectorI; typedef typename MatrixType::Nested MatrixTypeNested; typedef typename internal::remove_all::type NestedExpression; SparseSymmetricPermutationProduct(const MatrixType& mat, const Perm& perm) : m_matrix(mat), m_perm(perm) {} inline Index rows() const { return m_matrix.rows(); } inline Index cols() const { return m_matrix.cols(); } const NestedExpression& matrix() const { return m_matrix; } const Perm& perm() const { return m_perm; } protected: MatrixTypeNested m_matrix; const Perm& m_perm; }; namespace internal { template struct Assignment, internal::assign_op, Sparse2Sparse> { typedef SparseSymmetricPermutationProduct SrcXprType; typedef typename DstXprType::StorageIndex DstIndex; template static void run(SparseMatrix &dst, const SrcXprType &src, const internal::assign_op &) { // internal::permute_symm_to_fullsymm(m_matrix,_dest,m_perm.indices().data()); SparseMatrix tmp; internal::permute_symm_to_fullsymm(src.matrix(),tmp,src.perm().indices().data()); dst = tmp; } template static void run(SparseSelfAdjointView& dst, const SrcXprType &src, const internal::assign_op &) { internal::permute_symm_to_symm(src.matrix(),dst.matrix(),src.perm().indices().data()); } }; } // end namespace internal } // end namespace Eigen #endif // 
EIGEN_SPARSE_SELFADJOINTVIEW_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseCwiseUnaryOp.h0000644000175100001770000001122514107270226024336 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2015 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSE_CWISE_UNARY_OP_H #define EIGEN_SPARSE_CWISE_UNARY_OP_H namespace Eigen { namespace internal { template struct unary_evaluator, IteratorBased> : public evaluator_base > { public: typedef CwiseUnaryOp XprType; class InnerIterator; enum { CoeffReadCost = int(evaluator::CoeffReadCost) + int(functor_traits::Cost), Flags = XprType::Flags }; explicit unary_evaluator(const XprType& op) : m_functor(op.functor()), m_argImpl(op.nestedExpression()) { EIGEN_INTERNAL_CHECK_COST_VALUE(functor_traits::Cost); EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } inline Index nonZerosEstimate() const { return m_argImpl.nonZerosEstimate(); } protected: typedef typename evaluator::InnerIterator EvalIterator; const UnaryOp m_functor; evaluator m_argImpl; }; template class unary_evaluator, IteratorBased>::InnerIterator : public unary_evaluator, IteratorBased>::EvalIterator { protected: typedef typename XprType::Scalar Scalar; typedef typename unary_evaluator, IteratorBased>::EvalIterator Base; public: EIGEN_STRONG_INLINE InnerIterator(const unary_evaluator& unaryOp, Index outer) : Base(unaryOp.m_argImpl,outer), m_functor(unaryOp.m_functor) {} EIGEN_STRONG_INLINE InnerIterator& operator++() { Base::operator++(); return *this; } EIGEN_STRONG_INLINE Scalar value() const { return m_functor(Base::value()); } protected: const UnaryOp m_functor; private: Scalar& valueRef(); }; template struct unary_evaluator, IteratorBased> : public evaluator_base > { public: typedef CwiseUnaryView XprType; class InnerIterator; enum { CoeffReadCost = int(evaluator::CoeffReadCost) + int(functor_traits::Cost), Flags = XprType::Flags }; explicit unary_evaluator(const XprType& op) : m_functor(op.functor()), m_argImpl(op.nestedExpression()) { EIGEN_INTERNAL_CHECK_COST_VALUE(functor_traits::Cost); EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } protected: typedef typename evaluator::InnerIterator EvalIterator; const ViewOp m_functor; evaluator m_argImpl; }; template class unary_evaluator, IteratorBased>::InnerIterator : public unary_evaluator, IteratorBased>::EvalIterator { protected: typedef typename XprType::Scalar Scalar; typedef typename unary_evaluator, IteratorBased>::EvalIterator Base; public: EIGEN_STRONG_INLINE InnerIterator(const unary_evaluator& unaryOp, Index outer) : Base(unaryOp.m_argImpl,outer), m_functor(unaryOp.m_functor) {} EIGEN_STRONG_INLINE InnerIterator& operator++() { Base::operator++(); return *this; } EIGEN_STRONG_INLINE Scalar value() const { return m_functor(Base::value()); } EIGEN_STRONG_INLINE Scalar& valueRef() { return m_functor(Base::valueRef()); } protected: const ViewOp m_functor; }; } // end namespace internal template EIGEN_STRONG_INLINE Derived& SparseMatrixBase::operator*=(const Scalar& other) { typedef typename internal::evaluator::InnerIterator EvalIterator; internal::evaluator thisEval(derived()); for (Index j=0; j EIGEN_STRONG_INLINE Derived& SparseMatrixBase::operator/=(const Scalar& other) { typedef typename internal::evaluator::InnerIterator EvalIterator; internal::evaluator 
thisEval(derived()); for (Index j=0; j // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSEMATRIXBASE_H #define EIGEN_SPARSEMATRIXBASE_H namespace Eigen { /** \ingroup SparseCore_Module * * \class SparseMatrixBase * * \brief Base class of any sparse matrices or sparse expressions * * \tparam Derived is the derived type, e.g. a sparse matrix type, or an expression, etc. * * This class can be extended with the help of the plugin mechanism described on the page * \ref TopicCustomizing_Plugins by defining the preprocessor symbol \c EIGEN_SPARSEMATRIXBASE_PLUGIN. */ template class SparseMatrixBase : public EigenBase { public: typedef typename internal::traits::Scalar Scalar; /** The numeric type of the expression' coefficients, e.g. float, double, int or std::complex, etc. * * It is an alias for the Scalar type */ typedef Scalar value_type; typedef typename internal::packet_traits::type PacketScalar; typedef typename internal::traits::StorageKind StorageKind; /** The integer type used to \b store indices within a SparseMatrix. * For a \c SparseMatrix it an alias of the third template parameter \c IndexType. */ typedef typename internal::traits::StorageIndex StorageIndex; typedef typename internal::add_const_on_value_type_if_arithmetic< typename internal::packet_traits::type >::type PacketReturnType; typedef SparseMatrixBase StorageBaseType; typedef Matrix IndexVector; typedef Matrix ScalarVector; template Derived& operator=(const EigenBase &other); enum { RowsAtCompileTime = internal::traits::RowsAtCompileTime, /**< The number of rows at compile-time. This is just a copy of the value provided * by the \a Derived type. If a value is not known at compile-time, * it is set to the \a Dynamic constant. * \sa MatrixBase::rows(), MatrixBase::cols(), ColsAtCompileTime, SizeAtCompileTime */ ColsAtCompileTime = internal::traits::ColsAtCompileTime, /**< The number of columns at compile-time. This is just a copy of the value provided * by the \a Derived type. If a value is not known at compile-time, * it is set to the \a Dynamic constant. * \sa MatrixBase::rows(), MatrixBase::cols(), RowsAtCompileTime, SizeAtCompileTime */ SizeAtCompileTime = (internal::size_at_compile_time::RowsAtCompileTime, internal::traits::ColsAtCompileTime>::ret), /**< This is equal to the number of coefficients, i.e. the number of * rows times the number of columns, or to \a Dynamic if this is not * known at compile-time. \sa RowsAtCompileTime, ColsAtCompileTime */ MaxRowsAtCompileTime = RowsAtCompileTime, MaxColsAtCompileTime = ColsAtCompileTime, MaxSizeAtCompileTime = (internal::size_at_compile_time::ret), IsVectorAtCompileTime = RowsAtCompileTime == 1 || ColsAtCompileTime == 1, /**< This is set to true if either the number of rows or the number of * columns is known at compile-time to be equal to 1. Indeed, in that case, * we are dealing with a column-vector (if there is only one column) or with * a row-vector (if there is only one row). */ NumDimensions = int(MaxSizeAtCompileTime) == 1 ? 0 : bool(IsVectorAtCompileTime) ? 1 : 2, /**< This value is equal to Tensor::NumDimensions, i.e. 0 for scalars, 1 for vectors, * and 2 for matrices. */ Flags = internal::traits::Flags, /**< This stores expression \ref flags flags which may or may not be inherited by new expressions * constructed from this one. See the \ref flags "list of flags". 
*/ IsRowMajor = Flags&RowMajorBit ? 1 : 0, InnerSizeAtCompileTime = int(IsVectorAtCompileTime) ? int(SizeAtCompileTime) : int(IsRowMajor) ? int(ColsAtCompileTime) : int(RowsAtCompileTime), #ifndef EIGEN_PARSED_BY_DOXYGEN _HasDirectAccess = (int(Flags)&DirectAccessBit) ? 1 : 0 // workaround sunCC #endif }; /** \internal the return type of MatrixBase::adjoint() */ typedef typename internal::conditional::IsComplex, CwiseUnaryOp, Eigen::Transpose >, Transpose >::type AdjointReturnType; typedef Transpose TransposeReturnType; typedef typename internal::add_const >::type ConstTransposeReturnType; // FIXME storage order do not match evaluator storage order typedef SparseMatrix PlainObject; #ifndef EIGEN_PARSED_BY_DOXYGEN /** This is the "real scalar" type; if the \a Scalar type is already real numbers * (e.g. int, float or double) then \a RealScalar is just the same as \a Scalar. If * \a Scalar is \a std::complex then RealScalar is \a T. * * \sa class NumTraits */ typedef typename NumTraits::Real RealScalar; /** \internal the return type of coeff() */ typedef typename internal::conditional<_HasDirectAccess, const Scalar&, Scalar>::type CoeffReturnType; /** \internal Represents a matrix with all coefficients equal to one another*/ typedef CwiseNullaryOp,Matrix > ConstantReturnType; /** type of the equivalent dense matrix */ typedef Matrix DenseMatrixType; /** type of the equivalent square matrix */ typedef Matrix SquareMatrixType; inline const Derived& derived() const { return *static_cast(this); } inline Derived& derived() { return *static_cast(this); } inline Derived& const_cast_derived() const { return *static_cast(const_cast(this)); } typedef EigenBase Base; #endif // not EIGEN_PARSED_BY_DOXYGEN #define EIGEN_CURRENT_STORAGE_BASE_CLASS Eigen::SparseMatrixBase #ifdef EIGEN_PARSED_BY_DOXYGEN #define EIGEN_DOC_UNARY_ADDONS(METHOD,OP) /**

    This method does not change the sparsity of \c *this: the OP is applied to explicitly stored coefficients only. \sa SparseCompressedBase::coeffs()

    */ #define EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /**

    \warning This method returns a read-only expression for any sparse matrices. \sa \ref TutorialSparse_SubMatrices "Sparse block operations"

    */ #define EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(COND) /**

    \warning This method returns a read-write expression for COND sparse matrices only. Otherwise, the returned expression is read-only. \sa \ref TutorialSparse_SubMatrices "Sparse block operations"

    */ #else #define EIGEN_DOC_UNARY_ADDONS(X,Y) #define EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL #define EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(COND) #endif # include "../plugins/CommonCwiseUnaryOps.h" # include "../plugins/CommonCwiseBinaryOps.h" # include "../plugins/MatrixCwiseUnaryOps.h" # include "../plugins/MatrixCwiseBinaryOps.h" # include "../plugins/BlockMethods.h" # ifdef EIGEN_SPARSEMATRIXBASE_PLUGIN # include EIGEN_SPARSEMATRIXBASE_PLUGIN # endif #undef EIGEN_CURRENT_STORAGE_BASE_CLASS #undef EIGEN_DOC_UNARY_ADDONS #undef EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL #undef EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF /** \returns the number of rows. \sa cols() */ inline Index rows() const { return derived().rows(); } /** \returns the number of columns. \sa rows() */ inline Index cols() const { return derived().cols(); } /** \returns the number of coefficients, which is \a rows()*cols(). * \sa rows(), cols(). */ inline Index size() const { return rows() * cols(); } /** \returns true if either the number of rows or the number of columns is equal to 1. * In other words, this function returns * \code rows()==1 || cols()==1 \endcode * \sa rows(), cols(), IsVectorAtCompileTime. */ inline bool isVector() const { return rows()==1 || cols()==1; } /** \returns the size of the storage major dimension, * i.e., the number of columns for a columns major matrix, and the number of rows otherwise */ Index outerSize() const { return (int(Flags)&RowMajorBit) ? this->rows() : this->cols(); } /** \returns the size of the inner dimension according to the storage order, * i.e., the number of rows for a columns major matrix, and the number of cols otherwise */ Index innerSize() const { return (int(Flags)&RowMajorBit) ? this->cols() : this->rows(); } bool isRValue() const { return m_isRValue; } Derived& markAsRValue() { m_isRValue = true; return derived(); } SparseMatrixBase() : m_isRValue(false) { /* TODO check flags */ } template Derived& operator=(const ReturnByValue& other); template inline Derived& operator=(const SparseMatrixBase& other); inline Derived& operator=(const Derived& other); protected: template inline Derived& assign(const OtherDerived& other); template inline void assignGeneric(const OtherDerived& other); public: friend std::ostream & operator << (std::ostream & s, const SparseMatrixBase& m) { typedef typename Derived::Nested Nested; typedef typename internal::remove_all::type NestedCleaned; if (Flags&RowMajorBit) { Nested nm(m.derived()); internal::evaluator thisEval(nm); for (Index row=0; row::InnerIterator it(thisEval, row); it; ++it) { for ( ; col thisEval(nm); if (m.cols() == 1) { Index row = 0; for (typename internal::evaluator::InnerIterator it(thisEval, 0); it; ++it) { for ( ; row trans = m; s << static_cast >&>(trans); } } return s; } template Derived& operator+=(const SparseMatrixBase& other); template Derived& operator-=(const SparseMatrixBase& other); template Derived& operator+=(const DiagonalBase& other); template Derived& operator-=(const DiagonalBase& other); template Derived& operator+=(const EigenBase &other); template Derived& operator-=(const EigenBase &other); Derived& operator*=(const Scalar& other); Derived& operator/=(const Scalar& other); template struct CwiseProductDenseReturnType { typedef CwiseBinaryOp::Scalar, typename internal::traits::Scalar >::ReturnType>, const Derived, const OtherDerived > Type; }; template EIGEN_STRONG_INLINE const typename CwiseProductDenseReturnType::Type cwiseProduct(const MatrixBase &other) const; // sparse * diagonal template const Product 
operator*(const DiagonalBase &other) const { return Product(derived(), other.derived()); } // diagonal * sparse template friend const Product operator*(const DiagonalBase &lhs, const SparseMatrixBase& rhs) { return Product(lhs.derived(), rhs.derived()); } // sparse * sparse template const Product operator*(const SparseMatrixBase &other) const; // sparse * dense template const Product operator*(const MatrixBase &other) const { return Product(derived(), other.derived()); } // dense * sparse template friend const Product operator*(const MatrixBase &lhs, const SparseMatrixBase& rhs) { return Product(lhs.derived(), rhs.derived()); } /** \returns an expression of P H P^-1 where H is the matrix represented by \c *this */ SparseSymmetricPermutationProduct twistedBy(const PermutationMatrix& perm) const { return SparseSymmetricPermutationProduct(derived(), perm); } template Derived& operator*=(const SparseMatrixBase& other); template inline const TriangularView triangularView() const; template struct SelfAdjointViewReturnType { typedef SparseSelfAdjointView Type; }; template struct ConstSelfAdjointViewReturnType { typedef const SparseSelfAdjointView Type; }; template inline typename ConstSelfAdjointViewReturnType::Type selfadjointView() const; template inline typename SelfAdjointViewReturnType::Type selfadjointView(); template Scalar dot(const MatrixBase& other) const; template Scalar dot(const SparseMatrixBase& other) const; RealScalar squaredNorm() const; RealScalar norm() const; RealScalar blueNorm() const; TransposeReturnType transpose() { return TransposeReturnType(derived()); } const ConstTransposeReturnType transpose() const { return ConstTransposeReturnType(derived()); } const AdjointReturnType adjoint() const { return AdjointReturnType(transpose()); } DenseMatrixType toDense() const { return DenseMatrixType(derived()); } template bool isApprox(const SparseMatrixBase& other, const RealScalar& prec = NumTraits::dummy_precision()) const; template bool isApprox(const MatrixBase& other, const RealScalar& prec = NumTraits::dummy_precision()) const { return toDense().isApprox(other,prec); } /** \returns the matrix or vector obtained by evaluating this expression. * * Notice that in the case of a plain matrix or vector (not an expression) this function just returns * a const reference, in order to avoid a useless copy. */ inline const typename internal::eval::type eval() const { return typename internal::eval::type(derived()); } Scalar sum() const; inline const SparseView pruned(const Scalar& reference = Scalar(0), const RealScalar& epsilon = NumTraits::dummy_precision()) const; protected: bool m_isRValue; static inline StorageIndex convert_index(const Index idx) { return internal::convert_index(idx); } private: template void evalTo(Dest &) const; }; } // end namespace Eigen #endif // EIGEN_SPARSEMATRIXBASE_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseTranspose.h0000644000175100001770000000614714107270226023733 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2015 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
#ifndef EIGEN_SPARSETRANSPOSE_H #define EIGEN_SPARSETRANSPOSE_H namespace Eigen { namespace internal { template class SparseTransposeImpl : public SparseMatrixBase > {}; template class SparseTransposeImpl : public SparseCompressedBase > { typedef SparseCompressedBase > Base; public: using Base::derived; typedef typename Base::Scalar Scalar; typedef typename Base::StorageIndex StorageIndex; inline Index nonZeros() const { return derived().nestedExpression().nonZeros(); } inline const Scalar* valuePtr() const { return derived().nestedExpression().valuePtr(); } inline const StorageIndex* innerIndexPtr() const { return derived().nestedExpression().innerIndexPtr(); } inline const StorageIndex* outerIndexPtr() const { return derived().nestedExpression().outerIndexPtr(); } inline const StorageIndex* innerNonZeroPtr() const { return derived().nestedExpression().innerNonZeroPtr(); } inline Scalar* valuePtr() { return derived().nestedExpression().valuePtr(); } inline StorageIndex* innerIndexPtr() { return derived().nestedExpression().innerIndexPtr(); } inline StorageIndex* outerIndexPtr() { return derived().nestedExpression().outerIndexPtr(); } inline StorageIndex* innerNonZeroPtr() { return derived().nestedExpression().innerNonZeroPtr(); } }; } template class TransposeImpl : public internal::SparseTransposeImpl { protected: typedef internal::SparseTransposeImpl Base; }; namespace internal { template struct unary_evaluator, IteratorBased> : public evaluator_base > { typedef typename evaluator::InnerIterator EvalIterator; public: typedef Transpose XprType; inline Index nonZerosEstimate() const { return m_argImpl.nonZerosEstimate(); } class InnerIterator : public EvalIterator { public: EIGEN_STRONG_INLINE InnerIterator(const unary_evaluator& unaryOp, Index outer) : EvalIterator(unaryOp.m_argImpl,outer) {} Index row() const { return EvalIterator::col(); } Index col() const { return EvalIterator::row(); } }; enum { CoeffReadCost = evaluator::CoeffReadCost, Flags = XprType::Flags }; explicit unary_evaluator(const XprType& op) :m_argImpl(op.nestedExpression()) {} protected: evaluator m_argImpl; }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_SPARSETRANSPOSE_H piqp-octave/src/eigen/Eigen/src/SparseCore/SparseMap.h0000644000175100001770000003045514107270226022471 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2015 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSE_MAP_H #define EIGEN_SPARSE_MAP_H namespace Eigen { namespace internal { template struct traits, Options, StrideType> > : public traits > { typedef SparseMatrix PlainObjectType; typedef traits TraitsBase; enum { Flags = TraitsBase::Flags & (~NestByRefBit) }; }; template struct traits, Options, StrideType> > : public traits > { typedef SparseMatrix PlainObjectType; typedef traits TraitsBase; enum { Flags = TraitsBase::Flags & (~ (NestByRefBit | LvalueBit)) }; }; } // end namespace internal template::has_write_access ? WriteAccessors : ReadOnlyAccessors > class SparseMapBase; /** \ingroup SparseCore_Module * class SparseMapBase * \brief Common base class for Map and Ref instance of sparse matrix and vector. 
*/ template class SparseMapBase : public SparseCompressedBase { public: typedef SparseCompressedBase Base; typedef typename Base::Scalar Scalar; typedef typename Base::StorageIndex StorageIndex; enum { IsRowMajor = Base::IsRowMajor }; using Base::operator=; protected: typedef typename internal::conditional< bool(internal::is_lvalue::value), Scalar *, const Scalar *>::type ScalarPointer; typedef typename internal::conditional< bool(internal::is_lvalue::value), StorageIndex *, const StorageIndex *>::type IndexPointer; Index m_outerSize; Index m_innerSize; Array m_zero_nnz; IndexPointer m_outerIndex; IndexPointer m_innerIndices; ScalarPointer m_values; IndexPointer m_innerNonZeros; public: /** \copydoc SparseMatrixBase::rows() */ inline Index rows() const { return IsRowMajor ? m_outerSize : m_innerSize; } /** \copydoc SparseMatrixBase::cols() */ inline Index cols() const { return IsRowMajor ? m_innerSize : m_outerSize; } /** \copydoc SparseMatrixBase::innerSize() */ inline Index innerSize() const { return m_innerSize; } /** \copydoc SparseMatrixBase::outerSize() */ inline Index outerSize() const { return m_outerSize; } /** \copydoc SparseCompressedBase::nonZeros */ inline Index nonZeros() const { return m_zero_nnz[1]; } /** \copydoc SparseCompressedBase::isCompressed */ bool isCompressed() const { return m_innerNonZeros==0; } //---------------------------------------- // direct access interface /** \copydoc SparseMatrix::valuePtr */ inline const Scalar* valuePtr() const { return m_values; } /** \copydoc SparseMatrix::innerIndexPtr */ inline const StorageIndex* innerIndexPtr() const { return m_innerIndices; } /** \copydoc SparseMatrix::outerIndexPtr */ inline const StorageIndex* outerIndexPtr() const { return m_outerIndex; } /** \copydoc SparseMatrix::innerNonZeroPtr */ inline const StorageIndex* innerNonZeroPtr() const { return m_innerNonZeros; } //---------------------------------------- /** \copydoc SparseMatrix::coeff */ inline Scalar coeff(Index row, Index col) const { const Index outer = IsRowMajor ? row : col; const Index inner = IsRowMajor ? col : row; Index start = m_outerIndex[outer]; Index end = isCompressed() ? m_outerIndex[outer+1] : start + m_innerNonZeros[outer]; if (start==end) return Scalar(0); else if (end>0 && inner==m_innerIndices[end-1]) return m_values[end-1]; // ^^ optimization: let's first check if it is the last coefficient // (very common in high level algorithms) const StorageIndex* r = std::lower_bound(&m_innerIndices[start],&m_innerIndices[end-1],inner); const Index id = r-&m_innerIndices[0]; return ((*r==inner) && (id(nnz)), m_outerIndex(outerIndexPtr), m_innerIndices(innerIndexPtr), m_values(valuePtr), m_innerNonZeros(innerNonZerosPtr) {} // for vectors inline SparseMapBase(Index size, Index nnz, IndexPointer innerIndexPtr, ScalarPointer valuePtr) : m_outerSize(1), m_innerSize(size), m_zero_nnz(0,internal::convert_index(nnz)), m_outerIndex(m_zero_nnz.data()), m_innerIndices(innerIndexPtr), m_values(valuePtr), m_innerNonZeros(0) {} /** Empty destructor */ inline ~SparseMapBase() {} protected: inline SparseMapBase() {} }; /** \ingroup SparseCore_Module * class SparseMapBase * \brief Common base class for writable Map and Ref instance of sparse matrix and vector. 
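 *
 * A hedged sketch of in-place modification through a writable map (pointer and index names are illustrative):
 * \code
 * Eigen::Map<Eigen::SparseMatrix<double> > m(rows, cols, nnz, outerPtr, innerPtr, valPtr);
 * m.coeffRef(i, j) *= 2.0; // valid only if (i,j) is an explicitly stored coefficient
 * \endcode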
*/ template class SparseMapBase : public SparseMapBase { typedef MapBase ReadOnlyMapBase; public: typedef SparseMapBase Base; typedef typename Base::Scalar Scalar; typedef typename Base::StorageIndex StorageIndex; enum { IsRowMajor = Base::IsRowMajor }; using Base::operator=; public: //---------------------------------------- // direct access interface using Base::valuePtr; using Base::innerIndexPtr; using Base::outerIndexPtr; using Base::innerNonZeroPtr; /** \copydoc SparseMatrix::valuePtr */ inline Scalar* valuePtr() { return Base::m_values; } /** \copydoc SparseMatrix::innerIndexPtr */ inline StorageIndex* innerIndexPtr() { return Base::m_innerIndices; } /** \copydoc SparseMatrix::outerIndexPtr */ inline StorageIndex* outerIndexPtr() { return Base::m_outerIndex; } /** \copydoc SparseMatrix::innerNonZeroPtr */ inline StorageIndex* innerNonZeroPtr() { return Base::m_innerNonZeros; } //---------------------------------------- /** \copydoc SparseMatrix::coeffRef */ inline Scalar& coeffRef(Index row, Index col) { const Index outer = IsRowMajor ? row : col; const Index inner = IsRowMajor ? col : row; Index start = Base::m_outerIndex[outer]; Index end = Base::isCompressed() ? Base::m_outerIndex[outer+1] : start + Base::m_innerNonZeros[outer]; eigen_assert(end>=start && "you probably called coeffRef on a non finalized matrix"); eigen_assert(end>start && "coeffRef cannot be called on a zero coefficient"); StorageIndex* r = std::lower_bound(&Base::m_innerIndices[start],&Base::m_innerIndices[end],inner); const Index id = r - &Base::m_innerIndices[0]; eigen_assert((*r==inner) && (id(Base::m_values)[id]; } inline SparseMapBase(Index rows, Index cols, Index nnz, StorageIndex* outerIndexPtr, StorageIndex* innerIndexPtr, Scalar* valuePtr, StorageIndex* innerNonZerosPtr = 0) : Base(rows, cols, nnz, outerIndexPtr, innerIndexPtr, valuePtr, innerNonZerosPtr) {} // for vectors inline SparseMapBase(Index size, Index nnz, StorageIndex* innerIndexPtr, Scalar* valuePtr) : Base(size, nnz, innerIndexPtr, valuePtr) {} /** Empty destructor */ inline ~SparseMapBase() {} protected: inline SparseMapBase() {} }; /** \ingroup SparseCore_Module * * \brief Specialization of class Map for SparseMatrix-like storage. * * \tparam SparseMatrixType the equivalent sparse matrix type of the referenced data, it must be a template instance of class SparseMatrix. * * \sa class Map, class SparseMatrix, class Ref */ #ifndef EIGEN_PARSED_BY_DOXYGEN template class Map, Options, StrideType> : public SparseMapBase, Options, StrideType> > #else template class Map : public SparseMapBase #endif { public: typedef SparseMapBase Base; EIGEN_SPARSE_PUBLIC_INTERFACE(Map) enum { IsRowMajor = Base::IsRowMajor }; public: /** Constructs a read-write Map to a sparse matrix of size \a rows x \a cols, containing \a nnz non-zero coefficients, * stored as a sparse format as defined by the pointers \a outerIndexPtr, \a innerIndexPtr, and \a valuePtr. * If the optional parameter \a innerNonZerosPtr is the null pointer, then a standard compressed format is assumed. * * This constructor is available only if \c SparseMatrixType is non-const. * * More details on the expected storage schemes are given in the \ref TutorialSparse "manual pages". 
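 *
 * A construction sketch assuming standard compressed column storage (all array names are illustrative):
 * \code
 * // outerIdx, innerIdx and values describe an existing rows x cols CSC matrix with nnz entries
 * Eigen::Map<Eigen::SparseMatrix<double,Eigen::ColMajor,int> > m(rows, cols, nnz, outerIdx, innerIdx, values);
 * Eigen::VectorXd y = m * x; // the mapped data is used directly, without copying
 * \endcode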
*/ inline Map(Index rows, Index cols, Index nnz, StorageIndex* outerIndexPtr, StorageIndex* innerIndexPtr, Scalar* valuePtr, StorageIndex* innerNonZerosPtr = 0) : Base(rows, cols, nnz, outerIndexPtr, innerIndexPtr, valuePtr, innerNonZerosPtr) {} #ifndef EIGEN_PARSED_BY_DOXYGEN /** Empty destructor */ inline ~Map() {} }; template class Map, Options, StrideType> : public SparseMapBase, Options, StrideType> > { public: typedef SparseMapBase Base; EIGEN_SPARSE_PUBLIC_INTERFACE(Map) enum { IsRowMajor = Base::IsRowMajor }; public: #endif /** This is the const version of the above constructor. * * This constructor is available only if \c SparseMatrixType is const, e.g.: * \code Map > \endcode */ inline Map(Index rows, Index cols, Index nnz, const StorageIndex* outerIndexPtr, const StorageIndex* innerIndexPtr, const Scalar* valuePtr, const StorageIndex* innerNonZerosPtr = 0) : Base(rows, cols, nnz, outerIndexPtr, innerIndexPtr, valuePtr, innerNonZerosPtr) {} /** Empty destructor */ inline ~Map() {} }; namespace internal { template struct evaluator, Options, StrideType> > : evaluator, Options, StrideType> > > { typedef evaluator, Options, StrideType> > > Base; typedef Map, Options, StrideType> XprType; evaluator() : Base() {} explicit evaluator(const XprType &mat) : Base(mat) {} }; template struct evaluator, Options, StrideType> > : evaluator, Options, StrideType> > > { typedef evaluator, Options, StrideType> > > Base; typedef Map, Options, StrideType> XprType; evaluator() : Base() {} explicit evaluator(const XprType &mat) : Base(mat) {} }; } } // end namespace Eigen #endif // EIGEN_SPARSE_MAP_H piqp-octave/src/eigen/Eigen/src/StlSupport/0000755000175100001770000000000014107270226020507 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/StlSupport/StdDeque.h0000644000175100001770000001117214107270226022400 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009 Gael Guennebaud // Copyright (C) 2009 Hauke Heibel // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_STDDEQUE_H #define EIGEN_STDDEQUE_H #include "details.h" /** * This section contains a convenience MACRO which allows an easy specialization of * std::deque such that for data types with alignment issues the correct allocator * is used automatically. */ #define EIGEN_DEFINE_STL_DEQUE_SPECIALIZATION(...) 
\ namespace std \ { \ template<> \ class deque<__VA_ARGS__, std::allocator<__VA_ARGS__> > \ : public deque<__VA_ARGS__, EIGEN_ALIGNED_ALLOCATOR<__VA_ARGS__> > \ { \ typedef deque<__VA_ARGS__, EIGEN_ALIGNED_ALLOCATOR<__VA_ARGS__> > deque_base; \ public: \ typedef __VA_ARGS__ value_type; \ typedef deque_base::allocator_type allocator_type; \ typedef deque_base::size_type size_type; \ typedef deque_base::iterator iterator; \ explicit deque(const allocator_type& a = allocator_type()) : deque_base(a) {} \ template \ deque(InputIterator first, InputIterator last, const allocator_type& a = allocator_type()) : deque_base(first, last, a) {} \ deque(const deque& c) : deque_base(c) {} \ explicit deque(size_type num, const value_type& val = value_type()) : deque_base(num, val) {} \ deque(iterator start_, iterator end_) : deque_base(start_, end_) {} \ deque& operator=(const deque& x) { \ deque_base::operator=(x); \ return *this; \ } \ }; \ } // check whether we really need the std::deque specialization #if !EIGEN_HAS_CXX11_CONTAINERS && !(defined(_GLIBCXX_DEQUE) && (!EIGEN_GNUC_AT_LEAST(4,1))) /* Note that before gcc-4.1 we already have: std::deque::resize(size_type,const T&). */ namespace std { #define EIGEN_STD_DEQUE_SPECIALIZATION_BODY \ public: \ typedef T value_type; \ typedef typename deque_base::allocator_type allocator_type; \ typedef typename deque_base::size_type size_type; \ typedef typename deque_base::iterator iterator; \ typedef typename deque_base::const_iterator const_iterator; \ explicit deque(const allocator_type& a = allocator_type()) : deque_base(a) {} \ template \ deque(InputIterator first, InputIterator last, const allocator_type& a = allocator_type()) \ : deque_base(first, last, a) {} \ deque(const deque& c) : deque_base(c) {} \ explicit deque(size_type num, const value_type& val = value_type()) : deque_base(num, val) {} \ deque(iterator start_, iterator end_) : deque_base(start_, end_) {} \ deque& operator=(const deque& x) { \ deque_base::operator=(x); \ return *this; \ } template class deque > : public deque > { typedef deque > deque_base; EIGEN_STD_DEQUE_SPECIALIZATION_BODY void resize(size_type new_size) { resize(new_size, T()); } #if defined(_DEQUE_) // workaround MSVC std::deque implementation void resize(size_type new_size, const value_type& x) { if (deque_base::size() < new_size) deque_base::_Insert_n(deque_base::end(), new_size - deque_base::size(), x); else if (new_size < deque_base::size()) deque_base::erase(deque_base::begin() + new_size, deque_base::end()); } void push_back(const value_type& x) { deque_base::push_back(x); } void push_front(const value_type& x) { deque_base::push_front(x); } using deque_base::insert; iterator insert(const_iterator position, const value_type& x) { return deque_base::insert(position,x); } void insert(const_iterator position, size_type new_size, const value_type& x) { deque_base::insert(position, new_size, x); } #else // default implementation which should always work. 
void resize(size_type new_size, const value_type& x) { if (new_size < deque_base::size()) deque_base::erase(deque_base::begin() + new_size, deque_base::end()); else if (new_size > deque_base::size()) deque_base::insert(deque_base::end(), new_size - deque_base::size(), x); } #endif }; } #endif // check whether specialization is actually required #endif // EIGEN_STDDEQUE_H piqp-octave/src/eigen/Eigen/src/StlSupport/StdList.h0000644000175100001770000001007314107270226022247 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009 Hauke Heibel // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_STDLIST_H #define EIGEN_STDLIST_H #include "details.h" /** * This section contains a convenience MACRO which allows an easy specialization of * std::list such that for data types with alignment issues the correct allocator * is used automatically. */ #define EIGEN_DEFINE_STL_LIST_SPECIALIZATION(...) \ namespace std \ { \ template<> \ class list<__VA_ARGS__, std::allocator<__VA_ARGS__> > \ : public list<__VA_ARGS__, EIGEN_ALIGNED_ALLOCATOR<__VA_ARGS__> > \ { \ typedef list<__VA_ARGS__, EIGEN_ALIGNED_ALLOCATOR<__VA_ARGS__> > list_base; \ public: \ typedef __VA_ARGS__ value_type; \ typedef list_base::allocator_type allocator_type; \ typedef list_base::size_type size_type; \ typedef list_base::iterator iterator; \ explicit list(const allocator_type& a = allocator_type()) : list_base(a) {} \ template \ list(InputIterator first, InputIterator last, const allocator_type& a = allocator_type()) : list_base(first, last, a) {} \ list(const list& c) : list_base(c) {} \ explicit list(size_type num, const value_type& val = value_type()) : list_base(num, val) {} \ list(iterator start_, iterator end_) : list_base(start_, end_) {} \ list& operator=(const list& x) { \ list_base::operator=(x); \ return *this; \ } \ }; \ } // check whether we really need the std::list specialization #if !EIGEN_HAS_CXX11_CONTAINERS && !(defined(_GLIBCXX_LIST) && (!EIGEN_GNUC_AT_LEAST(4,1))) /* Note that before gcc-4.1 we already have: std::list::resize(size_type,const T&). 
*/ namespace std { #define EIGEN_STD_LIST_SPECIALIZATION_BODY \ public: \ typedef T value_type; \ typedef typename list_base::allocator_type allocator_type; \ typedef typename list_base::size_type size_type; \ typedef typename list_base::iterator iterator; \ typedef typename list_base::const_iterator const_iterator; \ explicit list(const allocator_type& a = allocator_type()) : list_base(a) {} \ template \ list(InputIterator first, InputIterator last, const allocator_type& a = allocator_type()) \ : list_base(first, last, a) {} \ list(const list& c) : list_base(c) {} \ explicit list(size_type num, const value_type& val = value_type()) : list_base(num, val) {} \ list(iterator start_, iterator end_) : list_base(start_, end_) {} \ list& operator=(const list& x) { \ list_base::operator=(x); \ return *this; \ } template class list > : public list > { typedef list > list_base; EIGEN_STD_LIST_SPECIALIZATION_BODY void resize(size_type new_size) { resize(new_size, T()); } void resize(size_type new_size, const value_type& x) { if (list_base::size() < new_size) list_base::insert(list_base::end(), new_size - list_base::size(), x); else while (new_size < list_base::size()) list_base::pop_back(); } #if defined(_LIST_) // workaround MSVC std::list implementation void push_back(const value_type& x) { list_base::push_back(x); } using list_base::insert; iterator insert(const_iterator position, const value_type& x) { return list_base::insert(position,x); } void insert(const_iterator position, size_type new_size, const value_type& x) { list_base::insert(position, new_size, x); } #endif }; } #endif // check whether specialization is actually required #endif // EIGEN_STDLIST_H piqp-octave/src/eigen/Eigen/src/StlSupport/details.h0000644000175100001770000000537114107270226022313 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009 Gael Guennebaud // Copyright (C) 2009 Hauke Heibel // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_STL_DETAILS_H #define EIGEN_STL_DETAILS_H #ifndef EIGEN_ALIGNED_ALLOCATOR #define EIGEN_ALIGNED_ALLOCATOR Eigen::aligned_allocator #endif namespace Eigen { // This one is needed to prevent reimplementing the whole std::vector. template class aligned_allocator_indirection : public EIGEN_ALIGNED_ALLOCATOR { public: typedef std::size_t size_type; typedef std::ptrdiff_t difference_type; typedef T* pointer; typedef const T* const_pointer; typedef T& reference; typedef const T& const_reference; typedef T value_type; template struct rebind { typedef aligned_allocator_indirection other; }; aligned_allocator_indirection() {} aligned_allocator_indirection(const aligned_allocator_indirection& ) : EIGEN_ALIGNED_ALLOCATOR() {} aligned_allocator_indirection(const EIGEN_ALIGNED_ALLOCATOR& ) {} template aligned_allocator_indirection(const aligned_allocator_indirection& ) {} template aligned_allocator_indirection(const EIGEN_ALIGNED_ALLOCATOR& ) {} ~aligned_allocator_indirection() {} }; #if EIGEN_COMP_MSVC // sometimes, MSVC detects, at compile time, that the argument x // in std::vector::resize(size_t s,T x) won't be aligned and generate an error // even if this function is never called. Whence this little wrapper. 
#define EIGEN_WORKAROUND_MSVC_STL_SUPPORT(T) \ typename Eigen::internal::conditional< \ Eigen::internal::is_arithmetic::value, \ T, \ Eigen::internal::workaround_msvc_stl_support \ >::type namespace internal { template struct workaround_msvc_stl_support : public T { inline workaround_msvc_stl_support() : T() {} inline workaround_msvc_stl_support(const T& other) : T(other) {} inline operator T& () { return *static_cast(this); } inline operator const T& () const { return *static_cast(this); } template inline T& operator=(const OtherT& other) { T::operator=(other); return *this; } inline workaround_msvc_stl_support& operator=(const workaround_msvc_stl_support& other) { T::operator=(other); return *this; } }; } #else #define EIGEN_WORKAROUND_MSVC_STL_SUPPORT(T) T #endif } #endif // EIGEN_STL_DETAILS_H piqp-octave/src/eigen/Eigen/src/StlSupport/StdVector.h0000644000175100001770000001233214107270226022576 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009 Gael Guennebaud // Copyright (C) 2009 Hauke Heibel // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_STDVECTOR_H #define EIGEN_STDVECTOR_H #include "details.h" /** * This section contains a convenience MACRO which allows an easy specialization of * std::vector such that for data types with alignment issues the correct allocator * is used automatically. */ #define EIGEN_DEFINE_STL_VECTOR_SPECIALIZATION(...) \ namespace std \ { \ template<> \ class vector<__VA_ARGS__, std::allocator<__VA_ARGS__> > \ : public vector<__VA_ARGS__, EIGEN_ALIGNED_ALLOCATOR<__VA_ARGS__> > \ { \ typedef vector<__VA_ARGS__, EIGEN_ALIGNED_ALLOCATOR<__VA_ARGS__> > vector_base; \ public: \ typedef __VA_ARGS__ value_type; \ typedef vector_base::allocator_type allocator_type; \ typedef vector_base::size_type size_type; \ typedef vector_base::iterator iterator; \ explicit vector(const allocator_type& a = allocator_type()) : vector_base(a) {} \ template \ vector(InputIterator first, InputIterator last, const allocator_type& a = allocator_type()) : vector_base(first, last, a) {} \ vector(const vector& c) : vector_base(c) {} \ explicit vector(size_type num, const value_type& val = value_type()) : vector_base(num, val) {} \ vector(iterator start_, iterator end_) : vector_base(start_, end_) {} \ vector& operator=(const vector& x) { \ vector_base::operator=(x); \ return *this; \ } \ }; \ } // Don't specialize if containers are implemented according to C++11 #if !EIGEN_HAS_CXX11_CONTAINERS namespace std { #define EIGEN_STD_VECTOR_SPECIALIZATION_BODY \ public: \ typedef T value_type; \ typedef typename vector_base::allocator_type allocator_type; \ typedef typename vector_base::size_type size_type; \ typedef typename vector_base::iterator iterator; \ typedef typename vector_base::const_iterator const_iterator; \ explicit vector(const allocator_type& a = allocator_type()) : vector_base(a) {} \ template \ vector(InputIterator first, InputIterator last, const allocator_type& a = allocator_type()) \ : vector_base(first, last, a) {} \ vector(const vector& c) : vector_base(c) {} \ explicit vector(size_type num, const value_type& val = value_type()) : vector_base(num, val) {} \ vector(iterator start_, iterator end_) : vector_base(start_, end_) {} \ vector& operator=(const vector& x) { \ vector_base::operator=(x); \ return *this; \ } template class 
vector > : public vector > { typedef vector > vector_base; EIGEN_STD_VECTOR_SPECIALIZATION_BODY void resize(size_type new_size) { resize(new_size, T()); } #if defined(_VECTOR_) // workaround MSVC std::vector implementation void resize(size_type new_size, const value_type& x) { if (vector_base::size() < new_size) vector_base::_Insert_n(vector_base::end(), new_size - vector_base::size(), x); else if (new_size < vector_base::size()) vector_base::erase(vector_base::begin() + new_size, vector_base::end()); } void push_back(const value_type& x) { vector_base::push_back(x); } using vector_base::insert; iterator insert(const_iterator position, const value_type& x) { return vector_base::insert(position,x); } void insert(const_iterator position, size_type new_size, const value_type& x) { vector_base::insert(position, new_size, x); } #elif defined(_GLIBCXX_VECTOR) && (!(EIGEN_GNUC_AT_LEAST(4,1))) /* Note that before gcc-4.1 we already have: std::vector::resize(size_type,const T&). * However, this specialization is still needed to make the above EIGEN_DEFINE_STL_VECTOR_SPECIALIZATION trick to work. */ void resize(size_type new_size, const value_type& x) { vector_base::resize(new_size,x); } #elif defined(_GLIBCXX_VECTOR) && EIGEN_GNUC_AT_LEAST(4,2) // workaround GCC std::vector implementation void resize(size_type new_size, const value_type& x) { if (new_size < vector_base::size()) vector_base::_M_erase_at_end(this->_M_impl._M_start + new_size); else vector_base::insert(vector_base::end(), new_size - vector_base::size(), x); } #else // either GCC 4.1 or non-GCC // default implementation which should always work. void resize(size_type new_size, const value_type& x) { if (new_size < vector_base::size()) vector_base::erase(vector_base::begin() + new_size, vector_base::end()); else if (new_size > vector_base::size()) vector_base::insert(vector_base::end(), new_size - vector_base::size(), x); } #endif }; } #endif // !EIGEN_HAS_CXX11_CONTAINERS #endif // EIGEN_STDVECTOR_H piqp-octave/src/eigen/Eigen/src/SparseCholesky/0000755000175100001770000000000014107270226021307 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/SparseCholesky/SimplicialCholesky.h0000644000175100001770000005723014107270226025257 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2012 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SIMPLICIAL_CHOLESKY_H #define EIGEN_SIMPLICIAL_CHOLESKY_H namespace Eigen { enum SimplicialCholeskyMode { SimplicialCholeskyLLT, SimplicialCholeskyLDLT }; namespace internal { template struct simplicial_cholesky_grab_input { typedef CholMatrixType const * ConstCholMatrixPtr; static void run(const InputMatrixType& input, ConstCholMatrixPtr &pmat, CholMatrixType &tmp) { tmp = input; pmat = &tmp; } }; template struct simplicial_cholesky_grab_input { typedef MatrixType const * ConstMatrixPtr; static void run(const MatrixType& input, ConstMatrixPtr &pmat, MatrixType &/*tmp*/) { pmat = &input; } }; } // end namespace internal /** \ingroup SparseCholesky_Module * \brief A base class for direct sparse Cholesky factorizations * * This is a base class for LL^T and LDL^T Cholesky factorizations of sparse matrices that are * selfadjoint and positive definite. These factorizations allow for solving A.X = B where * X and B can be either dense or sparse. 
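 *
 * A minimal usage sketch through one of the derived solvers (illustrative only; it assumes a
 * selfadjoint positive definite Eigen::SparseMatrix<double> A and an Eigen::VectorXd b of
 * matching size):
 * \code
 * Eigen::SimplicialLDLT<Eigen::SparseMatrix<double> > ldlt;
 * ldlt.compute(A);                        // symbolic + numeric factorization
 * if(ldlt.info() == Eigen::Success)
 *   Eigen::VectorXd x = ldlt.solve(b);    // solves A x = b
 * \endcode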
* * In order to reduce the fill-in, a symmetric permutation P is applied prior to the factorization * such that the factorized matrix is P A P^-1. * * \tparam Derived the type of the derived class, that is the actual factorization type. * */ template class SimplicialCholeskyBase : public SparseSolverBase { typedef SparseSolverBase Base; using Base::m_isInitialized; public: typedef typename internal::traits::MatrixType MatrixType; typedef typename internal::traits::OrderingType OrderingType; enum { UpLo = internal::traits::UpLo }; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef typename MatrixType::StorageIndex StorageIndex; typedef SparseMatrix CholMatrixType; typedef CholMatrixType const * ConstCholMatrixPtr; typedef Matrix VectorType; typedef Matrix VectorI; enum { ColsAtCompileTime = MatrixType::ColsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; public: using Base::derived; /** Default constructor */ SimplicialCholeskyBase() : m_info(Success), m_factorizationIsOk(false), m_analysisIsOk(false), m_shiftOffset(0), m_shiftScale(1) {} explicit SimplicialCholeskyBase(const MatrixType& matrix) : m_info(Success), m_factorizationIsOk(false), m_analysisIsOk(false), m_shiftOffset(0), m_shiftScale(1) { derived().compute(matrix); } ~SimplicialCholeskyBase() { } Derived& derived() { return *static_cast(this); } const Derived& derived() const { return *static_cast(this); } inline Index cols() const { return m_matrix.cols(); } inline Index rows() const { return m_matrix.rows(); } /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful, * \c NumericalIssue if the matrix.appears to be negative. */ ComputationInfo info() const { eigen_assert(m_isInitialized && "Decomposition is not initialized."); return m_info; } /** \returns the permutation P * \sa permutationPinv() */ const PermutationMatrix& permutationP() const { return m_P; } /** \returns the inverse P^-1 of the permutation P * \sa permutationP() */ const PermutationMatrix& permutationPinv() const { return m_Pinv; } /** Sets the shift parameters that will be used to adjust the diagonal coefficients during the numerical factorization. * * During the numerical factorization, the diagonal coefficients are transformed by the following linear model:\n * \c d_ii = \a offset + \a scale * \c d_ii * * The default is the identity transformation with \a offset=0, and \a scale=1. * * \returns a reference to \c *this. 
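 *
 * For instance, a small positive \a offset is sometimes used to regularize a nearly
 * semidefinite matrix before factorizing it (illustrative sketch; \c solver stands for any
 * derived solver such as SimplicialLDLT<SparseMatrix<double> >):
 * \code
 * solver.setShift(1e-10);   // d_ii = 1e-10 + 1 * d_ii
 * solver.compute(A);
 * \endcode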
*/ Derived& setShift(const RealScalar& offset, const RealScalar& scale = 1) { m_shiftOffset = offset; m_shiftScale = scale; return derived(); } #ifndef EIGEN_PARSED_BY_DOXYGEN /** \internal */ template void dumpMemory(Stream& s) { int total = 0; s << " L: " << ((total+=(m_matrix.cols()+1) * sizeof(int) + m_matrix.nonZeros()*(sizeof(int)+sizeof(Scalar))) >> 20) << "Mb" << "\n"; s << " diag: " << ((total+=m_diag.size() * sizeof(Scalar)) >> 20) << "Mb" << "\n"; s << " tree: " << ((total+=m_parent.size() * sizeof(int)) >> 20) << "Mb" << "\n"; s << " nonzeros: " << ((total+=m_nonZerosPerCol.size() * sizeof(int)) >> 20) << "Mb" << "\n"; s << " perm: " << ((total+=m_P.size() * sizeof(int)) >> 20) << "Mb" << "\n"; s << " perm^-1: " << ((total+=m_Pinv.size() * sizeof(int)) >> 20) << "Mb" << "\n"; s << " TOTAL: " << (total>> 20) << "Mb" << "\n"; } /** \internal */ template void _solve_impl(const MatrixBase &b, MatrixBase &dest) const { eigen_assert(m_factorizationIsOk && "The decomposition is not in a valid state for solving, you must first call either compute() or symbolic()/numeric()"); eigen_assert(m_matrix.rows()==b.rows()); if(m_info!=Success) return; if(m_P.size()>0) dest = m_P * b; else dest = b; if(m_matrix.nonZeros()>0) // otherwise L==I derived().matrixL().solveInPlace(dest); if(m_diag.size()>0) dest = m_diag.asDiagonal().inverse() * dest; if (m_matrix.nonZeros()>0) // otherwise U==I derived().matrixU().solveInPlace(dest); if(m_P.size()>0) dest = m_Pinv * dest; } template void _solve_impl(const SparseMatrixBase &b, SparseMatrixBase &dest) const { internal::solve_sparse_through_dense_panels(derived(), b, dest); } #endif // EIGEN_PARSED_BY_DOXYGEN protected: /** Computes the sparse Cholesky decomposition of \a matrix */ template void compute(const MatrixType& matrix) { eigen_assert(matrix.rows()==matrix.cols()); Index size = matrix.cols(); CholMatrixType tmp(size,size); ConstCholMatrixPtr pmat; ordering(matrix, pmat, tmp); analyzePattern_preordered(*pmat, DoLDLT); factorize_preordered(*pmat); } template void factorize(const MatrixType& a) { eigen_assert(a.rows()==a.cols()); Index size = a.cols(); CholMatrixType tmp(size,size); ConstCholMatrixPtr pmat; if(m_P.size() == 0 && (int(UpLo) & int(Upper)) == Upper) { // If there is no ordering, try to directly use the input matrix without any copy internal::simplicial_cholesky_grab_input::run(a, pmat, tmp); } else { tmp.template selfadjointView() = a.template selfadjointView().twistedBy(m_P); pmat = &tmp; } factorize_preordered(*pmat); } template void factorize_preordered(const CholMatrixType& a); void analyzePattern(const MatrixType& a, bool doLDLT) { eigen_assert(a.rows()==a.cols()); Index size = a.cols(); CholMatrixType tmp(size,size); ConstCholMatrixPtr pmat; ordering(a, pmat, tmp); analyzePattern_preordered(*pmat,doLDLT); } void analyzePattern_preordered(const CholMatrixType& a, bool doLDLT); void ordering(const MatrixType& a, ConstCholMatrixPtr &pmat, CholMatrixType& ap); /** keeps off-diagonal entries; drops diagonal entries */ struct keep_diag { inline bool operator() (const Index& row, const Index& col, const Scalar&) const { return row!=col; } }; mutable ComputationInfo m_info; bool m_factorizationIsOk; bool m_analysisIsOk; CholMatrixType m_matrix; VectorType m_diag; // the diagonal coefficients (LDLT mode) VectorI m_parent; // elimination tree VectorI m_nonZerosPerCol; PermutationMatrix m_P; // the permutation PermutationMatrix m_Pinv; // the inverse permutation RealScalar m_shiftOffset; RealScalar m_shiftScale; }; template > class 
SimplicialLLT; template > class SimplicialLDLT; template > class SimplicialCholesky; namespace internal { template struct traits > { typedef _MatrixType MatrixType; typedef _Ordering OrderingType; enum { UpLo = _UpLo }; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::StorageIndex StorageIndex; typedef SparseMatrix CholMatrixType; typedef TriangularView MatrixL; typedef TriangularView MatrixU; static inline MatrixL getL(const CholMatrixType& m) { return MatrixL(m); } static inline MatrixU getU(const CholMatrixType& m) { return MatrixU(m.adjoint()); } }; template struct traits > { typedef _MatrixType MatrixType; typedef _Ordering OrderingType; enum { UpLo = _UpLo }; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::StorageIndex StorageIndex; typedef SparseMatrix CholMatrixType; typedef TriangularView MatrixL; typedef TriangularView MatrixU; static inline MatrixL getL(const CholMatrixType& m) { return MatrixL(m); } static inline MatrixU getU(const CholMatrixType& m) { return MatrixU(m.adjoint()); } }; template struct traits > { typedef _MatrixType MatrixType; typedef _Ordering OrderingType; enum { UpLo = _UpLo }; }; } /** \ingroup SparseCholesky_Module * \class SimplicialLLT * \brief A direct sparse LLT Cholesky factorization * * This class provides an LL^T Cholesky factorization of sparse matrices that are * selfadjoint and positive definite. The factorization allows for solving A.X = B where * X and B can be either dense or sparse. * * In order to reduce the fill-in, a symmetric permutation P is applied prior to the factorization * such that the factorized matrix is P A P^-1. * * \tparam _MatrixType the type of the sparse matrix A, it must be a SparseMatrix<> * \tparam _UpLo the triangular part that will be used for the computations. It can be Lower * or Upper. Default is Lower. * \tparam _Ordering The ordering method to use, either AMDOrdering<> or NaturalOrdering<>. Default is AMDOrdering<> * * \implsparsesolverconcept * * \sa class SimplicialLDLT, class AMDOrdering, class NaturalOrdering */ template class SimplicialLLT : public SimplicialCholeskyBase > { public: typedef _MatrixType MatrixType; enum { UpLo = _UpLo }; typedef SimplicialCholeskyBase Base; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef typename MatrixType::StorageIndex StorageIndex; typedef SparseMatrix CholMatrixType; typedef Matrix VectorType; typedef internal::traits Traits; typedef typename Traits::MatrixL MatrixL; typedef typename Traits::MatrixU MatrixU; public: /** Default constructor */ SimplicialLLT() : Base() {} /** Constructs and performs the LLT factorization of \a matrix */ explicit SimplicialLLT(const MatrixType& matrix) : Base(matrix) {} /** \returns an expression of the factor L */ inline const MatrixL matrixL() const { eigen_assert(Base::m_factorizationIsOk && "Simplicial LLT not factorized"); return Traits::getL(Base::m_matrix); } /** \returns an expression of the factor U (= L^*) */ inline const MatrixU matrixU() const { eigen_assert(Base::m_factorizationIsOk && "Simplicial LLT not factorized"); return Traits::getU(Base::m_matrix); } /** Computes the sparse Cholesky decomposition of \a matrix */ SimplicialLLT& compute(const MatrixType& matrix) { Base::template compute(matrix); return *this; } /** Performs a symbolic decomposition on the sparsity of \a matrix. * * This function is particularly useful when solving for several problems having the same structure.
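 *
 * A typical reuse pattern (illustrative only; it assumes two matrices A1 and A2 that share
 * the same sparsity pattern):
 * \code
 * Eigen::SimplicialLLT<Eigen::SparseMatrix<double> > llt;
 * llt.analyzePattern(A1);   // symbolic factorization, done once
 * llt.factorize(A1);        // numeric factorization of A1
 * llt.factorize(A2);        // reuses the symbolic analysis for A2
 * \endcode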
* * \sa factorize() */ void analyzePattern(const MatrixType& a) { Base::analyzePattern(a, false); } /** Performs a numeric decomposition of \a matrix * * The given matrix must have the same sparsity as the matrix on which the symbolic decomposition has been performed. * * \sa analyzePattern() */ void factorize(const MatrixType& a) { Base::template factorize(a); } /** \returns the determinant of the underlying matrix from the current factorization */ Scalar determinant() const { Scalar detL = Base::m_matrix.diagonal().prod(); return numext::abs2(detL); } }; /** \ingroup SparseCholesky_Module * \class SimplicialLDLT * \brief A direct sparse LDLT Cholesky factorization without square root. * * This class provides an LDL^T Cholesky factorization without square root of sparse matrices that are * selfadjoint and positive definite. The factorization allows for solving A.X = B where * X and B can be either dense or sparse. * * In order to reduce the fill-in, a symmetric permutation P is applied prior to the factorization * such that the factorized matrix is P A P^-1. * * \tparam _MatrixType the type of the sparse matrix A, it must be a SparseMatrix<> * \tparam _UpLo the triangular part that will be used for the computations. It can be Lower * or Upper. Default is Lower. * \tparam _Ordering The ordering method to use, either AMDOrdering<> or NaturalOrdering<>. Default is AMDOrdering<> * * \implsparsesolverconcept * * \sa class SimplicialLLT, class AMDOrdering, class NaturalOrdering */ template class SimplicialLDLT : public SimplicialCholeskyBase > { public: typedef _MatrixType MatrixType; enum { UpLo = _UpLo }; typedef SimplicialCholeskyBase Base; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef typename MatrixType::StorageIndex StorageIndex; typedef SparseMatrix CholMatrixType; typedef Matrix VectorType; typedef internal::traits Traits; typedef typename Traits::MatrixL MatrixL; typedef typename Traits::MatrixU MatrixU; public: /** Default constructor */ SimplicialLDLT() : Base() {} /** Constructs and performs the LDLT factorization of \a matrix */ explicit SimplicialLDLT(const MatrixType& matrix) : Base(matrix) {} /** \returns a vector expression of the diagonal D */ inline const VectorType vectorD() const { eigen_assert(Base::m_factorizationIsOk && "Simplicial LDLT not factorized"); return Base::m_diag; } /** \returns an expression of the factor L */ inline const MatrixL matrixL() const { eigen_assert(Base::m_factorizationIsOk && "Simplicial LDLT not factorized"); return Traits::getL(Base::m_matrix); } /** \returns an expression of the factor U (= L^*) */ inline const MatrixU matrixU() const { eigen_assert(Base::m_factorizationIsOk && "Simplicial LDLT not factorized"); return Traits::getU(Base::m_matrix); } /** Computes the sparse Cholesky decomposition of \a matrix */ SimplicialLDLT& compute(const MatrixType& matrix) { Base::template compute(matrix); return *this; } /** Performs a symbolic decomposition on the sparsity of \a matrix. * * This function is particularly useful when solving for several problems having the same structure. * * \sa factorize() */ void analyzePattern(const MatrixType& a) { Base::analyzePattern(a, true); } /** Performs a numeric decomposition of \a matrix * * The given matrix must have the same sparsity as the matrix on which the symbolic decomposition has been performed.
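 *
 * Once the numeric factorization succeeded, the computed quantities can be queried
 * (illustrative sketch; \c ldlt denotes a factorized SimplicialLDLT<SparseMatrix<double> > instance):
 * \code
 * Eigen::VectorXd d = ldlt.vectorD();   // diagonal of the factor D
 * double det = ldlt.determinant();      // product of the coefficients of D
 * \endcode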
* * \sa analyzePattern() */ void factorize(const MatrixType& a) { Base::template factorize(a); } /** \returns the determinant of the underlying matrix from the current factorization */ Scalar determinant() const { return Base::m_diag.prod(); } }; /** \deprecated use SimplicialLDLT or class SimplicialLLT * \ingroup SparseCholesky_Module * \class SimplicialCholesky * * \sa class SimplicialLDLT, class SimplicialLLT */ template class SimplicialCholesky : public SimplicialCholeskyBase > { public: typedef _MatrixType MatrixType; enum { UpLo = _UpLo }; typedef SimplicialCholeskyBase Base; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef typename MatrixType::StorageIndex StorageIndex; typedef SparseMatrix CholMatrixType; typedef Matrix VectorType; typedef internal::traits Traits; typedef internal::traits > LDLTTraits; typedef internal::traits > LLTTraits; public: SimplicialCholesky() : Base(), m_LDLT(true) {} explicit SimplicialCholesky(const MatrixType& matrix) : Base(), m_LDLT(true) { compute(matrix); } SimplicialCholesky& setMode(SimplicialCholeskyMode mode) { switch(mode) { case SimplicialCholeskyLLT: m_LDLT = false; break; case SimplicialCholeskyLDLT: m_LDLT = true; break; default: break; } return *this; } inline const VectorType vectorD() const { eigen_assert(Base::m_factorizationIsOk && "Simplicial Cholesky not factorized"); return Base::m_diag; } inline const CholMatrixType rawMatrix() const { eigen_assert(Base::m_factorizationIsOk && "Simplicial Cholesky not factorized"); return Base::m_matrix; } /** Computes the sparse Cholesky decomposition of \a matrix */ SimplicialCholesky& compute(const MatrixType& matrix) { if(m_LDLT) Base::template compute(matrix); else Base::template compute(matrix); return *this; } /** Performs a symbolic decomposition on the sparsity of \a matrix. * * This function is particularly useful when solving for several problems having the same structure. * * \sa factorize() */ void analyzePattern(const MatrixType& a) { Base::analyzePattern(a, m_LDLT); } /** Performs a numeric decomposition of \a matrix * * The given matrix must have the same sparsity as the matrix on which the symbolic decomposition has been performed.
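 *
 * Whether an LL^T or an LDL^T decomposition is computed is controlled through setMode()
 * (illustrative sketch; the default mode is LDL^T):
 * \code
 * Eigen::SimplicialCholesky<Eigen::SparseMatrix<double> > chol;
 * chol.setMode(Eigen::SimplicialCholeskyLLT);   // switch from the default LDL^T to LL^T
 * chol.compute(A);
 * \endcode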
* * \sa analyzePattern() */ void factorize(const MatrixType& a) { if(m_LDLT) Base::template factorize(a); else Base::template factorize(a); } /** \internal */ template void _solve_impl(const MatrixBase &b, MatrixBase &dest) const { eigen_assert(Base::m_factorizationIsOk && "The decomposition is not in a valid state for solving, you must first call either compute() or symbolic()/numeric()"); eigen_assert(Base::m_matrix.rows()==b.rows()); if(Base::m_info!=Success) return; if(Base::m_P.size()>0) dest = Base::m_P * b; else dest = b; if(Base::m_matrix.nonZeros()>0) // otherwise L==I { if(m_LDLT) LDLTTraits::getL(Base::m_matrix).solveInPlace(dest); else LLTTraits::getL(Base::m_matrix).solveInPlace(dest); } if(Base::m_diag.size()>0) dest = Base::m_diag.real().asDiagonal().inverse() * dest; if (Base::m_matrix.nonZeros()>0) // otherwise U==I { if(m_LDLT) LDLTTraits::getU(Base::m_matrix).solveInPlace(dest); else LLTTraits::getU(Base::m_matrix).solveInPlace(dest); } if(Base::m_P.size()>0) dest = Base::m_Pinv * dest; } /** \internal */ template void _solve_impl(const SparseMatrixBase &b, SparseMatrixBase &dest) const { internal::solve_sparse_through_dense_panels(*this, b, dest); } Scalar determinant() const { if(m_LDLT) { return Base::m_diag.prod(); } else { Scalar detL = Diagonal(Base::m_matrix).prod(); return numext::abs2(detL); } } protected: bool m_LDLT; }; template void SimplicialCholeskyBase::ordering(const MatrixType& a, ConstCholMatrixPtr &pmat, CholMatrixType& ap) { eigen_assert(a.rows()==a.cols()); const Index size = a.rows(); pmat = &ap; // Note that ordering methods compute the inverse permutation if(!internal::is_same >::value) { { CholMatrixType C; C = a.template selfadjointView(); OrderingType ordering; ordering(C,m_Pinv); } if(m_Pinv.size()>0) m_P = m_Pinv.inverse(); else m_P.resize(0); ap.resize(size,size); ap.template selfadjointView() = a.template selfadjointView().twistedBy(m_P); } else { m_Pinv.resize(0); m_P.resize(0); if(int(UpLo)==int(Lower) || MatrixType::IsRowMajor) { // we have to transpose the lower part to the upper one ap.resize(size,size); ap.template selfadjointView() = a.template selfadjointView(); } else internal::simplicial_cholesky_grab_input::run(a, pmat, ap); } } } // end namespace Eigen #endif // EIGEN_SIMPLICIAL_CHOLESKY_H piqp-octave/src/eigen/Eigen/src/SparseCholesky/SimplicialCholesky_impl.h0000644000175100001770000001330614107270226026274 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2012 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* NOTE: these functions have been adapted from the LDL library: LDL Copyright (c) 2005 by Timothy A. Davis. All Rights Reserved. The author of LDL, Timothy A. Davis., has executed a license with Google LLC to permit distribution of this code and derivative works as part of Eigen under the Mozilla Public License v. 2.0, as stated at the top of this file.
*/ #ifndef EIGEN_SIMPLICIAL_CHOLESKY_IMPL_H #define EIGEN_SIMPLICIAL_CHOLESKY_IMPL_H namespace Eigen { template void SimplicialCholeskyBase::analyzePattern_preordered(const CholMatrixType& ap, bool doLDLT) { const StorageIndex size = StorageIndex(ap.rows()); m_matrix.resize(size, size); m_parent.resize(size); m_nonZerosPerCol.resize(size); ei_declare_aligned_stack_constructed_variable(StorageIndex, tags, size, 0); for(StorageIndex k = 0; k < size; ++k) { /* L(k,:) pattern: all nodes reachable in etree from nz in A(0:k-1,k) */ m_parent[k] = -1; /* parent of k is not yet known */ tags[k] = k; /* mark node k as visited */ m_nonZerosPerCol[k] = 0; /* count of nonzeros in column k of L */ for(typename CholMatrixType::InnerIterator it(ap,k); it; ++it) { StorageIndex i = it.index(); if(i < k) { /* follow path from i to root of etree, stop at flagged node */ for(; tags[i] != k; i = m_parent[i]) { /* find parent of i if not yet determined */ if (m_parent[i] == -1) m_parent[i] = k; m_nonZerosPerCol[i]++; /* L (k,i) is nonzero */ tags[i] = k; /* mark i as visited */ } } } } /* construct Lp index array from m_nonZerosPerCol column counts */ StorageIndex* Lp = m_matrix.outerIndexPtr(); Lp[0] = 0; for(StorageIndex k = 0; k < size; ++k) Lp[k+1] = Lp[k] + m_nonZerosPerCol[k] + (doLDLT ? 0 : 1); m_matrix.resizeNonZeros(Lp[size]); m_isInitialized = true; m_info = Success; m_analysisIsOk = true; m_factorizationIsOk = false; } template template void SimplicialCholeskyBase::factorize_preordered(const CholMatrixType& ap) { using std::sqrt; eigen_assert(m_analysisIsOk && "You must first call analyzePattern()"); eigen_assert(ap.rows()==ap.cols()); eigen_assert(m_parent.size()==ap.rows()); eigen_assert(m_nonZerosPerCol.size()==ap.rows()); const StorageIndex size = StorageIndex(ap.rows()); const StorageIndex* Lp = m_matrix.outerIndexPtr(); StorageIndex* Li = m_matrix.innerIndexPtr(); Scalar* Lx = m_matrix.valuePtr(); ei_declare_aligned_stack_constructed_variable(Scalar, y, size, 0); ei_declare_aligned_stack_constructed_variable(StorageIndex, pattern, size, 0); ei_declare_aligned_stack_constructed_variable(StorageIndex, tags, size, 0); bool ok = true; m_diag.resize(DoLDLT ? size : 0); for(StorageIndex k = 0; k < size; ++k) { // compute nonzero pattern of kth row of L, in topological order y[k] = Scalar(0); // Y(0:k) is now all zero StorageIndex top = size; // stack for pattern is empty tags[k] = k; // mark node k as visited m_nonZerosPerCol[k] = 0; // count of nonzeros in column k of L for(typename CholMatrixType::InnerIterator it(ap,k); it; ++it) { StorageIndex i = it.index(); if(i <= k) { y[i] += numext::conj(it.value()); /* scatter A(i,k) into Y (sum duplicates) */ Index len; for(len = 0; tags[i] != k; i = m_parent[i]) { pattern[len++] = i; /* L(k,i) is nonzero */ tags[i] = k; /* mark i as visited */ } while(len > 0) pattern[--top] = pattern[--len]; } } /* compute numerical values kth row of L (a sparse triangular solve) */ RealScalar d = numext::real(y[k]) * m_shiftScale + m_shiftOffset; // get D(k,k), apply the shift function, and clear Y(k) y[k] = Scalar(0); for(; top < size; ++top) { Index i = pattern[top]; /* pattern[top:n-1] is pattern of L(:,k) */ Scalar yi = y[i]; /* get and clear Y(i) */ y[i] = Scalar(0); /* the nonzero entry L(k,i) */ Scalar l_ki; if(DoLDLT) l_ki = yi / numext::real(m_diag[i]); else yi = l_ki = yi / Lx[Lp[i]]; Index p2 = Lp[i] + m_nonZerosPerCol[i]; Index p; for(p = Lp[i] + (DoLDLT ? 
0 : 1); p < p2; ++p) y[Li[p]] -= numext::conj(Lx[p]) * yi; d -= numext::real(l_ki * numext::conj(yi)); Li[p] = k; /* store L(k,i) in column form of L */ Lx[p] = l_ki; ++m_nonZerosPerCol[i]; /* increment count of nonzeros in col i */ } if(DoLDLT) { m_diag[k] = d; if(d == RealScalar(0)) { ok = false; /* failure, D(k,k) is zero */ break; } } else { Index p = Lp[k] + m_nonZerosPerCol[k]++; Li[p] = k ; /* store L(k,k) = sqrt (d) in column k */ if(d <= RealScalar(0)) { ok = false; /* failure, matrix is not positive definite */ break; } Lx[p] = sqrt(d) ; } } m_info = ok ? Success : NumericalIssue; m_factorizationIsOk = true; } } // end namespace Eigen #endif // EIGEN_SIMPLICIAL_CHOLESKY_IMPL_H piqp-octave/src/eigen/Eigen/src/misc/0000755000175100001770000000000014107270226017303 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/misc/Image.h0000644000175100001770000000554114107270226020503 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_MISC_IMAGE_H #define EIGEN_MISC_IMAGE_H namespace Eigen { namespace internal { /** \class image_retval_base * */ template struct traits > { typedef typename DecompositionType::MatrixType MatrixType; typedef Matrix< typename MatrixType::Scalar, MatrixType::RowsAtCompileTime, // the image is a subspace of the destination space, whose // dimension is the number of rows of the original matrix Dynamic, // we don't know at compile time the dimension of the image (the rank) MatrixType::Options, MatrixType::MaxRowsAtCompileTime, // the image matrix will consist of columns from the original matrix, MatrixType::MaxColsAtCompileTime // so it has the same number of rows and at most as many columns. > ReturnType; }; template struct image_retval_base : public ReturnByValue > { typedef _DecompositionType DecompositionType; typedef typename DecompositionType::MatrixType MatrixType; typedef ReturnByValue Base; image_retval_base(const DecompositionType& dec, const MatrixType& originalMatrix) : m_dec(dec), m_rank(dec.rank()), m_cols(m_rank == 0 ? 
1 : m_rank), m_originalMatrix(originalMatrix) {} inline Index rows() const { return m_dec.rows(); } inline Index cols() const { return m_cols; } inline Index rank() const { return m_rank; } inline const DecompositionType& dec() const { return m_dec; } inline const MatrixType& originalMatrix() const { return m_originalMatrix; } template inline void evalTo(Dest& dst) const { static_cast*>(this)->evalTo(dst); } protected: const DecompositionType& m_dec; Index m_rank, m_cols; const MatrixType& m_originalMatrix; }; } // end namespace internal #define EIGEN_MAKE_IMAGE_HELPERS(DecompositionType) \ typedef typename DecompositionType::MatrixType MatrixType; \ typedef typename MatrixType::Scalar Scalar; \ typedef typename MatrixType::RealScalar RealScalar; \ typedef Eigen::internal::image_retval_base Base; \ using Base::dec; \ using Base::originalMatrix; \ using Base::rank; \ using Base::rows; \ using Base::cols; \ image_retval(const DecompositionType& dec, const MatrixType& originalMatrix) \ : Base(dec, originalMatrix) {} } // end namespace Eigen #endif // EIGEN_MISC_IMAGE_H piqp-octave/src/eigen/Eigen/src/misc/Kernel.h0000644000175100001770000000526614107270226020705 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_MISC_KERNEL_H #define EIGEN_MISC_KERNEL_H namespace Eigen { namespace internal { /** \class kernel_retval_base * */ template struct traits > { typedef typename DecompositionType::MatrixType MatrixType; typedef Matrix< typename MatrixType::Scalar, MatrixType::ColsAtCompileTime, // the number of rows in the "kernel matrix" // is the number of cols of the original matrix // so that the product "matrix * kernel = zero" makes sense Dynamic, // we don't know at compile-time the dimension of the kernel MatrixType::Options, MatrixType::MaxColsAtCompileTime, // see explanation for 2nd template parameter MatrixType::MaxColsAtCompileTime // the kernel is a subspace of the domain space, // whose dimension is the number of columns of the original matrix > ReturnType; }; template struct kernel_retval_base : public ReturnByValue > { typedef _DecompositionType DecompositionType; typedef ReturnByValue Base; explicit kernel_retval_base(const DecompositionType& dec) : m_dec(dec), m_rank(dec.rank()), m_cols(m_rank==dec.cols() ? 
1 : dec.cols() - m_rank) {} inline Index rows() const { return m_dec.cols(); } inline Index cols() const { return m_cols; } inline Index rank() const { return m_rank; } inline const DecompositionType& dec() const { return m_dec; } template inline void evalTo(Dest& dst) const { static_cast*>(this)->evalTo(dst); } protected: const DecompositionType& m_dec; Index m_rank, m_cols; }; } // end namespace internal #define EIGEN_MAKE_KERNEL_HELPERS(DecompositionType) \ typedef typename DecompositionType::MatrixType MatrixType; \ typedef typename MatrixType::Scalar Scalar; \ typedef typename MatrixType::RealScalar RealScalar; \ typedef Eigen::internal::kernel_retval_base Base; \ using Base::dec; \ using Base::rank; \ using Base::rows; \ using Base::cols; \ kernel_retval(const DecompositionType& dec) : Base(dec) {} } // end namespace Eigen #endif // EIGEN_MISC_KERNEL_H piqp-octave/src/eigen/Eigen/src/misc/RealSvd2x2.h0000644000175100001770000000332414107270226021352 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009-2010 Benoit Jacob // Copyright (C) 2013-2016 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_REALSVD2X2_H #define EIGEN_REALSVD2X2_H namespace Eigen { namespace internal { template void real_2x2_jacobi_svd(const MatrixType& matrix, Index p, Index q, JacobiRotation *j_left, JacobiRotation *j_right) { using std::sqrt; using std::abs; Matrix m; m << numext::real(matrix.coeff(p,p)), numext::real(matrix.coeff(p,q)), numext::real(matrix.coeff(q,p)), numext::real(matrix.coeff(q,q)); JacobiRotation rot1; RealScalar t = m.coeff(0,0) + m.coeff(1,1); RealScalar d = m.coeff(1,0) - m.coeff(0,1); if(abs(d) < (std::numeric_limits::min)()) { rot1.s() = RealScalar(0); rot1.c() = RealScalar(1); } else { // If d!=0, then t/d cannot overflow because the magnitude of the // entries forming d are not too small compared to the ones forming t. 
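    // With u = t/d, the rotation below has c = u/sqrt(1+u^2) and s = 1/sqrt(1+u^2), so c^2 + s^2 = 1.
    // Applying it on the left makes m symmetric, so that makeJacobi() can diagonalize the 2x2 block;
    // the left rotation of the SVD is then recovered as rot1 * j_right^T, matching the code below.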
RealScalar u = t / d; RealScalar tmp = sqrt(RealScalar(1) + numext::abs2(u)); rot1.s() = RealScalar(1) / tmp; rot1.c() = u / tmp; } m.applyOnTheLeft(0,1,rot1); j_right->makeJacobi(m,0,1); *j_left = rot1 * j_right->transpose(); } } // end namespace internal } // end namespace Eigen #endif // EIGEN_REALSVD2X2_H piqp-octave/src/eigen/Eigen/src/misc/blas.h0000644000175100001770000007354014107270226020406 0ustar runnerdocker#ifndef BLAS_H #define BLAS_H #ifdef __cplusplus extern "C" { #endif #define BLASFUNC(FUNC) FUNC##_ #ifdef __WIN64__ typedef long long BLASLONG; typedef unsigned long long BLASULONG; #else typedef long BLASLONG; typedef unsigned long BLASULONG; #endif int BLASFUNC(xerbla)(const char *, int *info, int); float BLASFUNC(sdot) (int *, float *, int *, float *, int *); float BLASFUNC(sdsdot)(int *, float *, float *, int *, float *, int *); double BLASFUNC(dsdot) (int *, float *, int *, float *, int *); double BLASFUNC(ddot) (int *, double *, int *, double *, int *); double BLASFUNC(qdot) (int *, double *, int *, double *, int *); int BLASFUNC(cdotuw) (int *, float *, int *, float *, int *, float*); int BLASFUNC(cdotcw) (int *, float *, int *, float *, int *, float*); int BLASFUNC(zdotuw) (int *, double *, int *, double *, int *, double*); int BLASFUNC(zdotcw) (int *, double *, int *, double *, int *, double*); int BLASFUNC(saxpy) (const int *, const float *, const float *, const int *, float *, const int *); int BLASFUNC(daxpy) (const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(qaxpy) (const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(caxpy) (const int *, const float *, const float *, const int *, float *, const int *); int BLASFUNC(zaxpy) (const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(xaxpy) (const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(caxpyc)(const int *, const float *, const float *, const int *, float *, const int *); int BLASFUNC(zaxpyc)(const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(xaxpyc)(const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(scopy) (int *, float *, int *, float *, int *); int BLASFUNC(dcopy) (int *, double *, int *, double *, int *); int BLASFUNC(qcopy) (int *, double *, int *, double *, int *); int BLASFUNC(ccopy) (int *, float *, int *, float *, int *); int BLASFUNC(zcopy) (int *, double *, int *, double *, int *); int BLASFUNC(xcopy) (int *, double *, int *, double *, int *); int BLASFUNC(sswap) (int *, float *, int *, float *, int *); int BLASFUNC(dswap) (int *, double *, int *, double *, int *); int BLASFUNC(qswap) (int *, double *, int *, double *, int *); int BLASFUNC(cswap) (int *, float *, int *, float *, int *); int BLASFUNC(zswap) (int *, double *, int *, double *, int *); int BLASFUNC(xswap) (int *, double *, int *, double *, int *); float BLASFUNC(sasum) (int *, float *, int *); float BLASFUNC(scasum)(int *, float *, int *); double BLASFUNC(dasum) (int *, double *, int *); double BLASFUNC(qasum) (int *, double *, int *); double BLASFUNC(dzasum)(int *, double *, int *); double BLASFUNC(qxasum)(int *, double *, int *); int BLASFUNC(isamax)(int *, float *, int *); int BLASFUNC(idamax)(int *, double *, int *); int BLASFUNC(iqamax)(int *, double *, int *); int BLASFUNC(icamax)(int *, float *, int *); int BLASFUNC(izamax)(int *, double *, int *); int BLASFUNC(ixamax)(int *, 
double *, int *); int BLASFUNC(ismax) (int *, float *, int *); int BLASFUNC(idmax) (int *, double *, int *); int BLASFUNC(iqmax) (int *, double *, int *); int BLASFUNC(icmax) (int *, float *, int *); int BLASFUNC(izmax) (int *, double *, int *); int BLASFUNC(ixmax) (int *, double *, int *); int BLASFUNC(isamin)(int *, float *, int *); int BLASFUNC(idamin)(int *, double *, int *); int BLASFUNC(iqamin)(int *, double *, int *); int BLASFUNC(icamin)(int *, float *, int *); int BLASFUNC(izamin)(int *, double *, int *); int BLASFUNC(ixamin)(int *, double *, int *); int BLASFUNC(ismin)(int *, float *, int *); int BLASFUNC(idmin)(int *, double *, int *); int BLASFUNC(iqmin)(int *, double *, int *); int BLASFUNC(icmin)(int *, float *, int *); int BLASFUNC(izmin)(int *, double *, int *); int BLASFUNC(ixmin)(int *, double *, int *); float BLASFUNC(samax) (int *, float *, int *); double BLASFUNC(damax) (int *, double *, int *); double BLASFUNC(qamax) (int *, double *, int *); float BLASFUNC(scamax)(int *, float *, int *); double BLASFUNC(dzamax)(int *, double *, int *); double BLASFUNC(qxamax)(int *, double *, int *); float BLASFUNC(samin) (int *, float *, int *); double BLASFUNC(damin) (int *, double *, int *); double BLASFUNC(qamin) (int *, double *, int *); float BLASFUNC(scamin)(int *, float *, int *); double BLASFUNC(dzamin)(int *, double *, int *); double BLASFUNC(qxamin)(int *, double *, int *); float BLASFUNC(smax) (int *, float *, int *); double BLASFUNC(dmax) (int *, double *, int *); double BLASFUNC(qmax) (int *, double *, int *); float BLASFUNC(scmax) (int *, float *, int *); double BLASFUNC(dzmax) (int *, double *, int *); double BLASFUNC(qxmax) (int *, double *, int *); float BLASFUNC(smin) (int *, float *, int *); double BLASFUNC(dmin) (int *, double *, int *); double BLASFUNC(qmin) (int *, double *, int *); float BLASFUNC(scmin) (int *, float *, int *); double BLASFUNC(dzmin) (int *, double *, int *); double BLASFUNC(qxmin) (int *, double *, int *); int BLASFUNC(sscal) (int *, float *, float *, int *); int BLASFUNC(dscal) (int *, double *, double *, int *); int BLASFUNC(qscal) (int *, double *, double *, int *); int BLASFUNC(cscal) (int *, float *, float *, int *); int BLASFUNC(zscal) (int *, double *, double *, int *); int BLASFUNC(xscal) (int *, double *, double *, int *); int BLASFUNC(csscal)(int *, float *, float *, int *); int BLASFUNC(zdscal)(int *, double *, double *, int *); int BLASFUNC(xqscal)(int *, double *, double *, int *); float BLASFUNC(snrm2) (int *, float *, int *); float BLASFUNC(scnrm2)(int *, float *, int *); double BLASFUNC(dnrm2) (int *, double *, int *); double BLASFUNC(qnrm2) (int *, double *, int *); double BLASFUNC(dznrm2)(int *, double *, int *); double BLASFUNC(qxnrm2)(int *, double *, int *); int BLASFUNC(srot) (int *, float *, int *, float *, int *, float *, float *); int BLASFUNC(drot) (int *, double *, int *, double *, int *, double *, double *); int BLASFUNC(qrot) (int *, double *, int *, double *, int *, double *, double *); int BLASFUNC(csrot) (int *, float *, int *, float *, int *, float *, float *); int BLASFUNC(zdrot) (int *, double *, int *, double *, int *, double *, double *); int BLASFUNC(xqrot) (int *, double *, int *, double *, int *, double *, double *); int BLASFUNC(srotg) (float *, float *, float *, float *); int BLASFUNC(drotg) (double *, double *, double *, double *); int BLASFUNC(qrotg) (double *, double *, double *, double *); int BLASFUNC(crotg) (float *, float *, float *, float *); int BLASFUNC(zrotg) (double *, double *, double *, 
double *); int BLASFUNC(xrotg) (double *, double *, double *, double *); int BLASFUNC(srotmg)(float *, float *, float *, float *, float *); int BLASFUNC(drotmg)(double *, double *, double *, double *, double *); int BLASFUNC(srotm) (int *, float *, int *, float *, int *, float *); int BLASFUNC(drotm) (int *, double *, int *, double *, int *, double *); int BLASFUNC(qrotm) (int *, double *, int *, double *, int *, double *); /* Level 2 routines */ int BLASFUNC(sger)(int *, int *, float *, float *, int *, float *, int *, float *, int *); int BLASFUNC(dger)(int *, int *, double *, double *, int *, double *, int *, double *, int *); int BLASFUNC(qger)(int *, int *, double *, double *, int *, double *, int *, double *, int *); int BLASFUNC(cgeru)(int *, int *, float *, float *, int *, float *, int *, float *, int *); int BLASFUNC(cgerc)(int *, int *, float *, float *, int *, float *, int *, float *, int *); int BLASFUNC(zgeru)(int *, int *, double *, double *, int *, double *, int *, double *, int *); int BLASFUNC(zgerc)(int *, int *, double *, double *, int *, double *, int *, double *, int *); int BLASFUNC(xgeru)(int *, int *, double *, double *, int *, double *, int *, double *, int *); int BLASFUNC(xgerc)(int *, int *, double *, double *, int *, double *, int *, double *, int *); int BLASFUNC(sgemv)(const char *, const int *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(dgemv)(const char *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(qgemv)(const char *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(cgemv)(const char *, const int *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(zgemv)(const char *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(xgemv)(const char *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(strsv) (const char *, const char *, const char *, const int *, const float *, const int *, float *, const int *); int BLASFUNC(dtrsv) (const char *, const char *, const char *, const int *, const double *, const int *, double *, const int *); int BLASFUNC(qtrsv) (const char *, const char *, const char *, const int *, const double *, const int *, double *, const int *); int BLASFUNC(ctrsv) (const char *, const char *, const char *, const int *, const float *, const int *, float *, const int *); int BLASFUNC(ztrsv) (const char *, const char *, const char *, const int *, const double *, const int *, double *, const int *); int BLASFUNC(xtrsv) (const char *, const char *, const char *, const int *, const double *, const int *, double *, const int *); int BLASFUNC(stpsv) (char *, char *, char *, int *, float *, float *, int *); int BLASFUNC(dtpsv) (char *, char *, char *, int *, double *, double *, int *); int BLASFUNC(qtpsv) (char *, char *, char *, int *, double *, double *, int *); int BLASFUNC(ctpsv) (char *, char *, char *, int *, float *, float *, int *); int BLASFUNC(ztpsv) (char *, char *, char *, int *, double *, double *, int *); int BLASFUNC(xtpsv) (char *, char *, char *, int *, 
double *, double *, int *); int BLASFUNC(strmv) (const char *, const char *, const char *, const int *, const float *, const int *, float *, const int *); int BLASFUNC(dtrmv) (const char *, const char *, const char *, const int *, const double *, const int *, double *, const int *); int BLASFUNC(qtrmv) (const char *, const char *, const char *, const int *, const double *, const int *, double *, const int *); int BLASFUNC(ctrmv) (const char *, const char *, const char *, const int *, const float *, const int *, float *, const int *); int BLASFUNC(ztrmv) (const char *, const char *, const char *, const int *, const double *, const int *, double *, const int *); int BLASFUNC(xtrmv) (const char *, const char *, const char *, const int *, const double *, const int *, double *, const int *); int BLASFUNC(stpmv) (char *, char *, char *, int *, float *, float *, int *); int BLASFUNC(dtpmv) (char *, char *, char *, int *, double *, double *, int *); int BLASFUNC(qtpmv) (char *, char *, char *, int *, double *, double *, int *); int BLASFUNC(ctpmv) (char *, char *, char *, int *, float *, float *, int *); int BLASFUNC(ztpmv) (char *, char *, char *, int *, double *, double *, int *); int BLASFUNC(xtpmv) (char *, char *, char *, int *, double *, double *, int *); int BLASFUNC(stbmv) (char *, char *, char *, int *, int *, float *, int *, float *, int *); int BLASFUNC(dtbmv) (char *, char *, char *, int *, int *, double *, int *, double *, int *); int BLASFUNC(qtbmv) (char *, char *, char *, int *, int *, double *, int *, double *, int *); int BLASFUNC(ctbmv) (char *, char *, char *, int *, int *, float *, int *, float *, int *); int BLASFUNC(ztbmv) (char *, char *, char *, int *, int *, double *, int *, double *, int *); int BLASFUNC(xtbmv) (char *, char *, char *, int *, int *, double *, int *, double *, int *); int BLASFUNC(stbsv) (char *, char *, char *, int *, int *, float *, int *, float *, int *); int BLASFUNC(dtbsv) (char *, char *, char *, int *, int *, double *, int *, double *, int *); int BLASFUNC(qtbsv) (char *, char *, char *, int *, int *, double *, int *, double *, int *); int BLASFUNC(ctbsv) (char *, char *, char *, int *, int *, float *, int *, float *, int *); int BLASFUNC(ztbsv) (char *, char *, char *, int *, int *, double *, int *, double *, int *); int BLASFUNC(xtbsv) (char *, char *, char *, int *, int *, double *, int *, double *, int *); int BLASFUNC(ssymv) (const char *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(dsymv) (const char *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(qsymv) (const char *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(sspmv) (char *, int *, float *, float *, float *, int *, float *, float *, int *); int BLASFUNC(dspmv) (char *, int *, double *, double *, double *, int *, double *, double *, int *); int BLASFUNC(qspmv) (char *, int *, double *, double *, double *, int *, double *, double *, int *); int BLASFUNC(ssyr) (const char *, const int *, const float *, const float *, const int *, float *, const int *); int BLASFUNC(dsyr) (const char *, const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(qsyr) (const char *, const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(ssyr2) (const char *, 
const int *, const float *, const float *, const int *, const float *, const int *, float *, const int *); int BLASFUNC(dsyr2) (const char *, const int *, const double *, const double *, const int *, const double *, const int *, double *, const int *); int BLASFUNC(qsyr2) (const char *, const int *, const double *, const double *, const int *, const double *, const int *, double *, const int *); int BLASFUNC(csyr2) (const char *, const int *, const float *, const float *, const int *, const float *, const int *, float *, const int *); int BLASFUNC(zsyr2) (const char *, const int *, const double *, const double *, const int *, const double *, const int *, double *, const int *); int BLASFUNC(xsyr2) (const char *, const int *, const double *, const double *, const int *, const double *, const int *, double *, const int *); int BLASFUNC(sspr) (char *, int *, float *, float *, int *, float *); int BLASFUNC(dspr) (char *, int *, double *, double *, int *, double *); int BLASFUNC(qspr) (char *, int *, double *, double *, int *, double *); int BLASFUNC(sspr2) (char *, int *, float *, float *, int *, float *, int *, float *); int BLASFUNC(dspr2) (char *, int *, double *, double *, int *, double *, int *, double *); int BLASFUNC(qspr2) (char *, int *, double *, double *, int *, double *, int *, double *); int BLASFUNC(cspr2) (char *, int *, float *, float *, int *, float *, int *, float *); int BLASFUNC(zspr2) (char *, int *, double *, double *, int *, double *, int *, double *); int BLASFUNC(xspr2) (char *, int *, double *, double *, int *, double *, int *, double *); int BLASFUNC(cher) (char *, int *, float *, float *, int *, float *, int *); int BLASFUNC(zher) (char *, int *, double *, double *, int *, double *, int *); int BLASFUNC(xher) (char *, int *, double *, double *, int *, double *, int *); int BLASFUNC(chpr) (char *, int *, float *, float *, int *, float *); int BLASFUNC(zhpr) (char *, int *, double *, double *, int *, double *); int BLASFUNC(xhpr) (char *, int *, double *, double *, int *, double *); int BLASFUNC(cher2) (char *, int *, float *, float *, int *, float *, int *, float *, int *); int BLASFUNC(zher2) (char *, int *, double *, double *, int *, double *, int *, double *, int *); int BLASFUNC(xher2) (char *, int *, double *, double *, int *, double *, int *, double *, int *); int BLASFUNC(chpr2) (char *, int *, float *, float *, int *, float *, int *, float *); int BLASFUNC(zhpr2) (char *, int *, double *, double *, int *, double *, int *, double *); int BLASFUNC(xhpr2) (char *, int *, double *, double *, int *, double *, int *, double *); int BLASFUNC(chemv) (const char *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(zhemv) (const char *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(xhemv) (const char *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(chpmv) (char *, int *, float *, float *, float *, int *, float *, float *, int *); int BLASFUNC(zhpmv) (char *, int *, double *, double *, double *, int *, double *, double *, int *); int BLASFUNC(xhpmv) (char *, int *, double *, double *, double *, int *, double *, double *, int *); int BLASFUNC(snorm)(char *, int *, int *, float *, int *); int BLASFUNC(dnorm)(char *, int *, int *, double *, int *); int BLASFUNC(cnorm)(char *, int *, int *, float *, int 
*); int BLASFUNC(znorm)(char *, int *, int *, double *, int *); int BLASFUNC(sgbmv)(char *, int *, int *, int *, int *, float *, float *, int *, float *, int *, float *, float *, int *); int BLASFUNC(dgbmv)(char *, int *, int *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(qgbmv)(char *, int *, int *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(cgbmv)(char *, int *, int *, int *, int *, float *, float *, int *, float *, int *, float *, float *, int *); int BLASFUNC(zgbmv)(char *, int *, int *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(xgbmv)(char *, int *, int *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(ssbmv)(char *, int *, int *, float *, float *, int *, float *, int *, float *, float *, int *); int BLASFUNC(dsbmv)(char *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(qsbmv)(char *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(csbmv)(char *, int *, int *, float *, float *, int *, float *, int *, float *, float *, int *); int BLASFUNC(zsbmv)(char *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(xsbmv)(char *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(chbmv)(char *, int *, int *, float *, float *, int *, float *, int *, float *, float *, int *); int BLASFUNC(zhbmv)(char *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(xhbmv)(char *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); /* Level 3 routines */ int BLASFUNC(sgemm)(const char *, const char *, const int *, const int *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(dgemm)(const char *, const char *, const int *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(qgemm)(const char *, const char *, const int *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(cgemm)(const char *, const char *, const int *, const int *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(zgemm)(const char *, const char *, const int *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(xgemm)(const char *, const char *, const int *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(cgemm3m)(char *, char *, int *, int *, int *, float *, float *, int *, float *, int *, float *, float *, int *); int BLASFUNC(zgemm3m)(char *, char *, int *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(xgemm3m)(char *, char *, int *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(sge2mm)(char *, char *, char *, int *, int *, float *, float *, int *, float *, int *, float *, float 
*, int *); int BLASFUNC(dge2mm)(char *, char *, char *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(cge2mm)(char *, char *, char *, int *, int *, float *, float *, int *, float *, int *, float *, float *, int *); int BLASFUNC(zge2mm)(char *, char *, char *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(strsm)(const char *, const char *, const char *, const char *, const int *, const int *, const float *, const float *, const int *, float *, const int *); int BLASFUNC(dtrsm)(const char *, const char *, const char *, const char *, const int *, const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(qtrsm)(const char *, const char *, const char *, const char *, const int *, const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(ctrsm)(const char *, const char *, const char *, const char *, const int *, const int *, const float *, const float *, const int *, float *, const int *); int BLASFUNC(ztrsm)(const char *, const char *, const char *, const char *, const int *, const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(xtrsm)(const char *, const char *, const char *, const char *, const int *, const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(strmm)(const char *, const char *, const char *, const char *, const int *, const int *, const float *, const float *, const int *, float *, const int *); int BLASFUNC(dtrmm)(const char *, const char *, const char *, const char *, const int *, const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(qtrmm)(const char *, const char *, const char *, const char *, const int *, const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(ctrmm)(const char *, const char *, const char *, const char *, const int *, const int *, const float *, const float *, const int *, float *, const int *); int BLASFUNC(ztrmm)(const char *, const char *, const char *, const char *, const int *, const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(xtrmm)(const char *, const char *, const char *, const char *, const int *, const int *, const double *, const double *, const int *, double *, const int *); int BLASFUNC(ssymm)(const char *, const char *, const int *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(dsymm)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(qsymm)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(csymm)(const char *, const char *, const int *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(zsymm)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(xsymm)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int 
*); int BLASFUNC(csymm3m)(char *, char *, int *, int *, float *, float *, int *, float *, int *, float *, float *, int *); int BLASFUNC(zsymm3m)(char *, char *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(xsymm3m)(char *, char *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(ssyrk)(const char *, const char *, const int *, const int *, const float *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(dsyrk)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(qsyrk)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(csyrk)(const char *, const char *, const int *, const int *, const float *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(zsyrk)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(xsyrk)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(ssyr2k)(const char *, const char *, const int *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(dsyr2k)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double*, const int *, const double *, double *, const int *); int BLASFUNC(qsyr2k)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double*, const int *, const double *, double *, const int *); int BLASFUNC(csyr2k)(const char *, const char *, const int *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(zsyr2k)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double*, const int *, const double *, double *, const int *); int BLASFUNC(xsyr2k)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double*, const int *, const double *, double *, const int *); int BLASFUNC(chemm)(const char *, const char *, const int *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(zhemm)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(xhemm)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(chemm3m)(char *, char *, int *, int *, float *, float *, int *, float *, int *, float *, float *, int *); int BLASFUNC(zhemm3m)(char *, char *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(xhemm3m)(char *, char *, int *, int *, double *, double *, int *, double *, int *, double *, double *, int *); int BLASFUNC(cherk)(const char *, const char *, const int *, const int *, const float *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(zherk)(const char *, 
const char *, const int *, const int *, const double *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(xherk)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(cher2k)(const char *, const char *, const int *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(zher2k)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(xher2k)(const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(cher2m)(const char *, const char *, const char *, const int *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(zher2m)(const char *, const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double*, const int *, const double *, double *, const int *); int BLASFUNC(xher2m)(const char *, const char *, const char *, const int *, const int *, const double *, const double *, const int *, const double*, const int *, const double *, double *, const int *); #ifdef __cplusplus } #endif #endif piqp-octave/src/eigen/Eigen/src/misc/lapack.h0000644000175100001770000001723214107270226020714 0ustar runnerdocker#ifndef LAPACK_H #define LAPACK_H #include "blas.h" #ifdef __cplusplus extern "C" { #endif int BLASFUNC(csymv) (const char *, const int *, const float *, const float *, const int *, const float *, const int *, const float *, float *, const int *); int BLASFUNC(zsymv) (const char *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(xsymv) (const char *, const int *, const double *, const double *, const int *, const double *, const int *, const double *, double *, const int *); int BLASFUNC(cspmv) (char *, int *, float *, float *, float *, int *, float *, float *, int *); int BLASFUNC(zspmv) (char *, int *, double *, double *, double *, int *, double *, double *, int *); int BLASFUNC(xspmv) (char *, int *, double *, double *, double *, int *, double *, double *, int *); int BLASFUNC(csyr) (char *, int *, float *, float *, int *, float *, int *); int BLASFUNC(zsyr) (char *, int *, double *, double *, int *, double *, int *); int BLASFUNC(xsyr) (char *, int *, double *, double *, int *, double *, int *); int BLASFUNC(cspr) (char *, int *, float *, float *, int *, float *); int BLASFUNC(zspr) (char *, int *, double *, double *, int *, double *); int BLASFUNC(xspr) (char *, int *, double *, double *, int *, double *); int BLASFUNC(sgemt)(char *, int *, int *, float *, float *, int *, float *, int *); int BLASFUNC(dgemt)(char *, int *, int *, double *, double *, int *, double *, int *); int BLASFUNC(cgemt)(char *, int *, int *, float *, float *, int *, float *, int *); int BLASFUNC(zgemt)(char *, int *, int *, double *, double *, int *, double *, int *); int BLASFUNC(sgema)(char *, char *, int *, int *, float *, float *, int *, float *, float *, int *, float *, int *); int BLASFUNC(dgema)(char *, char *, int *, int *, double *, double *, int *, double*, double *, int *, double*, int *); int BLASFUNC(cgema)(char *, char *, int *, int *, float 
*, float *, int *, float *, float *, int *, float *, int *); int BLASFUNC(zgema)(char *, char *, int *, int *, double *, double *, int *, double*, double *, int *, double*, int *); int BLASFUNC(sgems)(char *, char *, int *, int *, float *, float *, int *, float *, float *, int *, float *, int *); int BLASFUNC(dgems)(char *, char *, int *, int *, double *, double *, int *, double*, double *, int *, double*, int *); int BLASFUNC(cgems)(char *, char *, int *, int *, float *, float *, int *, float *, float *, int *, float *, int *); int BLASFUNC(zgems)(char *, char *, int *, int *, double *, double *, int *, double*, double *, int *, double*, int *); int BLASFUNC(sgetf2)(int *, int *, float *, int *, int *, int *); int BLASFUNC(dgetf2)(int *, int *, double *, int *, int *, int *); int BLASFUNC(qgetf2)(int *, int *, double *, int *, int *, int *); int BLASFUNC(cgetf2)(int *, int *, float *, int *, int *, int *); int BLASFUNC(zgetf2)(int *, int *, double *, int *, int *, int *); int BLASFUNC(xgetf2)(int *, int *, double *, int *, int *, int *); int BLASFUNC(sgetrf)(int *, int *, float *, int *, int *, int *); int BLASFUNC(dgetrf)(int *, int *, double *, int *, int *, int *); int BLASFUNC(qgetrf)(int *, int *, double *, int *, int *, int *); int BLASFUNC(cgetrf)(int *, int *, float *, int *, int *, int *); int BLASFUNC(zgetrf)(int *, int *, double *, int *, int *, int *); int BLASFUNC(xgetrf)(int *, int *, double *, int *, int *, int *); int BLASFUNC(slaswp)(int *, float *, int *, int *, int *, int *, int *); int BLASFUNC(dlaswp)(int *, double *, int *, int *, int *, int *, int *); int BLASFUNC(qlaswp)(int *, double *, int *, int *, int *, int *, int *); int BLASFUNC(claswp)(int *, float *, int *, int *, int *, int *, int *); int BLASFUNC(zlaswp)(int *, double *, int *, int *, int *, int *, int *); int BLASFUNC(xlaswp)(int *, double *, int *, int *, int *, int *, int *); int BLASFUNC(sgetrs)(char *, int *, int *, float *, int *, int *, float *, int *, int *); int BLASFUNC(dgetrs)(char *, int *, int *, double *, int *, int *, double *, int *, int *); int BLASFUNC(qgetrs)(char *, int *, int *, double *, int *, int *, double *, int *, int *); int BLASFUNC(cgetrs)(char *, int *, int *, float *, int *, int *, float *, int *, int *); int BLASFUNC(zgetrs)(char *, int *, int *, double *, int *, int *, double *, int *, int *); int BLASFUNC(xgetrs)(char *, int *, int *, double *, int *, int *, double *, int *, int *); int BLASFUNC(sgesv)(int *, int *, float *, int *, int *, float *, int *, int *); int BLASFUNC(dgesv)(int *, int *, double *, int *, int *, double*, int *, int *); int BLASFUNC(qgesv)(int *, int *, double *, int *, int *, double*, int *, int *); int BLASFUNC(cgesv)(int *, int *, float *, int *, int *, float *, int *, int *); int BLASFUNC(zgesv)(int *, int *, double *, int *, int *, double*, int *, int *); int BLASFUNC(xgesv)(int *, int *, double *, int *, int *, double*, int *, int *); int BLASFUNC(spotf2)(char *, int *, float *, int *, int *); int BLASFUNC(dpotf2)(char *, int *, double *, int *, int *); int BLASFUNC(qpotf2)(char *, int *, double *, int *, int *); int BLASFUNC(cpotf2)(char *, int *, float *, int *, int *); int BLASFUNC(zpotf2)(char *, int *, double *, int *, int *); int BLASFUNC(xpotf2)(char *, int *, double *, int *, int *); int BLASFUNC(spotrf)(char *, int *, float *, int *, int *); int BLASFUNC(dpotrf)(char *, int *, double *, int *, int *); int BLASFUNC(qpotrf)(char *, int *, double *, int *, int *); int BLASFUNC(cpotrf)(char *, int *, float *, int *, int *); int 
BLASFUNC(zpotrf)(char *, int *, double *, int *, int *); int BLASFUNC(xpotrf)(char *, int *, double *, int *, int *); int BLASFUNC(slauu2)(char *, int *, float *, int *, int *); int BLASFUNC(dlauu2)(char *, int *, double *, int *, int *); int BLASFUNC(qlauu2)(char *, int *, double *, int *, int *); int BLASFUNC(clauu2)(char *, int *, float *, int *, int *); int BLASFUNC(zlauu2)(char *, int *, double *, int *, int *); int BLASFUNC(xlauu2)(char *, int *, double *, int *, int *); int BLASFUNC(slauum)(char *, int *, float *, int *, int *); int BLASFUNC(dlauum)(char *, int *, double *, int *, int *); int BLASFUNC(qlauum)(char *, int *, double *, int *, int *); int BLASFUNC(clauum)(char *, int *, float *, int *, int *); int BLASFUNC(zlauum)(char *, int *, double *, int *, int *); int BLASFUNC(xlauum)(char *, int *, double *, int *, int *); int BLASFUNC(strti2)(char *, char *, int *, float *, int *, int *); int BLASFUNC(dtrti2)(char *, char *, int *, double *, int *, int *); int BLASFUNC(qtrti2)(char *, char *, int *, double *, int *, int *); int BLASFUNC(ctrti2)(char *, char *, int *, float *, int *, int *); int BLASFUNC(ztrti2)(char *, char *, int *, double *, int *, int *); int BLASFUNC(xtrti2)(char *, char *, int *, double *, int *, int *); int BLASFUNC(strtri)(char *, char *, int *, float *, int *, int *); int BLASFUNC(dtrtri)(char *, char *, int *, double *, int *, int *); int BLASFUNC(qtrtri)(char *, char *, int *, double *, int *, int *); int BLASFUNC(ctrtri)(char *, char *, int *, float *, int *, int *); int BLASFUNC(ztrtri)(char *, char *, int *, double *, int *, int *); int BLASFUNC(xtrtri)(char *, char *, int *, double *, int *, int *); int BLASFUNC(spotri)(char *, int *, float *, int *, int *); int BLASFUNC(dpotri)(char *, int *, double *, int *, int *); int BLASFUNC(qpotri)(char *, int *, double *, int *, int *); int BLASFUNC(cpotri)(char *, int *, float *, int *, int *); int BLASFUNC(zpotri)(char *, int *, double *, int *, int *); int BLASFUNC(xpotri)(char *, int *, double *, int *, int *); #ifdef __cplusplus } #endif #endif piqp-octave/src/eigen/Eigen/src/misc/lapacke.h0000755000175100001770000402310114107270226021060 0ustar runnerdocker/***************************************************************************** Copyright (c) 2010, Intel Corp. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Intel Corporation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ****************************************************************************** * Contents: Native C interface to LAPACK * Author: Intel Corporation * Generated November, 2011 *****************************************************************************/ #ifndef _MKL_LAPACKE_H_ #ifndef _LAPACKE_H_ #define _LAPACKE_H_ /* * Turn on HAVE_LAPACK_CONFIG_H to redefine C-LAPACK datatypes */ #ifdef HAVE_LAPACK_CONFIG_H #include "lapacke_config.h" #endif #include <stdlib.h> #ifndef lapack_int #define lapack_int int #endif #ifndef lapack_logical #define lapack_logical lapack_int #endif /* Complex types are structures equivalent to the * Fortran complex types COMPLEX(4) and COMPLEX(8). * * One can also redefine the types with his own types * for example by including in the code definitions like * * #define lapack_complex_float std::complex<float> * #define lapack_complex_double std::complex<double> * * or define these types in the command line: * * -Dlapack_complex_float="std::complex<float>" * -Dlapack_complex_double="std::complex<double>" */ #ifndef LAPACK_COMPLEX_CUSTOM /* Complex type (single precision) */ #ifndef lapack_complex_float #include <complex.h> #define lapack_complex_float float _Complex #endif #ifndef lapack_complex_float_real #define lapack_complex_float_real(z) (creal(z)) #endif #ifndef lapack_complex_float_imag #define lapack_complex_float_imag(z) (cimag(z)) #endif lapack_complex_float lapack_make_complex_float( float re, float im ); /* Complex type (double precision) */ #ifndef lapack_complex_double #include <complex.h> #define lapack_complex_double double _Complex #endif #ifndef lapack_complex_double_real #define lapack_complex_double_real(z) (creal(z)) #endif #ifndef lapack_complex_double_imag #define lapack_complex_double_imag(z) (cimag(z)) #endif lapack_complex_double lapack_make_complex_double( double re, double im ); #endif #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ #ifndef LAPACKE_malloc #define LAPACKE_malloc( size ) malloc( size ) #endif #ifndef LAPACKE_free #define LAPACKE_free( p ) free( p ) #endif #define LAPACK_C2INT( x ) (lapack_int)(*((float*)&x )) #define LAPACK_Z2INT( x ) (lapack_int)(*((double*)&x )) #define LAPACK_ROW_MAJOR 101 #define LAPACK_COL_MAJOR 102 #define LAPACK_WORK_MEMORY_ERROR -1010 #define LAPACK_TRANSPOSE_MEMORY_ERROR -1011 /* Callback logical functions of one, two, or three arguments are used * to select eigenvalues to sort to the top left of the Schur form. * The value is selected if function returns TRUE (non-zero).
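 *
 * A minimal sketch of how such a selector is typically written and passed to a
 * Schur-form driver (the name select_lhp and the call below are illustrative
 * examples, not declarations made by this header): a predicate that keeps only
 * eigenvalues in the open left half-plane, handed to LAPACKE_dgees, could look
 * like
 *
 *   lapack_logical select_lhp( const double* wr, const double* wi )
 *   {
 *       (void)wi;            // selection here depends only on the real part
 *       return *wr < 0.0;    // non-zero (TRUE) => eigenvalue is selected
 *   }
 *
 *   // with n, a, lda, sdim, wr, wi, vs, ldvs set up for an n-by-n problem:
 *   lapack_int info = LAPACKE_dgees( LAPACK_COL_MAJOR, 'V', 'S', select_lhp,
 *                                    n, a, lda, &sdim, wr, wi, vs, ldvs );
 *
 * Selected eigenvalues are ordered to the top left of the Schur form and their
 * count is returned in sdim.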
*/ typedef lapack_logical (*LAPACK_S_SELECT2) ( const float*, const float* ); typedef lapack_logical (*LAPACK_S_SELECT3) ( const float*, const float*, const float* ); typedef lapack_logical (*LAPACK_D_SELECT2) ( const double*, const double* ); typedef lapack_logical (*LAPACK_D_SELECT3) ( const double*, const double*, const double* ); typedef lapack_logical (*LAPACK_C_SELECT1) ( const lapack_complex_float* ); typedef lapack_logical (*LAPACK_C_SELECT2) ( const lapack_complex_float*, const lapack_complex_float* ); typedef lapack_logical (*LAPACK_Z_SELECT1) ( const lapack_complex_double* ); typedef lapack_logical (*LAPACK_Z_SELECT2) ( const lapack_complex_double*, const lapack_complex_double* ); #include "lapacke_mangling.h" #define LAPACK_lsame LAPACK_GLOBAL(lsame,LSAME) lapack_logical LAPACK_lsame( char* ca, char* cb, lapack_int lca, lapack_int lcb ); /* C-LAPACK function prototypes */ lapack_int LAPACKE_sbdsdc( int matrix_order, char uplo, char compq, lapack_int n, float* d, float* e, float* u, lapack_int ldu, float* vt, lapack_int ldvt, float* q, lapack_int* iq ); lapack_int LAPACKE_dbdsdc( int matrix_order, char uplo, char compq, lapack_int n, double* d, double* e, double* u, lapack_int ldu, double* vt, lapack_int ldvt, double* q, lapack_int* iq ); lapack_int LAPACKE_sbdsqr( int matrix_order, char uplo, lapack_int n, lapack_int ncvt, lapack_int nru, lapack_int ncc, float* d, float* e, float* vt, lapack_int ldvt, float* u, lapack_int ldu, float* c, lapack_int ldc ); lapack_int LAPACKE_dbdsqr( int matrix_order, char uplo, lapack_int n, lapack_int ncvt, lapack_int nru, lapack_int ncc, double* d, double* e, double* vt, lapack_int ldvt, double* u, lapack_int ldu, double* c, lapack_int ldc ); lapack_int LAPACKE_cbdsqr( int matrix_order, char uplo, lapack_int n, lapack_int ncvt, lapack_int nru, lapack_int ncc, float* d, float* e, lapack_complex_float* vt, lapack_int ldvt, lapack_complex_float* u, lapack_int ldu, lapack_complex_float* c, lapack_int ldc ); lapack_int LAPACKE_zbdsqr( int matrix_order, char uplo, lapack_int n, lapack_int ncvt, lapack_int nru, lapack_int ncc, double* d, double* e, lapack_complex_double* vt, lapack_int ldvt, lapack_complex_double* u, lapack_int ldu, lapack_complex_double* c, lapack_int ldc ); lapack_int LAPACKE_sdisna( char job, lapack_int m, lapack_int n, const float* d, float* sep ); lapack_int LAPACKE_ddisna( char job, lapack_int m, lapack_int n, const double* d, double* sep ); lapack_int LAPACKE_sgbbrd( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int ncc, lapack_int kl, lapack_int ku, float* ab, lapack_int ldab, float* d, float* e, float* q, lapack_int ldq, float* pt, lapack_int ldpt, float* c, lapack_int ldc ); lapack_int LAPACKE_dgbbrd( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int ncc, lapack_int kl, lapack_int ku, double* ab, lapack_int ldab, double* d, double* e, double* q, lapack_int ldq, double* pt, lapack_int ldpt, double* c, lapack_int ldc ); lapack_int LAPACKE_cgbbrd( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int ncc, lapack_int kl, lapack_int ku, lapack_complex_float* ab, lapack_int ldab, float* d, float* e, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* pt, lapack_int ldpt, lapack_complex_float* c, lapack_int ldc ); lapack_int LAPACKE_zgbbrd( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int ncc, lapack_int kl, lapack_int ku, lapack_complex_double* ab, lapack_int ldab, double* d, double* e, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* 
pt, lapack_int ldpt, lapack_complex_double* c, lapack_int ldc ); lapack_int LAPACKE_sgbcon( int matrix_order, char norm, lapack_int n, lapack_int kl, lapack_int ku, const float* ab, lapack_int ldab, const lapack_int* ipiv, float anorm, float* rcond ); lapack_int LAPACKE_dgbcon( int matrix_order, char norm, lapack_int n, lapack_int kl, lapack_int ku, const double* ab, lapack_int ldab, const lapack_int* ipiv, double anorm, double* rcond ); lapack_int LAPACKE_cgbcon( int matrix_order, char norm, lapack_int n, lapack_int kl, lapack_int ku, const lapack_complex_float* ab, lapack_int ldab, const lapack_int* ipiv, float anorm, float* rcond ); lapack_int LAPACKE_zgbcon( int matrix_order, char norm, lapack_int n, lapack_int kl, lapack_int ku, const lapack_complex_double* ab, lapack_int ldab, const lapack_int* ipiv, double anorm, double* rcond ); lapack_int LAPACKE_sgbequ( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const float* ab, lapack_int ldab, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_dgbequ( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const double* ab, lapack_int ldab, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_cgbequ( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const lapack_complex_float* ab, lapack_int ldab, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_zgbequ( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const lapack_complex_double* ab, lapack_int ldab, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_sgbequb( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const float* ab, lapack_int ldab, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_dgbequb( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const double* ab, lapack_int ldab, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_cgbequb( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const lapack_complex_float* ab, lapack_int ldab, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_zgbequb( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const lapack_complex_double* ab, lapack_int ldab, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_sgbrfs( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const float* ab, lapack_int ldab, const float* afb, lapack_int ldafb, const lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_dgbrfs( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const double* ab, lapack_int ldab, const double* afb, lapack_int ldafb, const lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_cgbrfs( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, const lapack_complex_float* afb, lapack_int ldafb, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_zgbrfs( int 
matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, const lapack_complex_double* afb, lapack_int ldafb, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_sgbrfsx( int matrix_order, char trans, char equed, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const float* ab, lapack_int ldab, const float* afb, lapack_int ldafb, const lapack_int* ipiv, const float* r, const float* c, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_dgbrfsx( int matrix_order, char trans, char equed, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const double* ab, lapack_int ldab, const double* afb, lapack_int ldafb, const lapack_int* ipiv, const double* r, const double* c, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_cgbrfsx( int matrix_order, char trans, char equed, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, const lapack_complex_float* afb, lapack_int ldafb, const lapack_int* ipiv, const float* r, const float* c, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_zgbrfsx( int matrix_order, char trans, char equed, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, const lapack_complex_double* afb, lapack_int ldafb, const lapack_int* ipiv, const double* r, const double* c, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_sgbsv( int matrix_order, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, float* ab, lapack_int ldab, lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dgbsv( int matrix_order, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, double* ab, lapack_int ldab, lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_cgbsv( int matrix_order, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, lapack_complex_float* ab, lapack_int ldab, lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zgbsv( int matrix_order, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, lapack_complex_double* ab, lapack_int ldab, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sgbsvx( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, float* ab, lapack_int ldab, float* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, float* r, float* c, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, float* rpivot ); lapack_int LAPACKE_dgbsvx( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, double* ab, lapack_int ldab, double* 
afb, lapack_int ldafb, lapack_int* ipiv, char* equed, double* r, double* c, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, double* rpivot ); lapack_int LAPACKE_cgbsvx( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, float* r, float* c, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, float* rpivot ); lapack_int LAPACKE_zgbsvx( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, double* r, double* c, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, double* rpivot ); lapack_int LAPACKE_sgbsvxx( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, float* ab, lapack_int ldab, float* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, float* r, float* c, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_dgbsvxx( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, double* ab, lapack_int ldab, double* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, double* r, double* c, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_cgbsvxx( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, float* r, float* c, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_zgbsvxx( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, double* r, double* c, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_sgbtrf( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, float* ab, lapack_int ldab, lapack_int* ipiv ); lapack_int LAPACKE_dgbtrf( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, double* ab, lapack_int ldab, lapack_int* ipiv ); lapack_int LAPACKE_cgbtrf( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, lapack_complex_float* ab, lapack_int ldab, lapack_int* ipiv ); lapack_int LAPACKE_zgbtrf( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, lapack_complex_double* ab, lapack_int ldab, lapack_int* ipiv ); lapack_int 
LAPACKE_sgbtrs( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const float* ab, lapack_int ldab, const lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dgbtrs( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const double* ab, lapack_int ldab, const lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_cgbtrs( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zgbtrs( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sgebak( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const float* scale, lapack_int m, float* v, lapack_int ldv ); lapack_int LAPACKE_dgebak( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const double* scale, lapack_int m, double* v, lapack_int ldv ); lapack_int LAPACKE_cgebak( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const float* scale, lapack_int m, lapack_complex_float* v, lapack_int ldv ); lapack_int LAPACKE_zgebak( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const double* scale, lapack_int m, lapack_complex_double* v, lapack_int ldv ); lapack_int LAPACKE_sgebal( int matrix_order, char job, lapack_int n, float* a, lapack_int lda, lapack_int* ilo, lapack_int* ihi, float* scale ); lapack_int LAPACKE_dgebal( int matrix_order, char job, lapack_int n, double* a, lapack_int lda, lapack_int* ilo, lapack_int* ihi, double* scale ); lapack_int LAPACKE_cgebal( int matrix_order, char job, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* ilo, lapack_int* ihi, float* scale ); lapack_int LAPACKE_zgebal( int matrix_order, char job, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* ilo, lapack_int* ihi, double* scale ); lapack_int LAPACKE_sgebrd( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* d, float* e, float* tauq, float* taup ); lapack_int LAPACKE_dgebrd( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* d, double* e, double* tauq, double* taup ); lapack_int LAPACKE_cgebrd( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, float* d, float* e, lapack_complex_float* tauq, lapack_complex_float* taup ); lapack_int LAPACKE_zgebrd( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, double* d, double* e, lapack_complex_double* tauq, lapack_complex_double* taup ); lapack_int LAPACKE_sgecon( int matrix_order, char norm, lapack_int n, const float* a, lapack_int lda, float anorm, float* rcond ); lapack_int LAPACKE_dgecon( int matrix_order, char norm, lapack_int n, const double* a, lapack_int lda, double anorm, double* rcond ); lapack_int LAPACKE_cgecon( int matrix_order, char norm, lapack_int n, const lapack_complex_float* a, lapack_int lda, float anorm, float* rcond ); lapack_int LAPACKE_zgecon( int matrix_order, char norm, lapack_int n, const lapack_complex_double* a, lapack_int lda, double anorm, double* rcond ); lapack_int LAPACKE_sgeequ( int matrix_order, lapack_int m, lapack_int n, const float* a, 
lapack_int lda, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_dgeequ( int matrix_order, lapack_int m, lapack_int n, const double* a, lapack_int lda, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_cgeequ( int matrix_order, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_zgeequ( int matrix_order, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_sgeequb( int matrix_order, lapack_int m, lapack_int n, const float* a, lapack_int lda, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_dgeequb( int matrix_order, lapack_int m, lapack_int n, const double* a, lapack_int lda, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_cgeequb( int matrix_order, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_zgeequb( int matrix_order, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_sgees( int matrix_order, char jobvs, char sort, LAPACK_S_SELECT2 select, lapack_int n, float* a, lapack_int lda, lapack_int* sdim, float* wr, float* wi, float* vs, lapack_int ldvs ); lapack_int LAPACKE_dgees( int matrix_order, char jobvs, char sort, LAPACK_D_SELECT2 select, lapack_int n, double* a, lapack_int lda, lapack_int* sdim, double* wr, double* wi, double* vs, lapack_int ldvs ); lapack_int LAPACKE_cgees( int matrix_order, char jobvs, char sort, LAPACK_C_SELECT1 select, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* sdim, lapack_complex_float* w, lapack_complex_float* vs, lapack_int ldvs ); lapack_int LAPACKE_zgees( int matrix_order, char jobvs, char sort, LAPACK_Z_SELECT1 select, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* sdim, lapack_complex_double* w, lapack_complex_double* vs, lapack_int ldvs ); lapack_int LAPACKE_sgeesx( int matrix_order, char jobvs, char sort, LAPACK_S_SELECT2 select, char sense, lapack_int n, float* a, lapack_int lda, lapack_int* sdim, float* wr, float* wi, float* vs, lapack_int ldvs, float* rconde, float* rcondv ); lapack_int LAPACKE_dgeesx( int matrix_order, char jobvs, char sort, LAPACK_D_SELECT2 select, char sense, lapack_int n, double* a, lapack_int lda, lapack_int* sdim, double* wr, double* wi, double* vs, lapack_int ldvs, double* rconde, double* rcondv ); lapack_int LAPACKE_cgeesx( int matrix_order, char jobvs, char sort, LAPACK_C_SELECT1 select, char sense, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* sdim, lapack_complex_float* w, lapack_complex_float* vs, lapack_int ldvs, float* rconde, float* rcondv ); lapack_int LAPACKE_zgeesx( int matrix_order, char jobvs, char sort, LAPACK_Z_SELECT1 select, char sense, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* sdim, lapack_complex_double* w, lapack_complex_double* vs, lapack_int ldvs, double* rconde, double* rcondv ); lapack_int LAPACKE_sgeev( int matrix_order, char jobvl, char jobvr, lapack_int n, float* a, lapack_int lda, float* wr, float* wi, float* vl, lapack_int ldvl, float* vr, lapack_int ldvr ); lapack_int LAPACKE_dgeev( int matrix_order, char jobvl, char jobvr, 
lapack_int n, double* a, lapack_int lda, double* wr, double* wi, double* vl, lapack_int ldvl, double* vr, lapack_int ldvr ); lapack_int LAPACKE_cgeev( int matrix_order, char jobvl, char jobvr, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* w, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr ); lapack_int LAPACKE_zgeev( int matrix_order, char jobvl, char jobvr, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* w, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr ); lapack_int LAPACKE_sgeevx( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, float* a, lapack_int lda, float* wr, float* wi, float* vl, lapack_int ldvl, float* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, float* scale, float* abnrm, float* rconde, float* rcondv ); lapack_int LAPACKE_dgeevx( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, double* a, lapack_int lda, double* wr, double* wi, double* vl, lapack_int ldvl, double* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, double* scale, double* abnrm, double* rconde, double* rcondv ); lapack_int LAPACKE_cgeevx( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* w, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, float* scale, float* abnrm, float* rconde, float* rcondv ); lapack_int LAPACKE_zgeevx( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* w, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, double* scale, double* abnrm, double* rconde, double* rcondv ); lapack_int LAPACKE_sgehrd( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, float* a, lapack_int lda, float* tau ); lapack_int LAPACKE_dgehrd( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, double* a, lapack_int lda, double* tau ); lapack_int LAPACKE_cgehrd( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau ); lapack_int LAPACKE_zgehrd( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau ); lapack_int LAPACKE_sgejsv( int matrix_order, char joba, char jobu, char jobv, char jobr, char jobt, char jobp, lapack_int m, lapack_int n, float* a, lapack_int lda, float* sva, float* u, lapack_int ldu, float* v, lapack_int ldv, float* stat, lapack_int* istat ); lapack_int LAPACKE_dgejsv( int matrix_order, char joba, char jobu, char jobv, char jobr, char jobt, char jobp, lapack_int m, lapack_int n, double* a, lapack_int lda, double* sva, double* u, lapack_int ldu, double* v, lapack_int ldv, double* stat, lapack_int* istat ); lapack_int LAPACKE_sgelq2( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau ); lapack_int LAPACKE_dgelq2( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau ); lapack_int LAPACKE_cgelq2( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau ); lapack_int LAPACKE_zgelq2( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, 
lapack_complex_double* tau ); lapack_int LAPACKE_sgelqf( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau ); lapack_int LAPACKE_dgelqf( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau ); lapack_int LAPACKE_cgelqf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau ); lapack_int LAPACKE_zgelqf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau ); lapack_int LAPACKE_sgels( int matrix_order, char trans, lapack_int m, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* b, lapack_int ldb ); lapack_int LAPACKE_dgels( int matrix_order, char trans, lapack_int m, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* b, lapack_int ldb ); lapack_int LAPACKE_cgels( int matrix_order, char trans, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zgels( int matrix_order, char trans, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sgelsd( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* b, lapack_int ldb, float* s, float rcond, lapack_int* rank ); lapack_int LAPACKE_dgelsd( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* b, lapack_int ldb, double* s, double rcond, lapack_int* rank ); lapack_int LAPACKE_cgelsd( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float* s, float rcond, lapack_int* rank ); lapack_int LAPACKE_zgelsd( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double* s, double rcond, lapack_int* rank ); lapack_int LAPACKE_sgelss( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* b, lapack_int ldb, float* s, float rcond, lapack_int* rank ); lapack_int LAPACKE_dgelss( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* b, lapack_int ldb, double* s, double rcond, lapack_int* rank ); lapack_int LAPACKE_cgelss( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float* s, float rcond, lapack_int* rank ); lapack_int LAPACKE_zgelss( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double* s, double rcond, lapack_int* rank ); lapack_int LAPACKE_sgelsy( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* b, lapack_int ldb, lapack_int* jpvt, float rcond, lapack_int* rank ); lapack_int LAPACKE_dgelsy( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* b, lapack_int ldb, lapack_int* jpvt, double rcond, lapack_int* rank ); lapack_int LAPACKE_cgelsy( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_int* jpvt, float rcond, lapack_int* rank ); lapack_int LAPACKE_zgelsy( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, 
lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_int* jpvt, double rcond, lapack_int* rank ); lapack_int LAPACKE_sgeqlf( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau ); lapack_int LAPACKE_dgeqlf( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau ); lapack_int LAPACKE_cgeqlf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau ); lapack_int LAPACKE_zgeqlf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau ); lapack_int LAPACKE_sgeqp3( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, lapack_int* jpvt, float* tau ); lapack_int LAPACKE_dgeqp3( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, lapack_int* jpvt, double* tau ); lapack_int LAPACKE_cgeqp3( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* jpvt, lapack_complex_float* tau ); lapack_int LAPACKE_zgeqp3( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* jpvt, lapack_complex_double* tau ); lapack_int LAPACKE_sgeqpf( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, lapack_int* jpvt, float* tau ); lapack_int LAPACKE_dgeqpf( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, lapack_int* jpvt, double* tau ); lapack_int LAPACKE_cgeqpf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* jpvt, lapack_complex_float* tau ); lapack_int LAPACKE_zgeqpf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* jpvt, lapack_complex_double* tau ); lapack_int LAPACKE_sgeqr2( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau ); lapack_int LAPACKE_dgeqr2( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau ); lapack_int LAPACKE_cgeqr2( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau ); lapack_int LAPACKE_zgeqr2( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau ); lapack_int LAPACKE_sgeqrf( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau ); lapack_int LAPACKE_dgeqrf( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau ); lapack_int LAPACKE_cgeqrf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau ); lapack_int LAPACKE_zgeqrf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau ); lapack_int LAPACKE_sgeqrfp( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau ); lapack_int LAPACKE_dgeqrfp( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau ); lapack_int LAPACKE_cgeqrfp( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau ); lapack_int LAPACKE_zgeqrfp( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau ); lapack_int LAPACKE_sgerfs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* af, lapack_int ldaf, const lapack_int* 
ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_dgerfs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const double* af, lapack_int ldaf, const lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_cgerfs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_zgerfs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_sgerfsx( int matrix_order, char trans, char equed, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* af, lapack_int ldaf, const lapack_int* ipiv, const float* r, const float* c, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_dgerfsx( int matrix_order, char trans, char equed, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const double* af, lapack_int ldaf, const lapack_int* ipiv, const double* r, const double* c, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_cgerfsx( int matrix_order, char trans, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_int* ipiv, const float* r, const float* c, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_zgerfsx( int matrix_order, char trans, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_int* ipiv, const double* r, const double* c, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_sgerqf( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau ); lapack_int LAPACKE_dgerqf( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau ); lapack_int LAPACKE_cgerqf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau ); lapack_int LAPACKE_zgerqf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau ); lapack_int LAPACKE_sgesdd( int matrix_order, char jobz, lapack_int m, lapack_int n, float* a, lapack_int lda, float* s, float* u, lapack_int ldu, float* vt, lapack_int ldvt ); lapack_int LAPACKE_dgesdd( int matrix_order, char jobz, 
lapack_int m, lapack_int n, double* a, lapack_int lda, double* s, double* u, lapack_int ldu, double* vt, lapack_int ldvt ); lapack_int LAPACKE_cgesdd( int matrix_order, char jobz, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, float* s, lapack_complex_float* u, lapack_int ldu, lapack_complex_float* vt, lapack_int ldvt ); lapack_int LAPACKE_zgesdd( int matrix_order, char jobz, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, double* s, lapack_complex_double* u, lapack_int ldu, lapack_complex_double* vt, lapack_int ldvt ); lapack_int LAPACKE_sgesv( int matrix_order, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dgesv( int matrix_order, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_cgesv( int matrix_order, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zgesv( int matrix_order, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_dsgesv( int matrix_order, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, lapack_int* ipiv, double* b, lapack_int ldb, double* x, lapack_int ldx, lapack_int* iter ); lapack_int LAPACKE_zcgesv( int matrix_order, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, lapack_int* iter ); lapack_int LAPACKE_sgesvd( int matrix_order, char jobu, char jobvt, lapack_int m, lapack_int n, float* a, lapack_int lda, float* s, float* u, lapack_int ldu, float* vt, lapack_int ldvt, float* superb ); lapack_int LAPACKE_dgesvd( int matrix_order, char jobu, char jobvt, lapack_int m, lapack_int n, double* a, lapack_int lda, double* s, double* u, lapack_int ldu, double* vt, lapack_int ldvt, double* superb ); lapack_int LAPACKE_cgesvd( int matrix_order, char jobu, char jobvt, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, float* s, lapack_complex_float* u, lapack_int ldu, lapack_complex_float* vt, lapack_int ldvt, float* superb ); lapack_int LAPACKE_zgesvd( int matrix_order, char jobu, char jobvt, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, double* s, lapack_complex_double* u, lapack_int ldu, lapack_complex_double* vt, lapack_int ldvt, double* superb ); lapack_int LAPACKE_sgesvj( int matrix_order, char joba, char jobu, char jobv, lapack_int m, lapack_int n, float* a, lapack_int lda, float* sva, lapack_int mv, float* v, lapack_int ldv, float* stat ); lapack_int LAPACKE_dgesvj( int matrix_order, char joba, char jobu, char jobv, lapack_int m, lapack_int n, double* a, lapack_int lda, double* sva, lapack_int mv, double* v, lapack_int ldv, double* stat ); lapack_int LAPACKE_sgesvx( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* r, float* c, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, float* rpivot ); lapack_int LAPACKE_dgesvx( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* r, double* c, double* b, lapack_int ldb, double* x, lapack_int ldx, 
double* rcond, double* ferr, double* berr, double* rpivot ); lapack_int LAPACKE_cgesvx( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* r, float* c, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, float* rpivot ); lapack_int LAPACKE_zgesvx( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* r, double* c, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, double* rpivot ); lapack_int LAPACKE_sgesvxx( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* r, float* c, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_dgesvxx( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* r, double* c, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_cgesvxx( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* r, float* c, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_zgesvxx( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* r, double* c, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_sgetf2( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_dgetf2( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_cgetf2( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_zgetf2( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_sgetrf( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_dgetrf( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_cgetrf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_zgetrf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, 
lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_sgetri( int matrix_order, lapack_int n, float* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_dgetri( int matrix_order, lapack_int n, double* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_cgetri( int matrix_order, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_zgetri( int matrix_order, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_sgetrs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dgetrs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_cgetrs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zgetrs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sggbak( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const float* lscale, const float* rscale, lapack_int m, float* v, lapack_int ldv ); lapack_int LAPACKE_dggbak( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const double* lscale, const double* rscale, lapack_int m, double* v, lapack_int ldv ); lapack_int LAPACKE_cggbak( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const float* lscale, const float* rscale, lapack_int m, lapack_complex_float* v, lapack_int ldv ); lapack_int LAPACKE_zggbak( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const double* lscale, const double* rscale, lapack_int m, lapack_complex_double* v, lapack_int ldv ); lapack_int LAPACKE_sggbal( int matrix_order, char job, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, lapack_int* ilo, lapack_int* ihi, float* lscale, float* rscale ); lapack_int LAPACKE_dggbal( int matrix_order, char job, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, lapack_int* ilo, lapack_int* ihi, double* lscale, double* rscale ); lapack_int LAPACKE_cggbal( int matrix_order, char job, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_int* ilo, lapack_int* ihi, float* lscale, float* rscale ); lapack_int LAPACKE_zggbal( int matrix_order, char job, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_int* ilo, lapack_int* ihi, double* lscale, double* rscale ); lapack_int LAPACKE_sgges( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_S_SELECT3 selctg, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, lapack_int* sdim, float* alphar, float* alphai, float* beta, float* vsl, lapack_int ldvsl, float* vsr, lapack_int ldvsr ); lapack_int LAPACKE_dgges( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_D_SELECT3 selctg, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, lapack_int* sdim, double* alphar, double* alphai, double* beta, double* vsl, lapack_int ldvsl, double* vsr, lapack_int ldvsr ); lapack_int LAPACKE_cgges( int matrix_order, char jobvsl, 
char jobvsr, char sort, LAPACK_C_SELECT2 selctg, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_int* sdim, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* vsl, lapack_int ldvsl, lapack_complex_float* vsr, lapack_int ldvsr ); lapack_int LAPACKE_zgges( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_Z_SELECT2 selctg, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_int* sdim, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* vsl, lapack_int ldvsl, lapack_complex_double* vsr, lapack_int ldvsr ); lapack_int LAPACKE_sggesx( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_S_SELECT3 selctg, char sense, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, lapack_int* sdim, float* alphar, float* alphai, float* beta, float* vsl, lapack_int ldvsl, float* vsr, lapack_int ldvsr, float* rconde, float* rcondv ); lapack_int LAPACKE_dggesx( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_D_SELECT3 selctg, char sense, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, lapack_int* sdim, double* alphar, double* alphai, double* beta, double* vsl, lapack_int ldvsl, double* vsr, lapack_int ldvsr, double* rconde, double* rcondv ); lapack_int LAPACKE_cggesx( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_C_SELECT2 selctg, char sense, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_int* sdim, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* vsl, lapack_int ldvsl, lapack_complex_float* vsr, lapack_int ldvsr, float* rconde, float* rcondv ); lapack_int LAPACKE_zggesx( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_Z_SELECT2 selctg, char sense, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_int* sdim, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* vsl, lapack_int ldvsl, lapack_complex_double* vsr, lapack_int ldvsr, double* rconde, double* rcondv ); lapack_int LAPACKE_sggev( int matrix_order, char jobvl, char jobvr, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* alphar, float* alphai, float* beta, float* vl, lapack_int ldvl, float* vr, lapack_int ldvr ); lapack_int LAPACKE_dggev( int matrix_order, char jobvl, char jobvr, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* alphar, double* alphai, double* beta, double* vl, lapack_int ldvl, double* vr, lapack_int ldvr ); lapack_int LAPACKE_cggev( int matrix_order, char jobvl, char jobvr, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr ); lapack_int LAPACKE_zggev( int matrix_order, char jobvl, char jobvr, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr ); lapack_int LAPACKE_sggevx( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* alphar, float* alphai, float* beta, float* vl, lapack_int ldvl, float* vr, 
lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, float* lscale, float* rscale, float* abnrm, float* bbnrm, float* rconde, float* rcondv ); lapack_int LAPACKE_dggevx( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* alphar, double* alphai, double* beta, double* vl, lapack_int ldvl, double* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, double* lscale, double* rscale, double* abnrm, double* bbnrm, double* rconde, double* rcondv ); lapack_int LAPACKE_cggevx( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, float* lscale, float* rscale, float* abnrm, float* bbnrm, float* rconde, float* rcondv ); lapack_int LAPACKE_zggevx( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, double* lscale, double* rscale, double* abnrm, double* bbnrm, double* rconde, double* rcondv ); lapack_int LAPACKE_sggglm( int matrix_order, lapack_int n, lapack_int m, lapack_int p, float* a, lapack_int lda, float* b, lapack_int ldb, float* d, float* x, float* y ); lapack_int LAPACKE_dggglm( int matrix_order, lapack_int n, lapack_int m, lapack_int p, double* a, lapack_int lda, double* b, lapack_int ldb, double* d, double* x, double* y ); lapack_int LAPACKE_cggglm( int matrix_order, lapack_int n, lapack_int m, lapack_int p, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* d, lapack_complex_float* x, lapack_complex_float* y ); lapack_int LAPACKE_zggglm( int matrix_order, lapack_int n, lapack_int m, lapack_int p, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* d, lapack_complex_double* x, lapack_complex_double* y ); lapack_int LAPACKE_sgghrd( int matrix_order, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, float* a, lapack_int lda, float* b, lapack_int ldb, float* q, lapack_int ldq, float* z, lapack_int ldz ); lapack_int LAPACKE_dgghrd( int matrix_order, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, double* a, lapack_int lda, double* b, lapack_int ldb, double* q, lapack_int ldq, double* z, lapack_int ldz ); lapack_int LAPACKE_cgghrd( int matrix_order, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zgghrd( int matrix_order, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_sgglse( int matrix_order, lapack_int m, lapack_int n, lapack_int p, float* a, lapack_int lda, float* b, lapack_int ldb, float* c, float* d, float* x ); lapack_int LAPACKE_dgglse( int matrix_order, lapack_int m, 
lapack_int n, lapack_int p, double* a, lapack_int lda, double* b, lapack_int ldb, double* c, double* d, double* x ); lapack_int LAPACKE_cgglse( int matrix_order, lapack_int m, lapack_int n, lapack_int p, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* c, lapack_complex_float* d, lapack_complex_float* x ); lapack_int LAPACKE_zgglse( int matrix_order, lapack_int m, lapack_int n, lapack_int p, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* c, lapack_complex_double* d, lapack_complex_double* x ); lapack_int LAPACKE_sggqrf( int matrix_order, lapack_int n, lapack_int m, lapack_int p, float* a, lapack_int lda, float* taua, float* b, lapack_int ldb, float* taub ); lapack_int LAPACKE_dggqrf( int matrix_order, lapack_int n, lapack_int m, lapack_int p, double* a, lapack_int lda, double* taua, double* b, lapack_int ldb, double* taub ); lapack_int LAPACKE_cggqrf( int matrix_order, lapack_int n, lapack_int m, lapack_int p, lapack_complex_float* a, lapack_int lda, lapack_complex_float* taua, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* taub ); lapack_int LAPACKE_zggqrf( int matrix_order, lapack_int n, lapack_int m, lapack_int p, lapack_complex_double* a, lapack_int lda, lapack_complex_double* taua, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* taub ); lapack_int LAPACKE_sggrqf( int matrix_order, lapack_int m, lapack_int p, lapack_int n, float* a, lapack_int lda, float* taua, float* b, lapack_int ldb, float* taub ); lapack_int LAPACKE_dggrqf( int matrix_order, lapack_int m, lapack_int p, lapack_int n, double* a, lapack_int lda, double* taua, double* b, lapack_int ldb, double* taub ); lapack_int LAPACKE_cggrqf( int matrix_order, lapack_int m, lapack_int p, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* taua, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* taub ); lapack_int LAPACKE_zggrqf( int matrix_order, lapack_int m, lapack_int p, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* taua, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* taub ); lapack_int LAPACKE_sggsvd( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int n, lapack_int p, lapack_int* k, lapack_int* l, float* a, lapack_int lda, float* b, lapack_int ldb, float* alpha, float* beta, float* u, lapack_int ldu, float* v, lapack_int ldv, float* q, lapack_int ldq, lapack_int* iwork ); lapack_int LAPACKE_dggsvd( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int n, lapack_int p, lapack_int* k, lapack_int* l, double* a, lapack_int lda, double* b, lapack_int ldb, double* alpha, double* beta, double* u, lapack_int ldu, double* v, lapack_int ldv, double* q, lapack_int ldq, lapack_int* iwork ); lapack_int LAPACKE_cggsvd( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int n, lapack_int p, lapack_int* k, lapack_int* l, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float* alpha, float* beta, lapack_complex_float* u, lapack_int ldu, lapack_complex_float* v, lapack_int ldv, lapack_complex_float* q, lapack_int ldq, lapack_int* iwork ); lapack_int LAPACKE_zggsvd( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int n, lapack_int p, lapack_int* k, lapack_int* l, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double* alpha, double* beta, 
lapack_complex_double* u, lapack_int ldu, lapack_complex_double* v, lapack_int ldv, lapack_complex_double* q, lapack_int ldq, lapack_int* iwork ); lapack_int LAPACKE_sggsvp( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float tola, float tolb, lapack_int* k, lapack_int* l, float* u, lapack_int ldu, float* v, lapack_int ldv, float* q, lapack_int ldq ); lapack_int LAPACKE_dggsvp( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double tola, double tolb, lapack_int* k, lapack_int* l, double* u, lapack_int ldu, double* v, lapack_int ldv, double* q, lapack_int ldq ); lapack_int LAPACKE_cggsvp( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float tola, float tolb, lapack_int* k, lapack_int* l, lapack_complex_float* u, lapack_int ldu, lapack_complex_float* v, lapack_int ldv, lapack_complex_float* q, lapack_int ldq ); lapack_int LAPACKE_zggsvp( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double tola, double tolb, lapack_int* k, lapack_int* l, lapack_complex_double* u, lapack_int ldu, lapack_complex_double* v, lapack_int ldv, lapack_complex_double* q, lapack_int ldq ); lapack_int LAPACKE_sgtcon( char norm, lapack_int n, const float* dl, const float* d, const float* du, const float* du2, const lapack_int* ipiv, float anorm, float* rcond ); lapack_int LAPACKE_dgtcon( char norm, lapack_int n, const double* dl, const double* d, const double* du, const double* du2, const lapack_int* ipiv, double anorm, double* rcond ); lapack_int LAPACKE_cgtcon( char norm, lapack_int n, const lapack_complex_float* dl, const lapack_complex_float* d, const lapack_complex_float* du, const lapack_complex_float* du2, const lapack_int* ipiv, float anorm, float* rcond ); lapack_int LAPACKE_zgtcon( char norm, lapack_int n, const lapack_complex_double* dl, const lapack_complex_double* d, const lapack_complex_double* du, const lapack_complex_double* du2, const lapack_int* ipiv, double anorm, double* rcond ); lapack_int LAPACKE_sgtrfs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const float* dl, const float* d, const float* du, const float* dlf, const float* df, const float* duf, const float* du2, const lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_dgtrfs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const double* dl, const double* d, const double* du, const double* dlf, const double* df, const double* duf, const double* du2, const lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_cgtrfs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_float* dl, const lapack_complex_float* d, const lapack_complex_float* du, const lapack_complex_float* dlf, const lapack_complex_float* df, const lapack_complex_float* duf, const lapack_complex_float* du2, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_zgtrfs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, 
const lapack_complex_double* dl, const lapack_complex_double* d, const lapack_complex_double* du, const lapack_complex_double* dlf, const lapack_complex_double* df, const lapack_complex_double* duf, const lapack_complex_double* du2, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_sgtsv( int matrix_order, lapack_int n, lapack_int nrhs, float* dl, float* d, float* du, float* b, lapack_int ldb ); lapack_int LAPACKE_dgtsv( int matrix_order, lapack_int n, lapack_int nrhs, double* dl, double* d, double* du, double* b, lapack_int ldb ); lapack_int LAPACKE_cgtsv( int matrix_order, lapack_int n, lapack_int nrhs, lapack_complex_float* dl, lapack_complex_float* d, lapack_complex_float* du, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zgtsv( int matrix_order, lapack_int n, lapack_int nrhs, lapack_complex_double* dl, lapack_complex_double* d, lapack_complex_double* du, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sgtsvx( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, const float* dl, const float* d, const float* du, float* dlf, float* df, float* duf, float* du2, lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_dgtsvx( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, const double* dl, const double* d, const double* du, double* dlf, double* df, double* duf, double* du2, lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_cgtsvx( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_float* dl, const lapack_complex_float* d, const lapack_complex_float* du, lapack_complex_float* dlf, lapack_complex_float* df, lapack_complex_float* duf, lapack_complex_float* du2, lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_zgtsvx( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_double* dl, const lapack_complex_double* d, const lapack_complex_double* du, lapack_complex_double* dlf, lapack_complex_double* df, lapack_complex_double* duf, lapack_complex_double* du2, lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_sgttrf( lapack_int n, float* dl, float* d, float* du, float* du2, lapack_int* ipiv ); lapack_int LAPACKE_dgttrf( lapack_int n, double* dl, double* d, double* du, double* du2, lapack_int* ipiv ); lapack_int LAPACKE_cgttrf( lapack_int n, lapack_complex_float* dl, lapack_complex_float* d, lapack_complex_float* du, lapack_complex_float* du2, lapack_int* ipiv ); lapack_int LAPACKE_zgttrf( lapack_int n, lapack_complex_double* dl, lapack_complex_double* d, lapack_complex_double* du, lapack_complex_double* du2, lapack_int* ipiv ); lapack_int LAPACKE_sgttrs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const float* dl, const float* d, const float* du, const float* du2, const lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dgttrs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const double* dl, const double* d, const double* du, const double* du2, const lapack_int* ipiv, double* b, 
lapack_int ldb ); lapack_int LAPACKE_cgttrs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_float* dl, const lapack_complex_float* d, const lapack_complex_float* du, const lapack_complex_float* du2, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zgttrs( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_double* dl, const lapack_complex_double* d, const lapack_complex_double* du, const lapack_complex_double* du2, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_chbev( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, lapack_complex_float* ab, lapack_int ldab, float* w, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zhbev( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, lapack_complex_double* ab, lapack_int ldab, double* w, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_chbevd( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, lapack_complex_float* ab, lapack_int ldab, float* w, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zhbevd( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, lapack_complex_double* ab, lapack_int ldab, double* w, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_chbevx( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int kd, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* q, lapack_int ldq, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_zhbevx( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int kd, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* q, lapack_int ldq, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_chbgst( int matrix_order, char vect, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_float* ab, lapack_int ldab, const lapack_complex_float* bb, lapack_int ldbb, lapack_complex_float* x, lapack_int ldx ); lapack_int LAPACKE_zhbgst( int matrix_order, char vect, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_double* ab, lapack_int ldab, const lapack_complex_double* bb, lapack_int ldbb, lapack_complex_double* x, lapack_int ldx ); lapack_int LAPACKE_chbgv( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* bb, lapack_int ldbb, float* w, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zhbgv( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* bb, lapack_int ldbb, double* w, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_chbgvd( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* bb, lapack_int ldbb, float* w, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zhbgvd( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* bb, lapack_int ldbb, double* w, lapack_complex_double* z, lapack_int ldz ); 
lapack_int LAPACKE_chbgvx( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* bb, lapack_int ldbb, lapack_complex_float* q, lapack_int ldq, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_zhbgvx( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* bb, lapack_int ldbb, lapack_complex_double* q, lapack_int ldq, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_chbtrd( int matrix_order, char vect, char uplo, lapack_int n, lapack_int kd, lapack_complex_float* ab, lapack_int ldab, float* d, float* e, lapack_complex_float* q, lapack_int ldq ); lapack_int LAPACKE_zhbtrd( int matrix_order, char vect, char uplo, lapack_int n, lapack_int kd, lapack_complex_double* ab, lapack_int ldab, double* d, double* e, lapack_complex_double* q, lapack_int ldq ); lapack_int LAPACKE_checon( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, float anorm, float* rcond ); lapack_int LAPACKE_zhecon( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, double anorm, double* rcond ); lapack_int LAPACKE_cheequb( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* s, float* scond, float* amax ); lapack_int LAPACKE_zheequb( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* s, double* scond, double* amax ); lapack_int LAPACKE_cheev( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, float* w ); lapack_int LAPACKE_zheev( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, double* w ); lapack_int LAPACKE_cheevd( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, float* w ); lapack_int LAPACKE_zheevd( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, double* w ); lapack_int LAPACKE_cheevr( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_int* isuppz ); lapack_int LAPACKE_zheevr( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_int* isuppz ); lapack_int LAPACKE_cheevx( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_zheevx( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, 
lapack_int* ifail ); lapack_int LAPACKE_chegst( int matrix_order, lapack_int itype, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zhegst( int matrix_order, lapack_int itype, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_chegv( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float* w ); lapack_int LAPACKE_zhegv( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double* w ); lapack_int LAPACKE_chegvd( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float* w ); lapack_int LAPACKE_zhegvd( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double* w ); lapack_int LAPACKE_chegvx( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_zhegvx( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_cherfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_zherfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_cherfsx( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_int* ipiv, const float* s, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_zherfsx( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_int* ipiv, const double* s, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_chesv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, 
lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zhesv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_chesvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_zhesvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_chesvxx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* s, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_zhesvxx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* s, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_chetrd( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, float* d, float* e, lapack_complex_float* tau ); lapack_int LAPACKE_zhetrd( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, double* d, double* e, lapack_complex_double* tau ); lapack_int LAPACKE_chetrf( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_zhetrf( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_chetri( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_zhetri( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_chetrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zhetrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_chfrk( int matrix_order, char transr, char uplo, char trans, lapack_int n, lapack_int k, float alpha, const lapack_complex_float* a, lapack_int lda, float beta, lapack_complex_float* c ); lapack_int LAPACKE_zhfrk( int matrix_order, char transr, char uplo, char trans, lapack_int n, lapack_int k, double alpha, const lapack_complex_double* a, lapack_int lda, double beta, lapack_complex_double* c ); lapack_int 
LAPACKE_shgeqz( int matrix_order, char job, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, float* h, lapack_int ldh, float* t, lapack_int ldt, float* alphar, float* alphai, float* beta, float* q, lapack_int ldq, float* z, lapack_int ldz ); lapack_int LAPACKE_dhgeqz( int matrix_order, char job, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, double* h, lapack_int ldh, double* t, lapack_int ldt, double* alphar, double* alphai, double* beta, double* q, lapack_int ldq, double* z, lapack_int ldz ); lapack_int LAPACKE_chgeqz( int matrix_order, char job, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_float* h, lapack_int ldh, lapack_complex_float* t, lapack_int ldt, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zhgeqz( int matrix_order, char job, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_double* h, lapack_int ldh, lapack_complex_double* t, lapack_int ldt, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_chpcon( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* ap, const lapack_int* ipiv, float anorm, float* rcond ); lapack_int LAPACKE_zhpcon( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* ap, const lapack_int* ipiv, double anorm, double* rcond ); lapack_int LAPACKE_chpev( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_float* ap, float* w, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zhpev( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_double* ap, double* w, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_chpevd( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_float* ap, float* w, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zhpevd( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_double* ap, double* w, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_chpevx( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_complex_float* ap, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_zhpevx( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_complex_double* ap, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_chpgst( int matrix_order, lapack_int itype, char uplo, lapack_int n, lapack_complex_float* ap, const lapack_complex_float* bp ); lapack_int LAPACKE_zhpgst( int matrix_order, lapack_int itype, char uplo, lapack_int n, lapack_complex_double* ap, const lapack_complex_double* bp ); lapack_int LAPACKE_chpgv( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_float* ap, lapack_complex_float* bp, float* w, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zhpgv( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_double* ap, lapack_complex_double* bp, double* w, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_chpgvd( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int 
n, lapack_complex_float* ap, lapack_complex_float* bp, float* w, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zhpgvd( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_double* ap, lapack_complex_double* bp, double* w, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_chpgvx( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, lapack_complex_float* ap, lapack_complex_float* bp, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_zhpgvx( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, lapack_complex_double* ap, lapack_complex_double* bp, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_chprfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, const lapack_complex_float* afp, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_zhprfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, const lapack_complex_double* afp, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_chpsv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* ap, lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zhpsv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* ap, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_chpsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, lapack_complex_float* afp, lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_zhpsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, lapack_complex_double* afp, lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_chptrd( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap, float* d, float* e, lapack_complex_float* tau ); lapack_int LAPACKE_zhptrd( int matrix_order, char uplo, lapack_int n, lapack_complex_double* ap, double* d, double* e, lapack_complex_double* tau ); lapack_int LAPACKE_chptrf( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap, lapack_int* ipiv ); lapack_int LAPACKE_zhptrf( int matrix_order, char uplo, lapack_int n, lapack_complex_double* ap, lapack_int* ipiv ); lapack_int LAPACKE_chptri( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap, const lapack_int* ipiv ); lapack_int LAPACKE_zhptri( int matrix_order, char uplo, lapack_int n, lapack_complex_double* ap, const lapack_int* ipiv ); lapack_int LAPACKE_chptrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zhptrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, 
const lapack_complex_double* ap, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_shsein( int matrix_order, char job, char eigsrc, char initv, lapack_logical* select, lapack_int n, const float* h, lapack_int ldh, float* wr, const float* wi, float* vl, lapack_int ldvl, float* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, lapack_int* ifaill, lapack_int* ifailr ); lapack_int LAPACKE_dhsein( int matrix_order, char job, char eigsrc, char initv, lapack_logical* select, lapack_int n, const double* h, lapack_int ldh, double* wr, const double* wi, double* vl, lapack_int ldvl, double* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, lapack_int* ifaill, lapack_int* ifailr ); lapack_int LAPACKE_chsein( int matrix_order, char job, char eigsrc, char initv, const lapack_logical* select, lapack_int n, const lapack_complex_float* h, lapack_int ldh, lapack_complex_float* w, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, lapack_int* ifaill, lapack_int* ifailr ); lapack_int LAPACKE_zhsein( int matrix_order, char job, char eigsrc, char initv, const lapack_logical* select, lapack_int n, const lapack_complex_double* h, lapack_int ldh, lapack_complex_double* w, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, lapack_int* ifaill, lapack_int* ifailr ); lapack_int LAPACKE_shseqr( int matrix_order, char job, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, float* h, lapack_int ldh, float* wr, float* wi, float* z, lapack_int ldz ); lapack_int LAPACKE_dhseqr( int matrix_order, char job, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, double* h, lapack_int ldh, double* wr, double* wi, double* z, lapack_int ldz ); lapack_int LAPACKE_chseqr( int matrix_order, char job, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_float* h, lapack_int ldh, lapack_complex_float* w, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zhseqr( int matrix_order, char job, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_double* h, lapack_int ldh, lapack_complex_double* w, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_clacgv( lapack_int n, lapack_complex_float* x, lapack_int incx ); lapack_int LAPACKE_zlacgv( lapack_int n, lapack_complex_double* x, lapack_int incx ); lapack_int LAPACKE_slacpy( int matrix_order, char uplo, lapack_int m, lapack_int n, const float* a, lapack_int lda, float* b, lapack_int ldb ); lapack_int LAPACKE_dlacpy( int matrix_order, char uplo, lapack_int m, lapack_int n, const double* a, lapack_int lda, double* b, lapack_int ldb ); lapack_int LAPACKE_clacpy( int matrix_order, char uplo, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zlacpy( int matrix_order, char uplo, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_zlag2c( int matrix_order, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, lapack_complex_float* sa, lapack_int ldsa ); lapack_int LAPACKE_slag2d( int matrix_order, lapack_int m, lapack_int n, const float* sa, lapack_int ldsa, double* a, lapack_int lda ); lapack_int LAPACKE_dlag2s( int matrix_order, lapack_int m, lapack_int n, const double* a, lapack_int lda, float* sa, lapack_int ldsa ); lapack_int LAPACKE_clag2z( int matrix_order, 
lapack_int m, lapack_int n, const lapack_complex_float* sa, lapack_int ldsa, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_slagge( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const float* d, float* a, lapack_int lda, lapack_int* iseed ); lapack_int LAPACKE_dlagge( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const double* d, double* a, lapack_int lda, lapack_int* iseed ); lapack_int LAPACKE_clagge( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const float* d, lapack_complex_float* a, lapack_int lda, lapack_int* iseed ); lapack_int LAPACKE_zlagge( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const double* d, lapack_complex_double* a, lapack_int lda, lapack_int* iseed ); float LAPACKE_slamch( char cmach ); double LAPACKE_dlamch( char cmach ); float LAPACKE_slange( int matrix_order, char norm, lapack_int m, lapack_int n, const float* a, lapack_int lda ); double LAPACKE_dlange( int matrix_order, char norm, lapack_int m, lapack_int n, const double* a, lapack_int lda ); float LAPACKE_clange( int matrix_order, char norm, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda ); double LAPACKE_zlange( int matrix_order, char norm, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda ); float LAPACKE_clanhe( int matrix_order, char norm, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda ); double LAPACKE_zlanhe( int matrix_order, char norm, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda ); float LAPACKE_slansy( int matrix_order, char norm, char uplo, lapack_int n, const float* a, lapack_int lda ); double LAPACKE_dlansy( int matrix_order, char norm, char uplo, lapack_int n, const double* a, lapack_int lda ); float LAPACKE_clansy( int matrix_order, char norm, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda ); double LAPACKE_zlansy( int matrix_order, char norm, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda ); float LAPACKE_slantr( int matrix_order, char norm, char uplo, char diag, lapack_int m, lapack_int n, const float* a, lapack_int lda ); double LAPACKE_dlantr( int matrix_order, char norm, char uplo, char diag, lapack_int m, lapack_int n, const double* a, lapack_int lda ); float LAPACKE_clantr( int matrix_order, char norm, char uplo, char diag, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda ); double LAPACKE_zlantr( int matrix_order, char norm, char uplo, char diag, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_slarfb( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, const float* v, lapack_int ldv, const float* t, lapack_int ldt, float* c, lapack_int ldc ); lapack_int LAPACKE_dlarfb( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, const double* v, lapack_int ldv, const double* t, lapack_int ldt, double* c, lapack_int ldc ); lapack_int LAPACKE_clarfb( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_float* v, lapack_int ldv, const lapack_complex_float* t, lapack_int ldt, lapack_complex_float* c, lapack_int ldc ); lapack_int LAPACKE_zlarfb( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, const 
lapack_complex_double* v, lapack_int ldv, const lapack_complex_double* t, lapack_int ldt, lapack_complex_double* c, lapack_int ldc ); lapack_int LAPACKE_slarfg( lapack_int n, float* alpha, float* x, lapack_int incx, float* tau ); lapack_int LAPACKE_dlarfg( lapack_int n, double* alpha, double* x, lapack_int incx, double* tau ); lapack_int LAPACKE_clarfg( lapack_int n, lapack_complex_float* alpha, lapack_complex_float* x, lapack_int incx, lapack_complex_float* tau ); lapack_int LAPACKE_zlarfg( lapack_int n, lapack_complex_double* alpha, lapack_complex_double* x, lapack_int incx, lapack_complex_double* tau ); lapack_int LAPACKE_slarft( int matrix_order, char direct, char storev, lapack_int n, lapack_int k, const float* v, lapack_int ldv, const float* tau, float* t, lapack_int ldt ); lapack_int LAPACKE_dlarft( int matrix_order, char direct, char storev, lapack_int n, lapack_int k, const double* v, lapack_int ldv, const double* tau, double* t, lapack_int ldt ); lapack_int LAPACKE_clarft( int matrix_order, char direct, char storev, lapack_int n, lapack_int k, const lapack_complex_float* v, lapack_int ldv, const lapack_complex_float* tau, lapack_complex_float* t, lapack_int ldt ); lapack_int LAPACKE_zlarft( int matrix_order, char direct, char storev, lapack_int n, lapack_int k, const lapack_complex_double* v, lapack_int ldv, const lapack_complex_double* tau, lapack_complex_double* t, lapack_int ldt ); lapack_int LAPACKE_slarfx( int matrix_order, char side, lapack_int m, lapack_int n, const float* v, float tau, float* c, lapack_int ldc, float* work ); lapack_int LAPACKE_dlarfx( int matrix_order, char side, lapack_int m, lapack_int n, const double* v, double tau, double* c, lapack_int ldc, double* work ); lapack_int LAPACKE_clarfx( int matrix_order, char side, lapack_int m, lapack_int n, const lapack_complex_float* v, lapack_complex_float tau, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work ); lapack_int LAPACKE_zlarfx( int matrix_order, char side, lapack_int m, lapack_int n, const lapack_complex_double* v, lapack_complex_double tau, lapack_complex_double* c, lapack_int ldc, lapack_complex_double* work ); lapack_int LAPACKE_slarnv( lapack_int idist, lapack_int* iseed, lapack_int n, float* x ); lapack_int LAPACKE_dlarnv( lapack_int idist, lapack_int* iseed, lapack_int n, double* x ); lapack_int LAPACKE_clarnv( lapack_int idist, lapack_int* iseed, lapack_int n, lapack_complex_float* x ); lapack_int LAPACKE_zlarnv( lapack_int idist, lapack_int* iseed, lapack_int n, lapack_complex_double* x ); lapack_int LAPACKE_slaset( int matrix_order, char uplo, lapack_int m, lapack_int n, float alpha, float beta, float* a, lapack_int lda ); lapack_int LAPACKE_dlaset( int matrix_order, char uplo, lapack_int m, lapack_int n, double alpha, double beta, double* a, lapack_int lda ); lapack_int LAPACKE_claset( int matrix_order, char uplo, lapack_int m, lapack_int n, lapack_complex_float alpha, lapack_complex_float beta, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_zlaset( int matrix_order, char uplo, lapack_int m, lapack_int n, lapack_complex_double alpha, lapack_complex_double beta, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_slasrt( char id, lapack_int n, float* d ); lapack_int LAPACKE_dlasrt( char id, lapack_int n, double* d ); lapack_int LAPACKE_slaswp( int matrix_order, lapack_int n, float* a, lapack_int lda, lapack_int k1, lapack_int k2, const lapack_int* ipiv, lapack_int incx ); lapack_int LAPACKE_dlaswp( int matrix_order, lapack_int n, double* a, lapack_int 
lda, lapack_int k1, lapack_int k2, const lapack_int* ipiv, lapack_int incx ); lapack_int LAPACKE_claswp( int matrix_order, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int k1, lapack_int k2, const lapack_int* ipiv, lapack_int incx ); lapack_int LAPACKE_zlaswp( int matrix_order, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int k1, lapack_int k2, const lapack_int* ipiv, lapack_int incx ); lapack_int LAPACKE_slatms( int matrix_order, lapack_int m, lapack_int n, char dist, lapack_int* iseed, char sym, float* d, lapack_int mode, float cond, float dmax, lapack_int kl, lapack_int ku, char pack, float* a, lapack_int lda ); lapack_int LAPACKE_dlatms( int matrix_order, lapack_int m, lapack_int n, char dist, lapack_int* iseed, char sym, double* d, lapack_int mode, double cond, double dmax, lapack_int kl, lapack_int ku, char pack, double* a, lapack_int lda ); lapack_int LAPACKE_clatms( int matrix_order, lapack_int m, lapack_int n, char dist, lapack_int* iseed, char sym, float* d, lapack_int mode, float cond, float dmax, lapack_int kl, lapack_int ku, char pack, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_zlatms( int matrix_order, lapack_int m, lapack_int n, char dist, lapack_int* iseed, char sym, double* d, lapack_int mode, double cond, double dmax, lapack_int kl, lapack_int ku, char pack, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_slauum( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda ); lapack_int LAPACKE_dlauum( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda ); lapack_int LAPACKE_clauum( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_zlauum( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_sopgtr( int matrix_order, char uplo, lapack_int n, const float* ap, const float* tau, float* q, lapack_int ldq ); lapack_int LAPACKE_dopgtr( int matrix_order, char uplo, lapack_int n, const double* ap, const double* tau, double* q, lapack_int ldq ); lapack_int LAPACKE_sopmtr( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const float* ap, const float* tau, float* c, lapack_int ldc ); lapack_int LAPACKE_dopmtr( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const double* ap, const double* tau, double* c, lapack_int ldc ); lapack_int LAPACKE_sorgbr( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int k, float* a, lapack_int lda, const float* tau ); lapack_int LAPACKE_dorgbr( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int k, double* a, lapack_int lda, const double* tau ); lapack_int LAPACKE_sorghr( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, float* a, lapack_int lda, const float* tau ); lapack_int LAPACKE_dorghr( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, double* a, lapack_int lda, const double* tau ); lapack_int LAPACKE_sorglq( int matrix_order, lapack_int m, lapack_int n, lapack_int k, float* a, lapack_int lda, const float* tau ); lapack_int LAPACKE_dorglq( int matrix_order, lapack_int m, lapack_int n, lapack_int k, double* a, lapack_int lda, const double* tau ); lapack_int LAPACKE_sorgql( int matrix_order, lapack_int m, lapack_int n, lapack_int k, float* a, lapack_int lda, const float* tau ); lapack_int LAPACKE_dorgql( int matrix_order, lapack_int m, lapack_int n, lapack_int k, double* a, lapack_int lda, const double* tau ); lapack_int 
LAPACKE_sorgqr( int matrix_order, lapack_int m, lapack_int n, lapack_int k, float* a, lapack_int lda, const float* tau ); lapack_int LAPACKE_dorgqr( int matrix_order, lapack_int m, lapack_int n, lapack_int k, double* a, lapack_int lda, const double* tau ); lapack_int LAPACKE_sorgrq( int matrix_order, lapack_int m, lapack_int n, lapack_int k, float* a, lapack_int lda, const float* tau ); lapack_int LAPACKE_dorgrq( int matrix_order, lapack_int m, lapack_int n, lapack_int k, double* a, lapack_int lda, const double* tau ); lapack_int LAPACKE_sorgtr( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, const float* tau ); lapack_int LAPACKE_dorgtr( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, const double* tau ); lapack_int LAPACKE_sormbr( int matrix_order, char vect, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc ); lapack_int LAPACKE_dormbr( int matrix_order, char vect, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc ); lapack_int LAPACKE_sormhr( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int ilo, lapack_int ihi, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc ); lapack_int LAPACKE_dormhr( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int ilo, lapack_int ihi, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc ); lapack_int LAPACKE_sormlq( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc ); lapack_int LAPACKE_dormlq( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc ); lapack_int LAPACKE_sormql( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc ); lapack_int LAPACKE_dormql( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc ); lapack_int LAPACKE_sormqr( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc ); lapack_int LAPACKE_dormqr( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc ); lapack_int LAPACKE_sormrq( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc ); lapack_int LAPACKE_dormrq( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc ); lapack_int LAPACKE_sormrz( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc ); lapack_int LAPACKE_dormrz( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc ); lapack_int LAPACKE_sormtr( int matrix_order, char side, 
char uplo, char trans, lapack_int m, lapack_int n, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc ); lapack_int LAPACKE_dormtr( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc ); lapack_int LAPACKE_spbcon( int matrix_order, char uplo, lapack_int n, lapack_int kd, const float* ab, lapack_int ldab, float anorm, float* rcond ); lapack_int LAPACKE_dpbcon( int matrix_order, char uplo, lapack_int n, lapack_int kd, const double* ab, lapack_int ldab, double anorm, double* rcond ); lapack_int LAPACKE_cpbcon( int matrix_order, char uplo, lapack_int n, lapack_int kd, const lapack_complex_float* ab, lapack_int ldab, float anorm, float* rcond ); lapack_int LAPACKE_zpbcon( int matrix_order, char uplo, lapack_int n, lapack_int kd, const lapack_complex_double* ab, lapack_int ldab, double anorm, double* rcond ); lapack_int LAPACKE_spbequ( int matrix_order, char uplo, lapack_int n, lapack_int kd, const float* ab, lapack_int ldab, float* s, float* scond, float* amax ); lapack_int LAPACKE_dpbequ( int matrix_order, char uplo, lapack_int n, lapack_int kd, const double* ab, lapack_int ldab, double* s, double* scond, double* amax ); lapack_int LAPACKE_cpbequ( int matrix_order, char uplo, lapack_int n, lapack_int kd, const lapack_complex_float* ab, lapack_int ldab, float* s, float* scond, float* amax ); lapack_int LAPACKE_zpbequ( int matrix_order, char uplo, lapack_int n, lapack_int kd, const lapack_complex_double* ab, lapack_int ldab, double* s, double* scond, double* amax ); lapack_int LAPACKE_spbrfs( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const float* ab, lapack_int ldab, const float* afb, lapack_int ldafb, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_dpbrfs( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const double* ab, lapack_int ldab, const double* afb, lapack_int ldafb, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_cpbrfs( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, const lapack_complex_float* afb, lapack_int ldafb, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_zpbrfs( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, const lapack_complex_double* afb, lapack_int ldafb, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_spbstf( int matrix_order, char uplo, lapack_int n, lapack_int kb, float* bb, lapack_int ldbb ); lapack_int LAPACKE_dpbstf( int matrix_order, char uplo, lapack_int n, lapack_int kb, double* bb, lapack_int ldbb ); lapack_int LAPACKE_cpbstf( int matrix_order, char uplo, lapack_int n, lapack_int kb, lapack_complex_float* bb, lapack_int ldbb ); lapack_int LAPACKE_zpbstf( int matrix_order, char uplo, lapack_int n, lapack_int kb, lapack_complex_double* bb, lapack_int ldbb ); lapack_int LAPACKE_spbsv( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, float* ab, lapack_int ldab, float* b, lapack_int ldb ); lapack_int LAPACKE_dpbsv( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, double* ab, lapack_int 
ldab, double* b, lapack_int ldb ); lapack_int LAPACKE_cpbsv( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zpbsv( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_spbsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, float* ab, lapack_int ldab, float* afb, lapack_int ldafb, char* equed, float* s, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_dpbsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, double* ab, lapack_int ldab, double* afb, lapack_int ldafb, char* equed, double* s, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_cpbsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* afb, lapack_int ldafb, char* equed, float* s, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_zpbsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* afb, lapack_int ldafb, char* equed, double* s, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_spbtrf( int matrix_order, char uplo, lapack_int n, lapack_int kd, float* ab, lapack_int ldab ); lapack_int LAPACKE_dpbtrf( int matrix_order, char uplo, lapack_int n, lapack_int kd, double* ab, lapack_int ldab ); lapack_int LAPACKE_cpbtrf( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_complex_float* ab, lapack_int ldab ); lapack_int LAPACKE_zpbtrf( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_complex_double* ab, lapack_int ldab ); lapack_int LAPACKE_spbtrs( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const float* ab, lapack_int ldab, float* b, lapack_int ldb ); lapack_int LAPACKE_dpbtrs( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const double* ab, lapack_int ldab, double* b, lapack_int ldb ); lapack_int LAPACKE_cpbtrs( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zpbtrs( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_spftrf( int matrix_order, char transr, char uplo, lapack_int n, float* a ); lapack_int LAPACKE_dpftrf( int matrix_order, char transr, char uplo, lapack_int n, double* a ); lapack_int LAPACKE_cpftrf( int matrix_order, char transr, char uplo, lapack_int n, lapack_complex_float* a ); lapack_int LAPACKE_zpftrf( int matrix_order, char transr, char uplo, lapack_int n, lapack_complex_double* a ); lapack_int LAPACKE_spftri( int matrix_order, char transr, char uplo, lapack_int n, float* a ); lapack_int LAPACKE_dpftri( int matrix_order, char transr, char uplo, lapack_int n, double* a ); lapack_int LAPACKE_cpftri( int 
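/* --- Editorial usage sketch; not part of the original LAPACKE header. ---
   Solving a symmetric positive definite band system with LAPACKE_dpbsv
   (declared just above).  Column-major layout is used so the classic LAPACK
   band storage applies: with uplo = 'L', column j of ab holds
   { A(j,j), A(j+1,j) } and ldab = kd + 1.

       lapack_int n = 4, kd = 1, nrhs = 1;
       // Tridiagonal SPD matrix with 2 on the diagonal and -1 off the diagonal.
       double ab[2*4] = { 2, -1,   2, -1,   2, -1,   2, 0 };
       double b[4]    = { 1, 0, 0, 1 };
       lapack_int info = LAPACKE_dpbsv(LAPACK_COL_MAJOR, 'L', n, kd, nrhs,
                                       ab, kd + 1, b, n);
       // On success, b holds the solution { 1, 1, 1, 1 } and ab the Cholesky factor.
*/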
matrix_order, char transr, char uplo, lapack_int n, lapack_complex_float* a ); lapack_int LAPACKE_zpftri( int matrix_order, char transr, char uplo, lapack_int n, lapack_complex_double* a ); lapack_int LAPACKE_spftrs( int matrix_order, char transr, char uplo, lapack_int n, lapack_int nrhs, const float* a, float* b, lapack_int ldb ); lapack_int LAPACKE_dpftrs( int matrix_order, char transr, char uplo, lapack_int n, lapack_int nrhs, const double* a, double* b, lapack_int ldb ); lapack_int LAPACKE_cpftrs( int matrix_order, char transr, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zpftrs( int matrix_order, char transr, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_spocon( int matrix_order, char uplo, lapack_int n, const float* a, lapack_int lda, float anorm, float* rcond ); lapack_int LAPACKE_dpocon( int matrix_order, char uplo, lapack_int n, const double* a, lapack_int lda, double anorm, double* rcond ); lapack_int LAPACKE_cpocon( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, float anorm, float* rcond ); lapack_int LAPACKE_zpocon( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, double anorm, double* rcond ); lapack_int LAPACKE_spoequ( int matrix_order, lapack_int n, const float* a, lapack_int lda, float* s, float* scond, float* amax ); lapack_int LAPACKE_dpoequ( int matrix_order, lapack_int n, const double* a, lapack_int lda, double* s, double* scond, double* amax ); lapack_int LAPACKE_cpoequ( int matrix_order, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* s, float* scond, float* amax ); lapack_int LAPACKE_zpoequ( int matrix_order, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* s, double* scond, double* amax ); lapack_int LAPACKE_spoequb( int matrix_order, lapack_int n, const float* a, lapack_int lda, float* s, float* scond, float* amax ); lapack_int LAPACKE_dpoequb( int matrix_order, lapack_int n, const double* a, lapack_int lda, double* s, double* scond, double* amax ); lapack_int LAPACKE_cpoequb( int matrix_order, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* s, float* scond, float* amax ); lapack_int LAPACKE_zpoequb( int matrix_order, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* s, double* scond, double* amax ); lapack_int LAPACKE_sporfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* af, lapack_int ldaf, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_dporfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const double* af, lapack_int ldaf, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_cporfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_zporfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, 
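/* --- Editorial usage sketch; not part of the original LAPACKE header. ---
   Estimating the reciprocal condition number of a symmetric positive definite
   matrix with LAPACKE_dpocon (declared just above).  The factor is produced by
   LAPACKE_dpotrf, declared a few lines below; anorm must be the 1-norm of the
   original matrix, computed here by hand (fabs needs <math.h>).

       lapack_int n = 2;
       double a[2*2] = { 4, 1,
                         1, 3 };              // row-major SPD matrix
       double anorm = 0.0, rcond;
       for (lapack_int j = 0; j < n; ++j) {   // 1-norm: max column absolute sum
           double s = 0.0;
           for (lapack_int i = 0; i < n; ++i) s += fabs(a[i*n + j]);
           if (s > anorm) anorm = s;
       }
       lapack_int info = LAPACKE_dpotrf(LAPACK_ROW_MAJOR, 'U', n, a, n);
       if (info == 0)
           info = LAPACKE_dpocon(LAPACK_ROW_MAJOR, 'U', n, a, n, anorm, &rcond);
       // rcond now estimates 1 / ( ||A||_1 * ||A^-1||_1 ).
*/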
lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_sporfsx( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* af, lapack_int ldaf, const float* s, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_dporfsx( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const double* af, lapack_int ldaf, const double* s, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_cporfsx( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const float* s, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_zporfsx( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const double* s, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_sposv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* b, lapack_int ldb ); lapack_int LAPACKE_dposv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* b, lapack_int ldb ); lapack_int LAPACKE_cposv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zposv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_dsposv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* b, lapack_int ldb, double* x, lapack_int ldx, lapack_int* iter ); lapack_int LAPACKE_zcposv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, lapack_int* iter ); lapack_int LAPACKE_sposvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* af, lapack_int ldaf, char* equed, float* s, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_dposvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* af, lapack_int ldaf, char* equed, double* s, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_cposvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, char* equed, float* s, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* 
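/* --- Editorial usage sketch; not part of the original LAPACKE header. ---
   One-call solution of a symmetric positive definite system A*x = b with
   LAPACKE_dposv (declared just above), using the LAPACK_ROW_MAJOR layout
   macro defined earlier in this file.

       lapack_int n = 2, nrhs = 1;
       double a[2*2] = { 4, 1,
                         1, 3 };   // row-major; only the 'U' triangle is referenced
       double b[2]   = { 5, 4 };   // right-hand side, ldb = nrhs
       lapack_int info = LAPACKE_dposv(LAPACK_ROW_MAJOR, 'U', n, nrhs, a, n, b, nrhs);
       // info == 0: a holds the Cholesky factor and b the solution { 1, 1 };
       // info > 0 means the leading minor of order info is not positive definite.
*/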
rcond, float* ferr, float* berr ); lapack_int LAPACKE_zposvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, char* equed, double* s, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_sposvxx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* af, lapack_int ldaf, char* equed, float* s, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_dposvxx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* af, lapack_int ldaf, char* equed, double* s, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_cposvxx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, char* equed, float* s, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_zposvxx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, char* equed, double* s, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_spotrf( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda ); lapack_int LAPACKE_dpotrf( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda ); lapack_int LAPACKE_cpotrf( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_zpotrf( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_spotri( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda ); lapack_int LAPACKE_dpotri( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda ); lapack_int LAPACKE_cpotri( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_zpotri( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_spotrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, float* b, lapack_int ldb ); lapack_int LAPACKE_dpotrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, double* b, lapack_int ldb ); lapack_int LAPACKE_cpotrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zpotrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sppcon( int matrix_order, 
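/* --- Editorial usage sketch; not part of the original LAPACKE header. ---
   Factor once, solve repeatedly: LAPACKE_dpotrf computes the Cholesky factor
   and LAPACKE_dpotrs (both declared just above) reuses it for any number of
   right-hand sides.

       lapack_int n = 2, nrhs = 2;
       double a[2*2] = { 4, 1,
                         1, 3 };   // row-major SPD matrix
       double b[2*2] = { 5, 6,     // two right-hand sides stored as the
                         4, 7 };   // columns of a row-major n x nrhs array
       lapack_int info = LAPACKE_dpotrf(LAPACK_ROW_MAJOR, 'U', n, a, n);
       if (info == 0)
           info = LAPACKE_dpotrs(LAPACK_ROW_MAJOR, 'U', n, nrhs, a, n, b, nrhs);
       // The columns of b now hold the solutions { 1, 1 } and { 1, 2 };
       // the factor left in a can be passed to dpotrs again later.
*/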
char uplo, lapack_int n, const float* ap, float anorm, float* rcond ); lapack_int LAPACKE_dppcon( int matrix_order, char uplo, lapack_int n, const double* ap, double anorm, double* rcond ); lapack_int LAPACKE_cppcon( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* ap, float anorm, float* rcond ); lapack_int LAPACKE_zppcon( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* ap, double anorm, double* rcond ); lapack_int LAPACKE_sppequ( int matrix_order, char uplo, lapack_int n, const float* ap, float* s, float* scond, float* amax ); lapack_int LAPACKE_dppequ( int matrix_order, char uplo, lapack_int n, const double* ap, double* s, double* scond, double* amax ); lapack_int LAPACKE_cppequ( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* ap, float* s, float* scond, float* amax ); lapack_int LAPACKE_zppequ( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* ap, double* s, double* scond, double* amax ); lapack_int LAPACKE_spprfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* ap, const float* afp, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_dpprfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* ap, const double* afp, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_cpprfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, const lapack_complex_float* afp, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_zpprfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, const lapack_complex_double* afp, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_sppsv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, float* ap, float* b, lapack_int ldb ); lapack_int LAPACKE_dppsv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, double* ap, double* b, lapack_int ldb ); lapack_int LAPACKE_cppsv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* ap, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zppsv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* ap, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sppsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, float* ap, float* afp, char* equed, float* s, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_dppsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, double* ap, double* afp, char* equed, double* s, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_cppsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* ap, lapack_complex_float* afp, char* equed, float* s, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_zppsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* ap, lapack_complex_double* afp, char* equed, double* s, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, 
lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_spptrf( int matrix_order, char uplo, lapack_int n, float* ap ); lapack_int LAPACKE_dpptrf( int matrix_order, char uplo, lapack_int n, double* ap ); lapack_int LAPACKE_cpptrf( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap ); lapack_int LAPACKE_zpptrf( int matrix_order, char uplo, lapack_int n, lapack_complex_double* ap ); lapack_int LAPACKE_spptri( int matrix_order, char uplo, lapack_int n, float* ap ); lapack_int LAPACKE_dpptri( int matrix_order, char uplo, lapack_int n, double* ap ); lapack_int LAPACKE_cpptri( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap ); lapack_int LAPACKE_zpptri( int matrix_order, char uplo, lapack_int n, lapack_complex_double* ap ); lapack_int LAPACKE_spptrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* ap, float* b, lapack_int ldb ); lapack_int LAPACKE_dpptrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* ap, double* b, lapack_int ldb ); lapack_int LAPACKE_cpptrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zpptrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_spstrf( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, lapack_int* piv, lapack_int* rank, float tol ); lapack_int LAPACKE_dpstrf( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, lapack_int* piv, lapack_int* rank, double tol ); lapack_int LAPACKE_cpstrf( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* piv, lapack_int* rank, float tol ); lapack_int LAPACKE_zpstrf( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* piv, lapack_int* rank, double tol ); lapack_int LAPACKE_sptcon( lapack_int n, const float* d, const float* e, float anorm, float* rcond ); lapack_int LAPACKE_dptcon( lapack_int n, const double* d, const double* e, double anorm, double* rcond ); lapack_int LAPACKE_cptcon( lapack_int n, const float* d, const lapack_complex_float* e, float anorm, float* rcond ); lapack_int LAPACKE_zptcon( lapack_int n, const double* d, const lapack_complex_double* e, double anorm, double* rcond ); lapack_int LAPACKE_spteqr( int matrix_order, char compz, lapack_int n, float* d, float* e, float* z, lapack_int ldz ); lapack_int LAPACKE_dpteqr( int matrix_order, char compz, lapack_int n, double* d, double* e, double* z, lapack_int ldz ); lapack_int LAPACKE_cpteqr( int matrix_order, char compz, lapack_int n, float* d, float* e, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zpteqr( int matrix_order, char compz, lapack_int n, double* d, double* e, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_sptrfs( int matrix_order, lapack_int n, lapack_int nrhs, const float* d, const float* e, const float* df, const float* ef, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_dptrfs( int matrix_order, lapack_int n, lapack_int nrhs, const double* d, const double* e, const double* df, const double* ef, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_cptrfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* d, const lapack_complex_float* e, const 
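/* --- Editorial usage sketch; not part of the original LAPACKE header. ---
   Cholesky factorization and solve in packed storage with LAPACKE_dpptrf and
   LAPACKE_dpptrs (declared just above).  Column-major layout is used so the
   packed upper triangle is stored column by column: { A11, A12, A22 }.

       lapack_int n = 2, nrhs = 1;
       double ap[3] = { 4, 1, 3 };   // packed upper triangle of [[4,1],[1,3]]
       double b[2]  = { 5, 4 };
       lapack_int info = LAPACKE_dpptrf(LAPACK_COL_MAJOR, 'U', n, ap);
       if (info == 0)
           info = LAPACKE_dpptrs(LAPACK_COL_MAJOR, 'U', n, nrhs, ap, b, n);
       // b now holds the solution { 1, 1 }; ap holds the packed Cholesky factor.
*/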
float* df, const lapack_complex_float* ef, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_zptrfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* d, const lapack_complex_double* e, const double* df, const lapack_complex_double* ef, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_sptsv( int matrix_order, lapack_int n, lapack_int nrhs, float* d, float* e, float* b, lapack_int ldb ); lapack_int LAPACKE_dptsv( int matrix_order, lapack_int n, lapack_int nrhs, double* d, double* e, double* b, lapack_int ldb ); lapack_int LAPACKE_cptsv( int matrix_order, lapack_int n, lapack_int nrhs, float* d, lapack_complex_float* e, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zptsv( int matrix_order, lapack_int n, lapack_int nrhs, double* d, lapack_complex_double* e, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sptsvx( int matrix_order, char fact, lapack_int n, lapack_int nrhs, const float* d, const float* e, float* df, float* ef, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_dptsvx( int matrix_order, char fact, lapack_int n, lapack_int nrhs, const double* d, const double* e, double* df, double* ef, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_cptsvx( int matrix_order, char fact, lapack_int n, lapack_int nrhs, const float* d, const lapack_complex_float* e, float* df, lapack_complex_float* ef, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_zptsvx( int matrix_order, char fact, lapack_int n, lapack_int nrhs, const double* d, const lapack_complex_double* e, double* df, lapack_complex_double* ef, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_spttrf( lapack_int n, float* d, float* e ); lapack_int LAPACKE_dpttrf( lapack_int n, double* d, double* e ); lapack_int LAPACKE_cpttrf( lapack_int n, float* d, lapack_complex_float* e ); lapack_int LAPACKE_zpttrf( lapack_int n, double* d, lapack_complex_double* e ); lapack_int LAPACKE_spttrs( int matrix_order, lapack_int n, lapack_int nrhs, const float* d, const float* e, float* b, lapack_int ldb ); lapack_int LAPACKE_dpttrs( int matrix_order, lapack_int n, lapack_int nrhs, const double* d, const double* e, double* b, lapack_int ldb ); lapack_int LAPACKE_cpttrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* d, const lapack_complex_float* e, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zpttrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* d, const lapack_complex_double* e, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_ssbev( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, float* ab, lapack_int ldab, float* w, float* z, lapack_int ldz ); lapack_int LAPACKE_dsbev( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, double* ab, lapack_int ldab, double* w, double* z, lapack_int ldz ); lapack_int LAPACKE_ssbevd( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, float* ab, lapack_int ldab, float* w, float* z, lapack_int ldz ); lapack_int 
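/* --- Editorial usage sketch; not part of the original LAPACKE header. ---
   Solving a symmetric positive definite tridiagonal system with LAPACKE_dptsv
   (declared just above); only the diagonal d and the off-diagonal e need to
   be stored.

       lapack_int n = 4, nrhs = 1;
       double d[4] = { 2, 2, 2, 2 };      // diagonal
       double e[3] = { -1, -1, -1 };      // sub/super-diagonal
       double b[4] = { 1, 0, 0, 1 };
       lapack_int info = LAPACKE_dptsv(LAPACK_COL_MAJOR, n, nrhs, d, e, b, n);
       // b now holds the solution { 1, 1, 1, 1 };
       // d and e are overwritten with the L*D*L^T factorization.
*/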
LAPACKE_dsbevd( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, double* ab, lapack_int ldab, double* w, double* z, lapack_int ldz ); lapack_int LAPACKE_ssbevx( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int kd, float* ab, lapack_int ldab, float* q, lapack_int ldq, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_dsbevx( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int kd, double* ab, lapack_int ldab, double* q, lapack_int ldq, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_ssbgst( int matrix_order, char vect, char uplo, lapack_int n, lapack_int ka, lapack_int kb, float* ab, lapack_int ldab, const float* bb, lapack_int ldbb, float* x, lapack_int ldx ); lapack_int LAPACKE_dsbgst( int matrix_order, char vect, char uplo, lapack_int n, lapack_int ka, lapack_int kb, double* ab, lapack_int ldab, const double* bb, lapack_int ldbb, double* x, lapack_int ldx ); lapack_int LAPACKE_ssbgv( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, float* ab, lapack_int ldab, float* bb, lapack_int ldbb, float* w, float* z, lapack_int ldz ); lapack_int LAPACKE_dsbgv( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, double* ab, lapack_int ldab, double* bb, lapack_int ldbb, double* w, double* z, lapack_int ldz ); lapack_int LAPACKE_ssbgvd( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, float* ab, lapack_int ldab, float* bb, lapack_int ldbb, float* w, float* z, lapack_int ldz ); lapack_int LAPACKE_dsbgvd( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, double* ab, lapack_int ldab, double* bb, lapack_int ldbb, double* w, double* z, lapack_int ldz ); lapack_int LAPACKE_ssbgvx( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int ka, lapack_int kb, float* ab, lapack_int ldab, float* bb, lapack_int ldbb, float* q, lapack_int ldq, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_dsbgvx( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int ka, lapack_int kb, double* ab, lapack_int ldab, double* bb, lapack_int ldbb, double* q, lapack_int ldq, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_ssbtrd( int matrix_order, char vect, char uplo, lapack_int n, lapack_int kd, float* ab, lapack_int ldab, float* d, float* e, float* q, lapack_int ldq ); lapack_int LAPACKE_dsbtrd( int matrix_order, char vect, char uplo, lapack_int n, lapack_int kd, double* ab, lapack_int ldab, double* d, double* e, double* q, lapack_int ldq ); lapack_int LAPACKE_ssfrk( int matrix_order, char transr, char uplo, char trans, lapack_int n, lapack_int k, float alpha, const float* a, lapack_int lda, float beta, float* c ); lapack_int LAPACKE_dsfrk( int matrix_order, char transr, char uplo, char trans, lapack_int n, lapack_int k, double alpha, const double* a, lapack_int lda, double beta, double* c ); lapack_int LAPACKE_sspcon( int matrix_order, char uplo, lapack_int n, const float* ap, const lapack_int* ipiv, float anorm, float* rcond ); lapack_int LAPACKE_dspcon( int 
matrix_order, char uplo, lapack_int n, const double* ap, const lapack_int* ipiv, double anorm, double* rcond ); lapack_int LAPACKE_cspcon( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* ap, const lapack_int* ipiv, float anorm, float* rcond ); lapack_int LAPACKE_zspcon( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* ap, const lapack_int* ipiv, double anorm, double* rcond ); lapack_int LAPACKE_sspev( int matrix_order, char jobz, char uplo, lapack_int n, float* ap, float* w, float* z, lapack_int ldz ); lapack_int LAPACKE_dspev( int matrix_order, char jobz, char uplo, lapack_int n, double* ap, double* w, double* z, lapack_int ldz ); lapack_int LAPACKE_sspevd( int matrix_order, char jobz, char uplo, lapack_int n, float* ap, float* w, float* z, lapack_int ldz ); lapack_int LAPACKE_dspevd( int matrix_order, char jobz, char uplo, lapack_int n, double* ap, double* w, double* z, lapack_int ldz ); lapack_int LAPACKE_sspevx( int matrix_order, char jobz, char range, char uplo, lapack_int n, float* ap, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_dspevx( int matrix_order, char jobz, char range, char uplo, lapack_int n, double* ap, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_sspgst( int matrix_order, lapack_int itype, char uplo, lapack_int n, float* ap, const float* bp ); lapack_int LAPACKE_dspgst( int matrix_order, lapack_int itype, char uplo, lapack_int n, double* ap, const double* bp ); lapack_int LAPACKE_sspgv( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, float* ap, float* bp, float* w, float* z, lapack_int ldz ); lapack_int LAPACKE_dspgv( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, double* ap, double* bp, double* w, double* z, lapack_int ldz ); lapack_int LAPACKE_sspgvd( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, float* ap, float* bp, float* w, float* z, lapack_int ldz ); lapack_int LAPACKE_dspgvd( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, double* ap, double* bp, double* w, double* z, lapack_int ldz ); lapack_int LAPACKE_sspgvx( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, float* ap, float* bp, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_dspgvx( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, double* ap, double* bp, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_ssprfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* ap, const float* afp, const lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_dsprfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* ap, const double* afp, const lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_csprfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, const lapack_complex_float* afp, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, 
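/* --- Editorial usage sketch; not part of the original LAPACKE header. ---
   Symmetric eigenproblem in packed storage with LAPACKE_dspev (declared just
   above).  Column-major layout is used so the packed upper triangle of the
   2x2 matrix [[2,1],[1,2]] is { A11, A12, A22 }.

       lapack_int n = 2;
       double ap[3] = { 2, 1, 2 };   // packed upper triangle (destroyed on exit)
       double w[2], z[2*2];
       lapack_int info = LAPACKE_dspev(LAPACK_COL_MAJOR, 'V', 'U', n, ap, w, z, n);
       // w = { 1, 3 } in ascending order; the columns of z are the eigenvectors.
*/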
lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_zsprfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, const lapack_complex_double* afp, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_sspsv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, float* ap, lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dspsv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, double* ap, lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_cspsv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* ap, lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zspsv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* ap, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sspsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const float* ap, float* afp, lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_dspsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const double* ap, double* afp, lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_cspsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, lapack_complex_float* afp, lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_zspsvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, lapack_complex_double* afp, lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_ssptrd( int matrix_order, char uplo, lapack_int n, float* ap, float* d, float* e, float* tau ); lapack_int LAPACKE_dsptrd( int matrix_order, char uplo, lapack_int n, double* ap, double* d, double* e, double* tau ); lapack_int LAPACKE_ssptrf( int matrix_order, char uplo, lapack_int n, float* ap, lapack_int* ipiv ); lapack_int LAPACKE_dsptrf( int matrix_order, char uplo, lapack_int n, double* ap, lapack_int* ipiv ); lapack_int LAPACKE_csptrf( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap, lapack_int* ipiv ); lapack_int LAPACKE_zsptrf( int matrix_order, char uplo, lapack_int n, lapack_complex_double* ap, lapack_int* ipiv ); lapack_int LAPACKE_ssptri( int matrix_order, char uplo, lapack_int n, float* ap, const lapack_int* ipiv ); lapack_int LAPACKE_dsptri( int matrix_order, char uplo, lapack_int n, double* ap, const lapack_int* ipiv ); lapack_int LAPACKE_csptri( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap, const lapack_int* ipiv ); lapack_int LAPACKE_zsptri( int matrix_order, char uplo, lapack_int n, lapack_complex_double* ap, const lapack_int* ipiv ); lapack_int LAPACKE_ssptrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* ap, const lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dsptrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* ap, const lapack_int* ipiv, double* b, lapack_int 
ldb ); lapack_int LAPACKE_csptrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zsptrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sstebz( char range, char order, lapack_int n, float vl, float vu, lapack_int il, lapack_int iu, float abstol, const float* d, const float* e, lapack_int* m, lapack_int* nsplit, float* w, lapack_int* iblock, lapack_int* isplit ); lapack_int LAPACKE_dstebz( char range, char order, lapack_int n, double vl, double vu, lapack_int il, lapack_int iu, double abstol, const double* d, const double* e, lapack_int* m, lapack_int* nsplit, double* w, lapack_int* iblock, lapack_int* isplit ); lapack_int LAPACKE_sstedc( int matrix_order, char compz, lapack_int n, float* d, float* e, float* z, lapack_int ldz ); lapack_int LAPACKE_dstedc( int matrix_order, char compz, lapack_int n, double* d, double* e, double* z, lapack_int ldz ); lapack_int LAPACKE_cstedc( int matrix_order, char compz, lapack_int n, float* d, float* e, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zstedc( int matrix_order, char compz, lapack_int n, double* d, double* e, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_sstegr( int matrix_order, char jobz, char range, lapack_int n, float* d, float* e, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int* isuppz ); lapack_int LAPACKE_dstegr( int matrix_order, char jobz, char range, lapack_int n, double* d, double* e, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int* isuppz ); lapack_int LAPACKE_cstegr( int matrix_order, char jobz, char range, lapack_int n, float* d, float* e, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_int* isuppz ); lapack_int LAPACKE_zstegr( int matrix_order, char jobz, char range, lapack_int n, double* d, double* e, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_int* isuppz ); lapack_int LAPACKE_sstein( int matrix_order, lapack_int n, const float* d, const float* e, lapack_int m, const float* w, const lapack_int* iblock, const lapack_int* isplit, float* z, lapack_int ldz, lapack_int* ifailv ); lapack_int LAPACKE_dstein( int matrix_order, lapack_int n, const double* d, const double* e, lapack_int m, const double* w, const lapack_int* iblock, const lapack_int* isplit, double* z, lapack_int ldz, lapack_int* ifailv ); lapack_int LAPACKE_cstein( int matrix_order, lapack_int n, const float* d, const float* e, lapack_int m, const float* w, const lapack_int* iblock, const lapack_int* isplit, lapack_complex_float* z, lapack_int ldz, lapack_int* ifailv ); lapack_int LAPACKE_zstein( int matrix_order, lapack_int n, const double* d, const double* e, lapack_int m, const double* w, const lapack_int* iblock, const lapack_int* isplit, lapack_complex_double* z, lapack_int ldz, lapack_int* ifailv ); lapack_int LAPACKE_sstemr( int matrix_order, char jobz, char range, lapack_int n, float* d, float* e, float vl, float vu, lapack_int il, lapack_int iu, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int nzc, lapack_int* 
isuppz, lapack_logical* tryrac ); lapack_int LAPACKE_dstemr( int matrix_order, char jobz, char range, lapack_int n, double* d, double* e, double vl, double vu, lapack_int il, lapack_int iu, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int nzc, lapack_int* isuppz, lapack_logical* tryrac ); lapack_int LAPACKE_cstemr( int matrix_order, char jobz, char range, lapack_int n, float* d, float* e, float vl, float vu, lapack_int il, lapack_int iu, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_int nzc, lapack_int* isuppz, lapack_logical* tryrac ); lapack_int LAPACKE_zstemr( int matrix_order, char jobz, char range, lapack_int n, double* d, double* e, double vl, double vu, lapack_int il, lapack_int iu, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_int nzc, lapack_int* isuppz, lapack_logical* tryrac ); lapack_int LAPACKE_ssteqr( int matrix_order, char compz, lapack_int n, float* d, float* e, float* z, lapack_int ldz ); lapack_int LAPACKE_dsteqr( int matrix_order, char compz, lapack_int n, double* d, double* e, double* z, lapack_int ldz ); lapack_int LAPACKE_csteqr( int matrix_order, char compz, lapack_int n, float* d, float* e, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zsteqr( int matrix_order, char compz, lapack_int n, double* d, double* e, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_ssterf( lapack_int n, float* d, float* e ); lapack_int LAPACKE_dsterf( lapack_int n, double* d, double* e ); lapack_int LAPACKE_sstev( int matrix_order, char jobz, lapack_int n, float* d, float* e, float* z, lapack_int ldz ); lapack_int LAPACKE_dstev( int matrix_order, char jobz, lapack_int n, double* d, double* e, double* z, lapack_int ldz ); lapack_int LAPACKE_sstevd( int matrix_order, char jobz, lapack_int n, float* d, float* e, float* z, lapack_int ldz ); lapack_int LAPACKE_dstevd( int matrix_order, char jobz, lapack_int n, double* d, double* e, double* z, lapack_int ldz ); lapack_int LAPACKE_sstevr( int matrix_order, char jobz, char range, lapack_int n, float* d, float* e, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int* isuppz ); lapack_int LAPACKE_dstevr( int matrix_order, char jobz, char range, lapack_int n, double* d, double* e, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int* isuppz ); lapack_int LAPACKE_sstevx( int matrix_order, char jobz, char range, lapack_int n, float* d, float* e, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_dstevx( int matrix_order, char jobz, char range, lapack_int n, double* d, double* e, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_ssycon( int matrix_order, char uplo, lapack_int n, const float* a, lapack_int lda, const lapack_int* ipiv, float anorm, float* rcond ); lapack_int LAPACKE_dsycon( int matrix_order, char uplo, lapack_int n, const double* a, lapack_int lda, const lapack_int* ipiv, double anorm, double* rcond ); lapack_int LAPACKE_csycon( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, float anorm, float* rcond ); lapack_int LAPACKE_zsycon( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int 
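/* --- Editorial usage sketch; not part of the original LAPACKE header. ---
   Eigenvalues of a symmetric tridiagonal matrix with LAPACKE_dsterf (declared
   just above); LAPACKE_dstev is the variant that also returns eigenvectors.

       lapack_int n = 3;
       double d[3] = { 2, 2, 2 };     // diagonal, overwritten with the eigenvalues
       double e[2] = { -1, -1 };      // off-diagonal, destroyed on exit
       lapack_int info = LAPACKE_dsterf(n, d, e);
       // d = { 2 - sqrt(2), 2, 2 + sqrt(2) } in ascending order.
*/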
lda, const lapack_int* ipiv, double anorm, double* rcond ); lapack_int LAPACKE_ssyequb( int matrix_order, char uplo, lapack_int n, const float* a, lapack_int lda, float* s, float* scond, float* amax ); lapack_int LAPACKE_dsyequb( int matrix_order, char uplo, lapack_int n, const double* a, lapack_int lda, double* s, double* scond, double* amax ); lapack_int LAPACKE_csyequb( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* s, float* scond, float* amax ); lapack_int LAPACKE_zsyequb( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* s, double* scond, double* amax ); lapack_int LAPACKE_ssyev( int matrix_order, char jobz, char uplo, lapack_int n, float* a, lapack_int lda, float* w ); lapack_int LAPACKE_dsyev( int matrix_order, char jobz, char uplo, lapack_int n, double* a, lapack_int lda, double* w ); lapack_int LAPACKE_ssyevd( int matrix_order, char jobz, char uplo, lapack_int n, float* a, lapack_int lda, float* w ); lapack_int LAPACKE_dsyevd( int matrix_order, char jobz, char uplo, lapack_int n, double* a, lapack_int lda, double* w ); lapack_int LAPACKE_ssyevr( int matrix_order, char jobz, char range, char uplo, lapack_int n, float* a, lapack_int lda, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int* isuppz ); lapack_int LAPACKE_dsyevr( int matrix_order, char jobz, char range, char uplo, lapack_int n, double* a, lapack_int lda, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int* isuppz ); lapack_int LAPACKE_ssyevx( int matrix_order, char jobz, char range, char uplo, lapack_int n, float* a, lapack_int lda, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_dsyevx( int matrix_order, char jobz, char range, char uplo, lapack_int n, double* a, lapack_int lda, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_ssygst( int matrix_order, lapack_int itype, char uplo, lapack_int n, float* a, lapack_int lda, const float* b, lapack_int ldb ); lapack_int LAPACKE_dsygst( int matrix_order, lapack_int itype, char uplo, lapack_int n, double* a, lapack_int lda, const double* b, lapack_int ldb ); lapack_int LAPACKE_ssygv( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* w ); lapack_int LAPACKE_dsygv( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* w ); lapack_int LAPACKE_ssygvd( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* w ); lapack_int LAPACKE_dsygvd( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* w ); lapack_int LAPACKE_ssygvx( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_dsygvx( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, double* a, 
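/* --- Editorial usage sketch; not part of the original LAPACKE header. ---
   Dense symmetric eigendecomposition with LAPACKE_dsyev (declared just above).
   With jobz = 'V' the input matrix is overwritten by the orthonormal
   eigenvectors.

       lapack_int n = 2;
       double a[2*2] = { 2, 1,
                         1, 2 };      // row-major symmetric matrix
       double w[2];
       lapack_int info = LAPACKE_dsyev(LAPACK_ROW_MAJOR, 'V', 'U', n, a, n, w);
       // w = { 1, 3 } in ascending order; the columns of a are the eigenvectors.
*/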
lapack_int lda, double* b, lapack_int ldb, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int* ifail ); lapack_int LAPACKE_ssyrfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* af, lapack_int ldaf, const lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_dsyrfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const double* af, lapack_int ldaf, const lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_csyrfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_zsyrfs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_ssyrfsx( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* af, lapack_int ldaf, const lapack_int* ipiv, const float* s, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_dsyrfsx( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const double* af, lapack_int ldaf, const lapack_int* ipiv, const double* s, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_csyrfsx( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_int* ipiv, const float* s, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_zsyrfsx( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_int* ipiv, const double* s, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_ssysv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dsysv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_csysv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_int* ipiv, 
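/* --- Editorial usage sketch; not part of the original LAPACKE header. ---
   Solving a symmetric indefinite system with LAPACKE_dsysv (declared just
   above), which uses the Bunch-Kaufman factorization and a pivot array ipiv.

       lapack_int n = 2, nrhs = 1, ipiv[2];
       double a[2*2] = { 1, 2,
                         2, 1 };     // row-major symmetric indefinite matrix
       double b[2]   = { 3, 3 };
       lapack_int info = LAPACKE_dsysv(LAPACK_ROW_MAJOR, 'U', n, nrhs,
                                       a, n, ipiv, b, nrhs);
       // info == 0: b holds the solution { 1, 1 };
       // a and ipiv hold the factorization for reuse with LAPACKE_dsytrs.
*/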
lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zsysv( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_ssysvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, float* af, lapack_int ldaf, lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_dsysvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, double* af, lapack_int ldaf, lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_csysvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr ); lapack_int LAPACKE_zsysvx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr ); lapack_int LAPACKE_ssysvxx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* s, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_dsysvxx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* s, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_csysvxx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* s, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params ); lapack_int LAPACKE_zsysvxx( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* s, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params ); lapack_int LAPACKE_ssytrd( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, float* d, float* e, float* tau ); lapack_int LAPACKE_dsytrd( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, double* d, double* e, double* tau ); lapack_int LAPACKE_ssytrf( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, lapack_int* 
ipiv ); lapack_int LAPACKE_dsytrf( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_csytrf( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_zsytrf( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_ssytri( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_dsytri( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_csytri( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_zsytri( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_ssytrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dsytrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_csytrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zsytrs( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_stbcon( int matrix_order, char norm, char uplo, char diag, lapack_int n, lapack_int kd, const float* ab, lapack_int ldab, float* rcond ); lapack_int LAPACKE_dtbcon( int matrix_order, char norm, char uplo, char diag, lapack_int n, lapack_int kd, const double* ab, lapack_int ldab, double* rcond ); lapack_int LAPACKE_ctbcon( int matrix_order, char norm, char uplo, char diag, lapack_int n, lapack_int kd, const lapack_complex_float* ab, lapack_int ldab, float* rcond ); lapack_int LAPACKE_ztbcon( int matrix_order, char norm, char uplo, char diag, lapack_int n, lapack_int kd, const lapack_complex_double* ab, lapack_int ldab, double* rcond ); lapack_int LAPACKE_stbrfs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const float* ab, lapack_int ldab, const float* b, lapack_int ldb, const float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_dtbrfs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const double* ab, lapack_int ldab, const double* b, lapack_int ldb, const double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_ctbrfs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, const lapack_complex_float* b, lapack_int ldb, const lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_ztbrfs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, const lapack_complex_double* b, lapack_int ldb, const lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_stbtrs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const 
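/* --- Editorial usage sketch; not part of the original LAPACKE header. ---
   Explicit inverse of a symmetric matrix: LAPACKE_dsytrf factorizes and
   LAPACKE_dsytri (both declared just above) inverts in place.

       lapack_int n = 2, ipiv[2];
       double a[2*2] = { 1, 2,
                         2, 1 };     // row-major; only the 'U' triangle is referenced
       lapack_int info = LAPACKE_dsytrf(LAPACK_ROW_MAJOR, 'U', n, a, n, ipiv);
       if (info == 0)
           info = LAPACKE_dsytri(LAPACK_ROW_MAJOR, 'U', n, a, n, ipiv);
       // The 'U' triangle of a now holds the upper triangle of the inverse,
       // here { -1/3, 2/3 ; ., -1/3 }; the opposite triangle is left untouched.
*/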
float* ab, lapack_int ldab, float* b, lapack_int ldb ); lapack_int LAPACKE_dtbtrs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const double* ab, lapack_int ldab, double* b, lapack_int ldb ); lapack_int LAPACKE_ctbtrs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_ztbtrs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_stfsm( int matrix_order, char transr, char side, char uplo, char trans, char diag, lapack_int m, lapack_int n, float alpha, const float* a, float* b, lapack_int ldb ); lapack_int LAPACKE_dtfsm( int matrix_order, char transr, char side, char uplo, char trans, char diag, lapack_int m, lapack_int n, double alpha, const double* a, double* b, lapack_int ldb ); lapack_int LAPACKE_ctfsm( int matrix_order, char transr, char side, char uplo, char trans, char diag, lapack_int m, lapack_int n, lapack_complex_float alpha, const lapack_complex_float* a, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_ztfsm( int matrix_order, char transr, char side, char uplo, char trans, char diag, lapack_int m, lapack_int n, lapack_complex_double alpha, const lapack_complex_double* a, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_stftri( int matrix_order, char transr, char uplo, char diag, lapack_int n, float* a ); lapack_int LAPACKE_dtftri( int matrix_order, char transr, char uplo, char diag, lapack_int n, double* a ); lapack_int LAPACKE_ctftri( int matrix_order, char transr, char uplo, char diag, lapack_int n, lapack_complex_float* a ); lapack_int LAPACKE_ztftri( int matrix_order, char transr, char uplo, char diag, lapack_int n, lapack_complex_double* a ); lapack_int LAPACKE_stfttp( int matrix_order, char transr, char uplo, lapack_int n, const float* arf, float* ap ); lapack_int LAPACKE_dtfttp( int matrix_order, char transr, char uplo, lapack_int n, const double* arf, double* ap ); lapack_int LAPACKE_ctfttp( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_float* arf, lapack_complex_float* ap ); lapack_int LAPACKE_ztfttp( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_double* arf, lapack_complex_double* ap ); lapack_int LAPACKE_stfttr( int matrix_order, char transr, char uplo, lapack_int n, const float* arf, float* a, lapack_int lda ); lapack_int LAPACKE_dtfttr( int matrix_order, char transr, char uplo, lapack_int n, const double* arf, double* a, lapack_int lda ); lapack_int LAPACKE_ctfttr( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_float* arf, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_ztfttr( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_double* arf, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_stgevc( int matrix_order, char side, char howmny, const lapack_logical* select, lapack_int n, const float* s, lapack_int lds, const float* p, lapack_int ldp, float* vl, lapack_int ldvl, float* vr, lapack_int ldvr, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_dtgevc( int matrix_order, char side, char howmny, const lapack_logical* select, lapack_int n, const double* s, lapack_int lds, const double* p, lapack_int ldp, double* vl, lapack_int ldvl, 
double* vr, lapack_int ldvr, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_ctgevc( int matrix_order, char side, char howmny, const lapack_logical* select, lapack_int n, const lapack_complex_float* s, lapack_int lds, const lapack_complex_float* p, lapack_int ldp, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_ztgevc( int matrix_order, char side, char howmny, const lapack_logical* select, lapack_int n, const lapack_complex_double* s, lapack_int lds, const lapack_complex_double* p, lapack_int ldp, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_stgexc( int matrix_order, lapack_logical wantq, lapack_logical wantz, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* q, lapack_int ldq, float* z, lapack_int ldz, lapack_int* ifst, lapack_int* ilst ); lapack_int LAPACKE_dtgexc( int matrix_order, lapack_logical wantq, lapack_logical wantz, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* q, lapack_int ldq, double* z, lapack_int ldz, lapack_int* ifst, lapack_int* ilst ); lapack_int LAPACKE_ctgexc( int matrix_order, lapack_logical wantq, lapack_logical wantz, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* z, lapack_int ldz, lapack_int ifst, lapack_int ilst ); lapack_int LAPACKE_ztgexc( int matrix_order, lapack_logical wantq, lapack_logical wantz, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* z, lapack_int ldz, lapack_int ifst, lapack_int ilst ); lapack_int LAPACKE_stgsen( int matrix_order, lapack_int ijob, lapack_logical wantq, lapack_logical wantz, const lapack_logical* select, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* alphar, float* alphai, float* beta, float* q, lapack_int ldq, float* z, lapack_int ldz, lapack_int* m, float* pl, float* pr, float* dif ); lapack_int LAPACKE_dtgsen( int matrix_order, lapack_int ijob, lapack_logical wantq, lapack_logical wantz, const lapack_logical* select, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* alphar, double* alphai, double* beta, double* q, lapack_int ldq, double* z, lapack_int ldz, lapack_int* m, double* pl, double* pr, double* dif ); lapack_int LAPACKE_ctgsen( int matrix_order, lapack_int ijob, lapack_logical wantq, lapack_logical wantz, const lapack_logical* select, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* z, lapack_int ldz, lapack_int* m, float* pl, float* pr, float* dif ); lapack_int LAPACKE_ztgsen( int matrix_order, lapack_int ijob, lapack_logical wantq, lapack_logical wantz, const lapack_logical* select, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* z, lapack_int ldz, lapack_int* m, double* pl, double* pr, double* dif ); lapack_int LAPACKE_stgsja( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, lapack_int k, lapack_int l, float* a, lapack_int lda, 
float* b, lapack_int ldb, float tola, float tolb, float* alpha, float* beta, float* u, lapack_int ldu, float* v, lapack_int ldv, float* q, lapack_int ldq, lapack_int* ncycle ); lapack_int LAPACKE_dtgsja( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, lapack_int k, lapack_int l, double* a, lapack_int lda, double* b, lapack_int ldb, double tola, double tolb, double* alpha, double* beta, double* u, lapack_int ldu, double* v, lapack_int ldv, double* q, lapack_int ldq, lapack_int* ncycle ); lapack_int LAPACKE_ctgsja( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, lapack_int k, lapack_int l, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float tola, float tolb, float* alpha, float* beta, lapack_complex_float* u, lapack_int ldu, lapack_complex_float* v, lapack_int ldv, lapack_complex_float* q, lapack_int ldq, lapack_int* ncycle ); lapack_int LAPACKE_ztgsja( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, lapack_int k, lapack_int l, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double tola, double tolb, double* alpha, double* beta, lapack_complex_double* u, lapack_int ldu, lapack_complex_double* v, lapack_int ldv, lapack_complex_double* q, lapack_int ldq, lapack_int* ncycle ); lapack_int LAPACKE_stgsna( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const float* a, lapack_int lda, const float* b, lapack_int ldb, const float* vl, lapack_int ldvl, const float* vr, lapack_int ldvr, float* s, float* dif, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_dtgsna( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const double* a, lapack_int lda, const double* b, lapack_int ldb, const double* vl, lapack_int ldvl, const double* vr, lapack_int ldvr, double* s, double* dif, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_ctgsna( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* b, lapack_int ldb, const lapack_complex_float* vl, lapack_int ldvl, const lapack_complex_float* vr, lapack_int ldvr, float* s, float* dif, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_ztgsna( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* b, lapack_int ldb, const lapack_complex_double* vl, lapack_int ldvl, const lapack_complex_double* vr, lapack_int ldvr, double* s, double* dif, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_stgsyl( int matrix_order, char trans, lapack_int ijob, lapack_int m, lapack_int n, const float* a, lapack_int lda, const float* b, lapack_int ldb, float* c, lapack_int ldc, const float* d, lapack_int ldd, const float* e, lapack_int lde, float* f, lapack_int ldf, float* scale, float* dif ); lapack_int LAPACKE_dtgsyl( int matrix_order, char trans, lapack_int ijob, lapack_int m, lapack_int n, const double* a, lapack_int lda, const double* b, lapack_int ldb, double* c, lapack_int ldc, const double* d, lapack_int ldd, const double* e, lapack_int lde, double* f, lapack_int ldf, double* scale, double* dif ); lapack_int LAPACKE_ctgsyl( int matrix_order, char trans, lapack_int ijob, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* b, lapack_int ldb, 
lapack_complex_float* c, lapack_int ldc, const lapack_complex_float* d, lapack_int ldd, const lapack_complex_float* e, lapack_int lde, lapack_complex_float* f, lapack_int ldf, float* scale, float* dif ); lapack_int LAPACKE_ztgsyl( int matrix_order, char trans, lapack_int ijob, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* c, lapack_int ldc, const lapack_complex_double* d, lapack_int ldd, const lapack_complex_double* e, lapack_int lde, lapack_complex_double* f, lapack_int ldf, double* scale, double* dif ); lapack_int LAPACKE_stpcon( int matrix_order, char norm, char uplo, char diag, lapack_int n, const float* ap, float* rcond ); lapack_int LAPACKE_dtpcon( int matrix_order, char norm, char uplo, char diag, lapack_int n, const double* ap, double* rcond ); lapack_int LAPACKE_ctpcon( int matrix_order, char norm, char uplo, char diag, lapack_int n, const lapack_complex_float* ap, float* rcond ); lapack_int LAPACKE_ztpcon( int matrix_order, char norm, char uplo, char diag, lapack_int n, const lapack_complex_double* ap, double* rcond ); lapack_int LAPACKE_stprfs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const float* ap, const float* b, lapack_int ldb, const float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_dtprfs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const double* ap, const double* b, lapack_int ldb, const double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_ctprfs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, const lapack_complex_float* b, lapack_int ldb, const lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_ztprfs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, const lapack_complex_double* b, lapack_int ldb, const lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_stptri( int matrix_order, char uplo, char diag, lapack_int n, float* ap ); lapack_int LAPACKE_dtptri( int matrix_order, char uplo, char diag, lapack_int n, double* ap ); lapack_int LAPACKE_ctptri( int matrix_order, char uplo, char diag, lapack_int n, lapack_complex_float* ap ); lapack_int LAPACKE_ztptri( int matrix_order, char uplo, char diag, lapack_int n, lapack_complex_double* ap ); lapack_int LAPACKE_stptrs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const float* ap, float* b, lapack_int ldb ); lapack_int LAPACKE_dtptrs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const double* ap, double* b, lapack_int ldb ); lapack_int LAPACKE_ctptrs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_ztptrs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_stpttf( int matrix_order, char transr, char uplo, lapack_int n, const float* ap, float* arf ); lapack_int LAPACKE_dtpttf( int matrix_order, char transr, char uplo, lapack_int n, const double* ap, double* arf ); lapack_int LAPACKE_ctpttf( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_float* 
ap, lapack_complex_float* arf ); lapack_int LAPACKE_ztpttf( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_double* ap, lapack_complex_double* arf ); lapack_int LAPACKE_stpttr( int matrix_order, char uplo, lapack_int n, const float* ap, float* a, lapack_int lda ); lapack_int LAPACKE_dtpttr( int matrix_order, char uplo, lapack_int n, const double* ap, double* a, lapack_int lda ); lapack_int LAPACKE_ctpttr( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* ap, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_ztpttr( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* ap, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_strcon( int matrix_order, char norm, char uplo, char diag, lapack_int n, const float* a, lapack_int lda, float* rcond ); lapack_int LAPACKE_dtrcon( int matrix_order, char norm, char uplo, char diag, lapack_int n, const double* a, lapack_int lda, double* rcond ); lapack_int LAPACKE_ctrcon( int matrix_order, char norm, char uplo, char diag, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* rcond ); lapack_int LAPACKE_ztrcon( int matrix_order, char norm, char uplo, char diag, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* rcond ); lapack_int LAPACKE_strevc( int matrix_order, char side, char howmny, lapack_logical* select, lapack_int n, const float* t, lapack_int ldt, float* vl, lapack_int ldvl, float* vr, lapack_int ldvr, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_dtrevc( int matrix_order, char side, char howmny, lapack_logical* select, lapack_int n, const double* t, lapack_int ldt, double* vl, lapack_int ldvl, double* vr, lapack_int ldvr, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_ctrevc( int matrix_order, char side, char howmny, const lapack_logical* select, lapack_int n, lapack_complex_float* t, lapack_int ldt, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_ztrevc( int matrix_order, char side, char howmny, const lapack_logical* select, lapack_int n, lapack_complex_double* t, lapack_int ldt, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_strexc( int matrix_order, char compq, lapack_int n, float* t, lapack_int ldt, float* q, lapack_int ldq, lapack_int* ifst, lapack_int* ilst ); lapack_int LAPACKE_dtrexc( int matrix_order, char compq, lapack_int n, double* t, lapack_int ldt, double* q, lapack_int ldq, lapack_int* ifst, lapack_int* ilst ); lapack_int LAPACKE_ctrexc( int matrix_order, char compq, lapack_int n, lapack_complex_float* t, lapack_int ldt, lapack_complex_float* q, lapack_int ldq, lapack_int ifst, lapack_int ilst ); lapack_int LAPACKE_ztrexc( int matrix_order, char compq, lapack_int n, lapack_complex_double* t, lapack_int ldt, lapack_complex_double* q, lapack_int ldq, lapack_int ifst, lapack_int ilst ); lapack_int LAPACKE_strrfs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* b, lapack_int ldb, const float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_dtrrfs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const double* b, lapack_int ldb, const double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_ctrrfs( int matrix_order, char uplo, char trans, char 
diag, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* b, lapack_int ldb, const lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr ); lapack_int LAPACKE_ztrrfs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* b, lapack_int ldb, const lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr ); lapack_int LAPACKE_strsen( int matrix_order, char job, char compq, const lapack_logical* select, lapack_int n, float* t, lapack_int ldt, float* q, lapack_int ldq, float* wr, float* wi, lapack_int* m, float* s, float* sep ); lapack_int LAPACKE_dtrsen( int matrix_order, char job, char compq, const lapack_logical* select, lapack_int n, double* t, lapack_int ldt, double* q, lapack_int ldq, double* wr, double* wi, lapack_int* m, double* s, double* sep ); lapack_int LAPACKE_ctrsen( int matrix_order, char job, char compq, const lapack_logical* select, lapack_int n, lapack_complex_float* t, lapack_int ldt, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* w, lapack_int* m, float* s, float* sep ); lapack_int LAPACKE_ztrsen( int matrix_order, char job, char compq, const lapack_logical* select, lapack_int n, lapack_complex_double* t, lapack_int ldt, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* w, lapack_int* m, double* s, double* sep ); lapack_int LAPACKE_strsna( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const float* t, lapack_int ldt, const float* vl, lapack_int ldvl, const float* vr, lapack_int ldvr, float* s, float* sep, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_dtrsna( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const double* t, lapack_int ldt, const double* vl, lapack_int ldvl, const double* vr, lapack_int ldvr, double* s, double* sep, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_ctrsna( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const lapack_complex_float* t, lapack_int ldt, const lapack_complex_float* vl, lapack_int ldvl, const lapack_complex_float* vr, lapack_int ldvr, float* s, float* sep, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_ztrsna( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const lapack_complex_double* t, lapack_int ldt, const lapack_complex_double* vl, lapack_int ldvl, const lapack_complex_double* vr, lapack_int ldvr, double* s, double* sep, lapack_int mm, lapack_int* m ); lapack_int LAPACKE_strsyl( int matrix_order, char trana, char tranb, lapack_int isgn, lapack_int m, lapack_int n, const float* a, lapack_int lda, const float* b, lapack_int ldb, float* c, lapack_int ldc, float* scale ); lapack_int LAPACKE_dtrsyl( int matrix_order, char trana, char tranb, lapack_int isgn, lapack_int m, lapack_int n, const double* a, lapack_int lda, const double* b, lapack_int ldb, double* c, lapack_int ldc, double* scale ); lapack_int LAPACKE_ctrsyl( int matrix_order, char trana, char tranb, lapack_int isgn, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* c, lapack_int ldc, float* scale ); lapack_int LAPACKE_ztrsyl( int matrix_order, char trana, char tranb, lapack_int isgn, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* b, lapack_int ldb, 
lapack_complex_double* c, lapack_int ldc, double* scale ); lapack_int LAPACKE_strtri( int matrix_order, char uplo, char diag, lapack_int n, float* a, lapack_int lda ); lapack_int LAPACKE_dtrtri( int matrix_order, char uplo, char diag, lapack_int n, double* a, lapack_int lda ); lapack_int LAPACKE_ctrtri( int matrix_order, char uplo, char diag, lapack_int n, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_ztrtri( int matrix_order, char uplo, char diag, lapack_int n, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_strtrs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, float* b, lapack_int ldb ); lapack_int LAPACKE_dtrtrs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, double* b, lapack_int ldb ); lapack_int LAPACKE_ctrtrs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_ztrtrs( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_strttf( int matrix_order, char transr, char uplo, lapack_int n, const float* a, lapack_int lda, float* arf ); lapack_int LAPACKE_dtrttf( int matrix_order, char transr, char uplo, lapack_int n, const double* a, lapack_int lda, double* arf ); lapack_int LAPACKE_ctrttf( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* arf ); lapack_int LAPACKE_ztrttf( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* arf ); lapack_int LAPACKE_strttp( int matrix_order, char uplo, lapack_int n, const float* a, lapack_int lda, float* ap ); lapack_int LAPACKE_dtrttp( int matrix_order, char uplo, lapack_int n, const double* a, lapack_int lda, double* ap ); lapack_int LAPACKE_ctrttp( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* ap ); lapack_int LAPACKE_ztrttp( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* ap ); lapack_int LAPACKE_stzrzf( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau ); lapack_int LAPACKE_dtzrzf( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau ); lapack_int LAPACKE_ctzrzf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau ); lapack_int LAPACKE_ztzrzf( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau ); lapack_int LAPACKE_cungbr( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int k, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau ); lapack_int LAPACKE_zungbr( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int k, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau ); lapack_int LAPACKE_cunghr( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau ); lapack_int LAPACKE_zunghr( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_double* a, lapack_int lda, const 
lapack_complex_double* tau ); lapack_int LAPACKE_cunglq( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau ); lapack_int LAPACKE_zunglq( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau ); lapack_int LAPACKE_cungql( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau ); lapack_int LAPACKE_zungql( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau ); lapack_int LAPACKE_cungqr( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau ); lapack_int LAPACKE_zungqr( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau ); lapack_int LAPACKE_cungrq( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau ); lapack_int LAPACKE_zungrq( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau ); lapack_int LAPACKE_cungtr( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau ); lapack_int LAPACKE_zungtr( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau ); lapack_int LAPACKE_cunmbr( int matrix_order, char vect, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc ); lapack_int LAPACKE_zunmbr( int matrix_order, char vect, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc ); lapack_int LAPACKE_cunmhr( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int ilo, lapack_int ihi, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc ); lapack_int LAPACKE_zunmhr( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int ilo, lapack_int ihi, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc ); lapack_int LAPACKE_cunmlq( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc ); lapack_int LAPACKE_zunmlq( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc ); lapack_int LAPACKE_cunmql( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc ); lapack_int LAPACKE_zunmql( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, 
lapack_int ldc ); lapack_int LAPACKE_cunmqr( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc ); lapack_int LAPACKE_zunmqr( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc ); lapack_int LAPACKE_cunmrq( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc ); lapack_int LAPACKE_zunmrq( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc ); lapack_int LAPACKE_cunmrz( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc ); lapack_int LAPACKE_zunmrz( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc ); lapack_int LAPACKE_cunmtr( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc ); lapack_int LAPACKE_zunmtr( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc ); lapack_int LAPACKE_cupgtr( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* ap, const lapack_complex_float* tau, lapack_complex_float* q, lapack_int ldq ); lapack_int LAPACKE_zupgtr( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* ap, const lapack_complex_double* tau, lapack_complex_double* q, lapack_int ldq ); lapack_int LAPACKE_cupmtr( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const lapack_complex_float* ap, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc ); lapack_int LAPACKE_zupmtr( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const lapack_complex_double* ap, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc ); lapack_int LAPACKE_sbdsdc_work( int matrix_order, char uplo, char compq, lapack_int n, float* d, float* e, float* u, lapack_int ldu, float* vt, lapack_int ldvt, float* q, lapack_int* iq, float* work, lapack_int* iwork ); lapack_int LAPACKE_dbdsdc_work( int matrix_order, char uplo, char compq, lapack_int n, double* d, double* e, double* u, lapack_int ldu, double* vt, lapack_int ldvt, double* q, lapack_int* iq, double* work, lapack_int* iwork ); lapack_int LAPACKE_sbdsqr_work( int matrix_order, char uplo, lapack_int n, lapack_int ncvt, lapack_int nru, lapack_int ncc, float* d, float* e, float* vt, lapack_int ldvt, float* u, lapack_int ldu, float* c, lapack_int ldc, float* work ); lapack_int LAPACKE_dbdsqr_work( int matrix_order, char uplo, lapack_int n, lapack_int ncvt, lapack_int nru, lapack_int ncc, double* d, double* e, double* vt, 
lapack_int ldvt, double* u, lapack_int ldu, double* c, lapack_int ldc, double* work ); lapack_int LAPACKE_cbdsqr_work( int matrix_order, char uplo, lapack_int n, lapack_int ncvt, lapack_int nru, lapack_int ncc, float* d, float* e, lapack_complex_float* vt, lapack_int ldvt, lapack_complex_float* u, lapack_int ldu, lapack_complex_float* c, lapack_int ldc, float* work ); lapack_int LAPACKE_zbdsqr_work( int matrix_order, char uplo, lapack_int n, lapack_int ncvt, lapack_int nru, lapack_int ncc, double* d, double* e, lapack_complex_double* vt, lapack_int ldvt, lapack_complex_double* u, lapack_int ldu, lapack_complex_double* c, lapack_int ldc, double* work ); lapack_int LAPACKE_sdisna_work( char job, lapack_int m, lapack_int n, const float* d, float* sep ); lapack_int LAPACKE_ddisna_work( char job, lapack_int m, lapack_int n, const double* d, double* sep ); lapack_int LAPACKE_sgbbrd_work( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int ncc, lapack_int kl, lapack_int ku, float* ab, lapack_int ldab, float* d, float* e, float* q, lapack_int ldq, float* pt, lapack_int ldpt, float* c, lapack_int ldc, float* work ); lapack_int LAPACKE_dgbbrd_work( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int ncc, lapack_int kl, lapack_int ku, double* ab, lapack_int ldab, double* d, double* e, double* q, lapack_int ldq, double* pt, lapack_int ldpt, double* c, lapack_int ldc, double* work ); lapack_int LAPACKE_cgbbrd_work( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int ncc, lapack_int kl, lapack_int ku, lapack_complex_float* ab, lapack_int ldab, float* d, float* e, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* pt, lapack_int ldpt, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgbbrd_work( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int ncc, lapack_int kl, lapack_int ku, lapack_complex_double* ab, lapack_int ldab, double* d, double* e, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* pt, lapack_int ldpt, lapack_complex_double* c, lapack_int ldc, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgbcon_work( int matrix_order, char norm, lapack_int n, lapack_int kl, lapack_int ku, const float* ab, lapack_int ldab, const lapack_int* ipiv, float anorm, float* rcond, float* work, lapack_int* iwork ); lapack_int LAPACKE_dgbcon_work( int matrix_order, char norm, lapack_int n, lapack_int kl, lapack_int ku, const double* ab, lapack_int ldab, const lapack_int* ipiv, double anorm, double* rcond, double* work, lapack_int* iwork ); lapack_int LAPACKE_cgbcon_work( int matrix_order, char norm, lapack_int n, lapack_int kl, lapack_int ku, const lapack_complex_float* ab, lapack_int ldab, const lapack_int* ipiv, float anorm, float* rcond, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgbcon_work( int matrix_order, char norm, lapack_int n, lapack_int kl, lapack_int ku, const lapack_complex_double* ab, lapack_int ldab, const lapack_int* ipiv, double anorm, double* rcond, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgbequ_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const float* ab, lapack_int ldab, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_dgbequ_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const double* ab, lapack_int ldab, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int 
LAPACKE_cgbequ_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const lapack_complex_float* ab, lapack_int ldab, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_zgbequ_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const lapack_complex_double* ab, lapack_int ldab, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_sgbequb_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const float* ab, lapack_int ldab, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_dgbequb_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const double* ab, lapack_int ldab, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_cgbequb_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const lapack_complex_float* ab, lapack_int ldab, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_zgbequb_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const lapack_complex_double* ab, lapack_int ldab, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_sgbrfs_work( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const float* ab, lapack_int ldab, const float* afb, lapack_int ldafb, const lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dgbrfs_work( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const double* ab, lapack_int ldab, const double* afb, lapack_int ldafb, const lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_cgbrfs_work( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, const lapack_complex_float* afb, lapack_int ldafb, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgbrfs_work( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, const lapack_complex_double* afb, lapack_int ldafb, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgbrfsx_work( int matrix_order, char trans, char equed, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const float* ab, lapack_int ldab, const float* afb, lapack_int ldafb, const lapack_int* ipiv, const float* r, const float* c, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, float* work, lapack_int* iwork ); lapack_int LAPACKE_dgbrfsx_work( int matrix_order, char trans, char equed, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const double* ab, lapack_int ldab, const double* afb, lapack_int ldafb, const lapack_int* ipiv, const double* r, const double* 
c, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, double* work, lapack_int* iwork ); lapack_int LAPACKE_cgbrfsx_work( int matrix_order, char trans, char equed, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, const lapack_complex_float* afb, lapack_int ldafb, const lapack_int* ipiv, const float* r, const float* c, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgbrfsx_work( int matrix_order, char trans, char equed, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, const lapack_complex_double* afb, lapack_int ldafb, const lapack_int* ipiv, const double* r, const double* c, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgbsv_work( int matrix_order, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, float* ab, lapack_int ldab, lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dgbsv_work( int matrix_order, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, double* ab, lapack_int ldab, lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_cgbsv_work( int matrix_order, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, lapack_complex_float* ab, lapack_int ldab, lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zgbsv_work( int matrix_order, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, lapack_complex_double* ab, lapack_int ldab, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sgbsvx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, float* ab, lapack_int ldab, float* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, float* r, float* c, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dgbsvx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, double* ab, lapack_int ldab, double* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, double* r, double* c, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_cgbsvx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, float* r, float* c, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgbsvx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* afb, lapack_int 
ldafb, lapack_int* ipiv, char* equed, double* r, double* c, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgbsvxx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, float* ab, lapack_int ldab, float* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, float* r, float* c, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, float* work, lapack_int* iwork ); lapack_int LAPACKE_dgbsvxx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, double* ab, lapack_int ldab, double* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, double* r, double* c, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, double* work, lapack_int* iwork ); lapack_int LAPACKE_cgbsvxx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, float* r, float* c, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgbsvxx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* afb, lapack_int ldafb, lapack_int* ipiv, char* equed, double* r, double* c, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgbtrf_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, float* ab, lapack_int ldab, lapack_int* ipiv ); lapack_int LAPACKE_dgbtrf_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, double* ab, lapack_int ldab, lapack_int* ipiv ); lapack_int LAPACKE_cgbtrf_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, lapack_complex_float* ab, lapack_int ldab, lapack_int* ipiv ); lapack_int LAPACKE_zgbtrf_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, lapack_complex_double* ab, lapack_int ldab, lapack_int* ipiv ); lapack_int LAPACKE_sgbtrs_work( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const float* ab, lapack_int ldab, const lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dgbtrs_work( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const double* ab, lapack_int ldab, const lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_cgbtrs_work( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, const 
lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zgbtrs_work( int matrix_order, char trans, lapack_int n, lapack_int kl, lapack_int ku, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sgebak_work( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const float* scale, lapack_int m, float* v, lapack_int ldv ); lapack_int LAPACKE_dgebak_work( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const double* scale, lapack_int m, double* v, lapack_int ldv ); lapack_int LAPACKE_cgebak_work( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const float* scale, lapack_int m, lapack_complex_float* v, lapack_int ldv ); lapack_int LAPACKE_zgebak_work( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const double* scale, lapack_int m, lapack_complex_double* v, lapack_int ldv ); lapack_int LAPACKE_sgebal_work( int matrix_order, char job, lapack_int n, float* a, lapack_int lda, lapack_int* ilo, lapack_int* ihi, float* scale ); lapack_int LAPACKE_dgebal_work( int matrix_order, char job, lapack_int n, double* a, lapack_int lda, lapack_int* ilo, lapack_int* ihi, double* scale ); lapack_int LAPACKE_cgebal_work( int matrix_order, char job, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* ilo, lapack_int* ihi, float* scale ); lapack_int LAPACKE_zgebal_work( int matrix_order, char job, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* ilo, lapack_int* ihi, double* scale ); lapack_int LAPACKE_sgebrd_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* d, float* e, float* tauq, float* taup, float* work, lapack_int lwork ); lapack_int LAPACKE_dgebrd_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* d, double* e, double* tauq, double* taup, double* work, lapack_int lwork ); lapack_int LAPACKE_cgebrd_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, float* d, float* e, lapack_complex_float* tauq, lapack_complex_float* taup, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zgebrd_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, double* d, double* e, lapack_complex_double* tauq, lapack_complex_double* taup, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_sgecon_work( int matrix_order, char norm, lapack_int n, const float* a, lapack_int lda, float anorm, float* rcond, float* work, lapack_int* iwork ); lapack_int LAPACKE_dgecon_work( int matrix_order, char norm, lapack_int n, const double* a, lapack_int lda, double anorm, double* rcond, double* work, lapack_int* iwork ); lapack_int LAPACKE_cgecon_work( int matrix_order, char norm, lapack_int n, const lapack_complex_float* a, lapack_int lda, float anorm, float* rcond, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgecon_work( int matrix_order, char norm, lapack_int n, const lapack_complex_double* a, lapack_int lda, double anorm, double* rcond, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgeequ_work( int matrix_order, lapack_int m, lapack_int n, const float* a, lapack_int lda, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_dgeequ_work( int matrix_order, lapack_int m, lapack_int n, const double* a, 
lapack_int lda, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_cgeequ_work( int matrix_order, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_zgeequ_work( int matrix_order, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_sgeequb_work( int matrix_order, lapack_int m, lapack_int n, const float* a, lapack_int lda, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_dgeequb_work( int matrix_order, lapack_int m, lapack_int n, const double* a, lapack_int lda, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_cgeequb_work( int matrix_order, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* r, float* c, float* rowcnd, float* colcnd, float* amax ); lapack_int LAPACKE_zgeequb_work( int matrix_order, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* r, double* c, double* rowcnd, double* colcnd, double* amax ); lapack_int LAPACKE_sgees_work( int matrix_order, char jobvs, char sort, LAPACK_S_SELECT2 select, lapack_int n, float* a, lapack_int lda, lapack_int* sdim, float* wr, float* wi, float* vs, lapack_int ldvs, float* work, lapack_int lwork, lapack_logical* bwork ); lapack_int LAPACKE_dgees_work( int matrix_order, char jobvs, char sort, LAPACK_D_SELECT2 select, lapack_int n, double* a, lapack_int lda, lapack_int* sdim, double* wr, double* wi, double* vs, lapack_int ldvs, double* work, lapack_int lwork, lapack_logical* bwork ); lapack_int LAPACKE_cgees_work( int matrix_order, char jobvs, char sort, LAPACK_C_SELECT1 select, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* sdim, lapack_complex_float* w, lapack_complex_float* vs, lapack_int ldvs, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_logical* bwork ); lapack_int LAPACKE_zgees_work( int matrix_order, char jobvs, char sort, LAPACK_Z_SELECT1 select, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* sdim, lapack_complex_double* w, lapack_complex_double* vs, lapack_int ldvs, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_logical* bwork ); lapack_int LAPACKE_sgeesx_work( int matrix_order, char jobvs, char sort, LAPACK_S_SELECT2 select, char sense, lapack_int n, float* a, lapack_int lda, lapack_int* sdim, float* wr, float* wi, float* vs, lapack_int ldvs, float* rconde, float* rcondv, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork, lapack_logical* bwork ); lapack_int LAPACKE_dgeesx_work( int matrix_order, char jobvs, char sort, LAPACK_D_SELECT2 select, char sense, lapack_int n, double* a, lapack_int lda, lapack_int* sdim, double* wr, double* wi, double* vs, lapack_int ldvs, double* rconde, double* rcondv, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork, lapack_logical* bwork ); lapack_int LAPACKE_cgeesx_work( int matrix_order, char jobvs, char sort, LAPACK_C_SELECT1 select, char sense, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* sdim, lapack_complex_float* w, lapack_complex_float* vs, lapack_int ldvs, float* rconde, float* rcondv, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_logical* bwork ); lapack_int LAPACKE_zgeesx_work( int matrix_order, char jobvs, char sort, LAPACK_Z_SELECT1 select, 
char sense, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* sdim, lapack_complex_double* w, lapack_complex_double* vs, lapack_int ldvs, double* rconde, double* rcondv, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_logical* bwork ); lapack_int LAPACKE_sgeev_work( int matrix_order, char jobvl, char jobvr, lapack_int n, float* a, lapack_int lda, float* wr, float* wi, float* vl, lapack_int ldvl, float* vr, lapack_int ldvr, float* work, lapack_int lwork ); lapack_int LAPACKE_dgeev_work( int matrix_order, char jobvl, char jobvr, lapack_int n, double* a, lapack_int lda, double* wr, double* wi, double* vl, lapack_int ldvl, double* vr, lapack_int ldvr, double* work, lapack_int lwork ); lapack_int LAPACKE_cgeev_work( int matrix_order, char jobvl, char jobvr, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* w, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr, lapack_complex_float* work, lapack_int lwork, float* rwork ); lapack_int LAPACKE_zgeev_work( int matrix_order, char jobvl, char jobvr, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* w, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr, lapack_complex_double* work, lapack_int lwork, double* rwork ); lapack_int LAPACKE_sgeevx_work( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, float* a, lapack_int lda, float* wr, float* wi, float* vl, lapack_int ldvl, float* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, float* scale, float* abnrm, float* rconde, float* rcondv, float* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_dgeevx_work( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, double* a, lapack_int lda, double* wr, double* wi, double* vl, lapack_int ldvl, double* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, double* scale, double* abnrm, double* rconde, double* rcondv, double* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_cgeevx_work( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* w, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, float* scale, float* abnrm, float* rconde, float* rcondv, lapack_complex_float* work, lapack_int lwork, float* rwork ); lapack_int LAPACKE_zgeevx_work( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* w, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, double* scale, double* abnrm, double* rconde, double* rcondv, lapack_complex_double* work, lapack_int lwork, double* rwork ); lapack_int LAPACKE_sgehrd_work( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, float* a, lapack_int lda, float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dgehrd_work( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, double* a, lapack_int lda, double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_cgehrd_work( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zgehrd_work( int matrix_order, lapack_int n, lapack_int ilo, 
lapack_int ihi, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_sgejsv_work( int matrix_order, char joba, char jobu, char jobv, char jobr, char jobt, char jobp, lapack_int m, lapack_int n, float* a, lapack_int lda, float* sva, float* u, lapack_int ldu, float* v, lapack_int ldv, float* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_dgejsv_work( int matrix_order, char joba, char jobu, char jobv, char jobr, char jobt, char jobp, lapack_int m, lapack_int n, double* a, lapack_int lda, double* sva, double* u, lapack_int ldu, double* v, lapack_int ldv, double* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_sgelq2_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau, float* work ); lapack_int LAPACKE_dgelq2_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau, double* work ); lapack_int LAPACKE_cgelq2_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau, lapack_complex_float* work ); lapack_int LAPACKE_zgelq2_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau, lapack_complex_double* work ); lapack_int LAPACKE_sgelqf_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dgelqf_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_cgelqf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zgelqf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_sgels_work( int matrix_order, char trans, lapack_int m, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* b, lapack_int ldb, float* work, lapack_int lwork ); lapack_int LAPACKE_dgels_work( int matrix_order, char trans, lapack_int m, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* b, lapack_int ldb, double* work, lapack_int lwork ); lapack_int LAPACKE_cgels_work( int matrix_order, char trans, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zgels_work( int matrix_order, char trans, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_sgelsd_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* b, lapack_int ldb, float* s, float rcond, lapack_int* rank, float* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_dgelsd_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* b, lapack_int ldb, double* s, double rcond, lapack_int* rank, double* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_cgelsd_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float* 
s, float rcond, lapack_int* rank, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int* iwork ); lapack_int LAPACKE_zgelsd_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double* s, double rcond, lapack_int* rank, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int* iwork ); lapack_int LAPACKE_sgelss_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* b, lapack_int ldb, float* s, float rcond, lapack_int* rank, float* work, lapack_int lwork ); lapack_int LAPACKE_dgelss_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* b, lapack_int ldb, double* s, double rcond, lapack_int* rank, double* work, lapack_int lwork ); lapack_int LAPACKE_cgelss_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float* s, float rcond, lapack_int* rank, lapack_complex_float* work, lapack_int lwork, float* rwork ); lapack_int LAPACKE_zgelss_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double* s, double rcond, lapack_int* rank, lapack_complex_double* work, lapack_int lwork, double* rwork ); lapack_int LAPACKE_sgelsy_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* b, lapack_int ldb, lapack_int* jpvt, float rcond, lapack_int* rank, float* work, lapack_int lwork ); lapack_int LAPACKE_dgelsy_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* b, lapack_int ldb, lapack_int* jpvt, double rcond, lapack_int* rank, double* work, lapack_int lwork ); lapack_int LAPACKE_cgelsy_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_int* jpvt, float rcond, lapack_int* rank, lapack_complex_float* work, lapack_int lwork, float* rwork ); lapack_int LAPACKE_zgelsy_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_int* jpvt, double rcond, lapack_int* rank, lapack_complex_double* work, lapack_int lwork, double* rwork ); lapack_int LAPACKE_sgeqlf_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dgeqlf_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_cgeqlf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zgeqlf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_sgeqp3_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, lapack_int* jpvt, float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dgeqp3_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, lapack_int* jpvt, double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_cgeqp3_work( int matrix_order, 
lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* jpvt, lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork, float* rwork ); lapack_int LAPACKE_zgeqp3_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* jpvt, lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork, double* rwork ); lapack_int LAPACKE_sgeqpf_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, lapack_int* jpvt, float* tau, float* work ); lapack_int LAPACKE_dgeqpf_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, lapack_int* jpvt, double* tau, double* work ); lapack_int LAPACKE_cgeqpf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* jpvt, lapack_complex_float* tau, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgeqpf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* jpvt, lapack_complex_double* tau, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgeqr2_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau, float* work ); lapack_int LAPACKE_dgeqr2_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau, double* work ); lapack_int LAPACKE_cgeqr2_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau, lapack_complex_float* work ); lapack_int LAPACKE_zgeqr2_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau, lapack_complex_double* work ); lapack_int LAPACKE_sgeqrf_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dgeqrf_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_cgeqrf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zgeqrf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_sgeqrfp_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dgeqrfp_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_cgeqrfp_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zgeqrfp_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_sgerfs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* af, lapack_int ldaf, const lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dgerfs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const double* a, 
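/* --------------------------------------------------------------------------
 * Illustrative sketch, not part of the original listing: a QR factorization
 * with LAPACKE_dgeqrf_work (declared above).  The fixed workspace is an
 * assumption for brevity; lwork must be at least n, and a query with
 * lwork = -1 returns the blocked optimum.  LAPACK_COL_MAJOR is assumed to
 * be defined elsewhere in this header.
 *
 *     double a[6]   = { 2.0, 1.0, 2.0,   1.0, 1.0, 0.0 };  // 3x2, column-major
 *     double tau[2];                                       // min(m,n) reflector scalars
 *     double work[64];
 *     lapack_int info = LAPACKE_dgeqrf_work( LAPACK_COL_MAJOR, 3, 2, a, 3,
 *                                            tau, work, 64 );
 *     // R is returned in the upper triangle of a; Q is encoded in a and tau
 * ------------------------------------------------------------------------ */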
lapack_int lda, const double* af, lapack_int ldaf, const lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_cgerfs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgerfs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgerfsx_work( int matrix_order, char trans, char equed, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* af, lapack_int ldaf, const lapack_int* ipiv, const float* r, const float* c, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, float* work, lapack_int* iwork ); lapack_int LAPACKE_dgerfsx_work( int matrix_order, char trans, char equed, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const double* af, lapack_int ldaf, const lapack_int* ipiv, const double* r, const double* c, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, double* work, lapack_int* iwork ); lapack_int LAPACKE_cgerfsx_work( int matrix_order, char trans, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_int* ipiv, const float* r, const float* c, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgerfsx_work( int matrix_order, char trans, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_int* ipiv, const double* r, const double* c, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgerqf_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dgerqf_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_cgerqf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zgerqf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau, 
lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_sgesdd_work( int matrix_order, char jobz, lapack_int m, lapack_int n, float* a, lapack_int lda, float* s, float* u, lapack_int ldu, float* vt, lapack_int ldvt, float* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_dgesdd_work( int matrix_order, char jobz, lapack_int m, lapack_int n, double* a, lapack_int lda, double* s, double* u, lapack_int ldu, double* vt, lapack_int ldvt, double* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_cgesdd_work( int matrix_order, char jobz, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, float* s, lapack_complex_float* u, lapack_int ldu, lapack_complex_float* vt, lapack_int ldvt, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int* iwork ); lapack_int LAPACKE_zgesdd_work( int matrix_order, char jobz, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, double* s, lapack_complex_double* u, lapack_int ldu, lapack_complex_double* vt, lapack_int ldvt, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int* iwork ); lapack_int LAPACKE_sgesv_work( int matrix_order, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dgesv_work( int matrix_order, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_cgesv_work( int matrix_order, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zgesv_work( int matrix_order, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_dsgesv_work( int matrix_order, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, lapack_int* ipiv, double* b, lapack_int ldb, double* x, lapack_int ldx, double* work, float* swork, lapack_int* iter ); lapack_int LAPACKE_zcgesv_work( int matrix_order, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, lapack_complex_double* work, lapack_complex_float* swork, double* rwork, lapack_int* iter ); lapack_int LAPACKE_sgesvd_work( int matrix_order, char jobu, char jobvt, lapack_int m, lapack_int n, float* a, lapack_int lda, float* s, float* u, lapack_int ldu, float* vt, lapack_int ldvt, float* work, lapack_int lwork ); lapack_int LAPACKE_dgesvd_work( int matrix_order, char jobu, char jobvt, lapack_int m, lapack_int n, double* a, lapack_int lda, double* s, double* u, lapack_int ldu, double* vt, lapack_int ldvt, double* work, lapack_int lwork ); lapack_int LAPACKE_cgesvd_work( int matrix_order, char jobu, char jobvt, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, float* s, lapack_complex_float* u, lapack_int ldu, lapack_complex_float* vt, lapack_int ldvt, lapack_complex_float* work, lapack_int lwork, float* rwork ); lapack_int LAPACKE_zgesvd_work( int matrix_order, char jobu, char jobvt, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, double* s, lapack_complex_double* u, lapack_int ldu, lapack_complex_double* vt, lapack_int ldvt, lapack_complex_double* work, lapack_int lwork, double* rwork ); lapack_int LAPACKE_sgesvj_work( int matrix_order, char joba, char jobu, char jobv, lapack_int m, lapack_int n, float* a, lapack_int lda, float* sva, 
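/* --------------------------------------------------------------------------
 * Illustrative sketch, not part of the original listing: a dense linear
 * solve A*x = b with LAPACKE_dgesv_work (declared above), which needs no
 * separate workspace.  LAPACK_COL_MAJOR is assumed to be defined elsewhere
 * in this header.
 *
 *     double a[4] = { 3.0, 1.0, 1.0, 2.0 };   // 2x2, column-major
 *     double b[2] = { 9.0, 8.0 };             // overwritten with the solution x
 *     lapack_int ipiv[2];
 *     lapack_int info = LAPACKE_dgesv_work( LAPACK_COL_MAJOR, 2, 1,
 *                                           a, 2, ipiv, b, 2 );
 *     // info == 0: a holds the LU factors, b holds x; info > 0: U is singular
 * ------------------------------------------------------------------------ */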
lapack_int mv, float* v, lapack_int ldv, float* work, lapack_int lwork ); lapack_int LAPACKE_dgesvj_work( int matrix_order, char joba, char jobu, char jobv, lapack_int m, lapack_int n, double* a, lapack_int lda, double* sva, lapack_int mv, double* v, lapack_int ldv, double* work, lapack_int lwork ); lapack_int LAPACKE_sgesvx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* r, float* c, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dgesvx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* r, double* c, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_cgesvx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* r, float* c, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgesvx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* r, double* c, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgesvxx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* r, float* c, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, float* work, lapack_int* iwork ); lapack_int LAPACKE_dgesvxx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* r, double* c, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, double* work, lapack_int* iwork ); lapack_int LAPACKE_cgesvxx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* r, float* c, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgesvxx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* r, double* c, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, 
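/* --------------------------------------------------------------------------
 * Illustrative sketch, not part of the original listing: singular values
 * only via LAPACKE_dgesvd_work (declared above) with jobu = jobvt = 'N', so
 * u and vt are not referenced and 1-element placeholders suffice.  The
 * fixed workspace is an assumption for brevity (the documented minimum for
 * a 3x2 input is 10); LAPACK_COL_MAJOR is assumed to be defined elsewhere
 * in this header.
 *
 *     double a[6] = { 1.0, 0.0, 1.0,   0.0, 1.0, 1.0 };   // 3x2, column-major
 *     double s[2], udum[1], vtdum[1], work[64];
 *     lapack_int info = LAPACKE_dgesvd_work( LAPACK_COL_MAJOR, 'N', 'N', 3, 2,
 *                                            a, 3, s, udum, 1, vtdum, 1,
 *                                            work, 64 );
 *     // s[0] >= s[1] are the singular values; a is destroyed on exit
 * ------------------------------------------------------------------------ */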
double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgetf2_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_dgetf2_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_cgetf2_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_zgetf2_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_sgetrf_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_dgetrf_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_cgetrf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_zgetrf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv ); lapack_int LAPACKE_sgetri_work( int matrix_order, lapack_int n, float* a, lapack_int lda, const lapack_int* ipiv, float* work, lapack_int lwork ); lapack_int LAPACKE_dgetri_work( int matrix_order, lapack_int n, double* a, lapack_int lda, const lapack_int* ipiv, double* work, lapack_int lwork ); lapack_int LAPACKE_cgetri_work( int matrix_order, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zgetri_work( int matrix_order, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_sgetrs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dgetrs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_cgetrs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zgetrs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sggbak_work( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const float* lscale, const float* rscale, lapack_int m, float* v, lapack_int ldv ); lapack_int LAPACKE_dggbak_work( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const double* lscale, const double* rscale, lapack_int m, double* v, lapack_int ldv ); lapack_int LAPACKE_cggbak_work( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const float* lscale, const float* rscale, lapack_int m, lapack_complex_float* v, lapack_int ldv ); lapack_int LAPACKE_zggbak_work( int matrix_order, char job, char side, lapack_int n, lapack_int ilo, lapack_int ihi, const double* lscale, const double* rscale, lapack_int m, lapack_complex_double* v, lapack_int ldv ); lapack_int LAPACKE_sggbal_work( int matrix_order, 
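/* --------------------------------------------------------------------------
 * Illustrative sketch, not part of the original listing: factor once with
 * LAPACKE_dgetrf_work and reuse the LU factors for several right-hand sides
 * with LAPACKE_dgetrs_work (both declared above).  LAPACK_COL_MAJOR is
 * assumed to be defined elsewhere in this header.
 *
 *     double a[4]  = { 4.0, 2.0, 1.0, 3.0 };   // 2x2, column-major
 *     double b1[2] = { 5.0, 5.0 }, b2[2] = { 1.0, 0.0 };
 *     lapack_int ipiv[2];
 *     lapack_int info = LAPACKE_dgetrf_work( LAPACK_COL_MAJOR, 2, 2, a, 2, ipiv );
 *     if ( info == 0 ) {
 *         LAPACKE_dgetrs_work( LAPACK_COL_MAJOR, 'N', 2, 1, a, 2, ipiv, b1, 2 );
 *         LAPACKE_dgetrs_work( LAPACK_COL_MAJOR, 'N', 2, 1, a, 2, ipiv, b2, 2 );
 *     }
 *     // b1 and b2 now hold the two solutions; the factors in a are reused
 * ------------------------------------------------------------------------ */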
char job, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, lapack_int* ilo, lapack_int* ihi, float* lscale, float* rscale, float* work ); lapack_int LAPACKE_dggbal_work( int matrix_order, char job, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, lapack_int* ilo, lapack_int* ihi, double* lscale, double* rscale, double* work ); lapack_int LAPACKE_cggbal_work( int matrix_order, char job, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_int* ilo, lapack_int* ihi, float* lscale, float* rscale, float* work ); lapack_int LAPACKE_zggbal_work( int matrix_order, char job, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_int* ilo, lapack_int* ihi, double* lscale, double* rscale, double* work ); lapack_int LAPACKE_sgges_work( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_S_SELECT3 selctg, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, lapack_int* sdim, float* alphar, float* alphai, float* beta, float* vsl, lapack_int ldvsl, float* vsr, lapack_int ldvsr, float* work, lapack_int lwork, lapack_logical* bwork ); lapack_int LAPACKE_dgges_work( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_D_SELECT3 selctg, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, lapack_int* sdim, double* alphar, double* alphai, double* beta, double* vsl, lapack_int ldvsl, double* vsr, lapack_int ldvsr, double* work, lapack_int lwork, lapack_logical* bwork ); lapack_int LAPACKE_cgges_work( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_C_SELECT2 selctg, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_int* sdim, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* vsl, lapack_int ldvsl, lapack_complex_float* vsr, lapack_int ldvsr, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_logical* bwork ); lapack_int LAPACKE_zgges_work( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_Z_SELECT2 selctg, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_int* sdim, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* vsl, lapack_int ldvsl, lapack_complex_double* vsr, lapack_int ldvsr, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_logical* bwork ); lapack_int LAPACKE_sggesx_work( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_S_SELECT3 selctg, char sense, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, lapack_int* sdim, float* alphar, float* alphai, float* beta, float* vsl, lapack_int ldvsl, float* vsr, lapack_int ldvsr, float* rconde, float* rcondv, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork, lapack_logical* bwork ); lapack_int LAPACKE_dggesx_work( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_D_SELECT3 selctg, char sense, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, lapack_int* sdim, double* alphar, double* alphai, double* beta, double* vsl, lapack_int ldvsl, double* vsr, lapack_int ldvsr, double* rconde, double* rcondv, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork, lapack_logical* bwork ); lapack_int LAPACKE_cggesx_work( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_C_SELECT2 selctg, char sense, lapack_int n, lapack_complex_float* a, lapack_int lda, 
lapack_complex_float* b, lapack_int ldb, lapack_int* sdim, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* vsl, lapack_int ldvsl, lapack_complex_float* vsr, lapack_int ldvsr, float* rconde, float* rcondv, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int* iwork, lapack_int liwork, lapack_logical* bwork ); lapack_int LAPACKE_zggesx_work( int matrix_order, char jobvsl, char jobvsr, char sort, LAPACK_Z_SELECT2 selctg, char sense, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_int* sdim, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* vsl, lapack_int ldvsl, lapack_complex_double* vsr, lapack_int ldvsr, double* rconde, double* rcondv, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int* iwork, lapack_int liwork, lapack_logical* bwork ); lapack_int LAPACKE_sggev_work( int matrix_order, char jobvl, char jobvr, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* alphar, float* alphai, float* beta, float* vl, lapack_int ldvl, float* vr, lapack_int ldvr, float* work, lapack_int lwork ); lapack_int LAPACKE_dggev_work( int matrix_order, char jobvl, char jobvr, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* alphar, double* alphai, double* beta, double* vl, lapack_int ldvl, double* vr, lapack_int ldvr, double* work, lapack_int lwork ); lapack_int LAPACKE_cggev_work( int matrix_order, char jobvl, char jobvr, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr, lapack_complex_float* work, lapack_int lwork, float* rwork ); lapack_int LAPACKE_zggev_work( int matrix_order, char jobvl, char jobvr, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr, lapack_complex_double* work, lapack_int lwork, double* rwork ); lapack_int LAPACKE_sggevx_work( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* alphar, float* alphai, float* beta, float* vl, lapack_int ldvl, float* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, float* lscale, float* rscale, float* abnrm, float* bbnrm, float* rconde, float* rcondv, float* work, lapack_int lwork, lapack_int* iwork, lapack_logical* bwork ); lapack_int LAPACKE_dggevx_work( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* alphar, double* alphai, double* beta, double* vl, lapack_int ldvl, double* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, double* lscale, double* rscale, double* abnrm, double* bbnrm, double* rconde, double* rcondv, double* work, lapack_int lwork, lapack_int* iwork, lapack_logical* bwork ); lapack_int LAPACKE_cggevx_work( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, float* lscale, float* rscale, 
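/* --------------------------------------------------------------------------
 * Illustrative sketch, not part of the original listing: generalized
 * eigenvalues of the pencil (A, B) with LAPACKE_dggev_work (declared
 * above).  With jobvl = jobvr = 'N' the eigenvector arguments are not
 * referenced, so 1-element placeholders suffice; the fixed workspace is an
 * assumption (the documented minimum is 8*n).  LAPACK_COL_MAJOR is assumed
 * to be defined elsewhere in this header.
 *
 *     double a[4] = { 2.0, 0.0, 0.0, 3.0 };   // A, 2x2 column-major
 *     double b[4] = { 1.0, 0.0, 0.0, 2.0 };   // B
 *     double alphar[2], alphai[2], beta[2], vldum[1], vrdum[1], work[64];
 *     lapack_int info = LAPACKE_dggev_work( LAPACK_COL_MAJOR, 'N', 'N', 2,
 *                                           a, 2, b, 2, alphar, alphai, beta,
 *                                           vldum, 1, vrdum, 1, work, 64 );
 *     // eigenvalues are (alphar[j] + i*alphai[j]) / beta[j] where beta[j] != 0
 * ------------------------------------------------------------------------ */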
float* abnrm, float* bbnrm, float* rconde, float* rcondv, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int* iwork, lapack_logical* bwork ); lapack_int LAPACKE_zggevx_work( int matrix_order, char balanc, char jobvl, char jobvr, char sense, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr, lapack_int* ilo, lapack_int* ihi, double* lscale, double* rscale, double* abnrm, double* bbnrm, double* rconde, double* rcondv, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int* iwork, lapack_logical* bwork ); lapack_int LAPACKE_sggglm_work( int matrix_order, lapack_int n, lapack_int m, lapack_int p, float* a, lapack_int lda, float* b, lapack_int ldb, float* d, float* x, float* y, float* work, lapack_int lwork ); lapack_int LAPACKE_dggglm_work( int matrix_order, lapack_int n, lapack_int m, lapack_int p, double* a, lapack_int lda, double* b, lapack_int ldb, double* d, double* x, double* y, double* work, lapack_int lwork ); lapack_int LAPACKE_cggglm_work( int matrix_order, lapack_int n, lapack_int m, lapack_int p, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* d, lapack_complex_float* x, lapack_complex_float* y, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zggglm_work( int matrix_order, lapack_int n, lapack_int m, lapack_int p, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* d, lapack_complex_double* x, lapack_complex_double* y, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_sgghrd_work( int matrix_order, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, float* a, lapack_int lda, float* b, lapack_int ldb, float* q, lapack_int ldq, float* z, lapack_int ldz ); lapack_int LAPACKE_dgghrd_work( int matrix_order, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, double* a, lapack_int lda, double* b, lapack_int ldb, double* q, lapack_int ldq, double* z, lapack_int ldz ); lapack_int LAPACKE_cgghrd_work( int matrix_order, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* z, lapack_int ldz ); lapack_int LAPACKE_zgghrd_work( int matrix_order, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* z, lapack_int ldz ); lapack_int LAPACKE_sgglse_work( int matrix_order, lapack_int m, lapack_int n, lapack_int p, float* a, lapack_int lda, float* b, lapack_int ldb, float* c, float* d, float* x, float* work, lapack_int lwork ); lapack_int LAPACKE_dgglse_work( int matrix_order, lapack_int m, lapack_int n, lapack_int p, double* a, lapack_int lda, double* b, lapack_int ldb, double* c, double* d, double* x, double* work, lapack_int lwork ); lapack_int LAPACKE_cgglse_work( int matrix_order, lapack_int m, lapack_int n, lapack_int p, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* c, lapack_complex_float* d, lapack_complex_float* x, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zgglse_work( int 
matrix_order, lapack_int m, lapack_int n, lapack_int p, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* c, lapack_complex_double* d, lapack_complex_double* x, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_sggqrf_work( int matrix_order, lapack_int n, lapack_int m, lapack_int p, float* a, lapack_int lda, float* taua, float* b, lapack_int ldb, float* taub, float* work, lapack_int lwork ); lapack_int LAPACKE_dggqrf_work( int matrix_order, lapack_int n, lapack_int m, lapack_int p, double* a, lapack_int lda, double* taua, double* b, lapack_int ldb, double* taub, double* work, lapack_int lwork ); lapack_int LAPACKE_cggqrf_work( int matrix_order, lapack_int n, lapack_int m, lapack_int p, lapack_complex_float* a, lapack_int lda, lapack_complex_float* taua, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* taub, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zggqrf_work( int matrix_order, lapack_int n, lapack_int m, lapack_int p, lapack_complex_double* a, lapack_int lda, lapack_complex_double* taua, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* taub, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_sggrqf_work( int matrix_order, lapack_int m, lapack_int p, lapack_int n, float* a, lapack_int lda, float* taua, float* b, lapack_int ldb, float* taub, float* work, lapack_int lwork ); lapack_int LAPACKE_dggrqf_work( int matrix_order, lapack_int m, lapack_int p, lapack_int n, double* a, lapack_int lda, double* taua, double* b, lapack_int ldb, double* taub, double* work, lapack_int lwork ); lapack_int LAPACKE_cggrqf_work( int matrix_order, lapack_int m, lapack_int p, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* taua, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* taub, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zggrqf_work( int matrix_order, lapack_int m, lapack_int p, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* taua, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* taub, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_sggsvd_work( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int n, lapack_int p, lapack_int* k, lapack_int* l, float* a, lapack_int lda, float* b, lapack_int ldb, float* alpha, float* beta, float* u, lapack_int ldu, float* v, lapack_int ldv, float* q, lapack_int ldq, float* work, lapack_int* iwork ); lapack_int LAPACKE_dggsvd_work( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int n, lapack_int p, lapack_int* k, lapack_int* l, double* a, lapack_int lda, double* b, lapack_int ldb, double* alpha, double* beta, double* u, lapack_int ldu, double* v, lapack_int ldv, double* q, lapack_int ldq, double* work, lapack_int* iwork ); lapack_int LAPACKE_cggsvd_work( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int n, lapack_int p, lapack_int* k, lapack_int* l, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float* alpha, float* beta, lapack_complex_float* u, lapack_int ldu, lapack_complex_float* v, lapack_int ldv, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* work, float* rwork, lapack_int* iwork ); lapack_int LAPACKE_zggsvd_work( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int n, lapack_int p, lapack_int* k, lapack_int* l, lapack_complex_double* a, lapack_int 
lda, lapack_complex_double* b, lapack_int ldb, double* alpha, double* beta, lapack_complex_double* u, lapack_int ldu, lapack_complex_double* v, lapack_int ldv, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* work, double* rwork, lapack_int* iwork ); lapack_int LAPACKE_sggsvp_work( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float tola, float tolb, lapack_int* k, lapack_int* l, float* u, lapack_int ldu, float* v, lapack_int ldv, float* q, lapack_int ldq, lapack_int* iwork, float* tau, float* work ); lapack_int LAPACKE_dggsvp_work( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double tola, double tolb, lapack_int* k, lapack_int* l, double* u, lapack_int ldu, double* v, lapack_int ldv, double* q, lapack_int ldq, lapack_int* iwork, double* tau, double* work ); lapack_int LAPACKE_cggsvp_work( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float tola, float tolb, lapack_int* k, lapack_int* l, lapack_complex_float* u, lapack_int ldu, lapack_complex_float* v, lapack_int ldv, lapack_complex_float* q, lapack_int ldq, lapack_int* iwork, float* rwork, lapack_complex_float* tau, lapack_complex_float* work ); lapack_int LAPACKE_zggsvp_work( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double tola, double tolb, lapack_int* k, lapack_int* l, lapack_complex_double* u, lapack_int ldu, lapack_complex_double* v, lapack_int ldv, lapack_complex_double* q, lapack_int ldq, lapack_int* iwork, double* rwork, lapack_complex_double* tau, lapack_complex_double* work ); lapack_int LAPACKE_sgtcon_work( char norm, lapack_int n, const float* dl, const float* d, const float* du, const float* du2, const lapack_int* ipiv, float anorm, float* rcond, float* work, lapack_int* iwork ); lapack_int LAPACKE_dgtcon_work( char norm, lapack_int n, const double* dl, const double* d, const double* du, const double* du2, const lapack_int* ipiv, double anorm, double* rcond, double* work, lapack_int* iwork ); lapack_int LAPACKE_cgtcon_work( char norm, lapack_int n, const lapack_complex_float* dl, const lapack_complex_float* d, const lapack_complex_float* du, const lapack_complex_float* du2, const lapack_int* ipiv, float anorm, float* rcond, lapack_complex_float* work ); lapack_int LAPACKE_zgtcon_work( char norm, lapack_int n, const lapack_complex_double* dl, const lapack_complex_double* d, const lapack_complex_double* du, const lapack_complex_double* du2, const lapack_int* ipiv, double anorm, double* rcond, lapack_complex_double* work ); lapack_int LAPACKE_sgtrfs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const float* dl, const float* d, const float* du, const float* dlf, const float* df, const float* duf, const float* du2, const lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dgtrfs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const double* dl, const double* d, const double* du, const double* dlf, const double* df, const double* duf, const double* du2, const lapack_int* ipiv, const double* b, lapack_int ldb, double* x, 
lapack_int ldx, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_cgtrfs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_float* dl, const lapack_complex_float* d, const lapack_complex_float* du, const lapack_complex_float* dlf, const lapack_complex_float* df, const lapack_complex_float* duf, const lapack_complex_float* du2, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgtrfs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_double* dl, const lapack_complex_double* d, const lapack_complex_double* du, const lapack_complex_double* dlf, const lapack_complex_double* df, const lapack_complex_double* duf, const lapack_complex_double* du2, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgtsv_work( int matrix_order, lapack_int n, lapack_int nrhs, float* dl, float* d, float* du, float* b, lapack_int ldb ); lapack_int LAPACKE_dgtsv_work( int matrix_order, lapack_int n, lapack_int nrhs, double* dl, double* d, double* du, double* b, lapack_int ldb ); lapack_int LAPACKE_cgtsv_work( int matrix_order, lapack_int n, lapack_int nrhs, lapack_complex_float* dl, lapack_complex_float* d, lapack_complex_float* du, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zgtsv_work( int matrix_order, lapack_int n, lapack_int nrhs, lapack_complex_double* dl, lapack_complex_double* d, lapack_complex_double* du, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sgtsvx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, const float* dl, const float* d, const float* du, float* dlf, float* df, float* duf, float* du2, lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dgtsvx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, const double* dl, const double* d, const double* du, double* dlf, double* df, double* duf, double* du2, lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_cgtsvx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_float* dl, const lapack_complex_float* d, const lapack_complex_float* du, lapack_complex_float* dlf, lapack_complex_float* df, lapack_complex_float* duf, lapack_complex_float* du2, lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zgtsvx_work( int matrix_order, char fact, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_double* dl, const lapack_complex_double* d, const lapack_complex_double* du, lapack_complex_double* dlf, lapack_complex_double* df, lapack_complex_double* duf, lapack_complex_double* du2, lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sgttrf_work( lapack_int n, 
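/* --------------------------------------------------------------------------
 * Illustrative sketch, not part of the original listing: a tridiagonal
 * solve with LAPACKE_dgtsv_work (declared above).  The three diagonals and
 * the right-hand side are all overwritten during the factorization.
 * LAPACK_COL_MAJOR is assumed to be defined elsewhere in this header.
 *
 *     double dl[2] = { -1.0, -1.0 };          // sub-diagonal   (length n-1)
 *     double d[3]  = {  2.0,  2.0,  2.0 };    // main diagonal  (length n)
 *     double du[2] = { -1.0, -1.0 };          // super-diagonal (length n-1)
 *     double b[3]  = {  1.0,  0.0,  1.0 };    // overwritten with the solution
 *     lapack_int info = LAPACKE_dgtsv_work( LAPACK_COL_MAJOR, 3, 1,
 *                                           dl, d, du, b, 3 );
 * ------------------------------------------------------------------------ */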
float* dl, float* d, float* du, float* du2, lapack_int* ipiv ); lapack_int LAPACKE_dgttrf_work( lapack_int n, double* dl, double* d, double* du, double* du2, lapack_int* ipiv ); lapack_int LAPACKE_cgttrf_work( lapack_int n, lapack_complex_float* dl, lapack_complex_float* d, lapack_complex_float* du, lapack_complex_float* du2, lapack_int* ipiv ); lapack_int LAPACKE_zgttrf_work( lapack_int n, lapack_complex_double* dl, lapack_complex_double* d, lapack_complex_double* du, lapack_complex_double* du2, lapack_int* ipiv ); lapack_int LAPACKE_sgttrs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const float* dl, const float* d, const float* du, const float* du2, const lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dgttrs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const double* dl, const double* d, const double* du, const double* du2, const lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_cgttrs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_float* dl, const lapack_complex_float* d, const lapack_complex_float* du, const lapack_complex_float* du2, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zgttrs_work( int matrix_order, char trans, lapack_int n, lapack_int nrhs, const lapack_complex_double* dl, const lapack_complex_double* d, const lapack_complex_double* du, const lapack_complex_double* du2, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_chbev_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, lapack_complex_float* ab, lapack_int ldab, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zhbev_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, lapack_complex_double* ab, lapack_int ldab, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_chbevd_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, lapack_complex_float* ab, lapack_int ldab, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_zhbevd_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, lapack_complex_double* ab, lapack_int ldab, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_chbevx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int kd, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* q, lapack_int ldq, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, float* rwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_zhbevx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int kd, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* q, lapack_int ldq, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, double* rwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_chbgst_work( int matrix_order, char vect, char uplo, 
lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_float* ab, lapack_int ldab, const lapack_complex_float* bb, lapack_int ldbb, lapack_complex_float* x, lapack_int ldx, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zhbgst_work( int matrix_order, char vect, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_double* ab, lapack_int ldab, const lapack_complex_double* bb, lapack_int ldbb, lapack_complex_double* x, lapack_int ldx, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_chbgv_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* bb, lapack_int ldbb, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zhbgv_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* bb, lapack_int ldbb, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_chbgvd_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* bb, lapack_int ldbb, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_zhbgvd_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* bb, lapack_int ldbb, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_chbgvx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* bb, lapack_int ldbb, lapack_complex_float* q, lapack_int ldq, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, float* rwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_zhbgvx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int ka, lapack_int kb, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* bb, lapack_int ldbb, lapack_complex_double* q, lapack_int ldq, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, double* rwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_chbtrd_work( int matrix_order, char vect, char uplo, lapack_int n, lapack_int kd, lapack_complex_float* ab, lapack_int ldab, float* d, float* e, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* work ); lapack_int LAPACKE_zhbtrd_work( int matrix_order, char vect, char uplo, lapack_int n, lapack_int kd, lapack_complex_double* ab, lapack_int ldab, double* d, double* e, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* work ); lapack_int LAPACKE_checon_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, float anorm, float* rcond, lapack_complex_float* work ); lapack_int LAPACKE_zhecon_work( int matrix_order, char uplo, 
lapack_int n, const lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, double anorm, double* rcond, lapack_complex_double* work ); lapack_int LAPACKE_cheequb_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* s, float* scond, float* amax, lapack_complex_float* work ); lapack_int LAPACKE_zheequb_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* s, double* scond, double* amax, lapack_complex_double* work ); lapack_int LAPACKE_cheev_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, float* w, lapack_complex_float* work, lapack_int lwork, float* rwork ); lapack_int LAPACKE_zheev_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, double* w, lapack_complex_double* work, lapack_int lwork, double* rwork ); lapack_int LAPACKE_cheevd_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, float* w, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_zheevd_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, double* w, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_cheevr_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_int* isuppz, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_zheevr_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_int* isuppz, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_cheevx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_zheevx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_chegst_work( int matrix_order, lapack_int itype, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zhegst_work( int matrix_order, lapack_int itype, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_chegv_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, 
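/* --------------------------------------------------------------------------
 * Illustrative sketch, not part of the original listing: eigenvalues and
 * eigenvectors of a Hermitian matrix with LAPACKE_zheev_work (declared
 * above).  The snippet assumes lapack_complex_double maps to the C99
 * double _Complex type (this header's default when no override macro is
 * set), and the fixed buffers are an assumption (the documented minima are
 * 2*n-1 for work and 3*n-2 for rwork).  LAPACK_COL_MAJOR is assumed to be
 * defined elsewhere in this header.
 *
 *     lapack_complex_double a[4] = { 2.0, 1.0, 1.0, 3.0 };  // Hermitian 2x2, column-major
 *     lapack_complex_double work[8];
 *     double w[2], rwork[8];
 *     lapack_int info = LAPACKE_zheev_work( LAPACK_COL_MAJOR, 'V', 'U', 2,
 *                                           a, 2, w, work, 8, rwork );
 *     // w holds the eigenvalues in ascending order; with jobz = 'V' the
 *     // eigenvectors overwrite a
 * ------------------------------------------------------------------------ */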
lapack_int ldb, float* w, lapack_complex_float* work, lapack_int lwork, float* rwork ); lapack_int LAPACKE_zhegv_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double* w, lapack_complex_double* work, lapack_int lwork, double* rwork ); lapack_int LAPACKE_chegvd_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float* w, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_zhegvd_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double* w, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_chegvx_work( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_zhegvx_work( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_cherfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zherfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_cherfsx_work( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_int* ipiv, const float* s, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zherfsx_work( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_int* ipiv, const double* s, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, 
double* params, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_chesv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zhesv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_chesvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, lapack_int lwork, float* rwork ); lapack_int LAPACKE_zhesvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, lapack_int lwork, double* rwork ); lapack_int LAPACKE_chesvxx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* s, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zhesvxx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* s, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_chetrd_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, float* d, float* e, lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zhetrd_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, double* d, double* e, lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_chetrf_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* ipiv, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zhetrf_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_chetri_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* work ); lapack_int LAPACKE_zhetri_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* work ); lapack_int LAPACKE_chetrs_work( int matrix_order, char 
uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zhetrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_chfrk_work( int matrix_order, char transr, char uplo, char trans, lapack_int n, lapack_int k, float alpha, const lapack_complex_float* a, lapack_int lda, float beta, lapack_complex_float* c ); lapack_int LAPACKE_zhfrk_work( int matrix_order, char transr, char uplo, char trans, lapack_int n, lapack_int k, double alpha, const lapack_complex_double* a, lapack_int lda, double beta, lapack_complex_double* c ); lapack_int LAPACKE_shgeqz_work( int matrix_order, char job, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, float* h, lapack_int ldh, float* t, lapack_int ldt, float* alphar, float* alphai, float* beta, float* q, lapack_int ldq, float* z, lapack_int ldz, float* work, lapack_int lwork ); lapack_int LAPACKE_dhgeqz_work( int matrix_order, char job, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, double* h, lapack_int ldh, double* t, lapack_int ldt, double* alphar, double* alphai, double* beta, double* q, lapack_int ldq, double* z, lapack_int ldz, double* work, lapack_int lwork ); lapack_int LAPACKE_chgeqz_work( int matrix_order, char job, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_float* h, lapack_int ldh, lapack_complex_float* t, lapack_int ldt, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, lapack_int lwork, float* rwork ); lapack_int LAPACKE_zhgeqz_work( int matrix_order, char job, char compq, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_double* h, lapack_int ldh, lapack_complex_double* t, lapack_int ldt, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, lapack_int lwork, double* rwork ); lapack_int LAPACKE_chpcon_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* ap, const lapack_int* ipiv, float anorm, float* rcond, lapack_complex_float* work ); lapack_int LAPACKE_zhpcon_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* ap, const lapack_int* ipiv, double anorm, double* rcond, lapack_complex_double* work ); lapack_int LAPACKE_chpev_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_float* ap, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zhpev_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_double* ap, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_chpevd_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_float* ap, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_zhpevd_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_complex_double* ap, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, lapack_int lwork, double* 
rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_chpevx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_complex_float* ap, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, float* rwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_zhpevx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_complex_double* ap, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, double* rwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_chpgst_work( int matrix_order, lapack_int itype, char uplo, lapack_int n, lapack_complex_float* ap, const lapack_complex_float* bp ); lapack_int LAPACKE_zhpgst_work( int matrix_order, lapack_int itype, char uplo, lapack_int n, lapack_complex_double* ap, const lapack_complex_double* bp ); lapack_int LAPACKE_chpgv_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_float* ap, lapack_complex_float* bp, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zhpgv_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_double* ap, lapack_complex_double* bp, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_chpgvd_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_float* ap, lapack_complex_float* bp, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_zhpgvd_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, lapack_complex_double* ap, lapack_complex_double* bp, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_chpgvx_work( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, lapack_complex_float* ap, lapack_complex_float* bp, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, float* rwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_zhpgvx_work( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, lapack_complex_double* ap, lapack_complex_double* bp, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, double* rwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_chprfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, const lapack_complex_float* afp, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zhprfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, const lapack_complex_double* afp, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int 
ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_chpsv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* ap, lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zhpsv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* ap, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_chpsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, lapack_complex_float* afp, lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zhpsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, lapack_complex_double* afp, lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_chptrd_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap, float* d, float* e, lapack_complex_float* tau ); lapack_int LAPACKE_zhptrd_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* ap, double* d, double* e, lapack_complex_double* tau ); lapack_int LAPACKE_chptrf_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap, lapack_int* ipiv ); lapack_int LAPACKE_zhptrf_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* ap, lapack_int* ipiv ); lapack_int LAPACKE_chptri_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap, const lapack_int* ipiv, lapack_complex_float* work ); lapack_int LAPACKE_zhptri_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* ap, const lapack_int* ipiv, lapack_complex_double* work ); lapack_int LAPACKE_chptrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zhptrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_shsein_work( int matrix_order, char job, char eigsrc, char initv, lapack_logical* select, lapack_int n, const float* h, lapack_int ldh, float* wr, const float* wi, float* vl, lapack_int ldvl, float* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, float* work, lapack_int* ifaill, lapack_int* ifailr ); lapack_int LAPACKE_dhsein_work( int matrix_order, char job, char eigsrc, char initv, lapack_logical* select, lapack_int n, const double* h, lapack_int ldh, double* wr, const double* wi, double* vl, lapack_int ldvl, double* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, double* work, lapack_int* ifaill, lapack_int* ifailr ); lapack_int LAPACKE_chsein_work( int matrix_order, char job, char eigsrc, char initv, const lapack_logical* select, lapack_int n, const lapack_complex_float* h, lapack_int ldh, lapack_complex_float* w, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, lapack_complex_float* work, float* rwork, lapack_int* ifaill, lapack_int* ifailr ); lapack_int LAPACKE_zhsein_work( int matrix_order, char job, char 
eigsrc, char initv, const lapack_logical* select, lapack_int n, const lapack_complex_double* h, lapack_int ldh, lapack_complex_double* w, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, lapack_complex_double* work, double* rwork, lapack_int* ifaill, lapack_int* ifailr ); lapack_int LAPACKE_shseqr_work( int matrix_order, char job, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, float* h, lapack_int ldh, float* wr, float* wi, float* z, lapack_int ldz, float* work, lapack_int lwork ); lapack_int LAPACKE_dhseqr_work( int matrix_order, char job, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, double* h, lapack_int ldh, double* wr, double* wi, double* z, lapack_int ldz, double* work, lapack_int lwork ); lapack_int LAPACKE_chseqr_work( int matrix_order, char job, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_float* h, lapack_int ldh, lapack_complex_float* w, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zhseqr_work( int matrix_order, char job, char compz, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_double* h, lapack_int ldh, lapack_complex_double* w, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_clacgv_work( lapack_int n, lapack_complex_float* x, lapack_int incx ); lapack_int LAPACKE_zlacgv_work( lapack_int n, lapack_complex_double* x, lapack_int incx ); lapack_int LAPACKE_slacpy_work( int matrix_order, char uplo, lapack_int m, lapack_int n, const float* a, lapack_int lda, float* b, lapack_int ldb ); lapack_int LAPACKE_dlacpy_work( int matrix_order, char uplo, lapack_int m, lapack_int n, const double* a, lapack_int lda, double* b, lapack_int ldb ); lapack_int LAPACKE_clacpy_work( int matrix_order, char uplo, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zlacpy_work( int matrix_order, char uplo, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_zlag2c_work( int matrix_order, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, lapack_complex_float* sa, lapack_int ldsa ); lapack_int LAPACKE_slag2d_work( int matrix_order, lapack_int m, lapack_int n, const float* sa, lapack_int ldsa, double* a, lapack_int lda ); lapack_int LAPACKE_dlag2s_work( int matrix_order, lapack_int m, lapack_int n, const double* a, lapack_int lda, float* sa, lapack_int ldsa ); lapack_int LAPACKE_clag2z_work( int matrix_order, lapack_int m, lapack_int n, const lapack_complex_float* sa, lapack_int ldsa, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_slagge_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const float* d, float* a, lapack_int lda, lapack_int* iseed, float* work ); lapack_int LAPACKE_dlagge_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const double* d, double* a, lapack_int lda, lapack_int* iseed, double* work ); lapack_int LAPACKE_clagge_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const float* d, lapack_complex_float* a, lapack_int lda, lapack_int* iseed, lapack_complex_float* work ); lapack_int LAPACKE_zlagge_work( int matrix_order, lapack_int m, lapack_int n, lapack_int kl, lapack_int ku, const double* d, lapack_complex_double* a, 
lapack_int lda, lapack_int* iseed, lapack_complex_double* work ); lapack_int LAPACKE_claghe_work( int matrix_order, lapack_int n, lapack_int k, const float* d, lapack_complex_float* a, lapack_int lda, lapack_int* iseed, lapack_complex_float* work ); lapack_int LAPACKE_zlaghe_work( int matrix_order, lapack_int n, lapack_int k, const double* d, lapack_complex_double* a, lapack_int lda, lapack_int* iseed, lapack_complex_double* work ); lapack_int LAPACKE_slagsy_work( int matrix_order, lapack_int n, lapack_int k, const float* d, float* a, lapack_int lda, lapack_int* iseed, float* work ); lapack_int LAPACKE_dlagsy_work( int matrix_order, lapack_int n, lapack_int k, const double* d, double* a, lapack_int lda, lapack_int* iseed, double* work ); lapack_int LAPACKE_clagsy_work( int matrix_order, lapack_int n, lapack_int k, const float* d, lapack_complex_float* a, lapack_int lda, lapack_int* iseed, lapack_complex_float* work ); lapack_int LAPACKE_zlagsy_work( int matrix_order, lapack_int n, lapack_int k, const double* d, lapack_complex_double* a, lapack_int lda, lapack_int* iseed, lapack_complex_double* work ); lapack_int LAPACKE_slapmr_work( int matrix_order, lapack_logical forwrd, lapack_int m, lapack_int n, float* x, lapack_int ldx, lapack_int* k ); lapack_int LAPACKE_dlapmr_work( int matrix_order, lapack_logical forwrd, lapack_int m, lapack_int n, double* x, lapack_int ldx, lapack_int* k ); lapack_int LAPACKE_clapmr_work( int matrix_order, lapack_logical forwrd, lapack_int m, lapack_int n, lapack_complex_float* x, lapack_int ldx, lapack_int* k ); lapack_int LAPACKE_zlapmr_work( int matrix_order, lapack_logical forwrd, lapack_int m, lapack_int n, lapack_complex_double* x, lapack_int ldx, lapack_int* k ); lapack_int LAPACKE_slartgp_work( float f, float g, float* cs, float* sn, float* r ); lapack_int LAPACKE_dlartgp_work( double f, double g, double* cs, double* sn, double* r ); lapack_int LAPACKE_slartgs_work( float x, float y, float sigma, float* cs, float* sn ); lapack_int LAPACKE_dlartgs_work( double x, double y, double sigma, double* cs, double* sn ); float LAPACKE_slapy2_work( float x, float y ); double LAPACKE_dlapy2_work( double x, double y ); float LAPACKE_slapy3_work( float x, float y, float z ); double LAPACKE_dlapy3_work( double x, double y, double z ); float LAPACKE_slamch_work( char cmach ); double LAPACKE_dlamch_work( char cmach ); float LAPACKE_slange_work( int matrix_order, char norm, lapack_int m, lapack_int n, const float* a, lapack_int lda, float* work ); double LAPACKE_dlange_work( int matrix_order, char norm, lapack_int m, lapack_int n, const double* a, lapack_int lda, double* work ); float LAPACKE_clange_work( int matrix_order, char norm, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* work ); double LAPACKE_zlange_work( int matrix_order, char norm, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* work ); float LAPACKE_clanhe_work( int matrix_order, char norm, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* work ); double LAPACKE_zlanhe_work( int matrix_order, char norm, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* work ); float LAPACKE_slansy_work( int matrix_order, char norm, char uplo, lapack_int n, const float* a, lapack_int lda, float* work ); double LAPACKE_dlansy_work( int matrix_order, char norm, char uplo, lapack_int n, const double* a, lapack_int lda, double* work ); float LAPACKE_clansy_work( int matrix_order, char norm, char uplo, 
lapack_int n, const lapack_complex_float* a, lapack_int lda, float* work ); double LAPACKE_zlansy_work( int matrix_order, char norm, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* work ); float LAPACKE_slantr_work( int matrix_order, char norm, char uplo, char diag, lapack_int m, lapack_int n, const float* a, lapack_int lda, float* work ); double LAPACKE_dlantr_work( int matrix_order, char norm, char uplo, char diag, lapack_int m, lapack_int n, const double* a, lapack_int lda, double* work ); float LAPACKE_clantr_work( int matrix_order, char norm, char uplo, char diag, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* work ); double LAPACKE_zlantr_work( int matrix_order, char norm, char uplo, char diag, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* work ); lapack_int LAPACKE_slarfb_work( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, const float* v, lapack_int ldv, const float* t, lapack_int ldt, float* c, lapack_int ldc, float* work, lapack_int ldwork ); lapack_int LAPACKE_dlarfb_work( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, const double* v, lapack_int ldv, const double* t, lapack_int ldt, double* c, lapack_int ldc, double* work, lapack_int ldwork ); lapack_int LAPACKE_clarfb_work( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_float* v, lapack_int ldv, const lapack_complex_float* t, lapack_int ldt, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work, lapack_int ldwork ); lapack_int LAPACKE_zlarfb_work( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_double* v, lapack_int ldv, const lapack_complex_double* t, lapack_int ldt, lapack_complex_double* c, lapack_int ldc, lapack_complex_double* work, lapack_int ldwork ); lapack_int LAPACKE_slarfg_work( lapack_int n, float* alpha, float* x, lapack_int incx, float* tau ); lapack_int LAPACKE_dlarfg_work( lapack_int n, double* alpha, double* x, lapack_int incx, double* tau ); lapack_int LAPACKE_clarfg_work( lapack_int n, lapack_complex_float* alpha, lapack_complex_float* x, lapack_int incx, lapack_complex_float* tau ); lapack_int LAPACKE_zlarfg_work( lapack_int n, lapack_complex_double* alpha, lapack_complex_double* x, lapack_int incx, lapack_complex_double* tau ); lapack_int LAPACKE_slarft_work( int matrix_order, char direct, char storev, lapack_int n, lapack_int k, const float* v, lapack_int ldv, const float* tau, float* t, lapack_int ldt ); lapack_int LAPACKE_dlarft_work( int matrix_order, char direct, char storev, lapack_int n, lapack_int k, const double* v, lapack_int ldv, const double* tau, double* t, lapack_int ldt ); lapack_int LAPACKE_clarft_work( int matrix_order, char direct, char storev, lapack_int n, lapack_int k, const lapack_complex_float* v, lapack_int ldv, const lapack_complex_float* tau, lapack_complex_float* t, lapack_int ldt ); lapack_int LAPACKE_zlarft_work( int matrix_order, char direct, char storev, lapack_int n, lapack_int k, const lapack_complex_double* v, lapack_int ldv, const lapack_complex_double* tau, lapack_complex_double* t, lapack_int ldt ); lapack_int LAPACKE_slarfx_work( int matrix_order, char side, lapack_int m, lapack_int n, const float* v, float tau, float* c, lapack_int ldc, float* work ); lapack_int 
LAPACKE_dlarfx_work( int matrix_order, char side, lapack_int m, lapack_int n, const double* v, double tau, double* c, lapack_int ldc, double* work ); lapack_int LAPACKE_clarfx_work( int matrix_order, char side, lapack_int m, lapack_int n, const lapack_complex_float* v, lapack_complex_float tau, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work ); lapack_int LAPACKE_zlarfx_work( int matrix_order, char side, lapack_int m, lapack_int n, const lapack_complex_double* v, lapack_complex_double tau, lapack_complex_double* c, lapack_int ldc, lapack_complex_double* work ); lapack_int LAPACKE_slarnv_work( lapack_int idist, lapack_int* iseed, lapack_int n, float* x ); lapack_int LAPACKE_dlarnv_work( lapack_int idist, lapack_int* iseed, lapack_int n, double* x ); lapack_int LAPACKE_clarnv_work( lapack_int idist, lapack_int* iseed, lapack_int n, lapack_complex_float* x ); lapack_int LAPACKE_zlarnv_work( lapack_int idist, lapack_int* iseed, lapack_int n, lapack_complex_double* x ); lapack_int LAPACKE_slaset_work( int matrix_order, char uplo, lapack_int m, lapack_int n, float alpha, float beta, float* a, lapack_int lda ); lapack_int LAPACKE_dlaset_work( int matrix_order, char uplo, lapack_int m, lapack_int n, double alpha, double beta, double* a, lapack_int lda ); lapack_int LAPACKE_claset_work( int matrix_order, char uplo, lapack_int m, lapack_int n, lapack_complex_float alpha, lapack_complex_float beta, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_zlaset_work( int matrix_order, char uplo, lapack_int m, lapack_int n, lapack_complex_double alpha, lapack_complex_double beta, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_slasrt_work( char id, lapack_int n, float* d ); lapack_int LAPACKE_dlasrt_work( char id, lapack_int n, double* d ); lapack_int LAPACKE_slaswp_work( int matrix_order, lapack_int n, float* a, lapack_int lda, lapack_int k1, lapack_int k2, const lapack_int* ipiv, lapack_int incx ); lapack_int LAPACKE_dlaswp_work( int matrix_order, lapack_int n, double* a, lapack_int lda, lapack_int k1, lapack_int k2, const lapack_int* ipiv, lapack_int incx ); lapack_int LAPACKE_claswp_work( int matrix_order, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int k1, lapack_int k2, const lapack_int* ipiv, lapack_int incx ); lapack_int LAPACKE_zlaswp_work( int matrix_order, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int k1, lapack_int k2, const lapack_int* ipiv, lapack_int incx ); lapack_int LAPACKE_slatms_work( int matrix_order, lapack_int m, lapack_int n, char dist, lapack_int* iseed, char sym, float* d, lapack_int mode, float cond, float dmax, lapack_int kl, lapack_int ku, char pack, float* a, lapack_int lda, float* work ); lapack_int LAPACKE_dlatms_work( int matrix_order, lapack_int m, lapack_int n, char dist, lapack_int* iseed, char sym, double* d, lapack_int mode, double cond, double dmax, lapack_int kl, lapack_int ku, char pack, double* a, lapack_int lda, double* work ); lapack_int LAPACKE_clatms_work( int matrix_order, lapack_int m, lapack_int n, char dist, lapack_int* iseed, char sym, float* d, lapack_int mode, float cond, float dmax, lapack_int kl, lapack_int ku, char pack, lapack_complex_float* a, lapack_int lda, lapack_complex_float* work ); lapack_int LAPACKE_zlatms_work( int matrix_order, lapack_int m, lapack_int n, char dist, lapack_int* iseed, char sym, double* d, lapack_int mode, double cond, double dmax, lapack_int kl, lapack_int ku, char pack, lapack_complex_double* a, lapack_int lda, lapack_complex_double* work 
); lapack_int LAPACKE_slauum_work( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda ); lapack_int LAPACKE_dlauum_work( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda ); lapack_int LAPACKE_clauum_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_zlauum_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_sopgtr_work( int matrix_order, char uplo, lapack_int n, const float* ap, const float* tau, float* q, lapack_int ldq, float* work ); lapack_int LAPACKE_dopgtr_work( int matrix_order, char uplo, lapack_int n, const double* ap, const double* tau, double* q, lapack_int ldq, double* work ); lapack_int LAPACKE_sopmtr_work( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const float* ap, const float* tau, float* c, lapack_int ldc, float* work ); lapack_int LAPACKE_dopmtr_work( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const double* ap, const double* tau, double* c, lapack_int ldc, double* work ); lapack_int LAPACKE_sorgbr_work( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int k, float* a, lapack_int lda, const float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dorgbr_work( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int k, double* a, lapack_int lda, const double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_sorghr_work( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, float* a, lapack_int lda, const float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dorghr_work( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, double* a, lapack_int lda, const double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_sorglq_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, float* a, lapack_int lda, const float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dorglq_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, double* a, lapack_int lda, const double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_sorgql_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, float* a, lapack_int lda, const float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dorgql_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, double* a, lapack_int lda, const double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_sorgqr_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, float* a, lapack_int lda, const float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dorgqr_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, double* a, lapack_int lda, const double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_sorgrq_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, float* a, lapack_int lda, const float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dorgrq_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, double* a, lapack_int lda, const double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_sorgtr_work( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, const float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dorgtr_work( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, const double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_sormbr_work( int 
matrix_order, char vect, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc, float* work, lapack_int lwork ); lapack_int LAPACKE_dormbr_work( int matrix_order, char vect, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc, double* work, lapack_int lwork ); lapack_int LAPACKE_sormhr_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int ilo, lapack_int ihi, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc, float* work, lapack_int lwork ); lapack_int LAPACKE_dormhr_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int ilo, lapack_int ihi, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc, double* work, lapack_int lwork ); lapack_int LAPACKE_sormlq_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc, float* work, lapack_int lwork ); lapack_int LAPACKE_dormlq_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc, double* work, lapack_int lwork ); lapack_int LAPACKE_sormql_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc, float* work, lapack_int lwork ); lapack_int LAPACKE_dormql_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc, double* work, lapack_int lwork ); lapack_int LAPACKE_sormqr_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc, float* work, lapack_int lwork ); lapack_int LAPACKE_dormqr_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc, double* work, lapack_int lwork ); lapack_int LAPACKE_sormrq_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc, float* work, lapack_int lwork ); lapack_int LAPACKE_dormrq_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc, double* work, lapack_int lwork ); lapack_int LAPACKE_sormrz_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc, float* work, lapack_int lwork ); lapack_int LAPACKE_dormrz_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc, double* work, lapack_int lwork ); lapack_int LAPACKE_sormtr_work( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const float* a, lapack_int lda, const float* tau, float* c, lapack_int ldc, float* work, lapack_int lwork ); lapack_int LAPACKE_dormtr_work( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const 
double* a, lapack_int lda, const double* tau, double* c, lapack_int ldc, double* work, lapack_int lwork ); lapack_int LAPACKE_spbcon_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, const float* ab, lapack_int ldab, float anorm, float* rcond, float* work, lapack_int* iwork ); lapack_int LAPACKE_dpbcon_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, const double* ab, lapack_int ldab, double anorm, double* rcond, double* work, lapack_int* iwork ); lapack_int LAPACKE_cpbcon_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, const lapack_complex_float* ab, lapack_int ldab, float anorm, float* rcond, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zpbcon_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, const lapack_complex_double* ab, lapack_int ldab, double anorm, double* rcond, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_spbequ_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, const float* ab, lapack_int ldab, float* s, float* scond, float* amax ); lapack_int LAPACKE_dpbequ_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, const double* ab, lapack_int ldab, double* s, double* scond, double* amax ); lapack_int LAPACKE_cpbequ_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, const lapack_complex_float* ab, lapack_int ldab, float* s, float* scond, float* amax ); lapack_int LAPACKE_zpbequ_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, const lapack_complex_double* ab, lapack_int ldab, double* s, double* scond, double* amax ); lapack_int LAPACKE_spbrfs_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const float* ab, lapack_int ldab, const float* afb, lapack_int ldafb, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dpbrfs_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const double* ab, lapack_int ldab, const double* afb, lapack_int ldafb, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_cpbrfs_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, const lapack_complex_float* afb, lapack_int ldafb, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zpbrfs_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, const lapack_complex_double* afb, lapack_int ldafb, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_spbstf_work( int matrix_order, char uplo, lapack_int n, lapack_int kb, float* bb, lapack_int ldbb ); lapack_int LAPACKE_dpbstf_work( int matrix_order, char uplo, lapack_int n, lapack_int kb, double* bb, lapack_int ldbb ); lapack_int LAPACKE_cpbstf_work( int matrix_order, char uplo, lapack_int n, lapack_int kb, lapack_complex_float* bb, lapack_int ldbb ); lapack_int LAPACKE_zpbstf_work( int matrix_order, char uplo, lapack_int n, lapack_int kb, lapack_complex_double* bb, lapack_int ldbb ); lapack_int LAPACKE_spbsv_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, 
float* ab, lapack_int ldab, float* b, lapack_int ldb ); lapack_int LAPACKE_dpbsv_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, double* ab, lapack_int ldab, double* b, lapack_int ldb ); lapack_int LAPACKE_cpbsv_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zpbsv_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_spbsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, float* ab, lapack_int ldab, float* afb, lapack_int ldafb, char* equed, float* s, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dpbsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, double* ab, lapack_int ldab, double* afb, lapack_int ldafb, char* equed, double* s, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_cpbsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* afb, lapack_int ldafb, char* equed, float* s, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zpbsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* afb, lapack_int ldafb, char* equed, double* s, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_spbtrf_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, float* ab, lapack_int ldab ); lapack_int LAPACKE_dpbtrf_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, double* ab, lapack_int ldab ); lapack_int LAPACKE_cpbtrf_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_complex_float* ab, lapack_int ldab ); lapack_int LAPACKE_zpbtrf_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_complex_double* ab, lapack_int ldab ); lapack_int LAPACKE_spbtrs_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const float* ab, lapack_int ldab, float* b, lapack_int ldb ); lapack_int LAPACKE_dpbtrs_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const double* ab, lapack_int ldab, double* b, lapack_int ldb ); lapack_int LAPACKE_cpbtrs_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zpbtrs_work( int matrix_order, char uplo, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_spftrf_work( int matrix_order, char transr, char uplo, lapack_int n, float* a ); lapack_int LAPACKE_dpftrf_work( int matrix_order, char transr, char uplo, lapack_int n, double* a ); lapack_int LAPACKE_cpftrf_work( int 
matrix_order, char transr, char uplo, lapack_int n, lapack_complex_float* a ); lapack_int LAPACKE_zpftrf_work( int matrix_order, char transr, char uplo, lapack_int n, lapack_complex_double* a ); lapack_int LAPACKE_spftri_work( int matrix_order, char transr, char uplo, lapack_int n, float* a ); lapack_int LAPACKE_dpftri_work( int matrix_order, char transr, char uplo, lapack_int n, double* a ); lapack_int LAPACKE_cpftri_work( int matrix_order, char transr, char uplo, lapack_int n, lapack_complex_float* a ); lapack_int LAPACKE_zpftri_work( int matrix_order, char transr, char uplo, lapack_int n, lapack_complex_double* a ); lapack_int LAPACKE_spftrs_work( int matrix_order, char transr, char uplo, lapack_int n, lapack_int nrhs, const float* a, float* b, lapack_int ldb ); lapack_int LAPACKE_dpftrs_work( int matrix_order, char transr, char uplo, lapack_int n, lapack_int nrhs, const double* a, double* b, lapack_int ldb ); lapack_int LAPACKE_cpftrs_work( int matrix_order, char transr, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zpftrs_work( int matrix_order, char transr, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_spocon_work( int matrix_order, char uplo, lapack_int n, const float* a, lapack_int lda, float anorm, float* rcond, float* work, lapack_int* iwork ); lapack_int LAPACKE_dpocon_work( int matrix_order, char uplo, lapack_int n, const double* a, lapack_int lda, double anorm, double* rcond, double* work, lapack_int* iwork ); lapack_int LAPACKE_cpocon_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, float anorm, float* rcond, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zpocon_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, double anorm, double* rcond, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_spoequ_work( int matrix_order, lapack_int n, const float* a, lapack_int lda, float* s, float* scond, float* amax ); lapack_int LAPACKE_dpoequ_work( int matrix_order, lapack_int n, const double* a, lapack_int lda, double* s, double* scond, double* amax ); lapack_int LAPACKE_cpoequ_work( int matrix_order, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* s, float* scond, float* amax ); lapack_int LAPACKE_zpoequ_work( int matrix_order, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* s, double* scond, double* amax ); lapack_int LAPACKE_spoequb_work( int matrix_order, lapack_int n, const float* a, lapack_int lda, float* s, float* scond, float* amax ); lapack_int LAPACKE_dpoequb_work( int matrix_order, lapack_int n, const double* a, lapack_int lda, double* s, double* scond, double* amax ); lapack_int LAPACKE_cpoequb_work( int matrix_order, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* s, float* scond, float* amax ); lapack_int LAPACKE_zpoequb_work( int matrix_order, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* s, double* scond, double* amax ); lapack_int LAPACKE_sporfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* af, lapack_int ldaf, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dporfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const 
double* a, lapack_int lda, const double* af, lapack_int ldaf, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_cporfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zporfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sporfsx_work( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* af, lapack_int ldaf, const float* s, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, float* work, lapack_int* iwork ); lapack_int LAPACKE_dporfsx_work( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const double* af, lapack_int ldaf, const double* s, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, double* work, lapack_int* iwork ); lapack_int LAPACKE_cporfsx_work( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const float* s, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zporfsx_work( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const double* s, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sposv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* b, lapack_int ldb ); lapack_int LAPACKE_dposv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* b, lapack_int ldb ); lapack_int LAPACKE_cposv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zposv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_dsposv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* b, lapack_int ldb, double* x, lapack_int ldx, double* work, float* swork, lapack_int* iter ); 
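/*
 * Usage sketch (hypothetical example, illustrative values only): calling the
 * LAPACKE_dposv_work prototype declared above to solve a small symmetric
 * positive definite system. LAPACK_COL_MAJOR is assumed to be the
 * column-major layout constant defined elsewhere in this header; the _work
 * variants presumably take the caller's arrays as-is, as suggested by the
 * explicit work/lwork parameters of the surrounding prototypes.
 *
 *     double a[9] = { 4.0, 1.0, 2.0,      // column 1 of A ('L': lower triangle used)
 *                     1.0, 3.0, 0.0,      // column 2
 *                     2.0, 0.0, 5.0 };    // column 3
 *     double b[3] = { 7.0, 4.0, 7.0 };    // right-hand side, overwritten by x
 *     lapack_int info =
 *         LAPACKE_dposv_work( LAPACK_COL_MAJOR, 'L', 3, 1, a, 3, b, 3 );
 *     // info == 0: a holds the Cholesky factor and b holds the solution;
 *     // info > 0: the leading minor of that order is not positive definite.
 */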
lapack_int LAPACKE_zcposv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, lapack_complex_double* work, lapack_complex_float* swork, double* rwork, lapack_int* iter ); lapack_int LAPACKE_sposvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* af, lapack_int ldaf, char* equed, float* s, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dposvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* af, lapack_int ldaf, char* equed, double* s, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_cposvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, char* equed, float* s, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zposvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, char* equed, double* s, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sposvxx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* af, lapack_int ldaf, char* equed, float* s, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, float* work, lapack_int* iwork ); lapack_int LAPACKE_dposvxx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* af, lapack_int ldaf, char* equed, double* s, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, double* work, lapack_int* iwork ); lapack_int LAPACKE_cposvxx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, char* equed, float* s, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zposvxx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, char* equed, double* s, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_spotrf_work( int matrix_order, 
char uplo, lapack_int n, float* a, lapack_int lda ); lapack_int LAPACKE_dpotrf_work( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda ); lapack_int LAPACKE_cpotrf_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_zpotrf_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_spotri_work( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda ); lapack_int LAPACKE_dpotri_work( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda ); lapack_int LAPACKE_cpotri_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_zpotri_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_spotrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, float* b, lapack_int ldb ); lapack_int LAPACKE_dpotrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, double* b, lapack_int ldb ); lapack_int LAPACKE_cpotrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zpotrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sppcon_work( int matrix_order, char uplo, lapack_int n, const float* ap, float anorm, float* rcond, float* work, lapack_int* iwork ); lapack_int LAPACKE_dppcon_work( int matrix_order, char uplo, lapack_int n, const double* ap, double anorm, double* rcond, double* work, lapack_int* iwork ); lapack_int LAPACKE_cppcon_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* ap, float anorm, float* rcond, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zppcon_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* ap, double anorm, double* rcond, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sppequ_work( int matrix_order, char uplo, lapack_int n, const float* ap, float* s, float* scond, float* amax ); lapack_int LAPACKE_dppequ_work( int matrix_order, char uplo, lapack_int n, const double* ap, double* s, double* scond, double* amax ); lapack_int LAPACKE_cppequ_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* ap, float* s, float* scond, float* amax ); lapack_int LAPACKE_zppequ_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* ap, double* s, double* scond, double* amax ); lapack_int LAPACKE_spprfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* ap, const float* afp, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dpprfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* ap, const double* afp, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_cpprfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, const lapack_complex_float* afp, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); 
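/*
 * Usage sketch (hypothetical, made-up values): the factor-then-solve split
 * using the LAPACKE_dpotrf_work / LAPACKE_dpotrs_work prototypes declared
 * above, so one Cholesky factorization can be reused for several right-hand
 * sides. LAPACK_COL_MAJOR is assumed to be the column-major layout constant
 * from this header.
 *
 *     double a[4] = { 4.0, 1.0,           // 2x2 SPD matrix, column-major, lda = 2
 *                     1.0, 3.0 };
 *     double b[4] = { 1.0, 2.0,           // two right-hand-side columns, ldb = 2
 *                     3.0, 4.0 };
 *     lapack_int info = LAPACKE_dpotrf_work( LAPACK_COL_MAJOR, 'U', 2, a, 2 );
 *     if( info == 0 )                     // a now holds U with A = U^T * U
 *         info = LAPACKE_dpotrs_work( LAPACK_COL_MAJOR, 'U', 2, 2, a, 2, b, 2 );
 *     // on success each column of b is overwritten with the corresponding solution
 */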
lapack_int LAPACKE_zpprfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, const lapack_complex_double* afp, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sppsv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, float* ap, float* b, lapack_int ldb ); lapack_int LAPACKE_dppsv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, double* ap, double* b, lapack_int ldb ); lapack_int LAPACKE_cppsv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* ap, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zppsv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* ap, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sppsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, float* ap, float* afp, char* equed, float* s, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dppsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, double* ap, double* afp, char* equed, double* s, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_cppsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* ap, lapack_complex_float* afp, char* equed, float* s, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zppsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* ap, lapack_complex_double* afp, char* equed, double* s, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_spptrf_work( int matrix_order, char uplo, lapack_int n, float* ap ); lapack_int LAPACKE_dpptrf_work( int matrix_order, char uplo, lapack_int n, double* ap ); lapack_int LAPACKE_cpptrf_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap ); lapack_int LAPACKE_zpptrf_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* ap ); lapack_int LAPACKE_spptri_work( int matrix_order, char uplo, lapack_int n, float* ap ); lapack_int LAPACKE_dpptri_work( int matrix_order, char uplo, lapack_int n, double* ap ); lapack_int LAPACKE_cpptri_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap ); lapack_int LAPACKE_zpptri_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* ap ); lapack_int LAPACKE_spptrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* ap, float* b, lapack_int ldb ); lapack_int LAPACKE_dpptrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* ap, double* b, lapack_int ldb ); lapack_int LAPACKE_cpptrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zpptrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, lapack_complex_double* b, lapack_int ldb ); 
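/*
 * Usage sketch (hypothetical values): the packed-storage solver
 * LAPACKE_dppsv_work declared above, where only one triangle of the symmetric
 * matrix is stored contiguously. With LAPACK_COL_MAJOR and uplo = 'U', the
 * upper triangle is assumed to be packed column by column, i.e.
 * ap = { a11, a12, a22, a13, a23, a33 } for a 3x3 matrix.
 *
 *     double ap[6] = { 4.0,               // a11
 *                      1.0, 3.0,          // a12 a22
 *                      2.0, 0.0, 5.0 };   // a13 a23 a33
 *     double b[3]  = { 7.0, 4.0, 7.0 };   // right-hand side, overwritten by x
 *     lapack_int info =
 *         LAPACKE_dppsv_work( LAPACK_COL_MAJOR, 'U', 3, 1, ap, b, 3 );
 *     // info == 0: ap holds the packed Cholesky factor and b holds the solution
 */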
lapack_int LAPACKE_spstrf_work( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, lapack_int* piv, lapack_int* rank, float tol, float* work ); lapack_int LAPACKE_dpstrf_work( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, lapack_int* piv, lapack_int* rank, double tol, double* work ); lapack_int LAPACKE_cpstrf_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* piv, lapack_int* rank, float tol, float* work ); lapack_int LAPACKE_zpstrf_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* piv, lapack_int* rank, double tol, double* work ); lapack_int LAPACKE_sptcon_work( lapack_int n, const float* d, const float* e, float anorm, float* rcond, float* work ); lapack_int LAPACKE_dptcon_work( lapack_int n, const double* d, const double* e, double anorm, double* rcond, double* work ); lapack_int LAPACKE_cptcon_work( lapack_int n, const float* d, const lapack_complex_float* e, float anorm, float* rcond, float* work ); lapack_int LAPACKE_zptcon_work( lapack_int n, const double* d, const lapack_complex_double* e, double anorm, double* rcond, double* work ); lapack_int LAPACKE_spteqr_work( int matrix_order, char compz, lapack_int n, float* d, float* e, float* z, lapack_int ldz, float* work ); lapack_int LAPACKE_dpteqr_work( int matrix_order, char compz, lapack_int n, double* d, double* e, double* z, lapack_int ldz, double* work ); lapack_int LAPACKE_cpteqr_work( int matrix_order, char compz, lapack_int n, float* d, float* e, lapack_complex_float* z, lapack_int ldz, float* work ); lapack_int LAPACKE_zpteqr_work( int matrix_order, char compz, lapack_int n, double* d, double* e, lapack_complex_double* z, lapack_int ldz, double* work ); lapack_int LAPACKE_sptrfs_work( int matrix_order, lapack_int n, lapack_int nrhs, const float* d, const float* e, const float* df, const float* ef, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr, float* work ); lapack_int LAPACKE_dptrfs_work( int matrix_order, lapack_int n, lapack_int nrhs, const double* d, const double* e, const double* df, const double* ef, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr, double* work ); lapack_int LAPACKE_cptrfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* d, const lapack_complex_float* e, const float* df, const lapack_complex_float* ef, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zptrfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* d, const lapack_complex_double* e, const double* df, const lapack_complex_double* ef, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sptsv_work( int matrix_order, lapack_int n, lapack_int nrhs, float* d, float* e, float* b, lapack_int ldb ); lapack_int LAPACKE_dptsv_work( int matrix_order, lapack_int n, lapack_int nrhs, double* d, double* e, double* b, lapack_int ldb ); lapack_int LAPACKE_cptsv_work( int matrix_order, lapack_int n, lapack_int nrhs, float* d, lapack_complex_float* e, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zptsv_work( int matrix_order, lapack_int n, lapack_int nrhs, double* d, lapack_complex_double* e, lapack_complex_double* b, 
lapack_int ldb ); lapack_int LAPACKE_sptsvx_work( int matrix_order, char fact, lapack_int n, lapack_int nrhs, const float* d, const float* e, float* df, float* ef, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, float* work ); lapack_int LAPACKE_dptsvx_work( int matrix_order, char fact, lapack_int n, lapack_int nrhs, const double* d, const double* e, double* df, double* ef, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, double* work ); lapack_int LAPACKE_cptsvx_work( int matrix_order, char fact, lapack_int n, lapack_int nrhs, const float* d, const lapack_complex_float* e, float* df, lapack_complex_float* ef, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zptsvx_work( int matrix_order, char fact, lapack_int n, lapack_int nrhs, const double* d, const lapack_complex_double* e, double* df, lapack_complex_double* ef, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_spttrf_work( lapack_int n, float* d, float* e ); lapack_int LAPACKE_dpttrf_work( lapack_int n, double* d, double* e ); lapack_int LAPACKE_cpttrf_work( lapack_int n, float* d, lapack_complex_float* e ); lapack_int LAPACKE_zpttrf_work( lapack_int n, double* d, lapack_complex_double* e ); lapack_int LAPACKE_spttrs_work( int matrix_order, lapack_int n, lapack_int nrhs, const float* d, const float* e, float* b, lapack_int ldb ); lapack_int LAPACKE_dpttrs_work( int matrix_order, lapack_int n, lapack_int nrhs, const double* d, const double* e, double* b, lapack_int ldb ); lapack_int LAPACKE_cpttrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* d, const lapack_complex_float* e, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zpttrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* d, const lapack_complex_double* e, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_ssbev_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, float* ab, lapack_int ldab, float* w, float* z, lapack_int ldz, float* work ); lapack_int LAPACKE_dsbev_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, double* ab, lapack_int ldab, double* w, double* z, lapack_int ldz, double* work ); lapack_int LAPACKE_ssbevd_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, float* ab, lapack_int ldab, float* w, float* z, lapack_int ldz, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dsbevd_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int kd, double* ab, lapack_int ldab, double* w, double* z, lapack_int ldz, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_ssbevx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int kd, float* ab, lapack_int ldab, float* q, lapack_int ldq, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, float* work, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_dsbevx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int kd, double* ab, lapack_int ldab, double* q, lapack_int ldq, 
double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, double* work, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_ssbgst_work( int matrix_order, char vect, char uplo, lapack_int n, lapack_int ka, lapack_int kb, float* ab, lapack_int ldab, const float* bb, lapack_int ldbb, float* x, lapack_int ldx, float* work ); lapack_int LAPACKE_dsbgst_work( int matrix_order, char vect, char uplo, lapack_int n, lapack_int ka, lapack_int kb, double* ab, lapack_int ldab, const double* bb, lapack_int ldbb, double* x, lapack_int ldx, double* work ); lapack_int LAPACKE_ssbgv_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, float* ab, lapack_int ldab, float* bb, lapack_int ldbb, float* w, float* z, lapack_int ldz, float* work ); lapack_int LAPACKE_dsbgv_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, double* ab, lapack_int ldab, double* bb, lapack_int ldbb, double* w, double* z, lapack_int ldz, double* work ); lapack_int LAPACKE_ssbgvd_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, float* ab, lapack_int ldab, float* bb, lapack_int ldbb, float* w, float* z, lapack_int ldz, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dsbgvd_work( int matrix_order, char jobz, char uplo, lapack_int n, lapack_int ka, lapack_int kb, double* ab, lapack_int ldab, double* bb, lapack_int ldbb, double* w, double* z, lapack_int ldz, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_ssbgvx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int ka, lapack_int kb, float* ab, lapack_int ldab, float* bb, lapack_int ldbb, float* q, lapack_int ldq, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, float* work, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_dsbgvx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, lapack_int ka, lapack_int kb, double* ab, lapack_int ldab, double* bb, lapack_int ldbb, double* q, lapack_int ldq, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, double* work, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_ssbtrd_work( int matrix_order, char vect, char uplo, lapack_int n, lapack_int kd, float* ab, lapack_int ldab, float* d, float* e, float* q, lapack_int ldq, float* work ); lapack_int LAPACKE_dsbtrd_work( int matrix_order, char vect, char uplo, lapack_int n, lapack_int kd, double* ab, lapack_int ldab, double* d, double* e, double* q, lapack_int ldq, double* work ); lapack_int LAPACKE_ssfrk_work( int matrix_order, char transr, char uplo, char trans, lapack_int n, lapack_int k, float alpha, const float* a, lapack_int lda, float beta, float* c ); lapack_int LAPACKE_dsfrk_work( int matrix_order, char transr, char uplo, char trans, lapack_int n, lapack_int k, double alpha, const double* a, lapack_int lda, double beta, double* c ); lapack_int LAPACKE_sspcon_work( int matrix_order, char uplo, lapack_int n, const float* ap, const lapack_int* ipiv, float anorm, float* rcond, float* work, lapack_int* iwork ); lapack_int LAPACKE_dspcon_work( int matrix_order, char uplo, lapack_int n, const double* ap, const lapack_int* ipiv, double anorm, double* rcond, double* work, lapack_int* iwork ); lapack_int LAPACKE_cspcon_work( int matrix_order, 
char uplo, lapack_int n, const lapack_complex_float* ap, const lapack_int* ipiv, float anorm, float* rcond, lapack_complex_float* work ); lapack_int LAPACKE_zspcon_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* ap, const lapack_int* ipiv, double anorm, double* rcond, lapack_complex_double* work ); lapack_int LAPACKE_sspev_work( int matrix_order, char jobz, char uplo, lapack_int n, float* ap, float* w, float* z, lapack_int ldz, float* work ); lapack_int LAPACKE_dspev_work( int matrix_order, char jobz, char uplo, lapack_int n, double* ap, double* w, double* z, lapack_int ldz, double* work ); lapack_int LAPACKE_sspevd_work( int matrix_order, char jobz, char uplo, lapack_int n, float* ap, float* w, float* z, lapack_int ldz, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dspevd_work( int matrix_order, char jobz, char uplo, lapack_int n, double* ap, double* w, double* z, lapack_int ldz, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_sspevx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, float* ap, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, float* work, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_dspevx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, double* ap, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, double* work, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_sspgst_work( int matrix_order, lapack_int itype, char uplo, lapack_int n, float* ap, const float* bp ); lapack_int LAPACKE_dspgst_work( int matrix_order, lapack_int itype, char uplo, lapack_int n, double* ap, const double* bp ); lapack_int LAPACKE_sspgv_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, float* ap, float* bp, float* w, float* z, lapack_int ldz, float* work ); lapack_int LAPACKE_dspgv_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, double* ap, double* bp, double* w, double* z, lapack_int ldz, double* work ); lapack_int LAPACKE_sspgvd_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, float* ap, float* bp, float* w, float* z, lapack_int ldz, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dspgvd_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, double* ap, double* bp, double* w, double* z, lapack_int ldz, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_sspgvx_work( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, float* ap, float* bp, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, float* work, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_dspgvx_work( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, double* ap, double* bp, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, double* work, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_ssprfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* ap, const float* afp, const lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr, float* work, 
lapack_int* iwork ); lapack_int LAPACKE_dsprfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* ap, const double* afp, const lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_csprfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, const lapack_complex_float* afp, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zsprfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, const lapack_complex_double* afp, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_sspsv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, float* ap, lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dspsv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, double* ap, lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_cspsv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* ap, lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zspsv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* ap, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sspsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const float* ap, float* afp, lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dspsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const double* ap, double* afp, lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_cspsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, lapack_complex_float* afp, lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zspsvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, lapack_complex_double* afp, lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_ssptrd_work( int matrix_order, char uplo, lapack_int n, float* ap, float* d, float* e, float* tau ); lapack_int LAPACKE_dsptrd_work( int matrix_order, char uplo, lapack_int n, double* ap, double* d, double* e, double* tau ); lapack_int LAPACKE_ssptrf_work( int matrix_order, char uplo, lapack_int n, float* ap, lapack_int* ipiv ); lapack_int LAPACKE_dsptrf_work( int matrix_order, char uplo, lapack_int n, double* ap, lapack_int* ipiv ); lapack_int LAPACKE_csptrf_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap, lapack_int* ipiv ); lapack_int LAPACKE_zsptrf_work( int matrix_order, 
char uplo, lapack_int n, lapack_complex_double* ap, lapack_int* ipiv ); lapack_int LAPACKE_ssptri_work( int matrix_order, char uplo, lapack_int n, float* ap, const lapack_int* ipiv, float* work ); lapack_int LAPACKE_dsptri_work( int matrix_order, char uplo, lapack_int n, double* ap, const lapack_int* ipiv, double* work ); lapack_int LAPACKE_csptri_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* ap, const lapack_int* ipiv, lapack_complex_float* work ); lapack_int LAPACKE_zsptri_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* ap, const lapack_int* ipiv, lapack_complex_double* work ); lapack_int LAPACKE_ssptrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* ap, const lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dsptrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* ap, const lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_csptrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zsptrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_sstebz_work( char range, char order, lapack_int n, float vl, float vu, lapack_int il, lapack_int iu, float abstol, const float* d, const float* e, lapack_int* m, lapack_int* nsplit, float* w, lapack_int* iblock, lapack_int* isplit, float* work, lapack_int* iwork ); lapack_int LAPACKE_dstebz_work( char range, char order, lapack_int n, double vl, double vu, lapack_int il, lapack_int iu, double abstol, const double* d, const double* e, lapack_int* m, lapack_int* nsplit, double* w, lapack_int* iblock, lapack_int* isplit, double* work, lapack_int* iwork ); lapack_int LAPACKE_sstedc_work( int matrix_order, char compz, lapack_int n, float* d, float* e, float* z, lapack_int ldz, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dstedc_work( int matrix_order, char compz, lapack_int n, double* d, double* e, double* z, lapack_int ldz, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_cstedc_work( int matrix_order, char compz, lapack_int n, float* d, float* e, lapack_complex_float* z, lapack_int ldz, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_zstedc_work( int matrix_order, char compz, lapack_int n, double* d, double* e, lapack_complex_double* z, lapack_int ldz, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int lrwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_sstegr_work( int matrix_order, char jobz, char range, lapack_int n, float* d, float* e, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int* isuppz, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dstegr_work( int matrix_order, char jobz, char range, lapack_int n, double* d, double* e, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int* isuppz, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_cstegr_work( int matrix_order, char jobz, char range, lapack_int n, 
float* d, float* e, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_int* isuppz, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_zstegr_work( int matrix_order, char jobz, char range, lapack_int n, double* d, double* e, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_int* isuppz, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_sstein_work( int matrix_order, lapack_int n, const float* d, const float* e, lapack_int m, const float* w, const lapack_int* iblock, const lapack_int* isplit, float* z, lapack_int ldz, float* work, lapack_int* iwork, lapack_int* ifailv ); lapack_int LAPACKE_dstein_work( int matrix_order, lapack_int n, const double* d, const double* e, lapack_int m, const double* w, const lapack_int* iblock, const lapack_int* isplit, double* z, lapack_int ldz, double* work, lapack_int* iwork, lapack_int* ifailv ); lapack_int LAPACKE_cstein_work( int matrix_order, lapack_int n, const float* d, const float* e, lapack_int m, const float* w, const lapack_int* iblock, const lapack_int* isplit, lapack_complex_float* z, lapack_int ldz, float* work, lapack_int* iwork, lapack_int* ifailv ); lapack_int LAPACKE_zstein_work( int matrix_order, lapack_int n, const double* d, const double* e, lapack_int m, const double* w, const lapack_int* iblock, const lapack_int* isplit, lapack_complex_double* z, lapack_int ldz, double* work, lapack_int* iwork, lapack_int* ifailv ); lapack_int LAPACKE_sstemr_work( int matrix_order, char jobz, char range, lapack_int n, float* d, float* e, float vl, float vu, lapack_int il, lapack_int iu, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int nzc, lapack_int* isuppz, lapack_logical* tryrac, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dstemr_work( int matrix_order, char jobz, char range, lapack_int n, double* d, double* e, double vl, double vu, lapack_int il, lapack_int iu, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int nzc, lapack_int* isuppz, lapack_logical* tryrac, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_cstemr_work( int matrix_order, char jobz, char range, lapack_int n, float* d, float* e, float vl, float vu, lapack_int il, lapack_int iu, lapack_int* m, float* w, lapack_complex_float* z, lapack_int ldz, lapack_int nzc, lapack_int* isuppz, lapack_logical* tryrac, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_zstemr_work( int matrix_order, char jobz, char range, lapack_int n, double* d, double* e, double vl, double vu, lapack_int il, lapack_int iu, lapack_int* m, double* w, lapack_complex_double* z, lapack_int ldz, lapack_int nzc, lapack_int* isuppz, lapack_logical* tryrac, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_ssteqr_work( int matrix_order, char compz, lapack_int n, float* d, float* e, float* z, lapack_int ldz, float* work ); lapack_int LAPACKE_dsteqr_work( int matrix_order, char compz, lapack_int n, double* d, double* e, double* z, lapack_int ldz, double* work ); lapack_int LAPACKE_csteqr_work( int matrix_order, char compz, lapack_int n, float* d, float* e, lapack_complex_float* z, lapack_int ldz, float* work ); lapack_int LAPACKE_zsteqr_work( int matrix_order, char compz, lapack_int n, 
double* d, double* e, lapack_complex_double* z, lapack_int ldz, double* work ); lapack_int LAPACKE_ssterf_work( lapack_int n, float* d, float* e ); lapack_int LAPACKE_dsterf_work( lapack_int n, double* d, double* e ); lapack_int LAPACKE_sstev_work( int matrix_order, char jobz, lapack_int n, float* d, float* e, float* z, lapack_int ldz, float* work ); lapack_int LAPACKE_dstev_work( int matrix_order, char jobz, lapack_int n, double* d, double* e, double* z, lapack_int ldz, double* work ); lapack_int LAPACKE_sstevd_work( int matrix_order, char jobz, lapack_int n, float* d, float* e, float* z, lapack_int ldz, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dstevd_work( int matrix_order, char jobz, lapack_int n, double* d, double* e, double* z, lapack_int ldz, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_sstevr_work( int matrix_order, char jobz, char range, lapack_int n, float* d, float* e, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int* isuppz, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dstevr_work( int matrix_order, char jobz, char range, lapack_int n, double* d, double* e, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int* isuppz, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_sstevx_work( int matrix_order, char jobz, char range, lapack_int n, float* d, float* e, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, float* work, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_dstevx_work( int matrix_order, char jobz, char range, lapack_int n, double* d, double* e, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, double* work, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_ssycon_work( int matrix_order, char uplo, lapack_int n, const float* a, lapack_int lda, const lapack_int* ipiv, float anorm, float* rcond, float* work, lapack_int* iwork ); lapack_int LAPACKE_dsycon_work( int matrix_order, char uplo, lapack_int n, const double* a, lapack_int lda, const lapack_int* ipiv, double anorm, double* rcond, double* work, lapack_int* iwork ); lapack_int LAPACKE_csycon_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, float anorm, float* rcond, lapack_complex_float* work ); lapack_int LAPACKE_zsycon_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, double anorm, double* rcond, lapack_complex_double* work ); lapack_int LAPACKE_ssyequb_work( int matrix_order, char uplo, lapack_int n, const float* a, lapack_int lda, float* s, float* scond, float* amax, float* work ); lapack_int LAPACKE_dsyequb_work( int matrix_order, char uplo, lapack_int n, const double* a, lapack_int lda, double* s, double* scond, double* amax, double* work ); lapack_int LAPACKE_csyequb_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* s, float* scond, float* amax, lapack_complex_float* work ); lapack_int LAPACKE_zsyequb_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* s, double* scond, double* amax, 
lapack_complex_double* work ); lapack_int LAPACKE_ssyev_work( int matrix_order, char jobz, char uplo, lapack_int n, float* a, lapack_int lda, float* w, float* work, lapack_int lwork ); lapack_int LAPACKE_dsyev_work( int matrix_order, char jobz, char uplo, lapack_int n, double* a, lapack_int lda, double* w, double* work, lapack_int lwork ); lapack_int LAPACKE_ssyevd_work( int matrix_order, char jobz, char uplo, lapack_int n, float* a, lapack_int lda, float* w, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dsyevd_work( int matrix_order, char jobz, char uplo, lapack_int n, double* a, lapack_int lda, double* w, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_ssyevr_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, float* a, lapack_int lda, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, lapack_int* isuppz, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dsyevr_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, double* a, lapack_int lda, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, lapack_int* isuppz, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_ssyevx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, float* a, lapack_int lda, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, float* work, lapack_int lwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_dsyevx_work( int matrix_order, char jobz, char range, char uplo, lapack_int n, double* a, lapack_int lda, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, double* work, lapack_int lwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_ssygst_work( int matrix_order, lapack_int itype, char uplo, lapack_int n, float* a, lapack_int lda, const float* b, lapack_int ldb ); lapack_int LAPACKE_dsygst_work( int matrix_order, lapack_int itype, char uplo, lapack_int n, double* a, lapack_int lda, const double* b, lapack_int ldb ); lapack_int LAPACKE_ssygv_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* w, float* work, lapack_int lwork ); lapack_int LAPACKE_dsygv_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* w, double* work, lapack_int lwork ); lapack_int LAPACKE_ssygvd_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* w, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dsygvd_work( int matrix_order, lapack_int itype, char jobz, char uplo, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* w, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_ssygvx_work( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float vl, float vu, lapack_int il, lapack_int iu, float abstol, lapack_int* m, float* w, float* z, lapack_int ldz, float* work, lapack_int lwork, lapack_int* iwork, 
lapack_int* ifail ); lapack_int LAPACKE_dsygvx_work( int matrix_order, lapack_int itype, char jobz, char range, char uplo, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double vl, double vu, lapack_int il, lapack_int iu, double abstol, lapack_int* m, double* w, double* z, lapack_int ldz, double* work, lapack_int lwork, lapack_int* iwork, lapack_int* ifail ); lapack_int LAPACKE_ssyrfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* af, lapack_int ldaf, const lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dsyrfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const double* af, lapack_int ldaf, const lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_csyrfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zsyrfs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_ssyrfsx_work( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* af, lapack_int ldaf, const lapack_int* ipiv, const float* s, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, float* work, lapack_int* iwork ); lapack_int LAPACKE_dsyrfsx_work( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const double* af, lapack_int ldaf, const lapack_int* ipiv, const double* s, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, double* work, lapack_int* iwork ); lapack_int LAPACKE_csyrfsx_work( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* af, lapack_int ldaf, const lapack_int* ipiv, const float* s, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zsyrfsx_work( int matrix_order, char uplo, char equed, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* af, lapack_int ldaf, const lapack_int* ipiv, const double* s, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, 
lapack_int nparams, double* params, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_ssysv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, lapack_int* ipiv, float* b, lapack_int ldb, float* work, lapack_int lwork ); lapack_int LAPACKE_dsysv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, lapack_int* ipiv, double* b, lapack_int ldb, double* work, lapack_int lwork ); lapack_int LAPACKE_csysv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zsysv_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_ssysvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, float* af, lapack_int ldaf, lapack_int* ipiv, const float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_dsysvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, double* af, lapack_int ldaf, lapack_int* ipiv, const double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_csysvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, lapack_int* ipiv, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, lapack_int lwork, float* rwork ); lapack_int LAPACKE_zsysvx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, lapack_int* ipiv, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, lapack_int lwork, double* rwork ); lapack_int LAPACKE_ssysvxx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, float* a, lapack_int lda, float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* s, float* b, lapack_int ldb, float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, float* work, lapack_int* iwork ); lapack_int LAPACKE_dsysvxx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, double* a, lapack_int lda, double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* s, double* b, lapack_int ldb, double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, double* work, lapack_int* iwork ); lapack_int LAPACKE_csysvxx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_float* a, lapack_int lda, lapack_complex_float* af, lapack_int ldaf, lapack_int* ipiv, char* equed, float* s, 
lapack_complex_float* b, lapack_int ldb, lapack_complex_float* x, lapack_int ldx, float* rcond, float* rpvgrw, float* berr, lapack_int n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int nparams, float* params, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_zsysvxx_work( int matrix_order, char fact, char uplo, lapack_int n, lapack_int nrhs, lapack_complex_double* a, lapack_int lda, lapack_complex_double* af, lapack_int ldaf, lapack_int* ipiv, char* equed, double* s, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* x, lapack_int ldx, double* rcond, double* rpvgrw, double* berr, lapack_int n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int nparams, double* params, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_ssytrd_work( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, float* d, float* e, float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dsytrd_work( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, double* d, double* e, double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_ssytrf_work( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, lapack_int* ipiv, float* work, lapack_int lwork ); lapack_int LAPACKE_dsytrf_work( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, lapack_int* ipiv, double* work, lapack_int lwork ); lapack_int LAPACKE_csytrf_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_int* ipiv, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zsytrf_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_int* ipiv, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_ssytri_work( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, const lapack_int* ipiv, float* work ); lapack_int LAPACKE_dsytri_work( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, const lapack_int* ipiv, double* work ); lapack_int LAPACKE_csytri_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* work ); lapack_int LAPACKE_zsytri_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* work ); lapack_int LAPACKE_ssytrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_dsytrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_csytrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_zsytrs_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_stbcon_work( int matrix_order, char norm, char uplo, char diag, lapack_int n, lapack_int kd, const float* ab, lapack_int ldab, float* rcond, float* work, lapack_int* iwork ); lapack_int LAPACKE_dtbcon_work( int matrix_order, char norm, char uplo, char diag, lapack_int n, lapack_int kd, const double* ab, lapack_int ldab, double* rcond, double* work, 
lapack_int* iwork ); lapack_int LAPACKE_ctbcon_work( int matrix_order, char norm, char uplo, char diag, lapack_int n, lapack_int kd, const lapack_complex_float* ab, lapack_int ldab, float* rcond, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_ztbcon_work( int matrix_order, char norm, char uplo, char diag, lapack_int n, lapack_int kd, const lapack_complex_double* ab, lapack_int ldab, double* rcond, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_stbrfs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const float* ab, lapack_int ldab, const float* b, lapack_int ldb, const float* x, lapack_int ldx, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dtbrfs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const double* ab, lapack_int ldab, const double* b, lapack_int ldb, const double* x, lapack_int ldx, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_ctbrfs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, const lapack_complex_float* b, lapack_int ldb, const lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_ztbrfs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, const lapack_complex_double* b, lapack_int ldb, const lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_stbtrs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const float* ab, lapack_int ldab, float* b, lapack_int ldb ); lapack_int LAPACKE_dtbtrs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const double* ab, lapack_int ldab, double* b, lapack_int ldb ); lapack_int LAPACKE_ctbtrs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_float* ab, lapack_int ldab, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_ztbtrs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int kd, lapack_int nrhs, const lapack_complex_double* ab, lapack_int ldab, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_stfsm_work( int matrix_order, char transr, char side, char uplo, char trans, char diag, lapack_int m, lapack_int n, float alpha, const float* a, float* b, lapack_int ldb ); lapack_int LAPACKE_dtfsm_work( int matrix_order, char transr, char side, char uplo, char trans, char diag, lapack_int m, lapack_int n, double alpha, const double* a, double* b, lapack_int ldb ); lapack_int LAPACKE_ctfsm_work( int matrix_order, char transr, char side, char uplo, char trans, char diag, lapack_int m, lapack_int n, lapack_complex_float alpha, const lapack_complex_float* a, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_ztfsm_work( int matrix_order, char transr, char side, char uplo, char trans, char diag, lapack_int m, lapack_int n, lapack_complex_double alpha, const lapack_complex_double* a, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_stftri_work( int matrix_order, char transr, char uplo, char diag, lapack_int n, float* a ); 
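/*
 * Illustrative usage sketch (not part of the original header): the *_work
 * variants expose LAPACK's explicit workspace handling.  For the symmetric
 * indefinite factor/solve pair LAPACKE_dsytrf_work / LAPACKE_dsytrs_work
 * declared a few lines above, a workspace query (lwork = -1) returns the
 * optimal workspace size in work[0]; the caller then allocates that buffer
 * and calls the routine again with the real workspace.  Assumes the
 * LAPACK_COL_MAJOR layout selector and a <lapacke.h> include; the function
 * name below is hypothetical.
 */
#if 0  /* example only; not meant to be compiled as part of this header */
#include <stdio.h>
#include <stdlib.h>
#include <lapacke.h>

static void sytrf_sytrs_example(void)
{
    lapack_int n = 3, nrhs = 1, lda = 3, ldb = 3, lwork, info;
    lapack_int ipiv[3];
    /* Symmetric indefinite A, full column-major storage (lower part referenced). */
    double a[9] = { 2.0,  1.0, 0.0,
                    1.0, -3.0, 1.0,
                    0.0,  1.0, 4.0 };
    double b[3] = { 1.0, 2.0, 3.0 };   /* right-hand side, overwritten with x */
    double wkopt, *work;

    /* Workspace query: lwork = -1 only computes the optimal size. */
    info = LAPACKE_dsytrf_work(LAPACK_COL_MAJOR, 'L', n, a, lda, ipiv, &wkopt, -1);
    if (info != 0)
        return;
    lwork = (lapack_int)wkopt;
    work = (double*)malloc((size_t)lwork * sizeof(double));
    if (work == NULL)
        return;

    /* Factor A = L*D*L**T, then solve A*x = b in place. */
    info = LAPACKE_dsytrf_work(LAPACK_COL_MAJOR, 'L', n, a, lda, ipiv, work, lwork);
    if (info == 0)
        info = LAPACKE_dsytrs_work(LAPACK_COL_MAJOR, 'L', n, nrhs, a, lda, ipiv, b, ldb);
    free(work);

    if (info == 0)
        printf("x = (%g, %g, %g)\n", b[0], b[1], b[2]);
}
#endif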
lapack_int LAPACKE_dtftri_work( int matrix_order, char transr, char uplo, char diag, lapack_int n, double* a ); lapack_int LAPACKE_ctftri_work( int matrix_order, char transr, char uplo, char diag, lapack_int n, lapack_complex_float* a ); lapack_int LAPACKE_ztftri_work( int matrix_order, char transr, char uplo, char diag, lapack_int n, lapack_complex_double* a ); lapack_int LAPACKE_stfttp_work( int matrix_order, char transr, char uplo, lapack_int n, const float* arf, float* ap ); lapack_int LAPACKE_dtfttp_work( int matrix_order, char transr, char uplo, lapack_int n, const double* arf, double* ap ); lapack_int LAPACKE_ctfttp_work( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_float* arf, lapack_complex_float* ap ); lapack_int LAPACKE_ztfttp_work( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_double* arf, lapack_complex_double* ap ); lapack_int LAPACKE_stfttr_work( int matrix_order, char transr, char uplo, lapack_int n, const float* arf, float* a, lapack_int lda ); lapack_int LAPACKE_dtfttr_work( int matrix_order, char transr, char uplo, lapack_int n, const double* arf, double* a, lapack_int lda ); lapack_int LAPACKE_ctfttr_work( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_float* arf, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_ztfttr_work( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_double* arf, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_stgevc_work( int matrix_order, char side, char howmny, const lapack_logical* select, lapack_int n, const float* s, lapack_int lds, const float* p, lapack_int ldp, float* vl, lapack_int ldvl, float* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, float* work ); lapack_int LAPACKE_dtgevc_work( int matrix_order, char side, char howmny, const lapack_logical* select, lapack_int n, const double* s, lapack_int lds, const double* p, lapack_int ldp, double* vl, lapack_int ldvl, double* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, double* work ); lapack_int LAPACKE_ctgevc_work( int matrix_order, char side, char howmny, const lapack_logical* select, lapack_int n, const lapack_complex_float* s, lapack_int lds, const lapack_complex_float* p, lapack_int ldp, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_ztgevc_work( int matrix_order, char side, char howmny, const lapack_logical* select, lapack_int n, const lapack_complex_double* s, lapack_int lds, const lapack_complex_double* p, lapack_int ldp, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_stgexc_work( int matrix_order, lapack_logical wantq, lapack_logical wantz, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* q, lapack_int ldq, float* z, lapack_int ldz, lapack_int* ifst, lapack_int* ilst, float* work, lapack_int lwork ); lapack_int LAPACKE_dtgexc_work( int matrix_order, lapack_logical wantq, lapack_logical wantz, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* q, lapack_int ldq, double* z, lapack_int ldz, lapack_int* ifst, lapack_int* ilst, double* work, lapack_int lwork ); lapack_int LAPACKE_ctgexc_work( int matrix_order, lapack_logical wantq, lapack_logical wantz, lapack_int n, lapack_complex_float* a, lapack_int lda, 
lapack_complex_float* b, lapack_int ldb, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* z, lapack_int ldz, lapack_int ifst, lapack_int ilst ); lapack_int LAPACKE_ztgexc_work( int matrix_order, lapack_logical wantq, lapack_logical wantz, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* z, lapack_int ldz, lapack_int ifst, lapack_int ilst ); lapack_int LAPACKE_stgsen_work( int matrix_order, lapack_int ijob, lapack_logical wantq, lapack_logical wantz, const lapack_logical* select, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* alphar, float* alphai, float* beta, float* q, lapack_int ldq, float* z, lapack_int ldz, lapack_int* m, float* pl, float* pr, float* dif, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dtgsen_work( int matrix_order, lapack_int ijob, lapack_logical wantq, lapack_logical wantz, const lapack_logical* select, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* alphar, double* alphai, double* beta, double* q, lapack_int ldq, double* z, lapack_int ldz, lapack_int* m, double* pl, double* pr, double* dif, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_ctgsen_work( int matrix_order, lapack_int ijob, lapack_logical wantq, lapack_logical wantz, const lapack_logical* select, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* z, lapack_int ldz, lapack_int* m, float* pl, float* pr, float* dif, lapack_complex_float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_ztgsen_work( int matrix_order, lapack_int ijob, lapack_logical wantq, lapack_logical wantz, const lapack_logical* select, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* z, lapack_int ldz, lapack_int* m, double* pl, double* pr, double* dif, lapack_complex_double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_stgsja_work( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, lapack_int k, lapack_int l, float* a, lapack_int lda, float* b, lapack_int ldb, float tola, float tolb, float* alpha, float* beta, float* u, lapack_int ldu, float* v, lapack_int ldv, float* q, lapack_int ldq, float* work, lapack_int* ncycle ); lapack_int LAPACKE_dtgsja_work( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, lapack_int k, lapack_int l, double* a, lapack_int lda, double* b, lapack_int ldb, double tola, double tolb, double* alpha, double* beta, double* u, lapack_int ldu, double* v, lapack_int ldv, double* q, lapack_int ldq, double* work, lapack_int* ncycle ); lapack_int LAPACKE_ctgsja_work( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, lapack_int k, lapack_int l, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, float tola, float tolb, float* alpha, float* beta, lapack_complex_float* u, lapack_int ldu, lapack_complex_float* v, lapack_int ldv, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* work, lapack_int* 
ncycle ); lapack_int LAPACKE_ztgsja_work( int matrix_order, char jobu, char jobv, char jobq, lapack_int m, lapack_int p, lapack_int n, lapack_int k, lapack_int l, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, double tola, double tolb, double* alpha, double* beta, lapack_complex_double* u, lapack_int ldu, lapack_complex_double* v, lapack_int ldv, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* work, lapack_int* ncycle ); lapack_int LAPACKE_stgsna_work( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const float* a, lapack_int lda, const float* b, lapack_int ldb, const float* vl, lapack_int ldvl, const float* vr, lapack_int ldvr, float* s, float* dif, lapack_int mm, lapack_int* m, float* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_dtgsna_work( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const double* a, lapack_int lda, const double* b, lapack_int ldb, const double* vl, lapack_int ldvl, const double* vr, lapack_int ldvr, double* s, double* dif, lapack_int mm, lapack_int* m, double* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_ctgsna_work( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* b, lapack_int ldb, const lapack_complex_float* vl, lapack_int ldvl, const lapack_complex_float* vr, lapack_int ldvr, float* s, float* dif, lapack_int mm, lapack_int* m, lapack_complex_float* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_ztgsna_work( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* b, lapack_int ldb, const lapack_complex_double* vl, lapack_int ldvl, const lapack_complex_double* vr, lapack_int ldvr, double* s, double* dif, lapack_int mm, lapack_int* m, lapack_complex_double* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_stgsyl_work( int matrix_order, char trans, lapack_int ijob, lapack_int m, lapack_int n, const float* a, lapack_int lda, const float* b, lapack_int ldb, float* c, lapack_int ldc, const float* d, lapack_int ldd, const float* e, lapack_int lde, float* f, lapack_int ldf, float* scale, float* dif, float* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_dtgsyl_work( int matrix_order, char trans, lapack_int ijob, lapack_int m, lapack_int n, const double* a, lapack_int lda, const double* b, lapack_int ldb, double* c, lapack_int ldc, const double* d, lapack_int ldd, const double* e, lapack_int lde, double* f, lapack_int ldf, double* scale, double* dif, double* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_ctgsyl_work( int matrix_order, char trans, lapack_int ijob, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* c, lapack_int ldc, const lapack_complex_float* d, lapack_int ldd, const lapack_complex_float* e, lapack_int lde, lapack_complex_float* f, lapack_int ldf, float* scale, float* dif, lapack_complex_float* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_ztgsyl_work( int matrix_order, char trans, lapack_int ijob, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* c, lapack_int ldc, const lapack_complex_double* d, lapack_int 
ldd, const lapack_complex_double* e, lapack_int lde, lapack_complex_double* f, lapack_int ldf, double* scale, double* dif, lapack_complex_double* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_stpcon_work( int matrix_order, char norm, char uplo, char diag, lapack_int n, const float* ap, float* rcond, float* work, lapack_int* iwork ); lapack_int LAPACKE_dtpcon_work( int matrix_order, char norm, char uplo, char diag, lapack_int n, const double* ap, double* rcond, double* work, lapack_int* iwork ); lapack_int LAPACKE_ctpcon_work( int matrix_order, char norm, char uplo, char diag, lapack_int n, const lapack_complex_float* ap, float* rcond, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_ztpcon_work( int matrix_order, char norm, char uplo, char diag, lapack_int n, const lapack_complex_double* ap, double* rcond, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_stprfs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const float* ap, const float* b, lapack_int ldb, const float* x, lapack_int ldx, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dtprfs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const double* ap, const double* b, lapack_int ldb, const double* x, lapack_int ldx, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_ctprfs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, const lapack_complex_float* b, lapack_int ldb, const lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_ztprfs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, const lapack_complex_double* b, lapack_int ldb, const lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_stptri_work( int matrix_order, char uplo, char diag, lapack_int n, float* ap ); lapack_int LAPACKE_dtptri_work( int matrix_order, char uplo, char diag, lapack_int n, double* ap ); lapack_int LAPACKE_ctptri_work( int matrix_order, char uplo, char diag, lapack_int n, lapack_complex_float* ap ); lapack_int LAPACKE_ztptri_work( int matrix_order, char uplo, char diag, lapack_int n, lapack_complex_double* ap ); lapack_int LAPACKE_stptrs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const float* ap, float* b, lapack_int ldb ); lapack_int LAPACKE_dtptrs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const double* ap, double* b, lapack_int ldb ); lapack_int LAPACKE_ctptrs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_float* ap, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_ztptrs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_double* ap, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_stpttf_work( int matrix_order, char transr, char uplo, lapack_int n, const float* ap, float* arf ); lapack_int LAPACKE_dtpttf_work( int matrix_order, char transr, char uplo, lapack_int n, const double* ap, double* arf ); lapack_int LAPACKE_ctpttf_work( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_float* ap, 
lapack_complex_float* arf ); lapack_int LAPACKE_ztpttf_work( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_double* ap, lapack_complex_double* arf ); lapack_int LAPACKE_stpttr_work( int matrix_order, char uplo, lapack_int n, const float* ap, float* a, lapack_int lda ); lapack_int LAPACKE_dtpttr_work( int matrix_order, char uplo, lapack_int n, const double* ap, double* a, lapack_int lda ); lapack_int LAPACKE_ctpttr_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* ap, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_ztpttr_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* ap, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_strcon_work( int matrix_order, char norm, char uplo, char diag, lapack_int n, const float* a, lapack_int lda, float* rcond, float* work, lapack_int* iwork ); lapack_int LAPACKE_dtrcon_work( int matrix_order, char norm, char uplo, char diag, lapack_int n, const double* a, lapack_int lda, double* rcond, double* work, lapack_int* iwork ); lapack_int LAPACKE_ctrcon_work( int matrix_order, char norm, char uplo, char diag, lapack_int n, const lapack_complex_float* a, lapack_int lda, float* rcond, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_ztrcon_work( int matrix_order, char norm, char uplo, char diag, lapack_int n, const lapack_complex_double* a, lapack_int lda, double* rcond, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_strevc_work( int matrix_order, char side, char howmny, lapack_logical* select, lapack_int n, const float* t, lapack_int ldt, float* vl, lapack_int ldvl, float* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, float* work ); lapack_int LAPACKE_dtrevc_work( int matrix_order, char side, char howmny, lapack_logical* select, lapack_int n, const double* t, lapack_int ldt, double* vl, lapack_int ldvl, double* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, double* work ); lapack_int LAPACKE_ctrevc_work( int matrix_order, char side, char howmny, const lapack_logical* select, lapack_int n, lapack_complex_float* t, lapack_int ldt, lapack_complex_float* vl, lapack_int ldvl, lapack_complex_float* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_ztrevc_work( int matrix_order, char side, char howmny, const lapack_logical* select, lapack_int n, lapack_complex_double* t, lapack_int ldt, lapack_complex_double* vl, lapack_int ldvl, lapack_complex_double* vr, lapack_int ldvr, lapack_int mm, lapack_int* m, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_strexc_work( int matrix_order, char compq, lapack_int n, float* t, lapack_int ldt, float* q, lapack_int ldq, lapack_int* ifst, lapack_int* ilst, float* work ); lapack_int LAPACKE_dtrexc_work( int matrix_order, char compq, lapack_int n, double* t, lapack_int ldt, double* q, lapack_int ldq, lapack_int* ifst, lapack_int* ilst, double* work ); lapack_int LAPACKE_ctrexc_work( int matrix_order, char compq, lapack_int n, lapack_complex_float* t, lapack_int ldt, lapack_complex_float* q, lapack_int ldq, lapack_int ifst, lapack_int ilst ); lapack_int LAPACKE_ztrexc_work( int matrix_order, char compq, lapack_int n, lapack_complex_double* t, lapack_int ldt, lapack_complex_double* q, lapack_int ldq, lapack_int ifst, lapack_int ilst ); lapack_int LAPACKE_strrfs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const float* b, lapack_int 
ldb, const float* x, lapack_int ldx, float* ferr, float* berr, float* work, lapack_int* iwork ); lapack_int LAPACKE_dtrrfs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const double* b, lapack_int ldb, const double* x, lapack_int ldx, double* ferr, double* berr, double* work, lapack_int* iwork ); lapack_int LAPACKE_ctrrfs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* b, lapack_int ldb, const lapack_complex_float* x, lapack_int ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork ); lapack_int LAPACKE_ztrrfs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* b, lapack_int ldb, const lapack_complex_double* x, lapack_int ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork ); lapack_int LAPACKE_strsen_work( int matrix_order, char job, char compq, const lapack_logical* select, lapack_int n, float* t, lapack_int ldt, float* q, lapack_int ldq, float* wr, float* wi, lapack_int* m, float* s, float* sep, float* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_dtrsen_work( int matrix_order, char job, char compq, const lapack_logical* select, lapack_int n, double* t, lapack_int ldt, double* q, lapack_int ldq, double* wr, double* wi, lapack_int* m, double* s, double* sep, double* work, lapack_int lwork, lapack_int* iwork, lapack_int liwork ); lapack_int LAPACKE_ctrsen_work( int matrix_order, char job, char compq, const lapack_logical* select, lapack_int n, lapack_complex_float* t, lapack_int ldt, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* w, lapack_int* m, float* s, float* sep, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_ztrsen_work( int matrix_order, char job, char compq, const lapack_logical* select, lapack_int n, lapack_complex_double* t, lapack_int ldt, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* w, lapack_int* m, double* s, double* sep, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_strsna_work( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const float* t, lapack_int ldt, const float* vl, lapack_int ldvl, const float* vr, lapack_int ldvr, float* s, float* sep, lapack_int mm, lapack_int* m, float* work, lapack_int ldwork, lapack_int* iwork ); lapack_int LAPACKE_dtrsna_work( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const double* t, lapack_int ldt, const double* vl, lapack_int ldvl, const double* vr, lapack_int ldvr, double* s, double* sep, lapack_int mm, lapack_int* m, double* work, lapack_int ldwork, lapack_int* iwork ); lapack_int LAPACKE_ctrsna_work( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const lapack_complex_float* t, lapack_int ldt, const lapack_complex_float* vl, lapack_int ldvl, const lapack_complex_float* vr, lapack_int ldvr, float* s, float* sep, lapack_int mm, lapack_int* m, lapack_complex_float* work, lapack_int ldwork, float* rwork ); lapack_int LAPACKE_ztrsna_work( int matrix_order, char job, char howmny, const lapack_logical* select, lapack_int n, const lapack_complex_double* t, lapack_int ldt, const lapack_complex_double* vl, lapack_int ldvl, const lapack_complex_double* vr, lapack_int ldvr, 
double* s, double* sep, lapack_int mm, lapack_int* m, lapack_complex_double* work, lapack_int ldwork, double* rwork ); lapack_int LAPACKE_strsyl_work( int matrix_order, char trana, char tranb, lapack_int isgn, lapack_int m, lapack_int n, const float* a, lapack_int lda, const float* b, lapack_int ldb, float* c, lapack_int ldc, float* scale ); lapack_int LAPACKE_dtrsyl_work( int matrix_order, char trana, char tranb, lapack_int isgn, lapack_int m, lapack_int n, const double* a, lapack_int lda, const double* b, lapack_int ldb, double* c, lapack_int ldc, double* scale ); lapack_int LAPACKE_ctrsyl_work( int matrix_order, char trana, char tranb, lapack_int isgn, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* b, lapack_int ldb, lapack_complex_float* c, lapack_int ldc, float* scale ); lapack_int LAPACKE_ztrsyl_work( int matrix_order, char trana, char tranb, lapack_int isgn, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* b, lapack_int ldb, lapack_complex_double* c, lapack_int ldc, double* scale ); lapack_int LAPACKE_strtri_work( int matrix_order, char uplo, char diag, lapack_int n, float* a, lapack_int lda ); lapack_int LAPACKE_dtrtri_work( int matrix_order, char uplo, char diag, lapack_int n, double* a, lapack_int lda ); lapack_int LAPACKE_ctrtri_work( int matrix_order, char uplo, char diag, lapack_int n, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_ztrtri_work( int matrix_order, char uplo, char diag, lapack_int n, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_strtrs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, float* b, lapack_int ldb ); lapack_int LAPACKE_dtrtrs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, double* b, lapack_int ldb ); lapack_int LAPACKE_ctrtrs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_ztrtrs_work( int matrix_order, char uplo, char trans, char diag, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_strttf_work( int matrix_order, char transr, char uplo, lapack_int n, const float* a, lapack_int lda, float* arf ); lapack_int LAPACKE_dtrttf_work( int matrix_order, char transr, char uplo, lapack_int n, const double* a, lapack_int lda, double* arf ); lapack_int LAPACKE_ctrttf_work( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* arf ); lapack_int LAPACKE_ztrttf_work( int matrix_order, char transr, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* arf ); lapack_int LAPACKE_strttp_work( int matrix_order, char uplo, lapack_int n, const float* a, lapack_int lda, float* ap ); lapack_int LAPACKE_dtrttp_work( int matrix_order, char uplo, lapack_int n, const double* a, lapack_int lda, double* ap ); lapack_int LAPACKE_ctrttp_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* a, lapack_int lda, lapack_complex_float* ap ); lapack_int LAPACKE_ztrttp_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* a, lapack_int lda, lapack_complex_double* ap ); lapack_int LAPACKE_stzrzf_work( int 
matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* tau, float* work, lapack_int lwork ); lapack_int LAPACKE_dtzrzf_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* tau, double* work, lapack_int lwork ); lapack_int LAPACKE_ctzrzf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_ztzrzf_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cungbr_work( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int k, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zungbr_work( int matrix_order, char vect, lapack_int m, lapack_int n, lapack_int k, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cunghr_work( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zunghr_work( int matrix_order, lapack_int n, lapack_int ilo, lapack_int ihi, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cunglq_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zunglq_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cungql_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zungql_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cungqr_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zungqr_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cungrq_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zungrq_work( int matrix_order, lapack_int m, lapack_int n, lapack_int k, lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cungtr_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zungtr_work( int matrix_order, char uplo, lapack_int n, 
lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cunmbr_work( int matrix_order, char vect, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zunmbr_work( int matrix_order, char vect, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cunmhr_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int ilo, lapack_int ihi, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zunmhr_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int ilo, lapack_int ihi, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cunmlq_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zunmlq_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cunmql_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zunmql_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cunmqr_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zunmqr_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cunmrq_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zunmrq_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, 
lapack_int ldc, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cunmrz_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zunmrz_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cunmtr_work( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const lapack_complex_float* a, lapack_int lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_zunmtr_work( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const lapack_complex_double* a, lapack_int lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_cupgtr_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_float* ap, const lapack_complex_float* tau, lapack_complex_float* q, lapack_int ldq, lapack_complex_float* work ); lapack_int LAPACKE_zupgtr_work( int matrix_order, char uplo, lapack_int n, const lapack_complex_double* ap, const lapack_complex_double* tau, lapack_complex_double* q, lapack_int ldq, lapack_complex_double* work ); lapack_int LAPACKE_cupmtr_work( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const lapack_complex_float* ap, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work ); lapack_int LAPACKE_zupmtr_work( int matrix_order, char side, char uplo, char trans, lapack_int m, lapack_int n, const lapack_complex_double* ap, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int ldc, lapack_complex_double* work ); lapack_int LAPACKE_claghe( int matrix_order, lapack_int n, lapack_int k, const float* d, lapack_complex_float* a, lapack_int lda, lapack_int* iseed ); lapack_int LAPACKE_zlaghe( int matrix_order, lapack_int n, lapack_int k, const double* d, lapack_complex_double* a, lapack_int lda, lapack_int* iseed ); lapack_int LAPACKE_slagsy( int matrix_order, lapack_int n, lapack_int k, const float* d, float* a, lapack_int lda, lapack_int* iseed ); lapack_int LAPACKE_dlagsy( int matrix_order, lapack_int n, lapack_int k, const double* d, double* a, lapack_int lda, lapack_int* iseed ); lapack_int LAPACKE_clagsy( int matrix_order, lapack_int n, lapack_int k, const float* d, lapack_complex_float* a, lapack_int lda, lapack_int* iseed ); lapack_int LAPACKE_zlagsy( int matrix_order, lapack_int n, lapack_int k, const double* d, lapack_complex_double* a, lapack_int lda, lapack_int* iseed ); lapack_int LAPACKE_slapmr( int matrix_order, lapack_logical forwrd, lapack_int m, lapack_int n, float* x, lapack_int ldx, lapack_int* k ); lapack_int LAPACKE_dlapmr( int matrix_order, lapack_logical forwrd, lapack_int m, lapack_int n, double* x, lapack_int ldx, lapack_int* k ); lapack_int LAPACKE_clapmr( int matrix_order, lapack_logical forwrd, lapack_int m, lapack_int n, lapack_complex_float* x, lapack_int ldx, lapack_int* k ); lapack_int LAPACKE_zlapmr( int matrix_order, lapack_logical forwrd, lapack_int m, 
lapack_int n, lapack_complex_double* x, lapack_int ldx, lapack_int* k ); float LAPACKE_slapy2( float x, float y ); double LAPACKE_dlapy2( double x, double y ); float LAPACKE_slapy3( float x, float y, float z ); double LAPACKE_dlapy3( double x, double y, double z ); lapack_int LAPACKE_slartgp( float f, float g, float* cs, float* sn, float* r ); lapack_int LAPACKE_dlartgp( double f, double g, double* cs, double* sn, double* r ); lapack_int LAPACKE_slartgs( float x, float y, float sigma, float* cs, float* sn ); lapack_int LAPACKE_dlartgs( double x, double y, double sigma, double* cs, double* sn ); //LAPACK 3.3.0 lapack_int LAPACKE_cbbcsd( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, lapack_int m, lapack_int p, lapack_int q, float* theta, float* phi, lapack_complex_float* u1, lapack_int ldu1, lapack_complex_float* u2, lapack_int ldu2, lapack_complex_float* v1t, lapack_int ldv1t, lapack_complex_float* v2t, lapack_int ldv2t, float* b11d, float* b11e, float* b12d, float* b12e, float* b21d, float* b21e, float* b22d, float* b22e ); lapack_int LAPACKE_cbbcsd_work( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, lapack_int m, lapack_int p, lapack_int q, float* theta, float* phi, lapack_complex_float* u1, lapack_int ldu1, lapack_complex_float* u2, lapack_int ldu2, lapack_complex_float* v1t, lapack_int ldv1t, lapack_complex_float* v2t, lapack_int ldv2t, float* b11d, float* b11e, float* b12d, float* b12e, float* b21d, float* b21e, float* b22d, float* b22e, float* rwork, lapack_int lrwork ); lapack_int LAPACKE_cheswapr( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int i1, lapack_int i2 ); lapack_int LAPACKE_cheswapr_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int i1, lapack_int i2 ); lapack_int LAPACKE_chetri2( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_chetri2_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_chetri2x( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_int nb ); lapack_int LAPACKE_chetri2x_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int nb ); lapack_int LAPACKE_chetrs2( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_chetrs2_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* work ); lapack_int LAPACKE_csyconv( int matrix_order, char uplo, char way, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_csyconv_work( int matrix_order, char uplo, char way, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* work ); lapack_int LAPACKE_csyswapr( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int i1, lapack_int i2 ); lapack_int LAPACKE_csyswapr_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int i1, lapack_int i2 ); lapack_int 
LAPACKE_csytri2( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_csytri2_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_csytri2x( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_int nb ); lapack_int LAPACKE_csytri2x_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int nb ); lapack_int LAPACKE_csytrs2( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_csytrs2_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* work ); lapack_int LAPACKE_cunbdb( int matrix_order, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, lapack_complex_float* x11, lapack_int ldx11, lapack_complex_float* x12, lapack_int ldx12, lapack_complex_float* x21, lapack_int ldx21, lapack_complex_float* x22, lapack_int ldx22, float* theta, float* phi, lapack_complex_float* taup1, lapack_complex_float* taup2, lapack_complex_float* tauq1, lapack_complex_float* tauq2 ); lapack_int LAPACKE_cunbdb_work( int matrix_order, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, lapack_complex_float* x11, lapack_int ldx11, lapack_complex_float* x12, lapack_int ldx12, lapack_complex_float* x21, lapack_int ldx21, lapack_complex_float* x22, lapack_int ldx22, float* theta, float* phi, lapack_complex_float* taup1, lapack_complex_float* taup2, lapack_complex_float* tauq1, lapack_complex_float* tauq2, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_cuncsd( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, lapack_complex_float* x11, lapack_int ldx11, lapack_complex_float* x12, lapack_int ldx12, lapack_complex_float* x21, lapack_int ldx21, lapack_complex_float* x22, lapack_int ldx22, float* theta, lapack_complex_float* u1, lapack_int ldu1, lapack_complex_float* u2, lapack_int ldu2, lapack_complex_float* v1t, lapack_int ldv1t, lapack_complex_float* v2t, lapack_int ldv2t ); lapack_int LAPACKE_cuncsd_work( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, lapack_complex_float* x11, lapack_int ldx11, lapack_complex_float* x12, lapack_int ldx12, lapack_complex_float* x21, lapack_int ldx21, lapack_complex_float* x22, lapack_int ldx22, float* theta, lapack_complex_float* u1, lapack_int ldu1, lapack_complex_float* u2, lapack_int ldu2, lapack_complex_float* v1t, lapack_int ldv1t, lapack_complex_float* v2t, lapack_int ldv2t, lapack_complex_float* work, lapack_int lwork, float* rwork, lapack_int lrwork, lapack_int* iwork ); lapack_int LAPACKE_dbbcsd( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, lapack_int m, lapack_int p, lapack_int q, double* theta, double* phi, double* u1, lapack_int ldu1, double* u2, lapack_int ldu2, double* v1t, lapack_int ldv1t, double* v2t, lapack_int ldv2t, double* b11d, double* b11e, double* b12d, double* b12e, double* b21d, 
double* b21e, double* b22d, double* b22e ); lapack_int LAPACKE_dbbcsd_work( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, lapack_int m, lapack_int p, lapack_int q, double* theta, double* phi, double* u1, lapack_int ldu1, double* u2, lapack_int ldu2, double* v1t, lapack_int ldv1t, double* v2t, lapack_int ldv2t, double* b11d, double* b11e, double* b12d, double* b12e, double* b21d, double* b21e, double* b22d, double* b22e, double* work, lapack_int lwork ); lapack_int LAPACKE_dorbdb( int matrix_order, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, double* x11, lapack_int ldx11, double* x12, lapack_int ldx12, double* x21, lapack_int ldx21, double* x22, lapack_int ldx22, double* theta, double* phi, double* taup1, double* taup2, double* tauq1, double* tauq2 ); lapack_int LAPACKE_dorbdb_work( int matrix_order, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, double* x11, lapack_int ldx11, double* x12, lapack_int ldx12, double* x21, lapack_int ldx21, double* x22, lapack_int ldx22, double* theta, double* phi, double* taup1, double* taup2, double* tauq1, double* tauq2, double* work, lapack_int lwork ); lapack_int LAPACKE_dorcsd( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, double* x11, lapack_int ldx11, double* x12, lapack_int ldx12, double* x21, lapack_int ldx21, double* x22, lapack_int ldx22, double* theta, double* u1, lapack_int ldu1, double* u2, lapack_int ldu2, double* v1t, lapack_int ldv1t, double* v2t, lapack_int ldv2t ); lapack_int LAPACKE_dorcsd_work( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, double* x11, lapack_int ldx11, double* x12, lapack_int ldx12, double* x21, lapack_int ldx21, double* x22, lapack_int ldx22, double* theta, double* u1, lapack_int ldu1, double* u2, lapack_int ldu2, double* v1t, lapack_int ldv1t, double* v2t, lapack_int ldv2t, double* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_dsyconv( int matrix_order, char uplo, char way, lapack_int n, double* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_dsyconv_work( int matrix_order, char uplo, char way, lapack_int n, double* a, lapack_int lda, const lapack_int* ipiv, double* work ); lapack_int LAPACKE_dsyswapr( int matrix_order, char uplo, lapack_int n, double* a, lapack_int i1, lapack_int i2 ); lapack_int LAPACKE_dsyswapr_work( int matrix_order, char uplo, lapack_int n, double* a, lapack_int i1, lapack_int i2 ); lapack_int LAPACKE_dsytri2( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_dsytri2_work( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_dsytri2x( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, const lapack_int* ipiv, lapack_int nb ); lapack_int LAPACKE_dsytri2x_work( int matrix_order, char uplo, lapack_int n, double* a, lapack_int lda, const lapack_int* ipiv, double* work, lapack_int nb ); lapack_int LAPACKE_dsytrs2( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const lapack_int* ipiv, double* b, lapack_int ldb ); lapack_int LAPACKE_dsytrs2_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const double* a, lapack_int lda, const lapack_int* ipiv, double* b, lapack_int ldb, 
double* work ); lapack_int LAPACKE_sbbcsd( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, lapack_int m, lapack_int p, lapack_int q, float* theta, float* phi, float* u1, lapack_int ldu1, float* u2, lapack_int ldu2, float* v1t, lapack_int ldv1t, float* v2t, lapack_int ldv2t, float* b11d, float* b11e, float* b12d, float* b12e, float* b21d, float* b21e, float* b22d, float* b22e ); lapack_int LAPACKE_sbbcsd_work( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, lapack_int m, lapack_int p, lapack_int q, float* theta, float* phi, float* u1, lapack_int ldu1, float* u2, lapack_int ldu2, float* v1t, lapack_int ldv1t, float* v2t, lapack_int ldv2t, float* b11d, float* b11e, float* b12d, float* b12e, float* b21d, float* b21e, float* b22d, float* b22e, float* work, lapack_int lwork ); lapack_int LAPACKE_sorbdb( int matrix_order, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, float* x11, lapack_int ldx11, float* x12, lapack_int ldx12, float* x21, lapack_int ldx21, float* x22, lapack_int ldx22, float* theta, float* phi, float* taup1, float* taup2, float* tauq1, float* tauq2 ); lapack_int LAPACKE_sorbdb_work( int matrix_order, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, float* x11, lapack_int ldx11, float* x12, lapack_int ldx12, float* x21, lapack_int ldx21, float* x22, lapack_int ldx22, float* theta, float* phi, float* taup1, float* taup2, float* tauq1, float* tauq2, float* work, lapack_int lwork ); lapack_int LAPACKE_sorcsd( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, float* x11, lapack_int ldx11, float* x12, lapack_int ldx12, float* x21, lapack_int ldx21, float* x22, lapack_int ldx22, float* theta, float* u1, lapack_int ldu1, float* u2, lapack_int ldu2, float* v1t, lapack_int ldv1t, float* v2t, lapack_int ldv2t ); lapack_int LAPACKE_sorcsd_work( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, float* x11, lapack_int ldx11, float* x12, lapack_int ldx12, float* x21, lapack_int ldx21, float* x22, lapack_int ldx22, float* theta, float* u1, lapack_int ldu1, float* u2, lapack_int ldu2, float* v1t, lapack_int ldv1t, float* v2t, lapack_int ldv2t, float* work, lapack_int lwork, lapack_int* iwork ); lapack_int LAPACKE_ssyconv( int matrix_order, char uplo, char way, lapack_int n, float* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_ssyconv_work( int matrix_order, char uplo, char way, lapack_int n, float* a, lapack_int lda, const lapack_int* ipiv, float* work ); lapack_int LAPACKE_ssyswapr( int matrix_order, char uplo, lapack_int n, float* a, lapack_int i1, lapack_int i2 ); lapack_int LAPACKE_ssyswapr_work( int matrix_order, char uplo, lapack_int n, float* a, lapack_int i1, lapack_int i2 ); lapack_int LAPACKE_ssytri2( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_ssytri2_work( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int lwork ); lapack_int LAPACKE_ssytri2x( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, const lapack_int* ipiv, lapack_int nb ); lapack_int LAPACKE_ssytri2x_work( int matrix_order, char uplo, lapack_int n, float* a, lapack_int lda, const lapack_int* ipiv, float* work, lapack_int nb ); lapack_int LAPACKE_ssytrs2( int matrix_order, char 
uplo, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const lapack_int* ipiv, float* b, lapack_int ldb ); lapack_int LAPACKE_ssytrs2_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const float* a, lapack_int lda, const lapack_int* ipiv, float* b, lapack_int ldb, float* work ); lapack_int LAPACKE_zbbcsd( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, lapack_int m, lapack_int p, lapack_int q, double* theta, double* phi, lapack_complex_double* u1, lapack_int ldu1, lapack_complex_double* u2, lapack_int ldu2, lapack_complex_double* v1t, lapack_int ldv1t, lapack_complex_double* v2t, lapack_int ldv2t, double* b11d, double* b11e, double* b12d, double* b12e, double* b21d, double* b21e, double* b22d, double* b22e ); lapack_int LAPACKE_zbbcsd_work( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, lapack_int m, lapack_int p, lapack_int q, double* theta, double* phi, lapack_complex_double* u1, lapack_int ldu1, lapack_complex_double* u2, lapack_int ldu2, lapack_complex_double* v1t, lapack_int ldv1t, lapack_complex_double* v2t, lapack_int ldv2t, double* b11d, double* b11e, double* b12d, double* b12e, double* b21d, double* b21e, double* b22d, double* b22e, double* rwork, lapack_int lrwork ); lapack_int LAPACKE_zheswapr( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int i1, lapack_int i2 ); lapack_int LAPACKE_zheswapr_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int i1, lapack_int i2 ); lapack_int LAPACKE_zhetri2( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_zhetri2_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_zhetri2x( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_int nb ); lapack_int LAPACKE_zhetri2x_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* work, lapack_int nb ); lapack_int LAPACKE_zhetrs2( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_zhetrs2_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* work ); lapack_int LAPACKE_zsyconv( int matrix_order, char uplo, char way, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_zsyconv_work( int matrix_order, char uplo, char way, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* work ); lapack_int LAPACKE_zsyswapr( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int i1, lapack_int i2 ); lapack_int LAPACKE_zsyswapr_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int i1, lapack_int i2 ); lapack_int LAPACKE_zsytri2( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv ); lapack_int LAPACKE_zsytri2_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, 
lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_zsytri2x( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_int nb ); lapack_int LAPACKE_zsytri2x_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* work, lapack_int nb ); lapack_int LAPACKE_zsytrs2( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_zsytrs2_work( int matrix_order, char uplo, lapack_int n, lapack_int nrhs, const lapack_complex_double* a, lapack_int lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* work ); lapack_int LAPACKE_zunbdb( int matrix_order, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, lapack_complex_double* x11, lapack_int ldx11, lapack_complex_double* x12, lapack_int ldx12, lapack_complex_double* x21, lapack_int ldx21, lapack_complex_double* x22, lapack_int ldx22, double* theta, double* phi, lapack_complex_double* taup1, lapack_complex_double* taup2, lapack_complex_double* tauq1, lapack_complex_double* tauq2 ); lapack_int LAPACKE_zunbdb_work( int matrix_order, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, lapack_complex_double* x11, lapack_int ldx11, lapack_complex_double* x12, lapack_int ldx12, lapack_complex_double* x21, lapack_int ldx21, lapack_complex_double* x22, lapack_int ldx22, double* theta, double* phi, lapack_complex_double* taup1, lapack_complex_double* taup2, lapack_complex_double* tauq1, lapack_complex_double* tauq2, lapack_complex_double* work, lapack_int lwork ); lapack_int LAPACKE_zuncsd( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, lapack_complex_double* x11, lapack_int ldx11, lapack_complex_double* x12, lapack_int ldx12, lapack_complex_double* x21, lapack_int ldx21, lapack_complex_double* x22, lapack_int ldx22, double* theta, lapack_complex_double* u1, lapack_int ldu1, lapack_complex_double* u2, lapack_int ldu2, lapack_complex_double* v1t, lapack_int ldv1t, lapack_complex_double* v2t, lapack_int ldv2t ); lapack_int LAPACKE_zuncsd_work( int matrix_order, char jobu1, char jobu2, char jobv1t, char jobv2t, char trans, char signs, lapack_int m, lapack_int p, lapack_int q, lapack_complex_double* x11, lapack_int ldx11, lapack_complex_double* x12, lapack_int ldx12, lapack_complex_double* x21, lapack_int ldx21, lapack_complex_double* x22, lapack_int ldx22, double* theta, lapack_complex_double* u1, lapack_int ldu1, lapack_complex_double* u2, lapack_int ldu2, lapack_complex_double* v1t, lapack_int ldv1t, lapack_complex_double* v2t, lapack_int ldv2t, lapack_complex_double* work, lapack_int lwork, double* rwork, lapack_int lrwork, lapack_int* iwork ); //LAPACK 3.4.0 lapack_int LAPACKE_sgemqrt( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int nb, const float* v, lapack_int ldv, const float* t, lapack_int ldt, float* c, lapack_int ldc ); lapack_int LAPACKE_dgemqrt( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int nb, const double* v, lapack_int ldv, const double* t, lapack_int ldt, double* c, lapack_int ldc ); lapack_int LAPACKE_cgemqrt( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int nb, const 
lapack_complex_float* v, lapack_int ldv, const lapack_complex_float* t, lapack_int ldt, lapack_complex_float* c, lapack_int ldc ); lapack_int LAPACKE_zgemqrt( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int nb, const lapack_complex_double* v, lapack_int ldv, const lapack_complex_double* t, lapack_int ldt, lapack_complex_double* c, lapack_int ldc ); lapack_int LAPACKE_sgeqrt( int matrix_order, lapack_int m, lapack_int n, lapack_int nb, float* a, lapack_int lda, float* t, lapack_int ldt ); lapack_int LAPACKE_dgeqrt( int matrix_order, lapack_int m, lapack_int n, lapack_int nb, double* a, lapack_int lda, double* t, lapack_int ldt ); lapack_int LAPACKE_cgeqrt( int matrix_order, lapack_int m, lapack_int n, lapack_int nb, lapack_complex_float* a, lapack_int lda, lapack_complex_float* t, lapack_int ldt ); lapack_int LAPACKE_zgeqrt( int matrix_order, lapack_int m, lapack_int n, lapack_int nb, lapack_complex_double* a, lapack_int lda, lapack_complex_double* t, lapack_int ldt ); lapack_int LAPACKE_sgeqrt2( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* t, lapack_int ldt ); lapack_int LAPACKE_dgeqrt2( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* t, lapack_int ldt ); lapack_int LAPACKE_cgeqrt2( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* t, lapack_int ldt ); lapack_int LAPACKE_zgeqrt2( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* t, lapack_int ldt ); lapack_int LAPACKE_sgeqrt3( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* t, lapack_int ldt ); lapack_int LAPACKE_dgeqrt3( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* t, lapack_int ldt ); lapack_int LAPACKE_cgeqrt3( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* t, lapack_int ldt ); lapack_int LAPACKE_zgeqrt3( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* t, lapack_int ldt ); lapack_int LAPACKE_stpmqrt( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, lapack_int nb, const float* v, lapack_int ldv, const float* t, lapack_int ldt, float* a, lapack_int lda, float* b, lapack_int ldb ); lapack_int LAPACKE_dtpmqrt( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, lapack_int nb, const double* v, lapack_int ldv, const double* t, lapack_int ldt, double* a, lapack_int lda, double* b, lapack_int ldb ); lapack_int LAPACKE_ctpmqrt( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, lapack_int nb, const lapack_complex_float* v, lapack_int ldv, const lapack_complex_float* t, lapack_int ldt, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb ); lapack_int LAPACKE_ztpmqrt( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, lapack_int nb, const lapack_complex_double* v, lapack_int ldv, const lapack_complex_double* t, lapack_int ldt, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb ); lapack_int LAPACKE_dtpqrt( int matrix_order, lapack_int m, lapack_int n, lapack_int l, lapack_int nb, double* a, lapack_int lda, double* b, lapack_int ldb, double* t, lapack_int ldt ); lapack_int LAPACKE_ctpqrt( 
int matrix_order, lapack_int m, lapack_int n, lapack_int l, lapack_int nb, lapack_complex_float* a, lapack_int lda, lapack_complex_float* t, lapack_complex_float* b, lapack_int ldb, lapack_int ldt ); lapack_int LAPACKE_ztpqrt( int matrix_order, lapack_int m, lapack_int n, lapack_int l, lapack_int nb, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* t, lapack_int ldt ); lapack_int LAPACKE_stpqrt2( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* t, lapack_int ldt ); lapack_int LAPACKE_dtpqrt2( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* t, lapack_int ldt ); lapack_int LAPACKE_ctpqrt2( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* t, lapack_int ldt ); lapack_int LAPACKE_ztpqrt2( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* t, lapack_int ldt ); lapack_int LAPACKE_stprfb( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const float* v, lapack_int ldv, const float* t, lapack_int ldt, float* a, lapack_int lda, float* b, lapack_int ldb, lapack_int myldwork ); lapack_int LAPACKE_dtprfb( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const double* v, lapack_int ldv, const double* t, lapack_int ldt, double* a, lapack_int lda, double* b, lapack_int ldb, lapack_int myldwork ); lapack_int LAPACKE_ctprfb( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const lapack_complex_float* v, lapack_int ldv, const lapack_complex_float* t, lapack_int ldt, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_int myldwork ); lapack_int LAPACKE_ztprfb( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const lapack_complex_double* v, lapack_int ldv, const lapack_complex_double* t, lapack_int ldt, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_int myldwork ); lapack_int LAPACKE_sgemqrt_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int nb, const float* v, lapack_int ldv, const float* t, lapack_int ldt, float* c, lapack_int ldc, float* work ); lapack_int LAPACKE_dgemqrt_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int nb, const double* v, lapack_int ldv, const double* t, lapack_int ldt, double* c, lapack_int ldc, double* work ); lapack_int LAPACKE_cgemqrt_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int nb, const lapack_complex_float* v, lapack_int ldv, const lapack_complex_float* t, lapack_int ldt, lapack_complex_float* c, lapack_int ldc, lapack_complex_float* work ); lapack_int LAPACKE_zgemqrt_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int nb, const lapack_complex_double* v, lapack_int ldv, const lapack_complex_double* t, lapack_int ldt, lapack_complex_double* c, lapack_int ldc, lapack_complex_double* work ); lapack_int LAPACKE_sgeqrt_work( int matrix_order, 
lapack_int m, lapack_int n, lapack_int nb, float* a, lapack_int lda, float* t, lapack_int ldt, float* work ); lapack_int LAPACKE_dgeqrt_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nb, double* a, lapack_int lda, double* t, lapack_int ldt, double* work ); lapack_int LAPACKE_cgeqrt_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nb, lapack_complex_float* a, lapack_int lda, lapack_complex_float* t, lapack_int ldt, lapack_complex_float* work ); lapack_int LAPACKE_zgeqrt_work( int matrix_order, lapack_int m, lapack_int n, lapack_int nb, lapack_complex_double* a, lapack_int lda, lapack_complex_double* t, lapack_int ldt, lapack_complex_double* work ); lapack_int LAPACKE_sgeqrt2_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* t, lapack_int ldt ); lapack_int LAPACKE_dgeqrt2_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* t, lapack_int ldt ); lapack_int LAPACKE_cgeqrt2_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* t, lapack_int ldt ); lapack_int LAPACKE_zgeqrt2_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* t, lapack_int ldt ); lapack_int LAPACKE_sgeqrt3_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* t, lapack_int ldt ); lapack_int LAPACKE_dgeqrt3_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* t, lapack_int ldt ); lapack_int LAPACKE_cgeqrt3_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* t, lapack_int ldt ); lapack_int LAPACKE_zgeqrt3_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* t, lapack_int ldt ); lapack_int LAPACKE_stpmqrt_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, lapack_int nb, const float* v, lapack_int ldv, const float* t, lapack_int ldt, float* a, lapack_int lda, float* b, lapack_int ldb, float* work ); lapack_int LAPACKE_dtpmqrt_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, lapack_int nb, const double* v, lapack_int ldv, const double* t, lapack_int ldt, double* a, lapack_int lda, double* b, lapack_int ldb, double* work ); lapack_int LAPACKE_ctpmqrt_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, lapack_int nb, const lapack_complex_float* v, lapack_int ldv, const lapack_complex_float* t, lapack_int ldt, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* work ); lapack_int LAPACKE_ztpmqrt_work( int matrix_order, char side, char trans, lapack_int m, lapack_int n, lapack_int k, lapack_int l, lapack_int nb, const lapack_complex_double* v, lapack_int ldv, const lapack_complex_double* t, lapack_int ldt, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* work ); lapack_int LAPACKE_dtpqrt_work( int matrix_order, lapack_int m, lapack_int n, lapack_int l, lapack_int nb, double* a, lapack_int lda, double* b, lapack_int ldb, double* t, lapack_int ldt, double* work ); lapack_int LAPACKE_ctpqrt_work( int matrix_order, lapack_int m, lapack_int n, lapack_int l, lapack_int nb, lapack_complex_float* a, lapack_int lda, lapack_complex_float* t, lapack_complex_float* b, lapack_int 
ldb, lapack_int ldt, lapack_complex_float* work ); lapack_int LAPACKE_ztpqrt_work( int matrix_order, lapack_int m, lapack_int n, lapack_int l, lapack_int nb, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* t, lapack_int ldt, lapack_complex_double* work ); lapack_int LAPACKE_stpqrt2_work( int matrix_order, lapack_int m, lapack_int n, float* a, lapack_int lda, float* b, lapack_int ldb, float* t, lapack_int ldt ); lapack_int LAPACKE_dtpqrt2_work( int matrix_order, lapack_int m, lapack_int n, double* a, lapack_int lda, double* b, lapack_int ldb, double* t, lapack_int ldt ); lapack_int LAPACKE_ctpqrt2_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, lapack_complex_float* t, lapack_int ldt ); lapack_int LAPACKE_ztpqrt2_work( int matrix_order, lapack_int m, lapack_int n, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, lapack_complex_double* t, lapack_int ldt ); lapack_int LAPACKE_stprfb_work( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const float* v, lapack_int ldv, const float* t, lapack_int ldt, float* a, lapack_int lda, float* b, lapack_int ldb, const float* mywork, lapack_int myldwork ); lapack_int LAPACKE_dtprfb_work( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const double* v, lapack_int ldv, const double* t, lapack_int ldt, double* a, lapack_int lda, double* b, lapack_int ldb, const double* mywork, lapack_int myldwork ); lapack_int LAPACKE_ctprfb_work( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const lapack_complex_float* v, lapack_int ldv, const lapack_complex_float* t, lapack_int ldt, lapack_complex_float* a, lapack_int lda, lapack_complex_float* b, lapack_int ldb, const float* mywork, lapack_int myldwork ); lapack_int LAPACKE_ztprfb_work( int matrix_order, char side, char trans, char direct, char storev, lapack_int m, lapack_int n, lapack_int k, lapack_int l, const lapack_complex_double* v, lapack_int ldv, const lapack_complex_double* t, lapack_int ldt, lapack_complex_double* a, lapack_int lda, lapack_complex_double* b, lapack_int ldb, const double* mywork, lapack_int myldwork ); //LAPACK 3.X.X lapack_int LAPACKE_csyr( int matrix_order, char uplo, lapack_int n, lapack_complex_float alpha, const lapack_complex_float* x, lapack_int incx, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_zsyr( int matrix_order, char uplo, lapack_int n, lapack_complex_double alpha, const lapack_complex_double* x, lapack_int incx, lapack_complex_double* a, lapack_int lda ); lapack_int LAPACKE_csyr_work( int matrix_order, char uplo, lapack_int n, lapack_complex_float alpha, const lapack_complex_float* x, lapack_int incx, lapack_complex_float* a, lapack_int lda ); lapack_int LAPACKE_zsyr_work( int matrix_order, char uplo, lapack_int n, lapack_complex_double alpha, const lapack_complex_double* x, lapack_int incx, lapack_complex_double* a, lapack_int lda ); #define LAPACK_sgetrf LAPACK_GLOBAL(sgetrf,SGETRF) #define LAPACK_dgetrf LAPACK_GLOBAL(dgetrf,DGETRF) #define LAPACK_cgetrf LAPACK_GLOBAL(cgetrf,CGETRF) #define LAPACK_zgetrf LAPACK_GLOBAL(zgetrf,ZGETRF) #define LAPACK_sgbtrf LAPACK_GLOBAL(sgbtrf,SGBTRF) #define LAPACK_dgbtrf LAPACK_GLOBAL(dgbtrf,DGBTRF) #define LAPACK_cgbtrf 
LAPACK_GLOBAL(cgbtrf,CGBTRF) #define LAPACK_zgbtrf LAPACK_GLOBAL(zgbtrf,ZGBTRF) #define LAPACK_sgttrf LAPACK_GLOBAL(sgttrf,SGTTRF) #define LAPACK_dgttrf LAPACK_GLOBAL(dgttrf,DGTTRF) #define LAPACK_cgttrf LAPACK_GLOBAL(cgttrf,CGTTRF) #define LAPACK_zgttrf LAPACK_GLOBAL(zgttrf,ZGTTRF) #define LAPACK_spotrf LAPACK_GLOBAL(spotrf,SPOTRF) #define LAPACK_dpotrf LAPACK_GLOBAL(dpotrf,DPOTRF) #define LAPACK_cpotrf LAPACK_GLOBAL(cpotrf,CPOTRF) #define LAPACK_zpotrf LAPACK_GLOBAL(zpotrf,ZPOTRF) #define LAPACK_dpstrf LAPACK_GLOBAL(dpstrf,DPSTRF) #define LAPACK_spstrf LAPACK_GLOBAL(spstrf,SPSTRF) #define LAPACK_zpstrf LAPACK_GLOBAL(zpstrf,ZPSTRF) #define LAPACK_cpstrf LAPACK_GLOBAL(cpstrf,CPSTRF) #define LAPACK_dpftrf LAPACK_GLOBAL(dpftrf,DPFTRF) #define LAPACK_spftrf LAPACK_GLOBAL(spftrf,SPFTRF) #define LAPACK_zpftrf LAPACK_GLOBAL(zpftrf,ZPFTRF) #define LAPACK_cpftrf LAPACK_GLOBAL(cpftrf,CPFTRF) #define LAPACK_spptrf LAPACK_GLOBAL(spptrf,SPPTRF) #define LAPACK_dpptrf LAPACK_GLOBAL(dpptrf,DPPTRF) #define LAPACK_cpptrf LAPACK_GLOBAL(cpptrf,CPPTRF) #define LAPACK_zpptrf LAPACK_GLOBAL(zpptrf,ZPPTRF) #define LAPACK_spbtrf LAPACK_GLOBAL(spbtrf,SPBTRF) #define LAPACK_dpbtrf LAPACK_GLOBAL(dpbtrf,DPBTRF) #define LAPACK_cpbtrf LAPACK_GLOBAL(cpbtrf,CPBTRF) #define LAPACK_zpbtrf LAPACK_GLOBAL(zpbtrf,ZPBTRF) #define LAPACK_spttrf LAPACK_GLOBAL(spttrf,SPTTRF) #define LAPACK_dpttrf LAPACK_GLOBAL(dpttrf,DPTTRF) #define LAPACK_cpttrf LAPACK_GLOBAL(cpttrf,CPTTRF) #define LAPACK_zpttrf LAPACK_GLOBAL(zpttrf,ZPTTRF) #define LAPACK_ssytrf LAPACK_GLOBAL(ssytrf,SSYTRF) #define LAPACK_dsytrf LAPACK_GLOBAL(dsytrf,DSYTRF) #define LAPACK_csytrf LAPACK_GLOBAL(csytrf,CSYTRF) #define LAPACK_zsytrf LAPACK_GLOBAL(zsytrf,ZSYTRF) #define LAPACK_chetrf LAPACK_GLOBAL(chetrf,CHETRF) #define LAPACK_zhetrf LAPACK_GLOBAL(zhetrf,ZHETRF) #define LAPACK_ssptrf LAPACK_GLOBAL(ssptrf,SSPTRF) #define LAPACK_dsptrf LAPACK_GLOBAL(dsptrf,DSPTRF) #define LAPACK_csptrf LAPACK_GLOBAL(csptrf,CSPTRF) #define LAPACK_zsptrf LAPACK_GLOBAL(zsptrf,ZSPTRF) #define LAPACK_chptrf LAPACK_GLOBAL(chptrf,CHPTRF) #define LAPACK_zhptrf LAPACK_GLOBAL(zhptrf,ZHPTRF) #define LAPACK_sgetrs LAPACK_GLOBAL(sgetrs,SGETRS) #define LAPACK_dgetrs LAPACK_GLOBAL(dgetrs,DGETRS) #define LAPACK_cgetrs LAPACK_GLOBAL(cgetrs,CGETRS) #define LAPACK_zgetrs LAPACK_GLOBAL(zgetrs,ZGETRS) #define LAPACK_sgbtrs LAPACK_GLOBAL(sgbtrs,SGBTRS) #define LAPACK_dgbtrs LAPACK_GLOBAL(dgbtrs,DGBTRS) #define LAPACK_cgbtrs LAPACK_GLOBAL(cgbtrs,CGBTRS) #define LAPACK_zgbtrs LAPACK_GLOBAL(zgbtrs,ZGBTRS) #define LAPACK_sgttrs LAPACK_GLOBAL(sgttrs,SGTTRS) #define LAPACK_dgttrs LAPACK_GLOBAL(dgttrs,DGTTRS) #define LAPACK_cgttrs LAPACK_GLOBAL(cgttrs,CGTTRS) #define LAPACK_zgttrs LAPACK_GLOBAL(zgttrs,ZGTTRS) #define LAPACK_spotrs LAPACK_GLOBAL(spotrs,SPOTRS) #define LAPACK_dpotrs LAPACK_GLOBAL(dpotrs,DPOTRS) #define LAPACK_cpotrs LAPACK_GLOBAL(cpotrs,CPOTRS) #define LAPACK_zpotrs LAPACK_GLOBAL(zpotrs,ZPOTRS) #define LAPACK_dpftrs LAPACK_GLOBAL(dpftrs,DPFTRS) #define LAPACK_spftrs LAPACK_GLOBAL(spftrs,SPFTRS) #define LAPACK_zpftrs LAPACK_GLOBAL(zpftrs,ZPFTRS) #define LAPACK_cpftrs LAPACK_GLOBAL(cpftrs,CPFTRS) #define LAPACK_spptrs LAPACK_GLOBAL(spptrs,SPPTRS) #define LAPACK_dpptrs LAPACK_GLOBAL(dpptrs,DPPTRS) #define LAPACK_cpptrs LAPACK_GLOBAL(cpptrs,CPPTRS) #define LAPACK_zpptrs LAPACK_GLOBAL(zpptrs,ZPPTRS) #define LAPACK_spbtrs LAPACK_GLOBAL(spbtrs,SPBTRS) #define LAPACK_dpbtrs LAPACK_GLOBAL(dpbtrs,DPBTRS) #define LAPACK_cpbtrs LAPACK_GLOBAL(cpbtrs,CPBTRS) #define LAPACK_zpbtrs LAPACK_GLOBAL(zpbtrs,ZPBTRS) 
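/* Editor-added illustrative sketch, not part of the upstream header: how one of
 * the triangular-solve wrappers declared earlier in this file
 * (LAPACKE_dtrtrs_work) is typically called with column-major data.  The 3x3
 * matrix and right-hand side below are made up for the example, and the sketch
 * assumes this header is included as "lapacke.h" and that LAPACK_COL_MAJOR is
 * the column-major layout constant defined earlier in it.
 *
 *   #include <stdio.h>
 *   #include "lapacke.h"
 *
 *   int main(void)
 *   {
 *       // A is 3x3 upper triangular, stored column-major; b = A * (1,1,1)^T.
 *       double a[9] = { 2.0, 0.0, 0.0,    // column 1
 *                       1.0, 3.0, 0.0,    // column 2
 *                       1.0, 2.0, 4.0 };  // column 3
 *       double b[3] = { 4.0, 5.0, 4.0 };
 *
 *       // Solve A * x = b; on success b is overwritten with the solution x.
 *       lapack_int info = LAPACKE_dtrtrs_work( LAPACK_COL_MAJOR, 'U', 'N', 'N',
 *                                              3, 1, a, 3, b, 3 );
 *       if( info != 0 ) {
 *           fprintf( stderr, "dtrtrs failed: info = %d\n", (int)info );
 *           return 1;
 *       }
 *       printf( "x = %g %g %g\n", b[0], b[1], b[2] );  // expect 1 1 1
 *       return 0;
 *   }
 */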
#define LAPACK_spttrs LAPACK_GLOBAL(spttrs,SPTTRS) #define LAPACK_dpttrs LAPACK_GLOBAL(dpttrs,DPTTRS) #define LAPACK_cpttrs LAPACK_GLOBAL(cpttrs,CPTTRS) #define LAPACK_zpttrs LAPACK_GLOBAL(zpttrs,ZPTTRS) #define LAPACK_ssytrs LAPACK_GLOBAL(ssytrs,SSYTRS) #define LAPACK_dsytrs LAPACK_GLOBAL(dsytrs,DSYTRS) #define LAPACK_csytrs LAPACK_GLOBAL(csytrs,CSYTRS) #define LAPACK_zsytrs LAPACK_GLOBAL(zsytrs,ZSYTRS) #define LAPACK_chetrs LAPACK_GLOBAL(chetrs,CHETRS) #define LAPACK_zhetrs LAPACK_GLOBAL(zhetrs,ZHETRS) #define LAPACK_ssptrs LAPACK_GLOBAL(ssptrs,SSPTRS) #define LAPACK_dsptrs LAPACK_GLOBAL(dsptrs,DSPTRS) #define LAPACK_csptrs LAPACK_GLOBAL(csptrs,CSPTRS) #define LAPACK_zsptrs LAPACK_GLOBAL(zsptrs,ZSPTRS) #define LAPACK_chptrs LAPACK_GLOBAL(chptrs,CHPTRS) #define LAPACK_zhptrs LAPACK_GLOBAL(zhptrs,ZHPTRS) #define LAPACK_strtrs LAPACK_GLOBAL(strtrs,STRTRS) #define LAPACK_dtrtrs LAPACK_GLOBAL(dtrtrs,DTRTRS) #define LAPACK_ctrtrs LAPACK_GLOBAL(ctrtrs,CTRTRS) #define LAPACK_ztrtrs LAPACK_GLOBAL(ztrtrs,ZTRTRS) #define LAPACK_stptrs LAPACK_GLOBAL(stptrs,STPTRS) #define LAPACK_dtptrs LAPACK_GLOBAL(dtptrs,DTPTRS) #define LAPACK_ctptrs LAPACK_GLOBAL(ctptrs,CTPTRS) #define LAPACK_ztptrs LAPACK_GLOBAL(ztptrs,ZTPTRS) #define LAPACK_stbtrs LAPACK_GLOBAL(stbtrs,STBTRS) #define LAPACK_dtbtrs LAPACK_GLOBAL(dtbtrs,DTBTRS) #define LAPACK_ctbtrs LAPACK_GLOBAL(ctbtrs,CTBTRS) #define LAPACK_ztbtrs LAPACK_GLOBAL(ztbtrs,ZTBTRS) #define LAPACK_sgecon LAPACK_GLOBAL(sgecon,SGECON) #define LAPACK_dgecon LAPACK_GLOBAL(dgecon,DGECON) #define LAPACK_cgecon LAPACK_GLOBAL(cgecon,CGECON) #define LAPACK_zgecon LAPACK_GLOBAL(zgecon,ZGECON) #define LAPACK_sgbcon LAPACK_GLOBAL(sgbcon,SGBCON) #define LAPACK_dgbcon LAPACK_GLOBAL(dgbcon,DGBCON) #define LAPACK_cgbcon LAPACK_GLOBAL(cgbcon,CGBCON) #define LAPACK_zgbcon LAPACK_GLOBAL(zgbcon,ZGBCON) #define LAPACK_sgtcon LAPACK_GLOBAL(sgtcon,SGTCON) #define LAPACK_dgtcon LAPACK_GLOBAL(dgtcon,DGTCON) #define LAPACK_cgtcon LAPACK_GLOBAL(cgtcon,CGTCON) #define LAPACK_zgtcon LAPACK_GLOBAL(zgtcon,ZGTCON) #define LAPACK_spocon LAPACK_GLOBAL(spocon,SPOCON) #define LAPACK_dpocon LAPACK_GLOBAL(dpocon,DPOCON) #define LAPACK_cpocon LAPACK_GLOBAL(cpocon,CPOCON) #define LAPACK_zpocon LAPACK_GLOBAL(zpocon,ZPOCON) #define LAPACK_sppcon LAPACK_GLOBAL(sppcon,SPPCON) #define LAPACK_dppcon LAPACK_GLOBAL(dppcon,DPPCON) #define LAPACK_cppcon LAPACK_GLOBAL(cppcon,CPPCON) #define LAPACK_zppcon LAPACK_GLOBAL(zppcon,ZPPCON) #define LAPACK_spbcon LAPACK_GLOBAL(spbcon,SPBCON) #define LAPACK_dpbcon LAPACK_GLOBAL(dpbcon,DPBCON) #define LAPACK_cpbcon LAPACK_GLOBAL(cpbcon,CPBCON) #define LAPACK_zpbcon LAPACK_GLOBAL(zpbcon,ZPBCON) #define LAPACK_sptcon LAPACK_GLOBAL(sptcon,SPTCON) #define LAPACK_dptcon LAPACK_GLOBAL(dptcon,DPTCON) #define LAPACK_cptcon LAPACK_GLOBAL(cptcon,CPTCON) #define LAPACK_zptcon LAPACK_GLOBAL(zptcon,ZPTCON) #define LAPACK_ssycon LAPACK_GLOBAL(ssycon,SSYCON) #define LAPACK_dsycon LAPACK_GLOBAL(dsycon,DSYCON) #define LAPACK_csycon LAPACK_GLOBAL(csycon,CSYCON) #define LAPACK_zsycon LAPACK_GLOBAL(zsycon,ZSYCON) #define LAPACK_checon LAPACK_GLOBAL(checon,CHECON) #define LAPACK_zhecon LAPACK_GLOBAL(zhecon,ZHECON) #define LAPACK_sspcon LAPACK_GLOBAL(sspcon,SSPCON) #define LAPACK_dspcon LAPACK_GLOBAL(dspcon,DSPCON) #define LAPACK_cspcon LAPACK_GLOBAL(cspcon,CSPCON) #define LAPACK_zspcon LAPACK_GLOBAL(zspcon,ZSPCON) #define LAPACK_chpcon LAPACK_GLOBAL(chpcon,CHPCON) #define LAPACK_zhpcon LAPACK_GLOBAL(zhpcon,ZHPCON) #define LAPACK_strcon LAPACK_GLOBAL(strcon,STRCON) #define LAPACK_dtrcon 
LAPACK_GLOBAL(dtrcon,DTRCON) #define LAPACK_ctrcon LAPACK_GLOBAL(ctrcon,CTRCON) #define LAPACK_ztrcon LAPACK_GLOBAL(ztrcon,ZTRCON) #define LAPACK_stpcon LAPACK_GLOBAL(stpcon,STPCON) #define LAPACK_dtpcon LAPACK_GLOBAL(dtpcon,DTPCON) #define LAPACK_ctpcon LAPACK_GLOBAL(ctpcon,CTPCON) #define LAPACK_ztpcon LAPACK_GLOBAL(ztpcon,ZTPCON) #define LAPACK_stbcon LAPACK_GLOBAL(stbcon,STBCON) #define LAPACK_dtbcon LAPACK_GLOBAL(dtbcon,DTBCON) #define LAPACK_ctbcon LAPACK_GLOBAL(ctbcon,CTBCON) #define LAPACK_ztbcon LAPACK_GLOBAL(ztbcon,ZTBCON) #define LAPACK_sgerfs LAPACK_GLOBAL(sgerfs,SGERFS) #define LAPACK_dgerfs LAPACK_GLOBAL(dgerfs,DGERFS) #define LAPACK_cgerfs LAPACK_GLOBAL(cgerfs,CGERFS) #define LAPACK_zgerfs LAPACK_GLOBAL(zgerfs,ZGERFS) #define LAPACK_dgerfsx LAPACK_GLOBAL(dgerfsx,DGERFSX) #define LAPACK_sgerfsx LAPACK_GLOBAL(sgerfsx,SGERFSX) #define LAPACK_zgerfsx LAPACK_GLOBAL(zgerfsx,ZGERFSX) #define LAPACK_cgerfsx LAPACK_GLOBAL(cgerfsx,CGERFSX) #define LAPACK_sgbrfs LAPACK_GLOBAL(sgbrfs,SGBRFS) #define LAPACK_dgbrfs LAPACK_GLOBAL(dgbrfs,DGBRFS) #define LAPACK_cgbrfs LAPACK_GLOBAL(cgbrfs,CGBRFS) #define LAPACK_zgbrfs LAPACK_GLOBAL(zgbrfs,ZGBRFS) #define LAPACK_dgbrfsx LAPACK_GLOBAL(dgbrfsx,DGBRFSX) #define LAPACK_sgbrfsx LAPACK_GLOBAL(sgbrfsx,SGBRFSX) #define LAPACK_zgbrfsx LAPACK_GLOBAL(zgbrfsx,ZGBRFSX) #define LAPACK_cgbrfsx LAPACK_GLOBAL(cgbrfsx,CGBRFSX) #define LAPACK_sgtrfs LAPACK_GLOBAL(sgtrfs,SGTRFS) #define LAPACK_dgtrfs LAPACK_GLOBAL(dgtrfs,DGTRFS) #define LAPACK_cgtrfs LAPACK_GLOBAL(cgtrfs,CGTRFS) #define LAPACK_zgtrfs LAPACK_GLOBAL(zgtrfs,ZGTRFS) #define LAPACK_sporfs LAPACK_GLOBAL(sporfs,SPORFS) #define LAPACK_dporfs LAPACK_GLOBAL(dporfs,DPORFS) #define LAPACK_cporfs LAPACK_GLOBAL(cporfs,CPORFS) #define LAPACK_zporfs LAPACK_GLOBAL(zporfs,ZPORFS) #define LAPACK_dporfsx LAPACK_GLOBAL(dporfsx,DPORFSX) #define LAPACK_sporfsx LAPACK_GLOBAL(sporfsx,SPORFSX) #define LAPACK_zporfsx LAPACK_GLOBAL(zporfsx,ZPORFSX) #define LAPACK_cporfsx LAPACK_GLOBAL(cporfsx,CPORFSX) #define LAPACK_spprfs LAPACK_GLOBAL(spprfs,SPPRFS) #define LAPACK_dpprfs LAPACK_GLOBAL(dpprfs,DPPRFS) #define LAPACK_cpprfs LAPACK_GLOBAL(cpprfs,CPPRFS) #define LAPACK_zpprfs LAPACK_GLOBAL(zpprfs,ZPPRFS) #define LAPACK_spbrfs LAPACK_GLOBAL(spbrfs,SPBRFS) #define LAPACK_dpbrfs LAPACK_GLOBAL(dpbrfs,DPBRFS) #define LAPACK_cpbrfs LAPACK_GLOBAL(cpbrfs,CPBRFS) #define LAPACK_zpbrfs LAPACK_GLOBAL(zpbrfs,ZPBRFS) #define LAPACK_sptrfs LAPACK_GLOBAL(sptrfs,SPTRFS) #define LAPACK_dptrfs LAPACK_GLOBAL(dptrfs,DPTRFS) #define LAPACK_cptrfs LAPACK_GLOBAL(cptrfs,CPTRFS) #define LAPACK_zptrfs LAPACK_GLOBAL(zptrfs,ZPTRFS) #define LAPACK_ssyrfs LAPACK_GLOBAL(ssyrfs,SSYRFS) #define LAPACK_dsyrfs LAPACK_GLOBAL(dsyrfs,DSYRFS) #define LAPACK_csyrfs LAPACK_GLOBAL(csyrfs,CSYRFS) #define LAPACK_zsyrfs LAPACK_GLOBAL(zsyrfs,ZSYRFS) #define LAPACK_dsyrfsx LAPACK_GLOBAL(dsyrfsx,DSYRFSX) #define LAPACK_ssyrfsx LAPACK_GLOBAL(ssyrfsx,SSYRFSX) #define LAPACK_zsyrfsx LAPACK_GLOBAL(zsyrfsx,ZSYRFSX) #define LAPACK_csyrfsx LAPACK_GLOBAL(csyrfsx,CSYRFSX) #define LAPACK_cherfs LAPACK_GLOBAL(cherfs,CHERFS) #define LAPACK_zherfs LAPACK_GLOBAL(zherfs,ZHERFS) #define LAPACK_zherfsx LAPACK_GLOBAL(zherfsx,ZHERFSX) #define LAPACK_cherfsx LAPACK_GLOBAL(cherfsx,CHERFSX) #define LAPACK_ssprfs LAPACK_GLOBAL(ssprfs,SSPRFS) #define LAPACK_dsprfs LAPACK_GLOBAL(dsprfs,DSPRFS) #define LAPACK_csprfs LAPACK_GLOBAL(csprfs,CSPRFS) #define LAPACK_zsprfs LAPACK_GLOBAL(zsprfs,ZSPRFS) #define LAPACK_chprfs LAPACK_GLOBAL(chprfs,CHPRFS) #define LAPACK_zhprfs LAPACK_GLOBAL(zhprfs,ZHPRFS) 
#define LAPACK_strrfs LAPACK_GLOBAL(strrfs,STRRFS) #define LAPACK_dtrrfs LAPACK_GLOBAL(dtrrfs,DTRRFS) #define LAPACK_ctrrfs LAPACK_GLOBAL(ctrrfs,CTRRFS) #define LAPACK_ztrrfs LAPACK_GLOBAL(ztrrfs,ZTRRFS) #define LAPACK_stprfs LAPACK_GLOBAL(stprfs,STPRFS) #define LAPACK_dtprfs LAPACK_GLOBAL(dtprfs,DTPRFS) #define LAPACK_ctprfs LAPACK_GLOBAL(ctprfs,CTPRFS) #define LAPACK_ztprfs LAPACK_GLOBAL(ztprfs,ZTPRFS) #define LAPACK_stbrfs LAPACK_GLOBAL(stbrfs,STBRFS) #define LAPACK_dtbrfs LAPACK_GLOBAL(dtbrfs,DTBRFS) #define LAPACK_ctbrfs LAPACK_GLOBAL(ctbrfs,CTBRFS) #define LAPACK_ztbrfs LAPACK_GLOBAL(ztbrfs,ZTBRFS) #define LAPACK_sgetri LAPACK_GLOBAL(sgetri,SGETRI) #define LAPACK_dgetri LAPACK_GLOBAL(dgetri,DGETRI) #define LAPACK_cgetri LAPACK_GLOBAL(cgetri,CGETRI) #define LAPACK_zgetri LAPACK_GLOBAL(zgetri,ZGETRI) #define LAPACK_spotri LAPACK_GLOBAL(spotri,SPOTRI) #define LAPACK_dpotri LAPACK_GLOBAL(dpotri,DPOTRI) #define LAPACK_cpotri LAPACK_GLOBAL(cpotri,CPOTRI) #define LAPACK_zpotri LAPACK_GLOBAL(zpotri,ZPOTRI) #define LAPACK_dpftri LAPACK_GLOBAL(dpftri,DPFTRI) #define LAPACK_spftri LAPACK_GLOBAL(spftri,SPFTRI) #define LAPACK_zpftri LAPACK_GLOBAL(zpftri,ZPFTRI) #define LAPACK_cpftri LAPACK_GLOBAL(cpftri,CPFTRI) #define LAPACK_spptri LAPACK_GLOBAL(spptri,SPPTRI) #define LAPACK_dpptri LAPACK_GLOBAL(dpptri,DPPTRI) #define LAPACK_cpptri LAPACK_GLOBAL(cpptri,CPPTRI) #define LAPACK_zpptri LAPACK_GLOBAL(zpptri,ZPPTRI) #define LAPACK_ssytri LAPACK_GLOBAL(ssytri,SSYTRI) #define LAPACK_dsytri LAPACK_GLOBAL(dsytri,DSYTRI) #define LAPACK_csytri LAPACK_GLOBAL(csytri,CSYTRI) #define LAPACK_zsytri LAPACK_GLOBAL(zsytri,ZSYTRI) #define LAPACK_chetri LAPACK_GLOBAL(chetri,CHETRI) #define LAPACK_zhetri LAPACK_GLOBAL(zhetri,ZHETRI) #define LAPACK_ssptri LAPACK_GLOBAL(ssptri,SSPTRI) #define LAPACK_dsptri LAPACK_GLOBAL(dsptri,DSPTRI) #define LAPACK_csptri LAPACK_GLOBAL(csptri,CSPTRI) #define LAPACK_zsptri LAPACK_GLOBAL(zsptri,ZSPTRI) #define LAPACK_chptri LAPACK_GLOBAL(chptri,CHPTRI) #define LAPACK_zhptri LAPACK_GLOBAL(zhptri,ZHPTRI) #define LAPACK_strtri LAPACK_GLOBAL(strtri,STRTRI) #define LAPACK_dtrtri LAPACK_GLOBAL(dtrtri,DTRTRI) #define LAPACK_ctrtri LAPACK_GLOBAL(ctrtri,CTRTRI) #define LAPACK_ztrtri LAPACK_GLOBAL(ztrtri,ZTRTRI) #define LAPACK_dtftri LAPACK_GLOBAL(dtftri,DTFTRI) #define LAPACK_stftri LAPACK_GLOBAL(stftri,STFTRI) #define LAPACK_ztftri LAPACK_GLOBAL(ztftri,ZTFTRI) #define LAPACK_ctftri LAPACK_GLOBAL(ctftri,CTFTRI) #define LAPACK_stptri LAPACK_GLOBAL(stptri,STPTRI) #define LAPACK_dtptri LAPACK_GLOBAL(dtptri,DTPTRI) #define LAPACK_ctptri LAPACK_GLOBAL(ctptri,CTPTRI) #define LAPACK_ztptri LAPACK_GLOBAL(ztptri,ZTPTRI) #define LAPACK_sgeequ LAPACK_GLOBAL(sgeequ,SGEEQU) #define LAPACK_dgeequ LAPACK_GLOBAL(dgeequ,DGEEQU) #define LAPACK_cgeequ LAPACK_GLOBAL(cgeequ,CGEEQU) #define LAPACK_zgeequ LAPACK_GLOBAL(zgeequ,ZGEEQU) #define LAPACK_dgeequb LAPACK_GLOBAL(dgeequb,DGEEQUB) #define LAPACK_sgeequb LAPACK_GLOBAL(sgeequb,SGEEQUB) #define LAPACK_zgeequb LAPACK_GLOBAL(zgeequb,ZGEEQUB) #define LAPACK_cgeequb LAPACK_GLOBAL(cgeequb,CGEEQUB) #define LAPACK_sgbequ LAPACK_GLOBAL(sgbequ,SGBEQU) #define LAPACK_dgbequ LAPACK_GLOBAL(dgbequ,DGBEQU) #define LAPACK_cgbequ LAPACK_GLOBAL(cgbequ,CGBEQU) #define LAPACK_zgbequ LAPACK_GLOBAL(zgbequ,ZGBEQU) #define LAPACK_dgbequb LAPACK_GLOBAL(dgbequb,DGBEQUB) #define LAPACK_sgbequb LAPACK_GLOBAL(sgbequb,SGBEQUB) #define LAPACK_zgbequb LAPACK_GLOBAL(zgbequb,ZGBEQUB) #define LAPACK_cgbequb LAPACK_GLOBAL(cgbequb,CGBEQUB) #define LAPACK_spoequ LAPACK_GLOBAL(spoequ,SPOEQU) #define 
LAPACK_dpoequ LAPACK_GLOBAL(dpoequ,DPOEQU) #define LAPACK_cpoequ LAPACK_GLOBAL(cpoequ,CPOEQU) #define LAPACK_zpoequ LAPACK_GLOBAL(zpoequ,ZPOEQU) #define LAPACK_dpoequb LAPACK_GLOBAL(dpoequb,DPOEQUB) #define LAPACK_spoequb LAPACK_GLOBAL(spoequb,SPOEQUB) #define LAPACK_zpoequb LAPACK_GLOBAL(zpoequb,ZPOEQUB) #define LAPACK_cpoequb LAPACK_GLOBAL(cpoequb,CPOEQUB) #define LAPACK_sppequ LAPACK_GLOBAL(sppequ,SPPEQU) #define LAPACK_dppequ LAPACK_GLOBAL(dppequ,DPPEQU) #define LAPACK_cppequ LAPACK_GLOBAL(cppequ,CPPEQU) #define LAPACK_zppequ LAPACK_GLOBAL(zppequ,ZPPEQU) #define LAPACK_spbequ LAPACK_GLOBAL(spbequ,SPBEQU) #define LAPACK_dpbequ LAPACK_GLOBAL(dpbequ,DPBEQU) #define LAPACK_cpbequ LAPACK_GLOBAL(cpbequ,CPBEQU) #define LAPACK_zpbequ LAPACK_GLOBAL(zpbequ,ZPBEQU) #define LAPACK_dsyequb LAPACK_GLOBAL(dsyequb,DSYEQUB) #define LAPACK_ssyequb LAPACK_GLOBAL(ssyequb,SSYEQUB) #define LAPACK_zsyequb LAPACK_GLOBAL(zsyequb,ZSYEQUB) #define LAPACK_csyequb LAPACK_GLOBAL(csyequb,CSYEQUB) #define LAPACK_zheequb LAPACK_GLOBAL(zheequb,ZHEEQUB) #define LAPACK_cheequb LAPACK_GLOBAL(cheequb,CHEEQUB) #define LAPACK_sgesv LAPACK_GLOBAL(sgesv,SGESV) #define LAPACK_dgesv LAPACK_GLOBAL(dgesv,DGESV) #define LAPACK_cgesv LAPACK_GLOBAL(cgesv,CGESV) #define LAPACK_zgesv LAPACK_GLOBAL(zgesv,ZGESV) #define LAPACK_dsgesv LAPACK_GLOBAL(dsgesv,DSGESV) #define LAPACK_zcgesv LAPACK_GLOBAL(zcgesv,ZCGESV) #define LAPACK_sgesvx LAPACK_GLOBAL(sgesvx,SGESVX) #define LAPACK_dgesvx LAPACK_GLOBAL(dgesvx,DGESVX) #define LAPACK_cgesvx LAPACK_GLOBAL(cgesvx,CGESVX) #define LAPACK_zgesvx LAPACK_GLOBAL(zgesvx,ZGESVX) #define LAPACK_dgesvxx LAPACK_GLOBAL(dgesvxx,DGESVXX) #define LAPACK_sgesvxx LAPACK_GLOBAL(sgesvxx,SGESVXX) #define LAPACK_zgesvxx LAPACK_GLOBAL(zgesvxx,ZGESVXX) #define LAPACK_cgesvxx LAPACK_GLOBAL(cgesvxx,CGESVXX) #define LAPACK_sgbsv LAPACK_GLOBAL(sgbsv,SGBSV) #define LAPACK_dgbsv LAPACK_GLOBAL(dgbsv,DGBSV) #define LAPACK_cgbsv LAPACK_GLOBAL(cgbsv,CGBSV) #define LAPACK_zgbsv LAPACK_GLOBAL(zgbsv,ZGBSV) #define LAPACK_sgbsvx LAPACK_GLOBAL(sgbsvx,SGBSVX) #define LAPACK_dgbsvx LAPACK_GLOBAL(dgbsvx,DGBSVX) #define LAPACK_cgbsvx LAPACK_GLOBAL(cgbsvx,CGBSVX) #define LAPACK_zgbsvx LAPACK_GLOBAL(zgbsvx,ZGBSVX) #define LAPACK_dgbsvxx LAPACK_GLOBAL(dgbsvxx,DGBSVXX) #define LAPACK_sgbsvxx LAPACK_GLOBAL(sgbsvxx,SGBSVXX) #define LAPACK_zgbsvxx LAPACK_GLOBAL(zgbsvxx,ZGBSVXX) #define LAPACK_cgbsvxx LAPACK_GLOBAL(cgbsvxx,CGBSVXX) #define LAPACK_sgtsv LAPACK_GLOBAL(sgtsv,SGTSV) #define LAPACK_dgtsv LAPACK_GLOBAL(dgtsv,DGTSV) #define LAPACK_cgtsv LAPACK_GLOBAL(cgtsv,CGTSV) #define LAPACK_zgtsv LAPACK_GLOBAL(zgtsv,ZGTSV) #define LAPACK_sgtsvx LAPACK_GLOBAL(sgtsvx,SGTSVX) #define LAPACK_dgtsvx LAPACK_GLOBAL(dgtsvx,DGTSVX) #define LAPACK_cgtsvx LAPACK_GLOBAL(cgtsvx,CGTSVX) #define LAPACK_zgtsvx LAPACK_GLOBAL(zgtsvx,ZGTSVX) #define LAPACK_sposv LAPACK_GLOBAL(sposv,SPOSV) #define LAPACK_dposv LAPACK_GLOBAL(dposv,DPOSV) #define LAPACK_cposv LAPACK_GLOBAL(cposv,CPOSV) #define LAPACK_zposv LAPACK_GLOBAL(zposv,ZPOSV) #define LAPACK_dsposv LAPACK_GLOBAL(dsposv,DSPOSV) #define LAPACK_zcposv LAPACK_GLOBAL(zcposv,ZCPOSV) #define LAPACK_sposvx LAPACK_GLOBAL(sposvx,SPOSVX) #define LAPACK_dposvx LAPACK_GLOBAL(dposvx,DPOSVX) #define LAPACK_cposvx LAPACK_GLOBAL(cposvx,CPOSVX) #define LAPACK_zposvx LAPACK_GLOBAL(zposvx,ZPOSVX) #define LAPACK_dposvxx LAPACK_GLOBAL(dposvxx,DPOSVXX) #define LAPACK_sposvxx LAPACK_GLOBAL(sposvxx,SPOSVXX) #define LAPACK_zposvxx LAPACK_GLOBAL(zposvxx,ZPOSVXX) #define LAPACK_cposvxx LAPACK_GLOBAL(cposvxx,CPOSVXX) #define LAPACK_sppsv 
LAPACK_GLOBAL(sppsv,SPPSV) #define LAPACK_dppsv LAPACK_GLOBAL(dppsv,DPPSV) #define LAPACK_cppsv LAPACK_GLOBAL(cppsv,CPPSV) #define LAPACK_zppsv LAPACK_GLOBAL(zppsv,ZPPSV) #define LAPACK_sppsvx LAPACK_GLOBAL(sppsvx,SPPSVX) #define LAPACK_dppsvx LAPACK_GLOBAL(dppsvx,DPPSVX) #define LAPACK_cppsvx LAPACK_GLOBAL(cppsvx,CPPSVX) #define LAPACK_zppsvx LAPACK_GLOBAL(zppsvx,ZPPSVX) #define LAPACK_spbsv LAPACK_GLOBAL(spbsv,SPBSV) #define LAPACK_dpbsv LAPACK_GLOBAL(dpbsv,DPBSV) #define LAPACK_cpbsv LAPACK_GLOBAL(cpbsv,CPBSV) #define LAPACK_zpbsv LAPACK_GLOBAL(zpbsv,ZPBSV) #define LAPACK_spbsvx LAPACK_GLOBAL(spbsvx,SPBSVX) #define LAPACK_dpbsvx LAPACK_GLOBAL(dpbsvx,DPBSVX) #define LAPACK_cpbsvx LAPACK_GLOBAL(cpbsvx,CPBSVX) #define LAPACK_zpbsvx LAPACK_GLOBAL(zpbsvx,ZPBSVX) #define LAPACK_sptsv LAPACK_GLOBAL(sptsv,SPTSV) #define LAPACK_dptsv LAPACK_GLOBAL(dptsv,DPTSV) #define LAPACK_cptsv LAPACK_GLOBAL(cptsv,CPTSV) #define LAPACK_zptsv LAPACK_GLOBAL(zptsv,ZPTSV) #define LAPACK_sptsvx LAPACK_GLOBAL(sptsvx,SPTSVX) #define LAPACK_dptsvx LAPACK_GLOBAL(dptsvx,DPTSVX) #define LAPACK_cptsvx LAPACK_GLOBAL(cptsvx,CPTSVX) #define LAPACK_zptsvx LAPACK_GLOBAL(zptsvx,ZPTSVX) #define LAPACK_ssysv LAPACK_GLOBAL(ssysv,SSYSV) #define LAPACK_dsysv LAPACK_GLOBAL(dsysv,DSYSV) #define LAPACK_csysv LAPACK_GLOBAL(csysv,CSYSV) #define LAPACK_zsysv LAPACK_GLOBAL(zsysv,ZSYSV) #define LAPACK_ssysvx LAPACK_GLOBAL(ssysvx,SSYSVX) #define LAPACK_dsysvx LAPACK_GLOBAL(dsysvx,DSYSVX) #define LAPACK_csysvx LAPACK_GLOBAL(csysvx,CSYSVX) #define LAPACK_zsysvx LAPACK_GLOBAL(zsysvx,ZSYSVX) #define LAPACK_dsysvxx LAPACK_GLOBAL(dsysvxx,DSYSVXX) #define LAPACK_ssysvxx LAPACK_GLOBAL(ssysvxx,SSYSVXX) #define LAPACK_zsysvxx LAPACK_GLOBAL(zsysvxx,ZSYSVXX) #define LAPACK_csysvxx LAPACK_GLOBAL(csysvxx,CSYSVXX) #define LAPACK_chesv LAPACK_GLOBAL(chesv,CHESV) #define LAPACK_zhesv LAPACK_GLOBAL(zhesv,ZHESV) #define LAPACK_chesvx LAPACK_GLOBAL(chesvx,CHESVX) #define LAPACK_zhesvx LAPACK_GLOBAL(zhesvx,ZHESVX) #define LAPACK_zhesvxx LAPACK_GLOBAL(zhesvxx,ZHESVXX) #define LAPACK_chesvxx LAPACK_GLOBAL(chesvxx,CHESVXX) #define LAPACK_sspsv LAPACK_GLOBAL(sspsv,SSPSV) #define LAPACK_dspsv LAPACK_GLOBAL(dspsv,DSPSV) #define LAPACK_cspsv LAPACK_GLOBAL(cspsv,CSPSV) #define LAPACK_zspsv LAPACK_GLOBAL(zspsv,ZSPSV) #define LAPACK_sspsvx LAPACK_GLOBAL(sspsvx,SSPSVX) #define LAPACK_dspsvx LAPACK_GLOBAL(dspsvx,DSPSVX) #define LAPACK_cspsvx LAPACK_GLOBAL(cspsvx,CSPSVX) #define LAPACK_zspsvx LAPACK_GLOBAL(zspsvx,ZSPSVX) #define LAPACK_chpsv LAPACK_GLOBAL(chpsv,CHPSV) #define LAPACK_zhpsv LAPACK_GLOBAL(zhpsv,ZHPSV) #define LAPACK_chpsvx LAPACK_GLOBAL(chpsvx,CHPSVX) #define LAPACK_zhpsvx LAPACK_GLOBAL(zhpsvx,ZHPSVX) #define LAPACK_sgeqrf LAPACK_GLOBAL(sgeqrf,SGEQRF) #define LAPACK_dgeqrf LAPACK_GLOBAL(dgeqrf,DGEQRF) #define LAPACK_cgeqrf LAPACK_GLOBAL(cgeqrf,CGEQRF) #define LAPACK_zgeqrf LAPACK_GLOBAL(zgeqrf,ZGEQRF) #define LAPACK_sgeqpf LAPACK_GLOBAL(sgeqpf,SGEQPF) #define LAPACK_dgeqpf LAPACK_GLOBAL(dgeqpf,DGEQPF) #define LAPACK_cgeqpf LAPACK_GLOBAL(cgeqpf,CGEQPF) #define LAPACK_zgeqpf LAPACK_GLOBAL(zgeqpf,ZGEQPF) #define LAPACK_sgeqp3 LAPACK_GLOBAL(sgeqp3,SGEQP3) #define LAPACK_dgeqp3 LAPACK_GLOBAL(dgeqp3,DGEQP3) #define LAPACK_cgeqp3 LAPACK_GLOBAL(cgeqp3,CGEQP3) #define LAPACK_zgeqp3 LAPACK_GLOBAL(zgeqp3,ZGEQP3) #define LAPACK_sorgqr LAPACK_GLOBAL(sorgqr,SORGQR) #define LAPACK_dorgqr LAPACK_GLOBAL(dorgqr,DORGQR) #define LAPACK_sormqr LAPACK_GLOBAL(sormqr,SORMQR) #define LAPACK_dormqr LAPACK_GLOBAL(dormqr,DORMQR) #define LAPACK_cungqr LAPACK_GLOBAL(cungqr,CUNGQR) #define 
LAPACK_zungqr LAPACK_GLOBAL(zungqr,ZUNGQR) #define LAPACK_cunmqr LAPACK_GLOBAL(cunmqr,CUNMQR) #define LAPACK_zunmqr LAPACK_GLOBAL(zunmqr,ZUNMQR) #define LAPACK_sgelqf LAPACK_GLOBAL(sgelqf,SGELQF) #define LAPACK_dgelqf LAPACK_GLOBAL(dgelqf,DGELQF) #define LAPACK_cgelqf LAPACK_GLOBAL(cgelqf,CGELQF) #define LAPACK_zgelqf LAPACK_GLOBAL(zgelqf,ZGELQF) #define LAPACK_sorglq LAPACK_GLOBAL(sorglq,SORGLQ) #define LAPACK_dorglq LAPACK_GLOBAL(dorglq,DORGLQ) #define LAPACK_sormlq LAPACK_GLOBAL(sormlq,SORMLQ) #define LAPACK_dormlq LAPACK_GLOBAL(dormlq,DORMLQ) #define LAPACK_cunglq LAPACK_GLOBAL(cunglq,CUNGLQ) #define LAPACK_zunglq LAPACK_GLOBAL(zunglq,ZUNGLQ) #define LAPACK_cunmlq LAPACK_GLOBAL(cunmlq,CUNMLQ) #define LAPACK_zunmlq LAPACK_GLOBAL(zunmlq,ZUNMLQ) #define LAPACK_sgeqlf LAPACK_GLOBAL(sgeqlf,SGEQLF) #define LAPACK_dgeqlf LAPACK_GLOBAL(dgeqlf,DGEQLF) #define LAPACK_cgeqlf LAPACK_GLOBAL(cgeqlf,CGEQLF) #define LAPACK_zgeqlf LAPACK_GLOBAL(zgeqlf,ZGEQLF) #define LAPACK_sorgql LAPACK_GLOBAL(sorgql,SORGQL) #define LAPACK_dorgql LAPACK_GLOBAL(dorgql,DORGQL) #define LAPACK_cungql LAPACK_GLOBAL(cungql,CUNGQL) #define LAPACK_zungql LAPACK_GLOBAL(zungql,ZUNGQL) #define LAPACK_sormql LAPACK_GLOBAL(sormql,SORMQL) #define LAPACK_dormql LAPACK_GLOBAL(dormql,DORMQL) #define LAPACK_cunmql LAPACK_GLOBAL(cunmql,CUNMQL) #define LAPACK_zunmql LAPACK_GLOBAL(zunmql,ZUNMQL) #define LAPACK_sgerqf LAPACK_GLOBAL(sgerqf,SGERQF) #define LAPACK_dgerqf LAPACK_GLOBAL(dgerqf,DGERQF) #define LAPACK_cgerqf LAPACK_GLOBAL(cgerqf,CGERQF) #define LAPACK_zgerqf LAPACK_GLOBAL(zgerqf,ZGERQF) #define LAPACK_sorgrq LAPACK_GLOBAL(sorgrq,SORGRQ) #define LAPACK_dorgrq LAPACK_GLOBAL(dorgrq,DORGRQ) #define LAPACK_cungrq LAPACK_GLOBAL(cungrq,CUNGRQ) #define LAPACK_zungrq LAPACK_GLOBAL(zungrq,ZUNGRQ) #define LAPACK_sormrq LAPACK_GLOBAL(sormrq,SORMRQ) #define LAPACK_dormrq LAPACK_GLOBAL(dormrq,DORMRQ) #define LAPACK_cunmrq LAPACK_GLOBAL(cunmrq,CUNMRQ) #define LAPACK_zunmrq LAPACK_GLOBAL(zunmrq,ZUNMRQ) #define LAPACK_stzrzf LAPACK_GLOBAL(stzrzf,STZRZF) #define LAPACK_dtzrzf LAPACK_GLOBAL(dtzrzf,DTZRZF) #define LAPACK_ctzrzf LAPACK_GLOBAL(ctzrzf,CTZRZF) #define LAPACK_ztzrzf LAPACK_GLOBAL(ztzrzf,ZTZRZF) #define LAPACK_sormrz LAPACK_GLOBAL(sormrz,SORMRZ) #define LAPACK_dormrz LAPACK_GLOBAL(dormrz,DORMRZ) #define LAPACK_cunmrz LAPACK_GLOBAL(cunmrz,CUNMRZ) #define LAPACK_zunmrz LAPACK_GLOBAL(zunmrz,ZUNMRZ) #define LAPACK_sggqrf LAPACK_GLOBAL(sggqrf,SGGQRF) #define LAPACK_dggqrf LAPACK_GLOBAL(dggqrf,DGGQRF) #define LAPACK_cggqrf LAPACK_GLOBAL(cggqrf,CGGQRF) #define LAPACK_zggqrf LAPACK_GLOBAL(zggqrf,ZGGQRF) #define LAPACK_sggrqf LAPACK_GLOBAL(sggrqf,SGGRQF) #define LAPACK_dggrqf LAPACK_GLOBAL(dggrqf,DGGRQF) #define LAPACK_cggrqf LAPACK_GLOBAL(cggrqf,CGGRQF) #define LAPACK_zggrqf LAPACK_GLOBAL(zggrqf,ZGGRQF) #define LAPACK_sgebrd LAPACK_GLOBAL(sgebrd,SGEBRD) #define LAPACK_dgebrd LAPACK_GLOBAL(dgebrd,DGEBRD) #define LAPACK_cgebrd LAPACK_GLOBAL(cgebrd,CGEBRD) #define LAPACK_zgebrd LAPACK_GLOBAL(zgebrd,ZGEBRD) #define LAPACK_sgbbrd LAPACK_GLOBAL(sgbbrd,SGBBRD) #define LAPACK_dgbbrd LAPACK_GLOBAL(dgbbrd,DGBBRD) #define LAPACK_cgbbrd LAPACK_GLOBAL(cgbbrd,CGBBRD) #define LAPACK_zgbbrd LAPACK_GLOBAL(zgbbrd,ZGBBRD) #define LAPACK_sorgbr LAPACK_GLOBAL(sorgbr,SORGBR) #define LAPACK_dorgbr LAPACK_GLOBAL(dorgbr,DORGBR) #define LAPACK_sormbr LAPACK_GLOBAL(sormbr,SORMBR) #define LAPACK_dormbr LAPACK_GLOBAL(dormbr,DORMBR) #define LAPACK_cungbr LAPACK_GLOBAL(cungbr,CUNGBR) #define LAPACK_zungbr LAPACK_GLOBAL(zungbr,ZUNGBR) #define LAPACK_cunmbr 
LAPACK_GLOBAL(cunmbr,CUNMBR) #define LAPACK_zunmbr LAPACK_GLOBAL(zunmbr,ZUNMBR) #define LAPACK_sbdsqr LAPACK_GLOBAL(sbdsqr,SBDSQR) #define LAPACK_dbdsqr LAPACK_GLOBAL(dbdsqr,DBDSQR) #define LAPACK_cbdsqr LAPACK_GLOBAL(cbdsqr,CBDSQR) #define LAPACK_zbdsqr LAPACK_GLOBAL(zbdsqr,ZBDSQR) #define LAPACK_sbdsdc LAPACK_GLOBAL(sbdsdc,SBDSDC) #define LAPACK_dbdsdc LAPACK_GLOBAL(dbdsdc,DBDSDC) #define LAPACK_ssytrd LAPACK_GLOBAL(ssytrd,SSYTRD) #define LAPACK_dsytrd LAPACK_GLOBAL(dsytrd,DSYTRD) #define LAPACK_sorgtr LAPACK_GLOBAL(sorgtr,SORGTR) #define LAPACK_dorgtr LAPACK_GLOBAL(dorgtr,DORGTR) #define LAPACK_sormtr LAPACK_GLOBAL(sormtr,SORMTR) #define LAPACK_dormtr LAPACK_GLOBAL(dormtr,DORMTR) #define LAPACK_chetrd LAPACK_GLOBAL(chetrd,CHETRD) #define LAPACK_zhetrd LAPACK_GLOBAL(zhetrd,ZHETRD) #define LAPACK_cungtr LAPACK_GLOBAL(cungtr,CUNGTR) #define LAPACK_zungtr LAPACK_GLOBAL(zungtr,ZUNGTR) #define LAPACK_cunmtr LAPACK_GLOBAL(cunmtr,CUNMTR) #define LAPACK_zunmtr LAPACK_GLOBAL(zunmtr,ZUNMTR) #define LAPACK_ssptrd LAPACK_GLOBAL(ssptrd,SSPTRD) #define LAPACK_dsptrd LAPACK_GLOBAL(dsptrd,DSPTRD) #define LAPACK_sopgtr LAPACK_GLOBAL(sopgtr,SOPGTR) #define LAPACK_dopgtr LAPACK_GLOBAL(dopgtr,DOPGTR) #define LAPACK_sopmtr LAPACK_GLOBAL(sopmtr,SOPMTR) #define LAPACK_dopmtr LAPACK_GLOBAL(dopmtr,DOPMTR) #define LAPACK_chptrd LAPACK_GLOBAL(chptrd,CHPTRD) #define LAPACK_zhptrd LAPACK_GLOBAL(zhptrd,ZHPTRD) #define LAPACK_cupgtr LAPACK_GLOBAL(cupgtr,CUPGTR) #define LAPACK_zupgtr LAPACK_GLOBAL(zupgtr,ZUPGTR) #define LAPACK_cupmtr LAPACK_GLOBAL(cupmtr,CUPMTR) #define LAPACK_zupmtr LAPACK_GLOBAL(zupmtr,ZUPMTR) #define LAPACK_ssbtrd LAPACK_GLOBAL(ssbtrd,SSBTRD) #define LAPACK_dsbtrd LAPACK_GLOBAL(dsbtrd,DSBTRD) #define LAPACK_chbtrd LAPACK_GLOBAL(chbtrd,CHBTRD) #define LAPACK_zhbtrd LAPACK_GLOBAL(zhbtrd,ZHBTRD) #define LAPACK_ssterf LAPACK_GLOBAL(ssterf,SSTERF) #define LAPACK_dsterf LAPACK_GLOBAL(dsterf,DSTERF) #define LAPACK_ssteqr LAPACK_GLOBAL(ssteqr,SSTEQR) #define LAPACK_dsteqr LAPACK_GLOBAL(dsteqr,DSTEQR) #define LAPACK_csteqr LAPACK_GLOBAL(csteqr,CSTEQR) #define LAPACK_zsteqr LAPACK_GLOBAL(zsteqr,ZSTEQR) #define LAPACK_sstemr LAPACK_GLOBAL(sstemr,SSTEMR) #define LAPACK_dstemr LAPACK_GLOBAL(dstemr,DSTEMR) #define LAPACK_cstemr LAPACK_GLOBAL(cstemr,CSTEMR) #define LAPACK_zstemr LAPACK_GLOBAL(zstemr,ZSTEMR) #define LAPACK_sstedc LAPACK_GLOBAL(sstedc,SSTEDC) #define LAPACK_dstedc LAPACK_GLOBAL(dstedc,DSTEDC) #define LAPACK_cstedc LAPACK_GLOBAL(cstedc,CSTEDC) #define LAPACK_zstedc LAPACK_GLOBAL(zstedc,ZSTEDC) #define LAPACK_sstegr LAPACK_GLOBAL(sstegr,SSTEGR) #define LAPACK_dstegr LAPACK_GLOBAL(dstegr,DSTEGR) #define LAPACK_cstegr LAPACK_GLOBAL(cstegr,CSTEGR) #define LAPACK_zstegr LAPACK_GLOBAL(zstegr,ZSTEGR) #define LAPACK_spteqr LAPACK_GLOBAL(spteqr,SPTEQR) #define LAPACK_dpteqr LAPACK_GLOBAL(dpteqr,DPTEQR) #define LAPACK_cpteqr LAPACK_GLOBAL(cpteqr,CPTEQR) #define LAPACK_zpteqr LAPACK_GLOBAL(zpteqr,ZPTEQR) #define LAPACK_sstebz LAPACK_GLOBAL(sstebz,SSTEBZ) #define LAPACK_dstebz LAPACK_GLOBAL(dstebz,DSTEBZ) #define LAPACK_sstein LAPACK_GLOBAL(sstein,SSTEIN) #define LAPACK_dstein LAPACK_GLOBAL(dstein,DSTEIN) #define LAPACK_cstein LAPACK_GLOBAL(cstein,CSTEIN) #define LAPACK_zstein LAPACK_GLOBAL(zstein,ZSTEIN) #define LAPACK_sdisna LAPACK_GLOBAL(sdisna,SDISNA) #define LAPACK_ddisna LAPACK_GLOBAL(ddisna,DDISNA) #define LAPACK_ssygst LAPACK_GLOBAL(ssygst,SSYGST) #define LAPACK_dsygst LAPACK_GLOBAL(dsygst,DSYGST) #define LAPACK_chegst LAPACK_GLOBAL(chegst,CHEGST) #define LAPACK_zhegst LAPACK_GLOBAL(zhegst,ZHEGST) 
#define LAPACK_sspgst LAPACK_GLOBAL(sspgst,SSPGST) #define LAPACK_dspgst LAPACK_GLOBAL(dspgst,DSPGST) #define LAPACK_chpgst LAPACK_GLOBAL(chpgst,CHPGST) #define LAPACK_zhpgst LAPACK_GLOBAL(zhpgst,ZHPGST) #define LAPACK_ssbgst LAPACK_GLOBAL(ssbgst,SSBGST) #define LAPACK_dsbgst LAPACK_GLOBAL(dsbgst,DSBGST) #define LAPACK_chbgst LAPACK_GLOBAL(chbgst,CHBGST) #define LAPACK_zhbgst LAPACK_GLOBAL(zhbgst,ZHBGST) #define LAPACK_spbstf LAPACK_GLOBAL(spbstf,SPBSTF) #define LAPACK_dpbstf LAPACK_GLOBAL(dpbstf,DPBSTF) #define LAPACK_cpbstf LAPACK_GLOBAL(cpbstf,CPBSTF) #define LAPACK_zpbstf LAPACK_GLOBAL(zpbstf,ZPBSTF) #define LAPACK_sgehrd LAPACK_GLOBAL(sgehrd,SGEHRD) #define LAPACK_dgehrd LAPACK_GLOBAL(dgehrd,DGEHRD) #define LAPACK_cgehrd LAPACK_GLOBAL(cgehrd,CGEHRD) #define LAPACK_zgehrd LAPACK_GLOBAL(zgehrd,ZGEHRD) #define LAPACK_sorghr LAPACK_GLOBAL(sorghr,SORGHR) #define LAPACK_dorghr LAPACK_GLOBAL(dorghr,DORGHR) #define LAPACK_sormhr LAPACK_GLOBAL(sormhr,SORMHR) #define LAPACK_dormhr LAPACK_GLOBAL(dormhr,DORMHR) #define LAPACK_cunghr LAPACK_GLOBAL(cunghr,CUNGHR) #define LAPACK_zunghr LAPACK_GLOBAL(zunghr,ZUNGHR) #define LAPACK_cunmhr LAPACK_GLOBAL(cunmhr,CUNMHR) #define LAPACK_zunmhr LAPACK_GLOBAL(zunmhr,ZUNMHR) #define LAPACK_sgebal LAPACK_GLOBAL(sgebal,SGEBAL) #define LAPACK_dgebal LAPACK_GLOBAL(dgebal,DGEBAL) #define LAPACK_cgebal LAPACK_GLOBAL(cgebal,CGEBAL) #define LAPACK_zgebal LAPACK_GLOBAL(zgebal,ZGEBAL) #define LAPACK_sgebak LAPACK_GLOBAL(sgebak,SGEBAK) #define LAPACK_dgebak LAPACK_GLOBAL(dgebak,DGEBAK) #define LAPACK_cgebak LAPACK_GLOBAL(cgebak,CGEBAK) #define LAPACK_zgebak LAPACK_GLOBAL(zgebak,ZGEBAK) #define LAPACK_shseqr LAPACK_GLOBAL(shseqr,SHSEQR) #define LAPACK_dhseqr LAPACK_GLOBAL(dhseqr,DHSEQR) #define LAPACK_chseqr LAPACK_GLOBAL(chseqr,CHSEQR) #define LAPACK_zhseqr LAPACK_GLOBAL(zhseqr,ZHSEQR) #define LAPACK_shsein LAPACK_GLOBAL(shsein,SHSEIN) #define LAPACK_dhsein LAPACK_GLOBAL(dhsein,DHSEIN) #define LAPACK_chsein LAPACK_GLOBAL(chsein,CHSEIN) #define LAPACK_zhsein LAPACK_GLOBAL(zhsein,ZHSEIN) #define LAPACK_strevc LAPACK_GLOBAL(strevc,STREVC) #define LAPACK_dtrevc LAPACK_GLOBAL(dtrevc,DTREVC) #define LAPACK_ctrevc LAPACK_GLOBAL(ctrevc,CTREVC) #define LAPACK_ztrevc LAPACK_GLOBAL(ztrevc,ZTREVC) #define LAPACK_strsna LAPACK_GLOBAL(strsna,STRSNA) #define LAPACK_dtrsna LAPACK_GLOBAL(dtrsna,DTRSNA) #define LAPACK_ctrsna LAPACK_GLOBAL(ctrsna,CTRSNA) #define LAPACK_ztrsna LAPACK_GLOBAL(ztrsna,ZTRSNA) #define LAPACK_strexc LAPACK_GLOBAL(strexc,STREXC) #define LAPACK_dtrexc LAPACK_GLOBAL(dtrexc,DTREXC) #define LAPACK_ctrexc LAPACK_GLOBAL(ctrexc,CTREXC) #define LAPACK_ztrexc LAPACK_GLOBAL(ztrexc,ZTREXC) #define LAPACK_strsen LAPACK_GLOBAL(strsen,STRSEN) #define LAPACK_dtrsen LAPACK_GLOBAL(dtrsen,DTRSEN) #define LAPACK_ctrsen LAPACK_GLOBAL(ctrsen,CTRSEN) #define LAPACK_ztrsen LAPACK_GLOBAL(ztrsen,ZTRSEN) #define LAPACK_strsyl LAPACK_GLOBAL(strsyl,STRSYL) #define LAPACK_dtrsyl LAPACK_GLOBAL(dtrsyl,DTRSYL) #define LAPACK_ctrsyl LAPACK_GLOBAL(ctrsyl,CTRSYL) #define LAPACK_ztrsyl LAPACK_GLOBAL(ztrsyl,ZTRSYL) #define LAPACK_sgghrd LAPACK_GLOBAL(sgghrd,SGGHRD) #define LAPACK_dgghrd LAPACK_GLOBAL(dgghrd,DGGHRD) #define LAPACK_cgghrd LAPACK_GLOBAL(cgghrd,CGGHRD) #define LAPACK_zgghrd LAPACK_GLOBAL(zgghrd,ZGGHRD) #define LAPACK_sggbal LAPACK_GLOBAL(sggbal,SGGBAL) #define LAPACK_dggbal LAPACK_GLOBAL(dggbal,DGGBAL) #define LAPACK_cggbal LAPACK_GLOBAL(cggbal,CGGBAL) #define LAPACK_zggbal LAPACK_GLOBAL(zggbal,ZGGBAL) #define LAPACK_sggbak LAPACK_GLOBAL(sggbak,SGGBAK) #define LAPACK_dggbak 
LAPACK_GLOBAL(dggbak,DGGBAK) #define LAPACK_cggbak LAPACK_GLOBAL(cggbak,CGGBAK) #define LAPACK_zggbak LAPACK_GLOBAL(zggbak,ZGGBAK) #define LAPACK_shgeqz LAPACK_GLOBAL(shgeqz,SHGEQZ) #define LAPACK_dhgeqz LAPACK_GLOBAL(dhgeqz,DHGEQZ) #define LAPACK_chgeqz LAPACK_GLOBAL(chgeqz,CHGEQZ) #define LAPACK_zhgeqz LAPACK_GLOBAL(zhgeqz,ZHGEQZ) #define LAPACK_stgevc LAPACK_GLOBAL(stgevc,STGEVC) #define LAPACK_dtgevc LAPACK_GLOBAL(dtgevc,DTGEVC) #define LAPACK_ctgevc LAPACK_GLOBAL(ctgevc,CTGEVC) #define LAPACK_ztgevc LAPACK_GLOBAL(ztgevc,ZTGEVC) #define LAPACK_stgexc LAPACK_GLOBAL(stgexc,STGEXC) #define LAPACK_dtgexc LAPACK_GLOBAL(dtgexc,DTGEXC) #define LAPACK_ctgexc LAPACK_GLOBAL(ctgexc,CTGEXC) #define LAPACK_ztgexc LAPACK_GLOBAL(ztgexc,ZTGEXC) #define LAPACK_stgsen LAPACK_GLOBAL(stgsen,STGSEN) #define LAPACK_dtgsen LAPACK_GLOBAL(dtgsen,DTGSEN) #define LAPACK_ctgsen LAPACK_GLOBAL(ctgsen,CTGSEN) #define LAPACK_ztgsen LAPACK_GLOBAL(ztgsen,ZTGSEN) #define LAPACK_stgsyl LAPACK_GLOBAL(stgsyl,STGSYL) #define LAPACK_dtgsyl LAPACK_GLOBAL(dtgsyl,DTGSYL) #define LAPACK_ctgsyl LAPACK_GLOBAL(ctgsyl,CTGSYL) #define LAPACK_ztgsyl LAPACK_GLOBAL(ztgsyl,ZTGSYL) #define LAPACK_stgsna LAPACK_GLOBAL(stgsna,STGSNA) #define LAPACK_dtgsna LAPACK_GLOBAL(dtgsna,DTGSNA) #define LAPACK_ctgsna LAPACK_GLOBAL(ctgsna,CTGSNA) #define LAPACK_ztgsna LAPACK_GLOBAL(ztgsna,ZTGSNA) #define LAPACK_sggsvp LAPACK_GLOBAL(sggsvp,SGGSVP) #define LAPACK_dggsvp LAPACK_GLOBAL(dggsvp,DGGSVP) #define LAPACK_cggsvp LAPACK_GLOBAL(cggsvp,CGGSVP) #define LAPACK_zggsvp LAPACK_GLOBAL(zggsvp,ZGGSVP) #define LAPACK_stgsja LAPACK_GLOBAL(stgsja,STGSJA) #define LAPACK_dtgsja LAPACK_GLOBAL(dtgsja,DTGSJA) #define LAPACK_ctgsja LAPACK_GLOBAL(ctgsja,CTGSJA) #define LAPACK_ztgsja LAPACK_GLOBAL(ztgsja,ZTGSJA) #define LAPACK_sgels LAPACK_GLOBAL(sgels,SGELS) #define LAPACK_dgels LAPACK_GLOBAL(dgels,DGELS) #define LAPACK_cgels LAPACK_GLOBAL(cgels,CGELS) #define LAPACK_zgels LAPACK_GLOBAL(zgels,ZGELS) #define LAPACK_sgelsy LAPACK_GLOBAL(sgelsy,SGELSY) #define LAPACK_dgelsy LAPACK_GLOBAL(dgelsy,DGELSY) #define LAPACK_cgelsy LAPACK_GLOBAL(cgelsy,CGELSY) #define LAPACK_zgelsy LAPACK_GLOBAL(zgelsy,ZGELSY) #define LAPACK_sgelss LAPACK_GLOBAL(sgelss,SGELSS) #define LAPACK_dgelss LAPACK_GLOBAL(dgelss,DGELSS) #define LAPACK_cgelss LAPACK_GLOBAL(cgelss,CGELSS) #define LAPACK_zgelss LAPACK_GLOBAL(zgelss,ZGELSS) #define LAPACK_sgelsd LAPACK_GLOBAL(sgelsd,SGELSD) #define LAPACK_dgelsd LAPACK_GLOBAL(dgelsd,DGELSD) #define LAPACK_cgelsd LAPACK_GLOBAL(cgelsd,CGELSD) #define LAPACK_zgelsd LAPACK_GLOBAL(zgelsd,ZGELSD) #define LAPACK_sgglse LAPACK_GLOBAL(sgglse,SGGLSE) #define LAPACK_dgglse LAPACK_GLOBAL(dgglse,DGGLSE) #define LAPACK_cgglse LAPACK_GLOBAL(cgglse,CGGLSE) #define LAPACK_zgglse LAPACK_GLOBAL(zgglse,ZGGLSE) #define LAPACK_sggglm LAPACK_GLOBAL(sggglm,SGGGLM) #define LAPACK_dggglm LAPACK_GLOBAL(dggglm,DGGGLM) #define LAPACK_cggglm LAPACK_GLOBAL(cggglm,CGGGLM) #define LAPACK_zggglm LAPACK_GLOBAL(zggglm,ZGGGLM) #define LAPACK_ssyev LAPACK_GLOBAL(ssyev,SSYEV) #define LAPACK_dsyev LAPACK_GLOBAL(dsyev,DSYEV) #define LAPACK_cheev LAPACK_GLOBAL(cheev,CHEEV) #define LAPACK_zheev LAPACK_GLOBAL(zheev,ZHEEV) #define LAPACK_ssyevd LAPACK_GLOBAL(ssyevd,SSYEVD) #define LAPACK_dsyevd LAPACK_GLOBAL(dsyevd,DSYEVD) #define LAPACK_cheevd LAPACK_GLOBAL(cheevd,CHEEVD) #define LAPACK_zheevd LAPACK_GLOBAL(zheevd,ZHEEVD) #define LAPACK_ssyevx LAPACK_GLOBAL(ssyevx,SSYEVX) #define LAPACK_dsyevx LAPACK_GLOBAL(dsyevx,DSYEVX) #define LAPACK_cheevx LAPACK_GLOBAL(cheevx,CHEEVX) #define LAPACK_zheevx 
LAPACK_GLOBAL(zheevx,ZHEEVX) #define LAPACK_ssyevr LAPACK_GLOBAL(ssyevr,SSYEVR) #define LAPACK_dsyevr LAPACK_GLOBAL(dsyevr,DSYEVR) #define LAPACK_cheevr LAPACK_GLOBAL(cheevr,CHEEVR) #define LAPACK_zheevr LAPACK_GLOBAL(zheevr,ZHEEVR) #define LAPACK_sspev LAPACK_GLOBAL(sspev,SSPEV) #define LAPACK_dspev LAPACK_GLOBAL(dspev,DSPEV) #define LAPACK_chpev LAPACK_GLOBAL(chpev,CHPEV) #define LAPACK_zhpev LAPACK_GLOBAL(zhpev,ZHPEV) #define LAPACK_sspevd LAPACK_GLOBAL(sspevd,SSPEVD) #define LAPACK_dspevd LAPACK_GLOBAL(dspevd,DSPEVD) #define LAPACK_chpevd LAPACK_GLOBAL(chpevd,CHPEVD) #define LAPACK_zhpevd LAPACK_GLOBAL(zhpevd,ZHPEVD) #define LAPACK_sspevx LAPACK_GLOBAL(sspevx,SSPEVX) #define LAPACK_dspevx LAPACK_GLOBAL(dspevx,DSPEVX) #define LAPACK_chpevx LAPACK_GLOBAL(chpevx,CHPEVX) #define LAPACK_zhpevx LAPACK_GLOBAL(zhpevx,ZHPEVX) #define LAPACK_ssbev LAPACK_GLOBAL(ssbev,SSBEV) #define LAPACK_dsbev LAPACK_GLOBAL(dsbev,DSBEV) #define LAPACK_chbev LAPACK_GLOBAL(chbev,CHBEV) #define LAPACK_zhbev LAPACK_GLOBAL(zhbev,ZHBEV) #define LAPACK_ssbevd LAPACK_GLOBAL(ssbevd,SSBEVD) #define LAPACK_dsbevd LAPACK_GLOBAL(dsbevd,DSBEVD) #define LAPACK_chbevd LAPACK_GLOBAL(chbevd,CHBEVD) #define LAPACK_zhbevd LAPACK_GLOBAL(zhbevd,ZHBEVD) #define LAPACK_ssbevx LAPACK_GLOBAL(ssbevx,SSBEVX) #define LAPACK_dsbevx LAPACK_GLOBAL(dsbevx,DSBEVX) #define LAPACK_chbevx LAPACK_GLOBAL(chbevx,CHBEVX) #define LAPACK_zhbevx LAPACK_GLOBAL(zhbevx,ZHBEVX) #define LAPACK_sstev LAPACK_GLOBAL(sstev,SSTEV) #define LAPACK_dstev LAPACK_GLOBAL(dstev,DSTEV) #define LAPACK_sstevd LAPACK_GLOBAL(sstevd,SSTEVD) #define LAPACK_dstevd LAPACK_GLOBAL(dstevd,DSTEVD) #define LAPACK_sstevx LAPACK_GLOBAL(sstevx,SSTEVX) #define LAPACK_dstevx LAPACK_GLOBAL(dstevx,DSTEVX) #define LAPACK_sstevr LAPACK_GLOBAL(sstevr,SSTEVR) #define LAPACK_dstevr LAPACK_GLOBAL(dstevr,DSTEVR) #define LAPACK_sgees LAPACK_GLOBAL(sgees,SGEES) #define LAPACK_dgees LAPACK_GLOBAL(dgees,DGEES) #define LAPACK_cgees LAPACK_GLOBAL(cgees,CGEES) #define LAPACK_zgees LAPACK_GLOBAL(zgees,ZGEES) #define LAPACK_sgeesx LAPACK_GLOBAL(sgeesx,SGEESX) #define LAPACK_dgeesx LAPACK_GLOBAL(dgeesx,DGEESX) #define LAPACK_cgeesx LAPACK_GLOBAL(cgeesx,CGEESX) #define LAPACK_zgeesx LAPACK_GLOBAL(zgeesx,ZGEESX) #define LAPACK_sgeev LAPACK_GLOBAL(sgeev,SGEEV) #define LAPACK_dgeev LAPACK_GLOBAL(dgeev,DGEEV) #define LAPACK_cgeev LAPACK_GLOBAL(cgeev,CGEEV) #define LAPACK_zgeev LAPACK_GLOBAL(zgeev,ZGEEV) #define LAPACK_sgeevx LAPACK_GLOBAL(sgeevx,SGEEVX) #define LAPACK_dgeevx LAPACK_GLOBAL(dgeevx,DGEEVX) #define LAPACK_cgeevx LAPACK_GLOBAL(cgeevx,CGEEVX) #define LAPACK_zgeevx LAPACK_GLOBAL(zgeevx,ZGEEVX) #define LAPACK_sgesvd LAPACK_GLOBAL(sgesvd,SGESVD) #define LAPACK_dgesvd LAPACK_GLOBAL(dgesvd,DGESVD) #define LAPACK_cgesvd LAPACK_GLOBAL(cgesvd,CGESVD) #define LAPACK_zgesvd LAPACK_GLOBAL(zgesvd,ZGESVD) #define LAPACK_sgesdd LAPACK_GLOBAL(sgesdd,SGESDD) #define LAPACK_dgesdd LAPACK_GLOBAL(dgesdd,DGESDD) #define LAPACK_cgesdd LAPACK_GLOBAL(cgesdd,CGESDD) #define LAPACK_zgesdd LAPACK_GLOBAL(zgesdd,ZGESDD) #define LAPACK_dgejsv LAPACK_GLOBAL(dgejsv,DGEJSV) #define LAPACK_sgejsv LAPACK_GLOBAL(sgejsv,SGEJSV) #define LAPACK_dgesvj LAPACK_GLOBAL(dgesvj,DGESVJ) #define LAPACK_sgesvj LAPACK_GLOBAL(sgesvj,SGESVJ) #define LAPACK_sggsvd LAPACK_GLOBAL(sggsvd,SGGSVD) #define LAPACK_dggsvd LAPACK_GLOBAL(dggsvd,DGGSVD) #define LAPACK_cggsvd LAPACK_GLOBAL(cggsvd,CGGSVD) #define LAPACK_zggsvd LAPACK_GLOBAL(zggsvd,ZGGSVD) #define LAPACK_ssygv LAPACK_GLOBAL(ssygv,SSYGV) #define LAPACK_dsygv LAPACK_GLOBAL(dsygv,DSYGV) #define 
LAPACK_chegv LAPACK_GLOBAL(chegv,CHEGV) #define LAPACK_zhegv LAPACK_GLOBAL(zhegv,ZHEGV) #define LAPACK_ssygvd LAPACK_GLOBAL(ssygvd,SSYGVD) #define LAPACK_dsygvd LAPACK_GLOBAL(dsygvd,DSYGVD) #define LAPACK_chegvd LAPACK_GLOBAL(chegvd,CHEGVD) #define LAPACK_zhegvd LAPACK_GLOBAL(zhegvd,ZHEGVD) #define LAPACK_ssygvx LAPACK_GLOBAL(ssygvx,SSYGVX) #define LAPACK_dsygvx LAPACK_GLOBAL(dsygvx,DSYGVX) #define LAPACK_chegvx LAPACK_GLOBAL(chegvx,CHEGVX) #define LAPACK_zhegvx LAPACK_GLOBAL(zhegvx,ZHEGVX) #define LAPACK_sspgv LAPACK_GLOBAL(sspgv,SSPGV) #define LAPACK_dspgv LAPACK_GLOBAL(dspgv,DSPGV) #define LAPACK_chpgv LAPACK_GLOBAL(chpgv,CHPGV) #define LAPACK_zhpgv LAPACK_GLOBAL(zhpgv,ZHPGV) #define LAPACK_sspgvd LAPACK_GLOBAL(sspgvd,SSPGVD) #define LAPACK_dspgvd LAPACK_GLOBAL(dspgvd,DSPGVD) #define LAPACK_chpgvd LAPACK_GLOBAL(chpgvd,CHPGVD) #define LAPACK_zhpgvd LAPACK_GLOBAL(zhpgvd,ZHPGVD) #define LAPACK_sspgvx LAPACK_GLOBAL(sspgvx,SSPGVX) #define LAPACK_dspgvx LAPACK_GLOBAL(dspgvx,DSPGVX) #define LAPACK_chpgvx LAPACK_GLOBAL(chpgvx,CHPGVX) #define LAPACK_zhpgvx LAPACK_GLOBAL(zhpgvx,ZHPGVX) #define LAPACK_ssbgv LAPACK_GLOBAL(ssbgv,SSBGV) #define LAPACK_dsbgv LAPACK_GLOBAL(dsbgv,DSBGV) #define LAPACK_chbgv LAPACK_GLOBAL(chbgv,CHBGV) #define LAPACK_zhbgv LAPACK_GLOBAL(zhbgv,ZHBGV) #define LAPACK_ssbgvd LAPACK_GLOBAL(ssbgvd,SSBGVD) #define LAPACK_dsbgvd LAPACK_GLOBAL(dsbgvd,DSBGVD) #define LAPACK_chbgvd LAPACK_GLOBAL(chbgvd,CHBGVD) #define LAPACK_zhbgvd LAPACK_GLOBAL(zhbgvd,ZHBGVD) #define LAPACK_ssbgvx LAPACK_GLOBAL(ssbgvx,SSBGVX) #define LAPACK_dsbgvx LAPACK_GLOBAL(dsbgvx,DSBGVX) #define LAPACK_chbgvx LAPACK_GLOBAL(chbgvx,CHBGVX) #define LAPACK_zhbgvx LAPACK_GLOBAL(zhbgvx,ZHBGVX) #define LAPACK_sgges LAPACK_GLOBAL(sgges,SGGES) #define LAPACK_dgges LAPACK_GLOBAL(dgges,DGGES) #define LAPACK_cgges LAPACK_GLOBAL(cgges,CGGES) #define LAPACK_zgges LAPACK_GLOBAL(zgges,ZGGES) #define LAPACK_sggesx LAPACK_GLOBAL(sggesx,SGGESX) #define LAPACK_dggesx LAPACK_GLOBAL(dggesx,DGGESX) #define LAPACK_cggesx LAPACK_GLOBAL(cggesx,CGGESX) #define LAPACK_zggesx LAPACK_GLOBAL(zggesx,ZGGESX) #define LAPACK_sggev LAPACK_GLOBAL(sggev,SGGEV) #define LAPACK_dggev LAPACK_GLOBAL(dggev,DGGEV) #define LAPACK_cggev LAPACK_GLOBAL(cggev,CGGEV) #define LAPACK_zggev LAPACK_GLOBAL(zggev,ZGGEV) #define LAPACK_sggevx LAPACK_GLOBAL(sggevx,SGGEVX) #define LAPACK_dggevx LAPACK_GLOBAL(dggevx,DGGEVX) #define LAPACK_cggevx LAPACK_GLOBAL(cggevx,CGGEVX) #define LAPACK_zggevx LAPACK_GLOBAL(zggevx,ZGGEVX) #define LAPACK_dsfrk LAPACK_GLOBAL(dsfrk,DSFRK) #define LAPACK_ssfrk LAPACK_GLOBAL(ssfrk,SSFRK) #define LAPACK_zhfrk LAPACK_GLOBAL(zhfrk,ZHFRK) #define LAPACK_chfrk LAPACK_GLOBAL(chfrk,CHFRK) #define LAPACK_dtfsm LAPACK_GLOBAL(dtfsm,DTFSM) #define LAPACK_stfsm LAPACK_GLOBAL(stfsm,STFSM) #define LAPACK_ztfsm LAPACK_GLOBAL(ztfsm,ZTFSM) #define LAPACK_ctfsm LAPACK_GLOBAL(ctfsm,CTFSM) #define LAPACK_dtfttp LAPACK_GLOBAL(dtfttp,DTFTTP) #define LAPACK_stfttp LAPACK_GLOBAL(stfttp,STFTTP) #define LAPACK_ztfttp LAPACK_GLOBAL(ztfttp,ZTFTTP) #define LAPACK_ctfttp LAPACK_GLOBAL(ctfttp,CTFTTP) #define LAPACK_dtfttr LAPACK_GLOBAL(dtfttr,DTFTTR) #define LAPACK_stfttr LAPACK_GLOBAL(stfttr,STFTTR) #define LAPACK_ztfttr LAPACK_GLOBAL(ztfttr,ZTFTTR) #define LAPACK_ctfttr LAPACK_GLOBAL(ctfttr,CTFTTR) #define LAPACK_dtpttf LAPACK_GLOBAL(dtpttf,DTPTTF) #define LAPACK_stpttf LAPACK_GLOBAL(stpttf,STPTTF) #define LAPACK_ztpttf LAPACK_GLOBAL(ztpttf,ZTPTTF) #define LAPACK_ctpttf LAPACK_GLOBAL(ctpttf,CTPTTF) #define LAPACK_dtpttr LAPACK_GLOBAL(dtpttr,DTPTTR) #define 
LAPACK_stpttr LAPACK_GLOBAL(stpttr,STPTTR) #define LAPACK_ztpttr LAPACK_GLOBAL(ztpttr,ZTPTTR) #define LAPACK_ctpttr LAPACK_GLOBAL(ctpttr,CTPTTR) #define LAPACK_dtrttf LAPACK_GLOBAL(dtrttf,DTRTTF) #define LAPACK_strttf LAPACK_GLOBAL(strttf,STRTTF) #define LAPACK_ztrttf LAPACK_GLOBAL(ztrttf,ZTRTTF) #define LAPACK_ctrttf LAPACK_GLOBAL(ctrttf,CTRTTF) #define LAPACK_dtrttp LAPACK_GLOBAL(dtrttp,DTRTTP) #define LAPACK_strttp LAPACK_GLOBAL(strttp,STRTTP) #define LAPACK_ztrttp LAPACK_GLOBAL(ztrttp,ZTRTTP) #define LAPACK_ctrttp LAPACK_GLOBAL(ctrttp,CTRTTP) #define LAPACK_sgeqrfp LAPACK_GLOBAL(sgeqrfp,SGEQRFP) #define LAPACK_dgeqrfp LAPACK_GLOBAL(dgeqrfp,DGEQRFP) #define LAPACK_cgeqrfp LAPACK_GLOBAL(cgeqrfp,CGEQRFP) #define LAPACK_zgeqrfp LAPACK_GLOBAL(zgeqrfp,ZGEQRFP) #define LAPACK_clacgv LAPACK_GLOBAL(clacgv,CLACGV) #define LAPACK_zlacgv LAPACK_GLOBAL(zlacgv,ZLACGV) #define LAPACK_slarnv LAPACK_GLOBAL(slarnv,SLARNV) #define LAPACK_dlarnv LAPACK_GLOBAL(dlarnv,DLARNV) #define LAPACK_clarnv LAPACK_GLOBAL(clarnv,CLARNV) #define LAPACK_zlarnv LAPACK_GLOBAL(zlarnv,ZLARNV) #define LAPACK_sgeqr2 LAPACK_GLOBAL(sgeqr2,SGEQR2) #define LAPACK_dgeqr2 LAPACK_GLOBAL(dgeqr2,DGEQR2) #define LAPACK_cgeqr2 LAPACK_GLOBAL(cgeqr2,CGEQR2) #define LAPACK_zgeqr2 LAPACK_GLOBAL(zgeqr2,ZGEQR2) #define LAPACK_slacpy LAPACK_GLOBAL(slacpy,SLACPY) #define LAPACK_dlacpy LAPACK_GLOBAL(dlacpy,DLACPY) #define LAPACK_clacpy LAPACK_GLOBAL(clacpy,CLACPY) #define LAPACK_zlacpy LAPACK_GLOBAL(zlacpy,ZLACPY) #define LAPACK_sgetf2 LAPACK_GLOBAL(sgetf2,SGETF2) #define LAPACK_dgetf2 LAPACK_GLOBAL(dgetf2,DGETF2) #define LAPACK_cgetf2 LAPACK_GLOBAL(cgetf2,CGETF2) #define LAPACK_zgetf2 LAPACK_GLOBAL(zgetf2,ZGETF2) #define LAPACK_slaswp LAPACK_GLOBAL(slaswp,SLASWP) #define LAPACK_dlaswp LAPACK_GLOBAL(dlaswp,DLASWP) #define LAPACK_claswp LAPACK_GLOBAL(claswp,CLASWP) #define LAPACK_zlaswp LAPACK_GLOBAL(zlaswp,ZLASWP) #define LAPACK_slange LAPACK_GLOBAL(slange,SLANGE) #define LAPACK_dlange LAPACK_GLOBAL(dlange,DLANGE) #define LAPACK_clange LAPACK_GLOBAL(clange,CLANGE) #define LAPACK_zlange LAPACK_GLOBAL(zlange,ZLANGE) #define LAPACK_clanhe LAPACK_GLOBAL(clanhe,CLANHE) #define LAPACK_zlanhe LAPACK_GLOBAL(zlanhe,ZLANHE) #define LAPACK_slansy LAPACK_GLOBAL(slansy,SLANSY) #define LAPACK_dlansy LAPACK_GLOBAL(dlansy,DLANSY) #define LAPACK_clansy LAPACK_GLOBAL(clansy,CLANSY) #define LAPACK_zlansy LAPACK_GLOBAL(zlansy,ZLANSY) #define LAPACK_slantr LAPACK_GLOBAL(slantr,SLANTR) #define LAPACK_dlantr LAPACK_GLOBAL(dlantr,DLANTR) #define LAPACK_clantr LAPACK_GLOBAL(clantr,CLANTR) #define LAPACK_zlantr LAPACK_GLOBAL(zlantr,ZLANTR) #define LAPACK_slamch LAPACK_GLOBAL(slamch,SLAMCH) #define LAPACK_dlamch LAPACK_GLOBAL(dlamch,DLAMCH) #define LAPACK_sgelq2 LAPACK_GLOBAL(sgelq2,SGELQ2) #define LAPACK_dgelq2 LAPACK_GLOBAL(dgelq2,DGELQ2) #define LAPACK_cgelq2 LAPACK_GLOBAL(cgelq2,CGELQ2) #define LAPACK_zgelq2 LAPACK_GLOBAL(zgelq2,ZGELQ2) #define LAPACK_slarfb LAPACK_GLOBAL(slarfb,SLARFB) #define LAPACK_dlarfb LAPACK_GLOBAL(dlarfb,DLARFB) #define LAPACK_clarfb LAPACK_GLOBAL(clarfb,CLARFB) #define LAPACK_zlarfb LAPACK_GLOBAL(zlarfb,ZLARFB) #define LAPACK_slarfg LAPACK_GLOBAL(slarfg,SLARFG) #define LAPACK_dlarfg LAPACK_GLOBAL(dlarfg,DLARFG) #define LAPACK_clarfg LAPACK_GLOBAL(clarfg,CLARFG) #define LAPACK_zlarfg LAPACK_GLOBAL(zlarfg,ZLARFG) #define LAPACK_slarft LAPACK_GLOBAL(slarft,SLARFT) #define LAPACK_dlarft LAPACK_GLOBAL(dlarft,DLARFT) #define LAPACK_clarft LAPACK_GLOBAL(clarft,CLARFT) #define LAPACK_zlarft LAPACK_GLOBAL(zlarft,ZLARFT) #define LAPACK_slarfx 
LAPACK_GLOBAL(slarfx,SLARFX) #define LAPACK_dlarfx LAPACK_GLOBAL(dlarfx,DLARFX) #define LAPACK_clarfx LAPACK_GLOBAL(clarfx,CLARFX) #define LAPACK_zlarfx LAPACK_GLOBAL(zlarfx,ZLARFX) #define LAPACK_slatms LAPACK_GLOBAL(slatms,SLATMS) #define LAPACK_dlatms LAPACK_GLOBAL(dlatms,DLATMS) #define LAPACK_clatms LAPACK_GLOBAL(clatms,CLATMS) #define LAPACK_zlatms LAPACK_GLOBAL(zlatms,ZLATMS) #define LAPACK_slag2d LAPACK_GLOBAL(slag2d,SLAG2D) #define LAPACK_dlag2s LAPACK_GLOBAL(dlag2s,DLAG2S) #define LAPACK_clag2z LAPACK_GLOBAL(clag2z,CLAG2Z) #define LAPACK_zlag2c LAPACK_GLOBAL(zlag2c,ZLAG2C) #define LAPACK_slauum LAPACK_GLOBAL(slauum,SLAUUM) #define LAPACK_dlauum LAPACK_GLOBAL(dlauum,DLAUUM) #define LAPACK_clauum LAPACK_GLOBAL(clauum,CLAUUM) #define LAPACK_zlauum LAPACK_GLOBAL(zlauum,ZLAUUM) #define LAPACK_slagge LAPACK_GLOBAL(slagge,SLAGGE) #define LAPACK_dlagge LAPACK_GLOBAL(dlagge,DLAGGE) #define LAPACK_clagge LAPACK_GLOBAL(clagge,CLAGGE) #define LAPACK_zlagge LAPACK_GLOBAL(zlagge,ZLAGGE) #define LAPACK_slaset LAPACK_GLOBAL(slaset,SLASET) #define LAPACK_dlaset LAPACK_GLOBAL(dlaset,DLASET) #define LAPACK_claset LAPACK_GLOBAL(claset,CLASET) #define LAPACK_zlaset LAPACK_GLOBAL(zlaset,ZLASET) #define LAPACK_slasrt LAPACK_GLOBAL(slasrt,SLASRT) #define LAPACK_dlasrt LAPACK_GLOBAL(dlasrt,DLASRT) #define LAPACK_slagsy LAPACK_GLOBAL(slagsy,SLAGSY) #define LAPACK_dlagsy LAPACK_GLOBAL(dlagsy,DLAGSY) #define LAPACK_clagsy LAPACK_GLOBAL(clagsy,CLAGSY) #define LAPACK_zlagsy LAPACK_GLOBAL(zlagsy,ZLAGSY) #define LAPACK_claghe LAPACK_GLOBAL(claghe,CLAGHE) #define LAPACK_zlaghe LAPACK_GLOBAL(zlaghe,ZLAGHE) #define LAPACK_slapmr LAPACK_GLOBAL(slapmr,SLAPMR) #define LAPACK_dlapmr LAPACK_GLOBAL(dlapmr,DLAPMR) #define LAPACK_clapmr LAPACK_GLOBAL(clapmr,CLAPMR) #define LAPACK_zlapmr LAPACK_GLOBAL(zlapmr,ZLAPMR) #define LAPACK_slapy2 LAPACK_GLOBAL(slapy2,SLAPY2) #define LAPACK_dlapy2 LAPACK_GLOBAL(dlapy2,DLAPY2) #define LAPACK_slapy3 LAPACK_GLOBAL(slapy3,SLAPY3) #define LAPACK_dlapy3 LAPACK_GLOBAL(dlapy3,DLAPY3) #define LAPACK_slartgp LAPACK_GLOBAL(slartgp,SLARTGP) #define LAPACK_dlartgp LAPACK_GLOBAL(dlartgp,DLARTGP) #define LAPACK_slartgs LAPACK_GLOBAL(slartgs,SLARTGS) #define LAPACK_dlartgs LAPACK_GLOBAL(dlartgs,DLARTGS) // LAPACK 3.3.0 #define LAPACK_cbbcsd LAPACK_GLOBAL(cbbcsd,CBBCSD) #define LAPACK_cheswapr LAPACK_GLOBAL(cheswapr,CHESWAPR) #define LAPACK_chetri2 LAPACK_GLOBAL(chetri2,CHETRI2) #define LAPACK_chetri2x LAPACK_GLOBAL(chetri2x,CHETRI2X) #define LAPACK_chetrs2 LAPACK_GLOBAL(chetrs2,CHETRS2) #define LAPACK_csyconv LAPACK_GLOBAL(csyconv,CSYCONV) #define LAPACK_csyswapr LAPACK_GLOBAL(csyswapr,CSYSWAPR) #define LAPACK_csytri2 LAPACK_GLOBAL(csytri2,CSYTRI2) #define LAPACK_csytri2x LAPACK_GLOBAL(csytri2x,CSYTRI2X) #define LAPACK_csytrs2 LAPACK_GLOBAL(csytrs2,CSYTRS2) #define LAPACK_cunbdb LAPACK_GLOBAL(cunbdb,CUNBDB) #define LAPACK_cuncsd LAPACK_GLOBAL(cuncsd,CUNCSD) #define LAPACK_dbbcsd LAPACK_GLOBAL(dbbcsd,DBBCSD) #define LAPACK_dorbdb LAPACK_GLOBAL(dorbdb,DORBDB) #define LAPACK_dorcsd LAPACK_GLOBAL(dorcsd,DORCSD) #define LAPACK_dsyconv LAPACK_GLOBAL(dsyconv,DSYCONV) #define LAPACK_dsyswapr LAPACK_GLOBAL(dsyswapr,DSYSWAPR) #define LAPACK_dsytri2 LAPACK_GLOBAL(dsytri2,DSYTRI2) #define LAPACK_dsytri2x LAPACK_GLOBAL(dsytri2x,DSYTRI2X) #define LAPACK_dsytrs2 LAPACK_GLOBAL(dsytrs2,DSYTRS2) #define LAPACK_sbbcsd LAPACK_GLOBAL(sbbcsd,SBBCSD) #define LAPACK_sorbdb LAPACK_GLOBAL(sorbdb,SORBDB) #define LAPACK_sorcsd LAPACK_GLOBAL(sorcsd,SORCSD) #define LAPACK_ssyconv LAPACK_GLOBAL(ssyconv,SSYCONV) #define 
LAPACK_ssyswapr LAPACK_GLOBAL(ssyswapr,SSYSWAPR) #define LAPACK_ssytri2 LAPACK_GLOBAL(ssytri2,SSYTRI2) #define LAPACK_ssytri2x LAPACK_GLOBAL(ssytri2x,SSYTRI2X) #define LAPACK_ssytrs2 LAPACK_GLOBAL(ssytrs2,SSYTRS2) #define LAPACK_zbbcsd LAPACK_GLOBAL(zbbcsd,ZBBCSD) #define LAPACK_zheswapr LAPACK_GLOBAL(zheswapr,ZHESWAPR) #define LAPACK_zhetri2 LAPACK_GLOBAL(zhetri2,ZHETRI2) #define LAPACK_zhetri2x LAPACK_GLOBAL(zhetri2x,ZHETRI2X) #define LAPACK_zhetrs2 LAPACK_GLOBAL(zhetrs2,ZHETRS2) #define LAPACK_zsyconv LAPACK_GLOBAL(zsyconv,ZSYCONV) #define LAPACK_zsyswapr LAPACK_GLOBAL(zsyswapr,ZSYSWAPR) #define LAPACK_zsytri2 LAPACK_GLOBAL(zsytri2,ZSYTRI2) #define LAPACK_zsytri2x LAPACK_GLOBAL(zsytri2x,ZSYTRI2X) #define LAPACK_zsytrs2 LAPACK_GLOBAL(zsytrs2,ZSYTRS2) #define LAPACK_zunbdb LAPACK_GLOBAL(zunbdb,ZUNBDB) #define LAPACK_zuncsd LAPACK_GLOBAL(zuncsd,ZUNCSD) // LAPACK 3.4.0 #define LAPACK_sgemqrt LAPACK_GLOBAL(sgemqrt,SGEMQRT) #define LAPACK_dgemqrt LAPACK_GLOBAL(dgemqrt,DGEMQRT) #define LAPACK_cgemqrt LAPACK_GLOBAL(cgemqrt,CGEMQRT) #define LAPACK_zgemqrt LAPACK_GLOBAL(zgemqrt,ZGEMQRT) #define LAPACK_sgeqrt LAPACK_GLOBAL(sgeqrt,SGEQRT) #define LAPACK_dgeqrt LAPACK_GLOBAL(dgeqrt,DGEQRT) #define LAPACK_cgeqrt LAPACK_GLOBAL(cgeqrt,CGEQRT) #define LAPACK_zgeqrt LAPACK_GLOBAL(zgeqrt,ZGEQRT) #define LAPACK_sgeqrt2 LAPACK_GLOBAL(sgeqrt2,SGEQRT2) #define LAPACK_dgeqrt2 LAPACK_GLOBAL(dgeqrt2,DGEQRT2) #define LAPACK_cgeqrt2 LAPACK_GLOBAL(cgeqrt2,CGEQRT2) #define LAPACK_zgeqrt2 LAPACK_GLOBAL(zgeqrt2,ZGEQRT2) #define LAPACK_sgeqrt3 LAPACK_GLOBAL(sgeqrt3,SGEQRT3) #define LAPACK_dgeqrt3 LAPACK_GLOBAL(dgeqrt3,DGEQRT3) #define LAPACK_cgeqrt3 LAPACK_GLOBAL(cgeqrt3,CGEQRT3) #define LAPACK_zgeqrt3 LAPACK_GLOBAL(zgeqrt3,ZGEQRT3) #define LAPACK_stpmqrt LAPACK_GLOBAL(stpmqrt,STPMQRT) #define LAPACK_dtpmqrt LAPACK_GLOBAL(dtpmqrt,DTPMQRT) #define LAPACK_ctpmqrt LAPACK_GLOBAL(ctpmqrt,CTPMQRT) #define LAPACK_ztpmqrt LAPACK_GLOBAL(ztpmqrt,ZTPMQRT) #define LAPACK_dtpqrt LAPACK_GLOBAL(dtpqrt,DTPQRT) #define LAPACK_ctpqrt LAPACK_GLOBAL(ctpqrt,CTPQRT) #define LAPACK_ztpqrt LAPACK_GLOBAL(ztpqrt,ZTPQRT) #define LAPACK_stpqrt2 LAPACK_GLOBAL(stpqrt2,STPQRT2) #define LAPACK_dtpqrt2 LAPACK_GLOBAL(dtpqrt2,DTPQRT2) #define LAPACK_ctpqrt2 LAPACK_GLOBAL(ctpqrt2,CTPQRT2) #define LAPACK_ztpqrt2 LAPACK_GLOBAL(ztpqrt2,ZTPQRT2) #define LAPACK_stprfb LAPACK_GLOBAL(stprfb,STPRFB) #define LAPACK_dtprfb LAPACK_GLOBAL(dtprfb,DTPRFB) #define LAPACK_ctprfb LAPACK_GLOBAL(ctprfb,CTPRFB) #define LAPACK_ztprfb LAPACK_GLOBAL(ztprfb,ZTPRFB) // LAPACK 3.X.X #define LAPACK_csyr LAPACK_GLOBAL(csyr,CSYR) #define LAPACK_zsyr LAPACK_GLOBAL(zsyr,ZSYR) void LAPACK_sgetrf( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, lapack_int* ipiv, lapack_int *info ); void LAPACK_dgetrf( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, lapack_int* ipiv, lapack_int *info ); void LAPACK_cgetrf( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int* ipiv, lapack_int *info ); void LAPACK_zgetrf( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int* ipiv, lapack_int *info ); void LAPACK_sgbtrf( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, float* ab, lapack_int* ldab, lapack_int* ipiv, lapack_int *info ); void LAPACK_dgbtrf( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, double* ab, lapack_int* ldab, lapack_int* ipiv, lapack_int *info ); void LAPACK_cgbtrf( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_complex_float* ab, lapack_int* ldab, 
lapack_int* ipiv, lapack_int *info ); void LAPACK_zgbtrf( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_complex_double* ab, lapack_int* ldab, lapack_int* ipiv, lapack_int *info ); void LAPACK_sgttrf( lapack_int* n, float* dl, float* d, float* du, float* du2, lapack_int* ipiv, lapack_int *info ); void LAPACK_dgttrf( lapack_int* n, double* dl, double* d, double* du, double* du2, lapack_int* ipiv, lapack_int *info ); void LAPACK_cgttrf( lapack_int* n, lapack_complex_float* dl, lapack_complex_float* d, lapack_complex_float* du, lapack_complex_float* du2, lapack_int* ipiv, lapack_int *info ); void LAPACK_zgttrf( lapack_int* n, lapack_complex_double* dl, lapack_complex_double* d, lapack_complex_double* du, lapack_complex_double* du2, lapack_int* ipiv, lapack_int *info ); void LAPACK_spotrf( char* uplo, lapack_int* n, float* a, lapack_int* lda, lapack_int *info ); void LAPACK_dpotrf( char* uplo, lapack_int* n, double* a, lapack_int* lda, lapack_int *info ); void LAPACK_cpotrf( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int *info ); void LAPACK_zpotrf( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int *info ); void LAPACK_dpstrf( char* uplo, lapack_int* n, double* a, lapack_int* lda, lapack_int* piv, lapack_int* rank, double* tol, double* work, lapack_int *info ); void LAPACK_spstrf( char* uplo, lapack_int* n, float* a, lapack_int* lda, lapack_int* piv, lapack_int* rank, float* tol, float* work, lapack_int *info ); void LAPACK_zpstrf( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int* piv, lapack_int* rank, double* tol, double* work, lapack_int *info ); void LAPACK_cpstrf( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int* piv, lapack_int* rank, float* tol, float* work, lapack_int *info ); void LAPACK_dpftrf( char* transr, char* uplo, lapack_int* n, double* a, lapack_int *info ); void LAPACK_spftrf( char* transr, char* uplo, lapack_int* n, float* a, lapack_int *info ); void LAPACK_zpftrf( char* transr, char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int *info ); void LAPACK_cpftrf( char* transr, char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int *info ); void LAPACK_spptrf( char* uplo, lapack_int* n, float* ap, lapack_int *info ); void LAPACK_dpptrf( char* uplo, lapack_int* n, double* ap, lapack_int *info ); void LAPACK_cpptrf( char* uplo, lapack_int* n, lapack_complex_float* ap, lapack_int *info ); void LAPACK_zpptrf( char* uplo, lapack_int* n, lapack_complex_double* ap, lapack_int *info ); void LAPACK_spbtrf( char* uplo, lapack_int* n, lapack_int* kd, float* ab, lapack_int* ldab, lapack_int *info ); void LAPACK_dpbtrf( char* uplo, lapack_int* n, lapack_int* kd, double* ab, lapack_int* ldab, lapack_int *info ); void LAPACK_cpbtrf( char* uplo, lapack_int* n, lapack_int* kd, lapack_complex_float* ab, lapack_int* ldab, lapack_int *info ); void LAPACK_zpbtrf( char* uplo, lapack_int* n, lapack_int* kd, lapack_complex_double* ab, lapack_int* ldab, lapack_int *info ); void LAPACK_spttrf( lapack_int* n, float* d, float* e, lapack_int *info ); void LAPACK_dpttrf( lapack_int* n, double* d, double* e, lapack_int *info ); void LAPACK_cpttrf( lapack_int* n, float* d, lapack_complex_float* e, lapack_int *info ); void LAPACK_zpttrf( lapack_int* n, double* d, lapack_complex_double* e, lapack_int *info ); void LAPACK_ssytrf( char* uplo, lapack_int* n, float* a, lapack_int* lda, lapack_int* ipiv, float* work, lapack_int* lwork, lapack_int *info 
); void LAPACK_dsytrf( char* uplo, lapack_int* n, double* a, lapack_int* lda, lapack_int* ipiv, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_csytrf( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int* ipiv, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zsytrf( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int* ipiv, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_chetrf( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int* ipiv, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zhetrf( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int* ipiv, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_ssptrf( char* uplo, lapack_int* n, float* ap, lapack_int* ipiv, lapack_int *info ); void LAPACK_dsptrf( char* uplo, lapack_int* n, double* ap, lapack_int* ipiv, lapack_int *info ); void LAPACK_csptrf( char* uplo, lapack_int* n, lapack_complex_float* ap, lapack_int* ipiv, lapack_int *info ); void LAPACK_zsptrf( char* uplo, lapack_int* n, lapack_complex_double* ap, lapack_int* ipiv, lapack_int *info ); void LAPACK_chptrf( char* uplo, lapack_int* n, lapack_complex_float* ap, lapack_int* ipiv, lapack_int *info ); void LAPACK_zhptrf( char* uplo, lapack_int* n, lapack_complex_double* ap, lapack_int* ipiv, lapack_int *info ); void LAPACK_sgetrs( char* trans, lapack_int* n, lapack_int* nrhs, const float* a, lapack_int* lda, const lapack_int* ipiv, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dgetrs( char* trans, lapack_int* n, lapack_int* nrhs, const double* a, lapack_int* lda, const lapack_int* ipiv, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cgetrs( char* trans, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zgetrs( char* trans, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_sgbtrs( char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, const float* ab, lapack_int* ldab, const lapack_int* ipiv, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dgbtrs( char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, const double* ab, lapack_int* ldab, const lapack_int* ipiv, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cgbtrs( char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, const lapack_complex_float* ab, lapack_int* ldab, const lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zgbtrs( char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, const lapack_complex_double* ab, lapack_int* ldab, const lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_sgttrs( char* trans, lapack_int* n, lapack_int* nrhs, const float* dl, const float* d, const float* du, const float* du2, const lapack_int* ipiv, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dgttrs( char* trans, lapack_int* n, lapack_int* nrhs, const double* dl, const double* d, const double* du, const double* du2, const lapack_int* ipiv, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cgttrs( char* 
trans, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* dl, const lapack_complex_float* d, const lapack_complex_float* du, const lapack_complex_float* du2, const lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zgttrs( char* trans, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* dl, const lapack_complex_double* d, const lapack_complex_double* du, const lapack_complex_double* du2, const lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_spotrs( char* uplo, lapack_int* n, lapack_int* nrhs, const float* a, lapack_int* lda, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dpotrs( char* uplo, lapack_int* n, lapack_int* nrhs, const double* a, lapack_int* lda, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cpotrs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zpotrs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dpftrs( char* transr, char* uplo, lapack_int* n, lapack_int* nrhs, const double* a, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_spftrs( char* transr, char* uplo, lapack_int* n, lapack_int* nrhs, const float* a, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zpftrs( char* transr, char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cpftrs( char* transr, char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_spptrs( char* uplo, lapack_int* n, lapack_int* nrhs, const float* ap, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dpptrs( char* uplo, lapack_int* n, lapack_int* nrhs, const double* ap, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cpptrs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* ap, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zpptrs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* ap, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_spbtrs( char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const float* ab, lapack_int* ldab, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dpbtrs( char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const double* ab, lapack_int* ldab, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cpbtrs( char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const lapack_complex_float* ab, lapack_int* ldab, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zpbtrs( char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const lapack_complex_double* ab, lapack_int* ldab, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_spttrs( lapack_int* n, lapack_int* nrhs, const float* d, const float* e, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dpttrs( lapack_int* n, lapack_int* nrhs, const double* d, const double* e, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cpttrs( char* uplo, lapack_int* n, lapack_int* nrhs, const float* d, const lapack_complex_float* e, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); 
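/*
 * Minimal usage sketch (editor's addition, hedged): calling the Fortran-style
 * binding LAPACK_dpotrf declared above. All arguments are passed by pointer,
 * the matrix is stored column-major, and info reports success (0) or failure.
 * Assumes lapack_int is defined earlier in this header and that a LAPACK
 * library providing dpotrf is linked. Kept inside #if 0 so it is not compiled
 * as part of the header; remove the guard to build it as ordinary C.
 */
#if 0   /* example only */
static void example_cholesky(void)
{
    char uplo = 'L';                 /* factorize the lower triangle       */
    lapack_int n = 2, lda = 2, info = 0;
    double a[4] = { 4.0, 2.0,        /* column 0 of a 2x2 SPD matrix       */
                    2.0, 3.0 };      /* column 1                           */

    LAPACK_dpotrf(&uplo, &n, a, &lda, &info);

    /* On success info == 0 and the lower triangle of a holds the Cholesky
       factor L with A = L * L^T (column-major layout). */
}
#endif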
void LAPACK_zpttrs( char* uplo, lapack_int* n, lapack_int* nrhs, const double* d, const lapack_complex_double* e, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_ssytrs( char* uplo, lapack_int* n, lapack_int* nrhs, const float* a, lapack_int* lda, const lapack_int* ipiv, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dsytrs( char* uplo, lapack_int* n, lapack_int* nrhs, const double* a, lapack_int* lda, const lapack_int* ipiv, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_csytrs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zsytrs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_chetrs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zhetrs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_ssptrs( char* uplo, lapack_int* n, lapack_int* nrhs, const float* ap, const lapack_int* ipiv, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dsptrs( char* uplo, lapack_int* n, lapack_int* nrhs, const double* ap, const lapack_int* ipiv, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_csptrs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* ap, const lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zsptrs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* ap, const lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_chptrs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* ap, const lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zhptrs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* ap, const lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_strtrs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const float* a, lapack_int* lda, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dtrtrs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const double* a, lapack_int* lda, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_ctrtrs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_ztrtrs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_stptrs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const float* ap, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dtptrs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const double* ap, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_ctptrs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* ap, 
lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_ztptrs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* ap, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_stbtrs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const float* ab, lapack_int* ldab, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dtbtrs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const double* ab, lapack_int* ldab, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_ctbtrs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const lapack_complex_float* ab, lapack_int* ldab, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_ztbtrs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const lapack_complex_double* ab, lapack_int* ldab, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_sgecon( char* norm, lapack_int* n, const float* a, lapack_int* lda, float* anorm, float* rcond, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dgecon( char* norm, lapack_int* n, const double* a, lapack_int* lda, double* anorm, double* rcond, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cgecon( char* norm, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, float* anorm, float* rcond, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zgecon( char* norm, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, double* anorm, double* rcond, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sgbcon( char* norm, lapack_int* n, lapack_int* kl, lapack_int* ku, const float* ab, lapack_int* ldab, const lapack_int* ipiv, float* anorm, float* rcond, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dgbcon( char* norm, lapack_int* n, lapack_int* kl, lapack_int* ku, const double* ab, lapack_int* ldab, const lapack_int* ipiv, double* anorm, double* rcond, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cgbcon( char* norm, lapack_int* n, lapack_int* kl, lapack_int* ku, const lapack_complex_float* ab, lapack_int* ldab, const lapack_int* ipiv, float* anorm, float* rcond, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zgbcon( char* norm, lapack_int* n, lapack_int* kl, lapack_int* ku, const lapack_complex_double* ab, lapack_int* ldab, const lapack_int* ipiv, double* anorm, double* rcond, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sgtcon( char* norm, lapack_int* n, const float* dl, const float* d, const float* du, const float* du2, const lapack_int* ipiv, float* anorm, float* rcond, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dgtcon( char* norm, lapack_int* n, const double* dl, const double* d, const double* du, const double* du2, const lapack_int* ipiv, double* anorm, double* rcond, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cgtcon( char* norm, lapack_int* n, const lapack_complex_float* dl, const lapack_complex_float* d, const lapack_complex_float* du, const lapack_complex_float* du2, const lapack_int* ipiv, float* anorm, float* rcond, lapack_complex_float* work, lapack_int *info ); void LAPACK_zgtcon( char* norm, lapack_int* n, const lapack_complex_double* dl, const lapack_complex_double* d, const lapack_complex_double* 
du, const lapack_complex_double* du2, const lapack_int* ipiv, double* anorm, double* rcond, lapack_complex_double* work, lapack_int *info ); void LAPACK_spocon( char* uplo, lapack_int* n, const float* a, lapack_int* lda, float* anorm, float* rcond, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dpocon( char* uplo, lapack_int* n, const double* a, lapack_int* lda, double* anorm, double* rcond, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cpocon( char* uplo, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, float* anorm, float* rcond, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zpocon( char* uplo, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, double* anorm, double* rcond, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sppcon( char* uplo, lapack_int* n, const float* ap, float* anorm, float* rcond, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dppcon( char* uplo, lapack_int* n, const double* ap, double* anorm, double* rcond, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cppcon( char* uplo, lapack_int* n, const lapack_complex_float* ap, float* anorm, float* rcond, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zppcon( char* uplo, lapack_int* n, const lapack_complex_double* ap, double* anorm, double* rcond, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_spbcon( char* uplo, lapack_int* n, lapack_int* kd, const float* ab, lapack_int* ldab, float* anorm, float* rcond, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dpbcon( char* uplo, lapack_int* n, lapack_int* kd, const double* ab, lapack_int* ldab, double* anorm, double* rcond, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cpbcon( char* uplo, lapack_int* n, lapack_int* kd, const lapack_complex_float* ab, lapack_int* ldab, float* anorm, float* rcond, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zpbcon( char* uplo, lapack_int* n, lapack_int* kd, const lapack_complex_double* ab, lapack_int* ldab, double* anorm, double* rcond, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sptcon( lapack_int* n, const float* d, const float* e, float* anorm, float* rcond, float* work, lapack_int *info ); void LAPACK_dptcon( lapack_int* n, const double* d, const double* e, double* anorm, double* rcond, double* work, lapack_int *info ); void LAPACK_cptcon( lapack_int* n, const float* d, const lapack_complex_float* e, float* anorm, float* rcond, float* work, lapack_int *info ); void LAPACK_zptcon( lapack_int* n, const double* d, const lapack_complex_double* e, double* anorm, double* rcond, double* work, lapack_int *info ); void LAPACK_ssycon( char* uplo, lapack_int* n, const float* a, lapack_int* lda, const lapack_int* ipiv, float* anorm, float* rcond, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dsycon( char* uplo, lapack_int* n, const double* a, lapack_int* lda, const lapack_int* ipiv, double* anorm, double* rcond, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_csycon( char* uplo, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, float* anorm, float* rcond, lapack_complex_float* work, lapack_int *info ); void LAPACK_zsycon( char* uplo, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, double* anorm, double* rcond, lapack_complex_double* work, lapack_int 
*info ); void LAPACK_checon( char* uplo, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, float* anorm, float* rcond, lapack_complex_float* work, lapack_int *info ); void LAPACK_zhecon( char* uplo, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, double* anorm, double* rcond, lapack_complex_double* work, lapack_int *info ); void LAPACK_sspcon( char* uplo, lapack_int* n, const float* ap, const lapack_int* ipiv, float* anorm, float* rcond, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dspcon( char* uplo, lapack_int* n, const double* ap, const lapack_int* ipiv, double* anorm, double* rcond, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cspcon( char* uplo, lapack_int* n, const lapack_complex_float* ap, const lapack_int* ipiv, float* anorm, float* rcond, lapack_complex_float* work, lapack_int *info ); void LAPACK_zspcon( char* uplo, lapack_int* n, const lapack_complex_double* ap, const lapack_int* ipiv, double* anorm, double* rcond, lapack_complex_double* work, lapack_int *info ); void LAPACK_chpcon( char* uplo, lapack_int* n, const lapack_complex_float* ap, const lapack_int* ipiv, float* anorm, float* rcond, lapack_complex_float* work, lapack_int *info ); void LAPACK_zhpcon( char* uplo, lapack_int* n, const lapack_complex_double* ap, const lapack_int* ipiv, double* anorm, double* rcond, lapack_complex_double* work, lapack_int *info ); void LAPACK_strcon( char* norm, char* uplo, char* diag, lapack_int* n, const float* a, lapack_int* lda, float* rcond, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dtrcon( char* norm, char* uplo, char* diag, lapack_int* n, const double* a, lapack_int* lda, double* rcond, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_ctrcon( char* norm, char* uplo, char* diag, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, float* rcond, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_ztrcon( char* norm, char* uplo, char* diag, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, double* rcond, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_stpcon( char* norm, char* uplo, char* diag, lapack_int* n, const float* ap, float* rcond, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dtpcon( char* norm, char* uplo, char* diag, lapack_int* n, const double* ap, double* rcond, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_ctpcon( char* norm, char* uplo, char* diag, lapack_int* n, const lapack_complex_float* ap, float* rcond, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_ztpcon( char* norm, char* uplo, char* diag, lapack_int* n, const lapack_complex_double* ap, double* rcond, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_stbcon( char* norm, char* uplo, char* diag, lapack_int* n, lapack_int* kd, const float* ab, lapack_int* ldab, float* rcond, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dtbcon( char* norm, char* uplo, char* diag, lapack_int* n, lapack_int* kd, const double* ab, lapack_int* ldab, double* rcond, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_ctbcon( char* norm, char* uplo, char* diag, lapack_int* n, lapack_int* kd, const lapack_complex_float* ab, lapack_int* ldab, float* rcond, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_ztbcon( char* norm, char* uplo, char* diag, lapack_int* n, lapack_int* 
kd, const lapack_complex_double* ab, lapack_int* ldab, double* rcond, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sgerfs( char* trans, lapack_int* n, lapack_int* nrhs, const float* a, lapack_int* lda, const float* af, lapack_int* ldaf, const lapack_int* ipiv, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dgerfs( char* trans, lapack_int* n, lapack_int* nrhs, const double* a, lapack_int* lda, const double* af, lapack_int* ldaf, const lapack_int* ipiv, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cgerfs( char* trans, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* af, lapack_int* ldaf, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zgerfs( char* trans, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* af, lapack_int* ldaf, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_dgerfsx( char* trans, char* equed, lapack_int* n, lapack_int* nrhs, const double* a, lapack_int* lda, const double* af, lapack_int* ldaf, const lapack_int* ipiv, const double* r, const double* c, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_sgerfsx( char* trans, char* equed, lapack_int* n, lapack_int* nrhs, const float* a, lapack_int* lda, const float* af, lapack_int* ldaf, const lapack_int* ipiv, const float* r, const float* c, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_zgerfsx( char* trans, char* equed, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* af, lapack_int* ldaf, const lapack_int* ipiv, const double* r, const double* c, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_cgerfsx( char* trans, char* equed, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* af, lapack_int* ldaf, const lapack_int* ipiv, const float* r, const float* c, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_sgbrfs( char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, const float* ab, lapack_int* 
ldab, const float* afb, lapack_int* ldafb, const lapack_int* ipiv, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dgbrfs( char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, const double* ab, lapack_int* ldab, const double* afb, lapack_int* ldafb, const lapack_int* ipiv, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cgbrfs( char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, const lapack_complex_float* ab, lapack_int* ldab, const lapack_complex_float* afb, lapack_int* ldafb, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zgbrfs( char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, const lapack_complex_double* ab, lapack_int* ldab, const lapack_complex_double* afb, lapack_int* ldafb, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_dgbrfsx( char* trans, char* equed, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, const double* ab, lapack_int* ldab, const double* afb, lapack_int* ldafb, const lapack_int* ipiv, const double* r, const double* c, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_sgbrfsx( char* trans, char* equed, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, const float* ab, lapack_int* ldab, const float* afb, lapack_int* ldafb, const lapack_int* ipiv, const float* r, const float* c, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_zgbrfsx( char* trans, char* equed, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, const lapack_complex_double* ab, lapack_int* ldab, const lapack_complex_double* afb, lapack_int* ldafb, const lapack_int* ipiv, const double* r, const double* c, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_cgbrfsx( char* trans, char* equed, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, const lapack_complex_float* ab, lapack_int* ldab, const lapack_complex_float* afb, lapack_int* ldafb, const lapack_int* ipiv, const float* r, const float* c, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_sgtrfs( char* trans, lapack_int* n, lapack_int* nrhs, const float* dl, const float* d, 
const float* du, const float* dlf, const float* df, const float* duf, const float* du2, const lapack_int* ipiv, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dgtrfs( char* trans, lapack_int* n, lapack_int* nrhs, const double* dl, const double* d, const double* du, const double* dlf, const double* df, const double* duf, const double* du2, const lapack_int* ipiv, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cgtrfs( char* trans, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* dl, const lapack_complex_float* d, const lapack_complex_float* du, const lapack_complex_float* dlf, const lapack_complex_float* df, const lapack_complex_float* duf, const lapack_complex_float* du2, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zgtrfs( char* trans, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* dl, const lapack_complex_double* d, const lapack_complex_double* du, const lapack_complex_double* dlf, const lapack_complex_double* df, const lapack_complex_double* duf, const lapack_complex_double* du2, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sporfs( char* uplo, lapack_int* n, lapack_int* nrhs, const float* a, lapack_int* lda, const float* af, lapack_int* ldaf, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dporfs( char* uplo, lapack_int* n, lapack_int* nrhs, const double* a, lapack_int* lda, const double* af, lapack_int* ldaf, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cporfs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* af, lapack_int* ldaf, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zporfs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* af, lapack_int* ldaf, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_dporfsx( char* uplo, char* equed, lapack_int* n, lapack_int* nrhs, const double* a, lapack_int* lda, const double* af, lapack_int* ldaf, const double* s, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_sporfsx( char* uplo, char* equed, lapack_int* n, lapack_int* nrhs, const float* a, lapack_int* lda, const float* af, lapack_int* ldaf, const float* s, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* berr, lapack_int* n_err_bnds, float* 
err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_zporfsx( char* uplo, char* equed, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* af, lapack_int* ldaf, const double* s, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_cporfsx( char* uplo, char* equed, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* af, lapack_int* ldaf, const float* s, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_spprfs( char* uplo, lapack_int* n, lapack_int* nrhs, const float* ap, const float* afp, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dpprfs( char* uplo, lapack_int* n, lapack_int* nrhs, const double* ap, const double* afp, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cpprfs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* ap, const lapack_complex_float* afp, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zpprfs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* ap, const lapack_complex_double* afp, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_spbrfs( char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const float* ab, lapack_int* ldab, const float* afb, lapack_int* ldafb, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dpbrfs( char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const double* ab, lapack_int* ldab, const double* afb, lapack_int* ldafb, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cpbrfs( char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const lapack_complex_float* ab, lapack_int* ldab, const lapack_complex_float* afb, lapack_int* ldafb, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zpbrfs( char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const lapack_complex_double* ab, lapack_int* ldab, const lapack_complex_double* afb, lapack_int* ldafb, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sptrfs( lapack_int* n, 
lapack_int* nrhs, const float* d, const float* e, const float* df, const float* ef, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* ferr, float* berr, float* work, lapack_int *info ); void LAPACK_dptrfs( lapack_int* n, lapack_int* nrhs, const double* d, const double* e, const double* df, const double* ef, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* ferr, double* berr, double* work, lapack_int *info ); void LAPACK_cptrfs( char* uplo, lapack_int* n, lapack_int* nrhs, const float* d, const lapack_complex_float* e, const float* df, const lapack_complex_float* ef, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zptrfs( char* uplo, lapack_int* n, lapack_int* nrhs, const double* d, const lapack_complex_double* e, const double* df, const lapack_complex_double* ef, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_ssyrfs( char* uplo, lapack_int* n, lapack_int* nrhs, const float* a, lapack_int* lda, const float* af, lapack_int* ldaf, const lapack_int* ipiv, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dsyrfs( char* uplo, lapack_int* n, lapack_int* nrhs, const double* a, lapack_int* lda, const double* af, lapack_int* ldaf, const lapack_int* ipiv, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_csyrfs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* af, lapack_int* ldaf, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zsyrfs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* af, lapack_int* ldaf, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_dsyrfsx( char* uplo, char* equed, lapack_int* n, lapack_int* nrhs, const double* a, lapack_int* lda, const double* af, lapack_int* ldaf, const lapack_int* ipiv, const double* s, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_ssyrfsx( char* uplo, char* equed, lapack_int* n, lapack_int* nrhs, const float* a, lapack_int* lda, const float* af, lapack_int* ldaf, const lapack_int* ipiv, const float* s, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_zsyrfsx( char* uplo, char* equed, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* af, lapack_int* ldaf, const lapack_int* ipiv, const 
double* s, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_csyrfsx( char* uplo, char* equed, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* af, lapack_int* ldaf, const lapack_int* ipiv, const float* s, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_cherfs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* af, lapack_int* ldaf, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zherfs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* af, lapack_int* ldaf, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_zherfsx( char* uplo, char* equed, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* af, lapack_int* ldaf, const lapack_int* ipiv, const double* s, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_cherfsx( char* uplo, char* equed, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* af, lapack_int* ldaf, const lapack_int* ipiv, const float* s, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_ssprfs( char* uplo, lapack_int* n, lapack_int* nrhs, const float* ap, const float* afp, const lapack_int* ipiv, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dsprfs( char* uplo, lapack_int* n, lapack_int* nrhs, const double* ap, const double* afp, const lapack_int* ipiv, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_csprfs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* ap, const lapack_complex_float* afp, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zsprfs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* ap, const lapack_complex_double* afp, const 
lapack_int* ipiv, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_chprfs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* ap, const lapack_complex_float* afp, const lapack_int* ipiv, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zhprfs( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* ap, const lapack_complex_double* afp, const lapack_int* ipiv, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_strrfs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const float* a, lapack_int* lda, const float* b, lapack_int* ldb, const float* x, lapack_int* ldx, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dtrrfs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const double* a, lapack_int* lda, const double* b, lapack_int* ldb, const double* x, lapack_int* ldx, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_ctrrfs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* b, lapack_int* ldb, const lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_ztrrfs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* b, lapack_int* ldb, const lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_stprfs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const float* ap, const float* b, lapack_int* ldb, const float* x, lapack_int* ldx, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dtprfs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const double* ap, const double* b, lapack_int* ldb, const double* x, lapack_int* ldx, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_ctprfs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* ap, const lapack_complex_float* b, lapack_int* ldb, const lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_ztprfs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* ap, const lapack_complex_double* b, lapack_int* ldb, const lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_stbrfs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const float* ab, lapack_int* ldab, const float* b, lapack_int* ldb, const float* x, lapack_int* ldx, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dtbrfs( char* uplo, char* trans, char* diag, lapack_int* n, 
lapack_int* kd, lapack_int* nrhs, const double* ab, lapack_int* ldab, const double* b, lapack_int* ldb, const double* x, lapack_int* ldx, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_ctbrfs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const lapack_complex_float* ab, lapack_int* ldab, const lapack_complex_float* b, lapack_int* ldb, const lapack_complex_float* x, lapack_int* ldx, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_ztbrfs( char* uplo, char* trans, char* diag, lapack_int* n, lapack_int* kd, lapack_int* nrhs, const lapack_complex_double* ab, lapack_int* ldab, const lapack_complex_double* b, lapack_int* ldb, const lapack_complex_double* x, lapack_int* ldx, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sgetri( lapack_int* n, float* a, lapack_int* lda, const lapack_int* ipiv, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgetri( lapack_int* n, double* a, lapack_int* lda, const lapack_int* ipiv, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgetri( lapack_int* n, lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zgetri( lapack_int* n, lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_spotri( char* uplo, lapack_int* n, float* a, lapack_int* lda, lapack_int *info ); void LAPACK_dpotri( char* uplo, lapack_int* n, double* a, lapack_int* lda, lapack_int *info ); void LAPACK_cpotri( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int *info ); void LAPACK_zpotri( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int *info ); void LAPACK_dpftri( char* transr, char* uplo, lapack_int* n, double* a, lapack_int *info ); void LAPACK_spftri( char* transr, char* uplo, lapack_int* n, float* a, lapack_int *info ); void LAPACK_zpftri( char* transr, char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int *info ); void LAPACK_cpftri( char* transr, char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int *info ); void LAPACK_spptri( char* uplo, lapack_int* n, float* ap, lapack_int *info ); void LAPACK_dpptri( char* uplo, lapack_int* n, double* ap, lapack_int *info ); void LAPACK_cpptri( char* uplo, lapack_int* n, lapack_complex_float* ap, lapack_int *info ); void LAPACK_zpptri( char* uplo, lapack_int* n, lapack_complex_double* ap, lapack_int *info ); void LAPACK_ssytri( char* uplo, lapack_int* n, float* a, lapack_int* lda, const lapack_int* ipiv, float* work, lapack_int *info ); void LAPACK_dsytri( char* uplo, lapack_int* n, double* a, lapack_int* lda, const lapack_int* ipiv, double* work, lapack_int *info ); void LAPACK_csytri( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int *info ); void LAPACK_zsytri( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_double* work, lapack_int *info ); void LAPACK_chetri( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int *info ); void LAPACK_zhetri( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, 
lapack_complex_double* work, lapack_int *info ); void LAPACK_ssptri( char* uplo, lapack_int* n, float* ap, const lapack_int* ipiv, float* work, lapack_int *info ); void LAPACK_dsptri( char* uplo, lapack_int* n, double* ap, const lapack_int* ipiv, double* work, lapack_int *info ); void LAPACK_csptri( char* uplo, lapack_int* n, lapack_complex_float* ap, const lapack_int* ipiv, lapack_complex_float* work, lapack_int *info ); void LAPACK_zsptri( char* uplo, lapack_int* n, lapack_complex_double* ap, const lapack_int* ipiv, lapack_complex_double* work, lapack_int *info ); void LAPACK_chptri( char* uplo, lapack_int* n, lapack_complex_float* ap, const lapack_int* ipiv, lapack_complex_float* work, lapack_int *info ); void LAPACK_zhptri( char* uplo, lapack_int* n, lapack_complex_double* ap, const lapack_int* ipiv, lapack_complex_double* work, lapack_int *info ); void LAPACK_strtri( char* uplo, char* diag, lapack_int* n, float* a, lapack_int* lda, lapack_int *info ); void LAPACK_dtrtri( char* uplo, char* diag, lapack_int* n, double* a, lapack_int* lda, lapack_int *info ); void LAPACK_ctrtri( char* uplo, char* diag, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int *info ); void LAPACK_ztrtri( char* uplo, char* diag, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int *info ); void LAPACK_dtftri( char* transr, char* uplo, char* diag, lapack_int* n, double* a, lapack_int *info ); void LAPACK_stftri( char* transr, char* uplo, char* diag, lapack_int* n, float* a, lapack_int *info ); void LAPACK_ztftri( char* transr, char* uplo, char* diag, lapack_int* n, lapack_complex_double* a, lapack_int *info ); void LAPACK_ctftri( char* transr, char* uplo, char* diag, lapack_int* n, lapack_complex_float* a, lapack_int *info ); void LAPACK_stptri( char* uplo, char* diag, lapack_int* n, float* ap, lapack_int *info ); void LAPACK_dtptri( char* uplo, char* diag, lapack_int* n, double* ap, lapack_int *info ); void LAPACK_ctptri( char* uplo, char* diag, lapack_int* n, lapack_complex_float* ap, lapack_int *info ); void LAPACK_ztptri( char* uplo, char* diag, lapack_int* n, lapack_complex_double* ap, lapack_int *info ); void LAPACK_sgeequ( lapack_int* m, lapack_int* n, const float* a, lapack_int* lda, float* r, float* c, float* rowcnd, float* colcnd, float* amax, lapack_int *info ); void LAPACK_dgeequ( lapack_int* m, lapack_int* n, const double* a, lapack_int* lda, double* r, double* c, double* rowcnd, double* colcnd, double* amax, lapack_int *info ); void LAPACK_cgeequ( lapack_int* m, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, float* r, float* c, float* rowcnd, float* colcnd, float* amax, lapack_int *info ); void LAPACK_zgeequ( lapack_int* m, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, double* r, double* c, double* rowcnd, double* colcnd, double* amax, lapack_int *info ); void LAPACK_dgeequb( lapack_int* m, lapack_int* n, const double* a, lapack_int* lda, double* r, double* c, double* rowcnd, double* colcnd, double* amax, lapack_int *info ); void LAPACK_sgeequb( lapack_int* m, lapack_int* n, const float* a, lapack_int* lda, float* r, float* c, float* rowcnd, float* colcnd, float* amax, lapack_int *info ); void LAPACK_zgeequb( lapack_int* m, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, double* r, double* c, double* rowcnd, double* colcnd, double* amax, lapack_int *info ); void LAPACK_cgeequb( lapack_int* m, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, float* r, float* c, float* rowcnd, float* colcnd, float* amax, 
lapack_int *info ); void LAPACK_sgbequ( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, const float* ab, lapack_int* ldab, float* r, float* c, float* rowcnd, float* colcnd, float* amax, lapack_int *info ); void LAPACK_dgbequ( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, const double* ab, lapack_int* ldab, double* r, double* c, double* rowcnd, double* colcnd, double* amax, lapack_int *info ); void LAPACK_cgbequ( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, const lapack_complex_float* ab, lapack_int* ldab, float* r, float* c, float* rowcnd, float* colcnd, float* amax, lapack_int *info ); void LAPACK_zgbequ( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, const lapack_complex_double* ab, lapack_int* ldab, double* r, double* c, double* rowcnd, double* colcnd, double* amax, lapack_int *info ); void LAPACK_dgbequb( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, const double* ab, lapack_int* ldab, double* r, double* c, double* rowcnd, double* colcnd, double* amax, lapack_int *info ); void LAPACK_sgbequb( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, const float* ab, lapack_int* ldab, float* r, float* c, float* rowcnd, float* colcnd, float* amax, lapack_int *info ); void LAPACK_zgbequb( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, const lapack_complex_double* ab, lapack_int* ldab, double* r, double* c, double* rowcnd, double* colcnd, double* amax, lapack_int *info ); void LAPACK_cgbequb( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, const lapack_complex_float* ab, lapack_int* ldab, float* r, float* c, float* rowcnd, float* colcnd, float* amax, lapack_int *info ); void LAPACK_spoequ( lapack_int* n, const float* a, lapack_int* lda, float* s, float* scond, float* amax, lapack_int *info ); void LAPACK_dpoequ( lapack_int* n, const double* a, lapack_int* lda, double* s, double* scond, double* amax, lapack_int *info ); void LAPACK_cpoequ( lapack_int* n, const lapack_complex_float* a, lapack_int* lda, float* s, float* scond, float* amax, lapack_int *info ); void LAPACK_zpoequ( lapack_int* n, const lapack_complex_double* a, lapack_int* lda, double* s, double* scond, double* amax, lapack_int *info ); void LAPACK_dpoequb( lapack_int* n, const double* a, lapack_int* lda, double* s, double* scond, double* amax, lapack_int *info ); void LAPACK_spoequb( lapack_int* n, const float* a, lapack_int* lda, float* s, float* scond, float* amax, lapack_int *info ); void LAPACK_zpoequb( lapack_int* n, const lapack_complex_double* a, lapack_int* lda, double* s, double* scond, double* amax, lapack_int *info ); void LAPACK_cpoequb( lapack_int* n, const lapack_complex_float* a, lapack_int* lda, float* s, float* scond, float* amax, lapack_int *info ); void LAPACK_sppequ( char* uplo, lapack_int* n, const float* ap, float* s, float* scond, float* amax, lapack_int *info ); void LAPACK_dppequ( char* uplo, lapack_int* n, const double* ap, double* s, double* scond, double* amax, lapack_int *info ); void LAPACK_cppequ( char* uplo, lapack_int* n, const lapack_complex_float* ap, float* s, float* scond, float* amax, lapack_int *info ); void LAPACK_zppequ( char* uplo, lapack_int* n, const lapack_complex_double* ap, double* s, double* scond, double* amax, lapack_int *info ); void LAPACK_spbequ( char* uplo, lapack_int* n, lapack_int* kd, const float* ab, lapack_int* ldab, float* s, float* scond, float* amax, lapack_int *info ); void LAPACK_dpbequ( char* uplo, lapack_int* n, lapack_int* kd, const double* ab, 
lapack_int* ldab, double* s, double* scond, double* amax, lapack_int *info ); void LAPACK_cpbequ( char* uplo, lapack_int* n, lapack_int* kd, const lapack_complex_float* ab, lapack_int* ldab, float* s, float* scond, float* amax, lapack_int *info ); void LAPACK_zpbequ( char* uplo, lapack_int* n, lapack_int* kd, const lapack_complex_double* ab, lapack_int* ldab, double* s, double* scond, double* amax, lapack_int *info ); void LAPACK_dsyequb( char* uplo, lapack_int* n, const double* a, lapack_int* lda, double* s, double* scond, double* amax, double* work, lapack_int *info ); void LAPACK_ssyequb( char* uplo, lapack_int* n, const float* a, lapack_int* lda, float* s, float* scond, float* amax, float* work, lapack_int *info ); void LAPACK_zsyequb( char* uplo, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, double* s, double* scond, double* amax, lapack_complex_double* work, lapack_int *info ); void LAPACK_csyequb( char* uplo, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, float* s, float* scond, float* amax, lapack_complex_float* work, lapack_int *info ); void LAPACK_zheequb( char* uplo, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, double* s, double* scond, double* amax, lapack_complex_double* work, lapack_int *info ); void LAPACK_cheequb( char* uplo, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, float* s, float* scond, float* amax, lapack_complex_float* work, lapack_int *info ); void LAPACK_sgesv( lapack_int* n, lapack_int* nrhs, float* a, lapack_int* lda, lapack_int* ipiv, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dgesv( lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, lapack_int* ipiv, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cgesv( lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zgesv( lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dsgesv( lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, lapack_int* ipiv, double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* work, float* swork, lapack_int* iter, lapack_int *info ); void LAPACK_zcgesv( lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, lapack_complex_double* work, lapack_complex_float* swork, double* rwork, lapack_int* iter, lapack_int *info ); void LAPACK_sgesvx( char* fact, char* trans, lapack_int* n, lapack_int* nrhs, float* a, lapack_int* lda, float* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, float* r, float* c, float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dgesvx( char* fact, char* trans, lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, double* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, double* r, double* c, double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cgesvx( char* fact, char* trans, lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, float* r, float* c, lapack_complex_float* b, 
lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zgesvx( char* fact, char* trans, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, double* r, double* c, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_dgesvxx( char* fact, char* trans, lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, double* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, double* r, double* c, double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* rpvgrw, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_sgesvxx( char* fact, char* trans, lapack_int* n, lapack_int* nrhs, float* a, lapack_int* lda, float* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, float* r, float* c, float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* rpvgrw, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_zgesvxx( char* fact, char* trans, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, double* r, double* c, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* rpvgrw, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_cgesvxx( char* fact, char* trans, lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, float* r, float* c, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* rpvgrw, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_sgbsv( lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, float* ab, lapack_int* ldab, lapack_int* ipiv, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dgbsv( lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, double* ab, lapack_int* ldab, lapack_int* ipiv, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cgbsv( lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, lapack_complex_float* ab, lapack_int* ldab, lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zgbsv( lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, lapack_complex_double* ab, lapack_int* ldab, lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_sgbsvx( char* fact, char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, float* ab, lapack_int* ldab, float* afb, lapack_int* ldafb, lapack_int* ipiv, char* equed, float* r, float* c, float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* 
rcond, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dgbsvx( char* fact, char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, double* ab, lapack_int* ldab, double* afb, lapack_int* ldafb, lapack_int* ipiv, char* equed, double* r, double* c, double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cgbsvx( char* fact, char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, lapack_complex_float* ab, lapack_int* ldab, lapack_complex_float* afb, lapack_int* ldafb, lapack_int* ipiv, char* equed, float* r, float* c, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zgbsvx( char* fact, char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, lapack_complex_double* ab, lapack_int* ldab, lapack_complex_double* afb, lapack_int* ldafb, lapack_int* ipiv, char* equed, double* r, double* c, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_dgbsvxx( char* fact, char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, double* ab, lapack_int* ldab, double* afb, lapack_int* ldafb, lapack_int* ipiv, char* equed, double* r, double* c, double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* rpvgrw, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_sgbsvxx( char* fact, char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, float* ab, lapack_int* ldab, float* afb, lapack_int* ldafb, lapack_int* ipiv, char* equed, float* r, float* c, float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* rpvgrw, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_zgbsvxx( char* fact, char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, lapack_complex_double* ab, lapack_int* ldab, lapack_complex_double* afb, lapack_int* ldafb, lapack_int* ipiv, char* equed, double* r, double* c, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* rpvgrw, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_cgbsvxx( char* fact, char* trans, lapack_int* n, lapack_int* kl, lapack_int* ku, lapack_int* nrhs, lapack_complex_float* ab, lapack_int* ldab, lapack_complex_float* afb, lapack_int* ldafb, lapack_int* ipiv, char* equed, float* r, float* c, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* rpvgrw, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_sgtsv( lapack_int* n, lapack_int* nrhs, float* dl, float* d, float* du, float* b, lapack_int* ldb, lapack_int *info ); void 
LAPACK_dgtsv( lapack_int* n, lapack_int* nrhs, double* dl, double* d, double* du, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cgtsv( lapack_int* n, lapack_int* nrhs, lapack_complex_float* dl, lapack_complex_float* d, lapack_complex_float* du, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zgtsv( lapack_int* n, lapack_int* nrhs, lapack_complex_double* dl, lapack_complex_double* d, lapack_complex_double* du, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_sgtsvx( char* fact, char* trans, lapack_int* n, lapack_int* nrhs, const float* dl, const float* d, const float* du, float* dlf, float* df, float* duf, float* du2, lapack_int* ipiv, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dgtsvx( char* fact, char* trans, lapack_int* n, lapack_int* nrhs, const double* dl, const double* d, const double* du, double* dlf, double* df, double* duf, double* du2, lapack_int* ipiv, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cgtsvx( char* fact, char* trans, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* dl, const lapack_complex_float* d, const lapack_complex_float* du, lapack_complex_float* dlf, lapack_complex_float* df, lapack_complex_float* duf, lapack_complex_float* du2, lapack_int* ipiv, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zgtsvx( char* fact, char* trans, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* dl, const lapack_complex_double* d, const lapack_complex_double* du, lapack_complex_double* dlf, lapack_complex_double* df, lapack_complex_double* duf, lapack_complex_double* du2, lapack_int* ipiv, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sposv( char* uplo, lapack_int* n, lapack_int* nrhs, float* a, lapack_int* lda, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dposv( char* uplo, lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cposv( char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zposv( char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dsposv( char* uplo, lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* work, float* swork, lapack_int* iter, lapack_int *info ); void LAPACK_zcposv( char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, lapack_complex_double* work, lapack_complex_float* swork, double* rwork, lapack_int* iter, lapack_int *info ); void LAPACK_sposvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, float* a, lapack_int* lda, float* af, lapack_int* ldaf, char* equed, float* s, float* b, lapack_int* ldb, float* x, lapack_int* 
ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dposvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, double* af, lapack_int* ldaf, char* equed, double* s, double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cposvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* af, lapack_int* ldaf, char* equed, float* s, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zposvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* af, lapack_int* ldaf, char* equed, double* s, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_dposvxx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, double* af, lapack_int* ldaf, char* equed, double* s, double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* rpvgrw, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_sposvxx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, float* a, lapack_int* lda, float* af, lapack_int* ldaf, char* equed, float* s, float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* rpvgrw, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_zposvxx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* af, lapack_int* ldaf, char* equed, double* s, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* rpvgrw, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_cposvxx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* af, lapack_int* ldaf, char* equed, float* s, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* rpvgrw, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_sppsv( char* uplo, lapack_int* n, lapack_int* nrhs, float* ap, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dppsv( char* uplo, lapack_int* n, lapack_int* nrhs, double* ap, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cppsv( char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_float* ap, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zppsv( char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_double* ap, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_sppsvx( char* fact, char* 
uplo, lapack_int* n, lapack_int* nrhs, float* ap, float* afp, char* equed, float* s, float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dppsvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, double* ap, double* afp, char* equed, double* s, double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cppsvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_float* ap, lapack_complex_float* afp, char* equed, float* s, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zppsvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_double* ap, lapack_complex_double* afp, char* equed, double* s, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_spbsv( char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, float* ab, lapack_int* ldab, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dpbsv( char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, double* ab, lapack_int* ldab, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cpbsv( char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, lapack_complex_float* ab, lapack_int* ldab, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zpbsv( char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, lapack_complex_double* ab, lapack_int* ldab, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_spbsvx( char* fact, char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, float* ab, lapack_int* ldab, float* afb, lapack_int* ldafb, char* equed, float* s, float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dpbsvx( char* fact, char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, double* ab, lapack_int* ldab, double* afb, lapack_int* ldafb, char* equed, double* s, double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cpbsvx( char* fact, char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, lapack_complex_float* ab, lapack_int* ldab, lapack_complex_float* afb, lapack_int* ldafb, char* equed, float* s, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zpbsvx( char* fact, char* uplo, lapack_int* n, lapack_int* kd, lapack_int* nrhs, lapack_complex_double* ab, lapack_int* ldab, lapack_complex_double* afb, lapack_int* ldafb, char* equed, double* s, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sptsv( lapack_int* n, lapack_int* nrhs, float* d, float* e, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dptsv( lapack_int* n, lapack_int* nrhs, double* d, double* e, double* b, lapack_int* ldb, 
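/*
 * Usage sketch for the banded Cholesky driver dpbsv declared above: solve
 * A*x = b where A is symmetric positive definite with bandwidth kd. The band
 * is stored column by column in ab with ldab >= kd + 1; with uplo = 'U',
 * entry A(i,j) lives at ab[kd + i - j + j*ldab] (0-based indices). The
 * tridiagonal data below (kd = 1) is illustrative only.
 *
 *   char uplo = 'U';
 *   lapack_int n = 4, kd = 1, nrhs = 1, ldab = 2, ldb = 4, info = 0;
 *   double ab[8] = { 0.0, 2.0,        column 0: unused, diagonal
 *                   -1.0, 2.0,        column 1: superdiagonal, diagonal
 *                   -1.0, 2.0,
 *                   -1.0, 2.0 };
 *   double b[4] = { 1.0, 0.0, 0.0, 1.0 };
 *   LAPACK_dpbsv(&uplo, &n, &kd, &nrhs, ab, &ldab, b, &ldb, &info);
 *   On return b holds x and ab the banded Cholesky factor; info == 0 on success.
 */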
lapack_int *info ); void LAPACK_cptsv( lapack_int* n, lapack_int* nrhs, float* d, lapack_complex_float* e, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zptsv( lapack_int* n, lapack_int* nrhs, double* d, lapack_complex_double* e, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_sptsvx( char* fact, lapack_int* n, lapack_int* nrhs, const float* d, const float* e, float* df, float* ef, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int *info ); void LAPACK_dptsvx( char* fact, lapack_int* n, lapack_int* nrhs, const double* d, const double* e, double* df, double* ef, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int *info ); void LAPACK_cptsvx( char* fact, lapack_int* n, lapack_int* nrhs, const float* d, const lapack_complex_float* e, float* df, lapack_complex_float* ef, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zptsvx( char* fact, lapack_int* n, lapack_int* nrhs, const double* d, const lapack_complex_double* e, double* df, lapack_complex_double* ef, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_ssysv( char* uplo, lapack_int* n, lapack_int* nrhs, float* a, lapack_int* lda, lapack_int* ipiv, float* b, lapack_int* ldb, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dsysv( char* uplo, lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, lapack_int* ipiv, double* b, lapack_int* ldb, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_csysv( char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zsysv( char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_ssysvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, const float* a, lapack_int* lda, float* af, lapack_int* ldaf, lapack_int* ipiv, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_dsysvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, const double* a, lapack_int* lda, double* af, lapack_int* ldaf, lapack_int* ipiv, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_csysvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, lapack_complex_float* af, lapack_int* ldaf, lapack_int* ipiv, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int *info ); void LAPACK_zsysvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, 
lapack_int* lda, lapack_complex_double* af, lapack_int* ldaf, lapack_int* ipiv, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int *info ); void LAPACK_dsysvxx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, double* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, double* s, double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* rpvgrw, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_ssysvxx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, float* a, lapack_int* lda, float* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, float* s, float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* rpvgrw, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_zsysvxx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, double* s, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* rpvgrw, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_csysvxx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, float* s, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* rpvgrw, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_chesv( char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zhesv( char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_chesvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, lapack_complex_float* af, lapack_int* ldaf, lapack_int* ipiv, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int *info ); void LAPACK_zhesvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, lapack_complex_double* af, lapack_int* ldaf, lapack_int* ipiv, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int *info ); void LAPACK_zhesvxx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, 
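/*
 * Usage sketch for the symmetric indefinite driver dsysv declared above,
 * illustrating the workspace-query idiom used by the lwork-based routines in
 * this header: a first call with lwork = -1 returns the optimal workspace
 * size in work[0]. Assumes <stdlib.h> for malloc/free; the arrays A and b
 * (column-major, n-by-n and n-by-nrhs) are set up elsewhere.
 *
 *   char uplo = 'U';
 *   lapack_int n = 4, nrhs = 1, lda = 4, ldb = 4, lwork = -1, info = 0;
 *   lapack_int ipiv[4];
 *   double wkopt;
 *   LAPACK_dsysv(&uplo, &n, &nrhs, A, &lda, ipiv, b, &ldb, &wkopt, &lwork, &info);
 *   lwork = (lapack_int)wkopt;
 *   double *work = (double*)malloc((size_t)lwork * sizeof(double));
 *   LAPACK_dsysv(&uplo, &n, &nrhs, A, &lda, ipiv, b, &ldb, work, &lwork, &info);
 *   free(work);
 *   On return b holds x, A its Bunch-Kaufman factorization, ipiv the pivots.
 */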
lapack_int* lda, lapack_complex_double* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, double* s, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* rpvgrw, double* berr, lapack_int* n_err_bnds, double* err_bnds_norm, double* err_bnds_comp, lapack_int* nparams, double* params, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_chesvxx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* af, lapack_int* ldaf, lapack_int* ipiv, char* equed, float* s, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* rpvgrw, float* berr, lapack_int* n_err_bnds, float* err_bnds_norm, float* err_bnds_comp, lapack_int* nparams, float* params, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_sspsv( char* uplo, lapack_int* n, lapack_int* nrhs, float* ap, lapack_int* ipiv, float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dspsv( char* uplo, lapack_int* n, lapack_int* nrhs, double* ap, lapack_int* ipiv, double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_cspsv( char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_float* ap, lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zspsv( char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_double* ap, lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_sspsvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, const float* ap, float* afp, lapack_int* ipiv, const float* b, lapack_int* ldb, float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dspsvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, const double* ap, double* afp, lapack_int* ipiv, const double* b, lapack_int* ldb, double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cspsvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* ap, lapack_complex_float* afp, lapack_int* ipiv, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zspsvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* ap, lapack_complex_double* afp, lapack_int* ipiv, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_chpsv( char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_float* ap, lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zhpsv( char* uplo, lapack_int* n, lapack_int* nrhs, lapack_complex_double* ap, lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_chpsvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* ap, lapack_complex_float* afp, lapack_int* ipiv, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* x, lapack_int* ldx, float* rcond, float* ferr, float* berr, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zhpsvx( char* fact, char* uplo, lapack_int* n, lapack_int* nrhs, 
const lapack_complex_double* ap, lapack_complex_double* afp, lapack_int* ipiv, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* x, lapack_int* ldx, double* rcond, double* ferr, double* berr, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sgeqrf( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgeqrf( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgeqrf( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zgeqrf( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sgeqpf( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, lapack_int* jpvt, float* tau, float* work, lapack_int *info ); void LAPACK_dgeqpf( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, lapack_int* jpvt, double* tau, double* work, lapack_int *info ); void LAPACK_cgeqpf( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int* jpvt, lapack_complex_float* tau, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zgeqpf( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int* jpvt, lapack_complex_double* tau, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sgeqp3( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, lapack_int* jpvt, float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgeqp3( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, lapack_int* jpvt, double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgeqp3( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int* jpvt, lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int *info ); void LAPACK_zgeqp3( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int* jpvt, lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int *info ); void LAPACK_sorgqr( lapack_int* m, lapack_int* n, lapack_int* k, float* a, lapack_int* lda, const float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dorgqr( lapack_int* m, lapack_int* n, lapack_int* k, double* a, lapack_int* lda, const double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sormqr( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const float* a, lapack_int* lda, const float* tau, float* c, lapack_int* ldc, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dormqr( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const double* a, lapack_int* lda, const double* tau, double* c, lapack_int* ldc, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cungqr( lapack_int* m, lapack_int* n, lapack_int* k, lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zungqr( lapack_int* m, lapack_int* n, lapack_int* k, lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, 
lapack_int *info ); void LAPACK_cunmqr( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int* ldc, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zunmqr( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int* ldc, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sgelqf( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgelqf( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgelqf( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zgelqf( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sorglq( lapack_int* m, lapack_int* n, lapack_int* k, float* a, lapack_int* lda, const float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dorglq( lapack_int* m, lapack_int* n, lapack_int* k, double* a, lapack_int* lda, const double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sormlq( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const float* a, lapack_int* lda, const float* tau, float* c, lapack_int* ldc, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dormlq( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const double* a, lapack_int* lda, const double* tau, double* c, lapack_int* ldc, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cunglq( lapack_int* m, lapack_int* n, lapack_int* k, lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zunglq( lapack_int* m, lapack_int* n, lapack_int* k, lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cunmlq( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int* ldc, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zunmlq( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int* ldc, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sgeqlf( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgeqlf( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgeqlf( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zgeqlf( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* 
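/*
 * Usage sketch for the QR routines declared above: dgeqrf factors a
 * column-major m-by-n matrix A into Q*R (R in the upper triangle, Householder
 * reflectors below it, scalar factors in tau), and dorgqr then overwrites the
 * first n columns of A with an explicit orthonormal Q. A separate workspace
 * query per routine is the safe pattern; sizes below are illustrative and A
 * is assumed to be filled in elsewhere.
 *
 *   lapack_int m = 5, n = 3, k = 3, lda = 5, lwork = -1, info = 0;
 *   double tau[3], wkopt;
 *   LAPACK_dgeqrf(&m, &n, A, &lda, tau, &wkopt, &lwork, &info);     query
 *   lwork = (lapack_int)wkopt;
 *   double *work = (double*)malloc((size_t)lwork * sizeof(double));
 *   LAPACK_dgeqrf(&m, &n, A, &lda, tau, work, &lwork, &info);       factor
 *   free(work);
 *   lwork = -1;
 *   LAPACK_dorgqr(&m, &n, &k, A, &lda, tau, &wkopt, &lwork, &info); query
 *   lwork = (lapack_int)wkopt;
 *   work = (double*)malloc((size_t)lwork * sizeof(double));
 *   LAPACK_dorgqr(&m, &n, &k, A, &lda, tau, work, &lwork, &info);   form Q
 *   free(work);
 */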
lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sorgql( lapack_int* m, lapack_int* n, lapack_int* k, float* a, lapack_int* lda, const float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dorgql( lapack_int* m, lapack_int* n, lapack_int* k, double* a, lapack_int* lda, const double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cungql( lapack_int* m, lapack_int* n, lapack_int* k, lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zungql( lapack_int* m, lapack_int* n, lapack_int* k, lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sormql( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const float* a, lapack_int* lda, const float* tau, float* c, lapack_int* ldc, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dormql( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const double* a, lapack_int* lda, const double* tau, double* c, lapack_int* ldc, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cunmql( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int* ldc, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zunmql( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int* ldc, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sgerqf( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgerqf( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgerqf( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zgerqf( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sorgrq( lapack_int* m, lapack_int* n, lapack_int* k, float* a, lapack_int* lda, const float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dorgrq( lapack_int* m, lapack_int* n, lapack_int* k, double* a, lapack_int* lda, const double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cungrq( lapack_int* m, lapack_int* n, lapack_int* k, lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zungrq( lapack_int* m, lapack_int* n, lapack_int* k, lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sormrq( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const float* a, lapack_int* lda, const float* tau, float* c, lapack_int* ldc, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dormrq( char* side, char* trans, lapack_int* m, lapack_int* n, 
lapack_int* k, const double* a, lapack_int* lda, const double* tau, double* c, lapack_int* ldc, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cunmrq( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int* ldc, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zunmrq( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int* ldc, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_stzrzf( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dtzrzf( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_ctzrzf( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_ztzrzf( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sormrz( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* l, const float* a, lapack_int* lda, const float* tau, float* c, lapack_int* ldc, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dormrz( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* l, const double* a, lapack_int* lda, const double* tau, double* c, lapack_int* ldc, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cunmrz( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* l, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int* ldc, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zunmrz( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* l, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int* ldc, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sggqrf( lapack_int* n, lapack_int* m, lapack_int* p, float* a, lapack_int* lda, float* taua, float* b, lapack_int* ldb, float* taub, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dggqrf( lapack_int* n, lapack_int* m, lapack_int* p, double* a, lapack_int* lda, double* taua, double* b, lapack_int* ldb, double* taub, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cggqrf( lapack_int* n, lapack_int* m, lapack_int* p, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* taua, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* taub, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zggqrf( lapack_int* n, lapack_int* m, lapack_int* p, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* taua, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* taub, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sggrqf( lapack_int* m, lapack_int* p, lapack_int* n, float* a, lapack_int* lda, float* taua, float* b, lapack_int* ldb, float* taub, float* work, 
lapack_int* lwork, lapack_int *info ); void LAPACK_dggrqf( lapack_int* m, lapack_int* p, lapack_int* n, double* a, lapack_int* lda, double* taua, double* b, lapack_int* ldb, double* taub, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cggrqf( lapack_int* m, lapack_int* p, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* taua, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* taub, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zggrqf( lapack_int* m, lapack_int* p, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* taua, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* taub, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sgebrd( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* d, float* e, float* tauq, float* taup, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgebrd( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* d, double* e, double* tauq, double* taup, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgebrd( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, float* d, float* e, lapack_complex_float* tauq, lapack_complex_float* taup, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zgebrd( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, double* d, double* e, lapack_complex_double* tauq, lapack_complex_double* taup, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sgbbrd( char* vect, lapack_int* m, lapack_int* n, lapack_int* ncc, lapack_int* kl, lapack_int* ku, float* ab, lapack_int* ldab, float* d, float* e, float* q, lapack_int* ldq, float* pt, lapack_int* ldpt, float* c, lapack_int* ldc, float* work, lapack_int *info ); void LAPACK_dgbbrd( char* vect, lapack_int* m, lapack_int* n, lapack_int* ncc, lapack_int* kl, lapack_int* ku, double* ab, lapack_int* ldab, double* d, double* e, double* q, lapack_int* ldq, double* pt, lapack_int* ldpt, double* c, lapack_int* ldc, double* work, lapack_int *info ); void LAPACK_cgbbrd( char* vect, lapack_int* m, lapack_int* n, lapack_int* ncc, lapack_int* kl, lapack_int* ku, lapack_complex_float* ab, lapack_int* ldab, float* d, float* e, lapack_complex_float* q, lapack_int* ldq, lapack_complex_float* pt, lapack_int* ldpt, lapack_complex_float* c, lapack_int* ldc, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zgbbrd( char* vect, lapack_int* m, lapack_int* n, lapack_int* ncc, lapack_int* kl, lapack_int* ku, lapack_complex_double* ab, lapack_int* ldab, double* d, double* e, lapack_complex_double* q, lapack_int* ldq, lapack_complex_double* pt, lapack_int* ldpt, lapack_complex_double* c, lapack_int* ldc, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sorgbr( char* vect, lapack_int* m, lapack_int* n, lapack_int* k, float* a, lapack_int* lda, const float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dorgbr( char* vect, lapack_int* m, lapack_int* n, lapack_int* k, double* a, lapack_int* lda, const double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sormbr( char* vect, char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const float* a, lapack_int* lda, const float* tau, float* c, lapack_int* ldc, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dormbr( char* vect, char* side, char* 
trans, lapack_int* m, lapack_int* n, lapack_int* k, const double* a, lapack_int* lda, const double* tau, double* c, lapack_int* ldc, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cungbr( char* vect, lapack_int* m, lapack_int* n, lapack_int* k, lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zungbr( char* vect, lapack_int* m, lapack_int* n, lapack_int* k, lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cunmbr( char* vect, char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int* ldc, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zunmbr( char* vect, char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int* ldc, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sbdsqr( char* uplo, lapack_int* n, lapack_int* ncvt, lapack_int* nru, lapack_int* ncc, float* d, float* e, float* vt, lapack_int* ldvt, float* u, lapack_int* ldu, float* c, lapack_int* ldc, float* work, lapack_int *info ); void LAPACK_dbdsqr( char* uplo, lapack_int* n, lapack_int* ncvt, lapack_int* nru, lapack_int* ncc, double* d, double* e, double* vt, lapack_int* ldvt, double* u, lapack_int* ldu, double* c, lapack_int* ldc, double* work, lapack_int *info ); void LAPACK_cbdsqr( char* uplo, lapack_int* n, lapack_int* ncvt, lapack_int* nru, lapack_int* ncc, float* d, float* e, lapack_complex_float* vt, lapack_int* ldvt, lapack_complex_float* u, lapack_int* ldu, lapack_complex_float* c, lapack_int* ldc, float* work, lapack_int *info ); void LAPACK_zbdsqr( char* uplo, lapack_int* n, lapack_int* ncvt, lapack_int* nru, lapack_int* ncc, double* d, double* e, lapack_complex_double* vt, lapack_int* ldvt, lapack_complex_double* u, lapack_int* ldu, lapack_complex_double* c, lapack_int* ldc, double* work, lapack_int *info ); void LAPACK_sbdsdc( char* uplo, char* compq, lapack_int* n, float* d, float* e, float* u, lapack_int* ldu, float* vt, lapack_int* ldvt, float* q, lapack_int* iq, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dbdsdc( char* uplo, char* compq, lapack_int* n, double* d, double* e, double* u, lapack_int* ldu, double* vt, lapack_int* ldvt, double* q, lapack_int* iq, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_ssytrd( char* uplo, lapack_int* n, float* a, lapack_int* lda, float* d, float* e, float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dsytrd( char* uplo, lapack_int* n, double* a, lapack_int* lda, double* d, double* e, double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sorgtr( char* uplo, lapack_int* n, float* a, lapack_int* lda, const float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dorgtr( char* uplo, lapack_int* n, double* a, lapack_int* lda, const double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sormtr( char* side, char* uplo, char* trans, lapack_int* m, lapack_int* n, const float* a, lapack_int* lda, const float* tau, float* c, lapack_int* ldc, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dormtr( char* side, 
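/*
 * Usage sketch for the bidiagonalization path declared above: dgebrd reduces
 * a general m-by-n matrix to bidiagonal form (diagonal in d, off-diagonal in
 * e), and dbdsqr then computes its singular values; with ncvt = nru = ncc = 0
 * no singular vectors are accumulated (dorgbr/dormbr would be used to recover
 * them). Sizes are illustrative, A is set up elsewhere, and m >= n gives an
 * upper bidiagonal matrix (uplo = 'U').
 *
 *   lapack_int m = 6, n = 4, lda = 6, lwork = -1, info = 0;
 *   double d[4], e[3], tauq[4], taup[4], wkopt;
 *   LAPACK_dgebrd(&m, &n, A, &lda, d, e, tauq, taup, &wkopt, &lwork, &info);
 *   lwork = (lapack_int)wkopt;
 *   double *work = (double*)malloc((size_t)lwork * sizeof(double));
 *   LAPACK_dgebrd(&m, &n, A, &lda, d, e, tauq, taup, work, &lwork, &info);
 *   free(work);
 *   char uplo = 'U';
 *   lapack_int ncvt = 0, nru = 0, ncc = 0, ldvt = 1, ldu = 1, ldc = 1;
 *   double vtdum, udum, cdum, bdwork[4 * 4];
 *   LAPACK_dbdsqr(&uplo, &n, &ncvt, &nru, &ncc, d, e,
 *                 &vtdum, &ldvt, &udum, &ldu, &cdum, &ldc, bdwork, &info);
 *   d now holds the singular values of A in decreasing order.
 */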
char* uplo, char* trans, lapack_int* m, lapack_int* n, const double* a, lapack_int* lda, const double* tau, double* c, lapack_int* ldc, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_chetrd( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, float* d, float* e, lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zhetrd( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, double* d, double* e, lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cungtr( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zungtr( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cunmtr( char* side, char* uplo, char* trans, lapack_int* m, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int* ldc, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zunmtr( char* side, char* uplo, char* trans, lapack_int* m, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int* ldc, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_ssptrd( char* uplo, lapack_int* n, float* ap, float* d, float* e, float* tau, lapack_int *info ); void LAPACK_dsptrd( char* uplo, lapack_int* n, double* ap, double* d, double* e, double* tau, lapack_int *info ); void LAPACK_sopgtr( char* uplo, lapack_int* n, const float* ap, const float* tau, float* q, lapack_int* ldq, float* work, lapack_int *info ); void LAPACK_dopgtr( char* uplo, lapack_int* n, const double* ap, const double* tau, double* q, lapack_int* ldq, double* work, lapack_int *info ); void LAPACK_sopmtr( char* side, char* uplo, char* trans, lapack_int* m, lapack_int* n, const float* ap, const float* tau, float* c, lapack_int* ldc, float* work, lapack_int *info ); void LAPACK_dopmtr( char* side, char* uplo, char* trans, lapack_int* m, lapack_int* n, const double* ap, const double* tau, double* c, lapack_int* ldc, double* work, lapack_int *info ); void LAPACK_chptrd( char* uplo, lapack_int* n, lapack_complex_float* ap, float* d, float* e, lapack_complex_float* tau, lapack_int *info ); void LAPACK_zhptrd( char* uplo, lapack_int* n, lapack_complex_double* ap, double* d, double* e, lapack_complex_double* tau, lapack_int *info ); void LAPACK_cupgtr( char* uplo, lapack_int* n, const lapack_complex_float* ap, const lapack_complex_float* tau, lapack_complex_float* q, lapack_int* ldq, lapack_complex_float* work, lapack_int *info ); void LAPACK_zupgtr( char* uplo, lapack_int* n, const lapack_complex_double* ap, const lapack_complex_double* tau, lapack_complex_double* q, lapack_int* ldq, lapack_complex_double* work, lapack_int *info ); void LAPACK_cupmtr( char* side, char* uplo, char* trans, lapack_int* m, lapack_int* n, const lapack_complex_float* ap, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int* ldc, lapack_complex_float* work, lapack_int *info ); void LAPACK_zupmtr( char* side, char* uplo, char* trans, lapack_int* m, lapack_int* n, const lapack_complex_double* ap, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int* ldc, 
lapack_complex_double* work, lapack_int *info ); void LAPACK_ssbtrd( char* vect, char* uplo, lapack_int* n, lapack_int* kd, float* ab, lapack_int* ldab, float* d, float* e, float* q, lapack_int* ldq, float* work, lapack_int *info ); void LAPACK_dsbtrd( char* vect, char* uplo, lapack_int* n, lapack_int* kd, double* ab, lapack_int* ldab, double* d, double* e, double* q, lapack_int* ldq, double* work, lapack_int *info ); void LAPACK_chbtrd( char* vect, char* uplo, lapack_int* n, lapack_int* kd, lapack_complex_float* ab, lapack_int* ldab, float* d, float* e, lapack_complex_float* q, lapack_int* ldq, lapack_complex_float* work, lapack_int *info ); void LAPACK_zhbtrd( char* vect, char* uplo, lapack_int* n, lapack_int* kd, lapack_complex_double* ab, lapack_int* ldab, double* d, double* e, lapack_complex_double* q, lapack_int* ldq, lapack_complex_double* work, lapack_int *info ); void LAPACK_ssterf( lapack_int* n, float* d, float* e, lapack_int *info ); void LAPACK_dsterf( lapack_int* n, double* d, double* e, lapack_int *info ); void LAPACK_ssteqr( char* compz, lapack_int* n, float* d, float* e, float* z, lapack_int* ldz, float* work, lapack_int *info ); void LAPACK_dsteqr( char* compz, lapack_int* n, double* d, double* e, double* z, lapack_int* ldz, double* work, lapack_int *info ); void LAPACK_csteqr( char* compz, lapack_int* n, float* d, float* e, lapack_complex_float* z, lapack_int* ldz, float* work, lapack_int *info ); void LAPACK_zsteqr( char* compz, lapack_int* n, double* d, double* e, lapack_complex_double* z, lapack_int* ldz, double* work, lapack_int *info ); void LAPACK_sstemr( char* jobz, char* range, lapack_int* n, float* d, float* e, float* vl, float* vu, lapack_int* il, lapack_int* iu, lapack_int* m, float* w, float* z, lapack_int* ldz, lapack_int* nzc, lapack_int* isuppz, lapack_logical* tryrac, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dstemr( char* jobz, char* range, lapack_int* n, double* d, double* e, double* vl, double* vu, lapack_int* il, lapack_int* iu, lapack_int* m, double* w, double* z, lapack_int* ldz, lapack_int* nzc, lapack_int* isuppz, lapack_logical* tryrac, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_cstemr( char* jobz, char* range, lapack_int* n, float* d, float* e, float* vl, float* vu, lapack_int* il, lapack_int* iu, lapack_int* m, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_int* nzc, lapack_int* isuppz, lapack_logical* tryrac, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_zstemr( char* jobz, char* range, lapack_int* n, double* d, double* e, double* vl, double* vu, lapack_int* il, lapack_int* iu, lapack_int* m, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_int* nzc, lapack_int* isuppz, lapack_logical* tryrac, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_sstedc( char* compz, lapack_int* n, float* d, float* e, float* z, lapack_int* ldz, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dstedc( char* compz, lapack_int* n, double* d, double* e, double* z, lapack_int* ldz, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_cstedc( char* compz, lapack_int* n, float* d, float* e, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* lrwork, 
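/*
 * Usage sketch for a dense symmetric eigendecomposition built from the
 * routines declared above: dsytrd reduces A to tridiagonal form T (d, e),
 * dorgtr forms the orthogonal reduction matrix Q in place of A, and dsteqr
 * with compz = 'V' diagonalizes T while accumulating the eigenvectors into Q.
 * Workspace queries are omitted for brevity; sizes are illustrative and A
 * (column-major symmetric, n-by-n) is set up elsewhere.
 *
 *   char uplo = 'L', compz = 'V';
 *   lapack_int n = 4, lda = 4, lwork = 4 * 4, info = 0;
 *   double d[4], e[3], tau[3], work[4 * 4], qrwork[2 * 4 - 2];
 *   LAPACK_dsytrd(&uplo, &n, A, &lda, d, e, tau, work, &lwork, &info);
 *   LAPACK_dorgtr(&uplo, &n, A, &lda, tau, work, &lwork, &info);
 *   LAPACK_dsteqr(&compz, &n, d, e, A, &lda, qrwork, &info);
 *   On return d holds the eigenvalues in ascending order and the columns of A
 *   hold the corresponding orthonormal eigenvectors.
 */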
lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_zstedc( char* compz, lapack_int* n, double* d, double* e, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_sstegr( char* jobz, char* range, lapack_int* n, float* d, float* e, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, float* z, lapack_int* ldz, lapack_int* isuppz, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dstegr( char* jobz, char* range, lapack_int* n, double* d, double* e, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, double* z, lapack_int* ldz, lapack_int* isuppz, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_cstegr( char* jobz, char* range, lapack_int* n, float* d, float* e, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_int* isuppz, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_zstegr( char* jobz, char* range, lapack_int* n, double* d, double* e, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_int* isuppz, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_spteqr( char* compz, lapack_int* n, float* d, float* e, float* z, lapack_int* ldz, float* work, lapack_int *info ); void LAPACK_dpteqr( char* compz, lapack_int* n, double* d, double* e, double* z, lapack_int* ldz, double* work, lapack_int *info ); void LAPACK_cpteqr( char* compz, lapack_int* n, float* d, float* e, lapack_complex_float* z, lapack_int* ldz, float* work, lapack_int *info ); void LAPACK_zpteqr( char* compz, lapack_int* n, double* d, double* e, lapack_complex_double* z, lapack_int* ldz, double* work, lapack_int *info ); void LAPACK_sstebz( char* range, char* order, lapack_int* n, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, const float* d, const float* e, lapack_int* m, lapack_int* nsplit, float* w, lapack_int* iblock, lapack_int* isplit, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dstebz( char* range, char* order, lapack_int* n, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, const double* d, const double* e, lapack_int* m, lapack_int* nsplit, double* w, lapack_int* iblock, lapack_int* isplit, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_sstein( lapack_int* n, const float* d, const float* e, lapack_int* m, const float* w, const lapack_int* iblock, const lapack_int* isplit, float* z, lapack_int* ldz, float* work, lapack_int* iwork, lapack_int* ifailv, lapack_int *info ); void LAPACK_dstein( lapack_int* n, const double* d, const double* e, lapack_int* m, const double* w, const lapack_int* iblock, const lapack_int* isplit, double* z, lapack_int* ldz, double* work, lapack_int* iwork, lapack_int* ifailv, lapack_int *info ); void LAPACK_cstein( lapack_int* n, const float* d, const float* e, lapack_int* m, const float* w, const lapack_int* iblock, const lapack_int* isplit, lapack_complex_float* z, lapack_int* ldz, float* work, lapack_int* iwork, lapack_int* ifailv, lapack_int *info ); void LAPACK_zstein( 
lapack_int* n, const double* d, const double* e, lapack_int* m, const double* w, const lapack_int* iblock, const lapack_int* isplit, lapack_complex_double* z, lapack_int* ldz, double* work, lapack_int* iwork, lapack_int* ifailv, lapack_int *info ); void LAPACK_sdisna( char* job, lapack_int* m, lapack_int* n, const float* d, float* sep, lapack_int *info ); void LAPACK_ddisna( char* job, lapack_int* m, lapack_int* n, const double* d, double* sep, lapack_int *info ); void LAPACK_ssygst( lapack_int* itype, char* uplo, lapack_int* n, float* a, lapack_int* lda, const float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_dsygst( lapack_int* itype, char* uplo, lapack_int* n, double* a, lapack_int* lda, const double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_chegst( lapack_int* itype, char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* b, lapack_int* ldb, lapack_int *info ); void LAPACK_zhegst( lapack_int* itype, char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* b, lapack_int* ldb, lapack_int *info ); void LAPACK_sspgst( lapack_int* itype, char* uplo, lapack_int* n, float* ap, const float* bp, lapack_int *info ); void LAPACK_dspgst( lapack_int* itype, char* uplo, lapack_int* n, double* ap, const double* bp, lapack_int *info ); void LAPACK_chpgst( lapack_int* itype, char* uplo, lapack_int* n, lapack_complex_float* ap, const lapack_complex_float* bp, lapack_int *info ); void LAPACK_zhpgst( lapack_int* itype, char* uplo, lapack_int* n, lapack_complex_double* ap, const lapack_complex_double* bp, lapack_int *info ); void LAPACK_ssbgst( char* vect, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, float* ab, lapack_int* ldab, const float* bb, lapack_int* ldbb, float* x, lapack_int* ldx, float* work, lapack_int *info ); void LAPACK_dsbgst( char* vect, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, double* ab, lapack_int* ldab, const double* bb, lapack_int* ldbb, double* x, lapack_int* ldx, double* work, lapack_int *info ); void LAPACK_chbgst( char* vect, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, lapack_complex_float* ab, lapack_int* ldab, const lapack_complex_float* bb, lapack_int* ldbb, lapack_complex_float* x, lapack_int* ldx, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zhbgst( char* vect, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, lapack_complex_double* ab, lapack_int* ldab, const lapack_complex_double* bb, lapack_int* ldbb, lapack_complex_double* x, lapack_int* ldx, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_spbstf( char* uplo, lapack_int* n, lapack_int* kb, float* bb, lapack_int* ldbb, lapack_int *info ); void LAPACK_dpbstf( char* uplo, lapack_int* n, lapack_int* kb, double* bb, lapack_int* ldbb, lapack_int *info ); void LAPACK_cpbstf( char* uplo, lapack_int* n, lapack_int* kb, lapack_complex_float* bb, lapack_int* ldbb, lapack_int *info ); void LAPACK_zpbstf( char* uplo, lapack_int* n, lapack_int* kb, lapack_complex_double* bb, lapack_int* ldbb, lapack_int *info ); void LAPACK_sgehrd( lapack_int* n, lapack_int* ilo, lapack_int* ihi, float* a, lapack_int* lda, float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgehrd( lapack_int* n, lapack_int* ilo, lapack_int* ihi, double* a, lapack_int* lda, double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgehrd( lapack_int* n, lapack_int* ilo, lapack_int* ihi, lapack_complex_float* a, 
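/*
 * Usage sketch for dsygst declared above: reduce the generalized symmetric-
 * definite problem A*x = lambda*B*x (itype = 1) to a standard eigenproblem.
 * B must already contain its Cholesky factor, e.g. from LAPACK_dpotrf, which
 * is assumed to be declared elsewhere in this header. The reduced matrix
 * overwrites A and can then be handed to dsytrd/dsteqr as sketched earlier;
 * A and B (column-major, n-by-n) are set up elsewhere.
 *
 *   char uplo = 'L';
 *   lapack_int itype = 1, n = 4, lda = 4, ldb = 4, info = 0;
 *   LAPACK_dpotrf(&uplo, &n, B, &ldb, &info);            Cholesky of B
 *   LAPACK_dsygst(&itype, &uplo, &n, A, &lda, B, &ldb, &info);
 *   Eigenvectors of the reduced problem must be back-transformed with the
 *   Cholesky factor of B to obtain eigenvectors of the original pencil.
 */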
lapack_int* lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zgehrd( lapack_int* n, lapack_int* ilo, lapack_int* ihi, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sorghr( lapack_int* n, lapack_int* ilo, lapack_int* ihi, float* a, lapack_int* lda, const float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dorghr( lapack_int* n, lapack_int* ilo, lapack_int* ihi, double* a, lapack_int* lda, const double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sormhr( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* ilo, lapack_int* ihi, const float* a, lapack_int* lda, const float* tau, float* c, lapack_int* ldc, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dormhr( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* ilo, lapack_int* ihi, const double* a, lapack_int* lda, const double* tau, double* c, lapack_int* ldc, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cunghr( lapack_int* n, lapack_int* ilo, lapack_int* ihi, lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zunghr( lapack_int* n, lapack_int* ilo, lapack_int* ihi, lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cunmhr( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* ilo, lapack_int* ihi, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* tau, lapack_complex_float* c, lapack_int* ldc, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zunmhr( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* ilo, lapack_int* ihi, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* tau, lapack_complex_double* c, lapack_int* ldc, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sgebal( char* job, lapack_int* n, float* a, lapack_int* lda, lapack_int* ilo, lapack_int* ihi, float* scale, lapack_int *info ); void LAPACK_dgebal( char* job, lapack_int* n, double* a, lapack_int* lda, lapack_int* ilo, lapack_int* ihi, double* scale, lapack_int *info ); void LAPACK_cgebal( char* job, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int* ilo, lapack_int* ihi, float* scale, lapack_int *info ); void LAPACK_zgebal( char* job, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int* ilo, lapack_int* ihi, double* scale, lapack_int *info ); void LAPACK_sgebak( char* job, char* side, lapack_int* n, lapack_int* ilo, lapack_int* ihi, const float* scale, lapack_int* m, float* v, lapack_int* ldv, lapack_int *info ); void LAPACK_dgebak( char* job, char* side, lapack_int* n, lapack_int* ilo, lapack_int* ihi, const double* scale, lapack_int* m, double* v, lapack_int* ldv, lapack_int *info ); void LAPACK_cgebak( char* job, char* side, lapack_int* n, lapack_int* ilo, lapack_int* ihi, const float* scale, lapack_int* m, lapack_complex_float* v, lapack_int* ldv, lapack_int *info ); void LAPACK_zgebak( char* job, char* side, lapack_int* n, lapack_int* ilo, lapack_int* ihi, const double* scale, lapack_int* m, lapack_complex_double* v, lapack_int* ldv, lapack_int *info ); void LAPACK_shseqr( char* job, char* compz, lapack_int* n, 
lapack_int* ilo, lapack_int* ihi, float* h, lapack_int* ldh, float* wr, float* wi, float* z, lapack_int* ldz, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dhseqr( char* job, char* compz, lapack_int* n, lapack_int* ilo, lapack_int* ihi, double* h, lapack_int* ldh, double* wr, double* wi, double* z, lapack_int* ldz, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_chseqr( char* job, char* compz, lapack_int* n, lapack_int* ilo, lapack_int* ihi, lapack_complex_float* h, lapack_int* ldh, lapack_complex_float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zhseqr( char* job, char* compz, lapack_int* n, lapack_int* ilo, lapack_int* ihi, lapack_complex_double* h, lapack_int* ldh, lapack_complex_double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_shsein( char* job, char* eigsrc, char* initv, lapack_logical* select, lapack_int* n, const float* h, lapack_int* ldh, float* wr, const float* wi, float* vl, lapack_int* ldvl, float* vr, lapack_int* ldvr, lapack_int* mm, lapack_int* m, float* work, lapack_int* ifaill, lapack_int* ifailr, lapack_int *info ); void LAPACK_dhsein( char* job, char* eigsrc, char* initv, lapack_logical* select, lapack_int* n, const double* h, lapack_int* ldh, double* wr, const double* wi, double* vl, lapack_int* ldvl, double* vr, lapack_int* ldvr, lapack_int* mm, lapack_int* m, double* work, lapack_int* ifaill, lapack_int* ifailr, lapack_int *info ); void LAPACK_chsein( char* job, char* eigsrc, char* initv, const lapack_logical* select, lapack_int* n, const lapack_complex_float* h, lapack_int* ldh, lapack_complex_float* w, lapack_complex_float* vl, lapack_int* ldvl, lapack_complex_float* vr, lapack_int* ldvr, lapack_int* mm, lapack_int* m, lapack_complex_float* work, float* rwork, lapack_int* ifaill, lapack_int* ifailr, lapack_int *info ); void LAPACK_zhsein( char* job, char* eigsrc, char* initv, const lapack_logical* select, lapack_int* n, const lapack_complex_double* h, lapack_int* ldh, lapack_complex_double* w, lapack_complex_double* vl, lapack_int* ldvl, lapack_complex_double* vr, lapack_int* ldvr, lapack_int* mm, lapack_int* m, lapack_complex_double* work, double* rwork, lapack_int* ifaill, lapack_int* ifailr, lapack_int *info ); void LAPACK_strevc( char* side, char* howmny, lapack_logical* select, lapack_int* n, const float* t, lapack_int* ldt, float* vl, lapack_int* ldvl, float* vr, lapack_int* ldvr, lapack_int* mm, lapack_int* m, float* work, lapack_int *info ); void LAPACK_dtrevc( char* side, char* howmny, lapack_logical* select, lapack_int* n, const double* t, lapack_int* ldt, double* vl, lapack_int* ldvl, double* vr, lapack_int* ldvr, lapack_int* mm, lapack_int* m, double* work, lapack_int *info ); void LAPACK_ctrevc( char* side, char* howmny, const lapack_logical* select, lapack_int* n, lapack_complex_float* t, lapack_int* ldt, lapack_complex_float* vl, lapack_int* ldvl, lapack_complex_float* vr, lapack_int* ldvr, lapack_int* mm, lapack_int* m, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_ztrevc( char* side, char* howmny, const lapack_logical* select, lapack_int* n, lapack_complex_double* t, lapack_int* ldt, lapack_complex_double* vl, lapack_int* ldvl, lapack_complex_double* vr, lapack_int* ldvr, lapack_int* mm, lapack_int* m, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_strsna( char* job, char* howmny, const 
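/*
 * Usage sketch for the nonsymmetric eigenvalue path declared above: dgehrd
 * reduces a general square matrix to upper Hessenberg form and dhseqr applies
 * the QR algorithm to obtain its eigenvalues (wr + i*wi). With job = 'E' and
 * compz = 'N' only eigenvalues are computed; dorghr and dtrevc would be used
 * to recover Schur or eigenvectors. Workspace sizes are illustrative and A
 * (column-major, n-by-n) is set up elsewhere.
 *
 *   lapack_int n = 4, ilo = 1, ihi = 4, lda = 4, ldz = 1, lwork = 4 * 4, info = 0;
 *   double tau[3], wr[4], wi[4], work[4 * 4], zdum;
 *   char job = 'E', compz = 'N';
 *   LAPACK_dgehrd(&n, &ilo, &ihi, A, &lda, tau, work, &lwork, &info);
 *   LAPACK_dhseqr(&job, &compz, &n, &ilo, &ihi, A, &lda, wr, wi,
 *                 &zdum, &ldz, work, &lwork, &info);
 *   Real parts of the eigenvalues are returned in wr, imaginary parts in wi.
 */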
lapack_logical* select, lapack_int* n, const float* t, lapack_int* ldt, const float* vl, lapack_int* ldvl, const float* vr, lapack_int* ldvr, float* s, float* sep, lapack_int* mm, lapack_int* m, float* work, lapack_int* ldwork, lapack_int* iwork, lapack_int *info ); void LAPACK_dtrsna( char* job, char* howmny, const lapack_logical* select, lapack_int* n, const double* t, lapack_int* ldt, const double* vl, lapack_int* ldvl, const double* vr, lapack_int* ldvr, double* s, double* sep, lapack_int* mm, lapack_int* m, double* work, lapack_int* ldwork, lapack_int* iwork, lapack_int *info ); void LAPACK_ctrsna( char* job, char* howmny, const lapack_logical* select, lapack_int* n, const lapack_complex_float* t, lapack_int* ldt, const lapack_complex_float* vl, lapack_int* ldvl, const lapack_complex_float* vr, lapack_int* ldvr, float* s, float* sep, lapack_int* mm, lapack_int* m, lapack_complex_float* work, lapack_int* ldwork, float* rwork, lapack_int *info ); void LAPACK_ztrsna( char* job, char* howmny, const lapack_logical* select, lapack_int* n, const lapack_complex_double* t, lapack_int* ldt, const lapack_complex_double* vl, lapack_int* ldvl, const lapack_complex_double* vr, lapack_int* ldvr, double* s, double* sep, lapack_int* mm, lapack_int* m, lapack_complex_double* work, lapack_int* ldwork, double* rwork, lapack_int *info ); void LAPACK_strexc( char* compq, lapack_int* n, float* t, lapack_int* ldt, float* q, lapack_int* ldq, lapack_int* ifst, lapack_int* ilst, float* work, lapack_int *info ); void LAPACK_dtrexc( char* compq, lapack_int* n, double* t, lapack_int* ldt, double* q, lapack_int* ldq, lapack_int* ifst, lapack_int* ilst, double* work, lapack_int *info ); void LAPACK_ctrexc( char* compq, lapack_int* n, lapack_complex_float* t, lapack_int* ldt, lapack_complex_float* q, lapack_int* ldq, lapack_int* ifst, lapack_int* ilst, lapack_int *info ); void LAPACK_ztrexc( char* compq, lapack_int* n, lapack_complex_double* t, lapack_int* ldt, lapack_complex_double* q, lapack_int* ldq, lapack_int* ifst, lapack_int* ilst, lapack_int *info ); void LAPACK_strsen( char* job, char* compq, const lapack_logical* select, lapack_int* n, float* t, lapack_int* ldt, float* q, lapack_int* ldq, float* wr, float* wi, lapack_int* m, float* s, float* sep, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dtrsen( char* job, char* compq, const lapack_logical* select, lapack_int* n, double* t, lapack_int* ldt, double* q, lapack_int* ldq, double* wr, double* wi, lapack_int* m, double* s, double* sep, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_ctrsen( char* job, char* compq, const lapack_logical* select, lapack_int* n, lapack_complex_float* t, lapack_int* ldt, lapack_complex_float* q, lapack_int* ldq, lapack_complex_float* w, lapack_int* m, float* s, float* sep, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_ztrsen( char* job, char* compq, const lapack_logical* select, lapack_int* n, lapack_complex_double* t, lapack_int* ldt, lapack_complex_double* q, lapack_int* ldq, lapack_complex_double* w, lapack_int* m, double* s, double* sep, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_strsyl( char* trana, char* tranb, lapack_int* isgn, lapack_int* m, lapack_int* n, const float* a, lapack_int* lda, const float* b, lapack_int* ldb, float* c, lapack_int* ldc, float* scale, lapack_int *info ); void LAPACK_dtrsyl( char* trana, char* tranb, lapack_int* isgn, 
lapack_int* m, lapack_int* n, const double* a, lapack_int* lda, const double* b, lapack_int* ldb, double* c, lapack_int* ldc, double* scale, lapack_int *info ); void LAPACK_ctrsyl( char* trana, char* tranb, lapack_int* isgn, lapack_int* m, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* c, lapack_int* ldc, float* scale, lapack_int *info ); void LAPACK_ztrsyl( char* trana, char* tranb, lapack_int* isgn, lapack_int* m, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* c, lapack_int* ldc, double* scale, lapack_int *info ); void LAPACK_sgghrd( char* compq, char* compz, lapack_int* n, lapack_int* ilo, lapack_int* ihi, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* q, lapack_int* ldq, float* z, lapack_int* ldz, lapack_int *info ); void LAPACK_dgghrd( char* compq, char* compz, lapack_int* n, lapack_int* ilo, lapack_int* ihi, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* q, lapack_int* ldq, double* z, lapack_int* ldz, lapack_int *info ); void LAPACK_cgghrd( char* compq, char* compz, lapack_int* n, lapack_int* ilo, lapack_int* ihi, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* q, lapack_int* ldq, lapack_complex_float* z, lapack_int* ldz, lapack_int *info ); void LAPACK_zgghrd( char* compq, char* compz, lapack_int* n, lapack_int* ilo, lapack_int* ihi, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* q, lapack_int* ldq, lapack_complex_double* z, lapack_int* ldz, lapack_int *info ); void LAPACK_sggbal( char* job, lapack_int* n, float* a, lapack_int* lda, float* b, lapack_int* ldb, lapack_int* ilo, lapack_int* ihi, float* lscale, float* rscale, float* work, lapack_int *info ); void LAPACK_dggbal( char* job, lapack_int* n, double* a, lapack_int* lda, double* b, lapack_int* ldb, lapack_int* ilo, lapack_int* ihi, double* lscale, double* rscale, double* work, lapack_int *info ); void LAPACK_cggbal( char* job, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_int* ilo, lapack_int* ihi, float* lscale, float* rscale, float* work, lapack_int *info ); void LAPACK_zggbal( char* job, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_int* ilo, lapack_int* ihi, double* lscale, double* rscale, double* work, lapack_int *info ); void LAPACK_sggbak( char* job, char* side, lapack_int* n, lapack_int* ilo, lapack_int* ihi, const float* lscale, const float* rscale, lapack_int* m, float* v, lapack_int* ldv, lapack_int *info ); void LAPACK_dggbak( char* job, char* side, lapack_int* n, lapack_int* ilo, lapack_int* ihi, const double* lscale, const double* rscale, lapack_int* m, double* v, lapack_int* ldv, lapack_int *info ); void LAPACK_cggbak( char* job, char* side, lapack_int* n, lapack_int* ilo, lapack_int* ihi, const float* lscale, const float* rscale, lapack_int* m, lapack_complex_float* v, lapack_int* ldv, lapack_int *info ); void LAPACK_zggbak( char* job, char* side, lapack_int* n, lapack_int* ilo, lapack_int* ihi, const double* lscale, const double* rscale, lapack_int* m, lapack_complex_double* v, lapack_int* ldv, lapack_int *info ); void LAPACK_shgeqz( char* job, char* compq, char* compz, lapack_int* n, lapack_int* ilo, lapack_int* ihi, float* h, lapack_int* ldh, float* t, lapack_int* ldt, 
float* alphar, float* alphai, float* beta, float* q, lapack_int* ldq, float* z, lapack_int* ldz, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dhgeqz( char* job, char* compq, char* compz, lapack_int* n, lapack_int* ilo, lapack_int* ihi, double* h, lapack_int* ldh, double* t, lapack_int* ldt, double* alphar, double* alphai, double* beta, double* q, lapack_int* ldq, double* z, lapack_int* ldz, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_chgeqz( char* job, char* compq, char* compz, lapack_int* n, lapack_int* ilo, lapack_int* ihi, lapack_complex_float* h, lapack_int* ldh, lapack_complex_float* t, lapack_int* ldt, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* q, lapack_int* ldq, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int *info ); void LAPACK_zhgeqz( char* job, char* compq, char* compz, lapack_int* n, lapack_int* ilo, lapack_int* ihi, lapack_complex_double* h, lapack_int* ldh, lapack_complex_double* t, lapack_int* ldt, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* q, lapack_int* ldq, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int *info ); void LAPACK_stgevc( char* side, char* howmny, const lapack_logical* select, lapack_int* n, const float* s, lapack_int* lds, const float* p, lapack_int* ldp, float* vl, lapack_int* ldvl, float* vr, lapack_int* ldvr, lapack_int* mm, lapack_int* m, float* work, lapack_int *info ); void LAPACK_dtgevc( char* side, char* howmny, const lapack_logical* select, lapack_int* n, const double* s, lapack_int* lds, const double* p, lapack_int* ldp, double* vl, lapack_int* ldvl, double* vr, lapack_int* ldvr, lapack_int* mm, lapack_int* m, double* work, lapack_int *info ); void LAPACK_ctgevc( char* side, char* howmny, const lapack_logical* select, lapack_int* n, const lapack_complex_float* s, lapack_int* lds, const lapack_complex_float* p, lapack_int* ldp, lapack_complex_float* vl, lapack_int* ldvl, lapack_complex_float* vr, lapack_int* ldvr, lapack_int* mm, lapack_int* m, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_ztgevc( char* side, char* howmny, const lapack_logical* select, lapack_int* n, const lapack_complex_double* s, lapack_int* lds, const lapack_complex_double* p, lapack_int* ldp, lapack_complex_double* vl, lapack_int* ldvl, lapack_complex_double* vr, lapack_int* ldvr, lapack_int* mm, lapack_int* m, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_stgexc( lapack_logical* wantq, lapack_logical* wantz, lapack_int* n, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* q, lapack_int* ldq, float* z, lapack_int* ldz, lapack_int* ifst, lapack_int* ilst, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dtgexc( lapack_logical* wantq, lapack_logical* wantz, lapack_int* n, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* q, lapack_int* ldq, double* z, lapack_int* ldz, lapack_int* ifst, lapack_int* ilst, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_ctgexc( lapack_logical* wantq, lapack_logical* wantz, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* q, lapack_int* ldq, lapack_complex_float* z, lapack_int* ldz, lapack_int* ifst, lapack_int* ilst, lapack_int *info ); void LAPACK_ztgexc( lapack_logical* wantq, lapack_logical* wantz, lapack_int* n, 
lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* q, lapack_int* ldq, lapack_complex_double* z, lapack_int* ldz, lapack_int* ifst, lapack_int* ilst, lapack_int *info ); void LAPACK_stgsen( lapack_int* ijob, lapack_logical* wantq, lapack_logical* wantz, const lapack_logical* select, lapack_int* n, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* alphar, float* alphai, float* beta, float* q, lapack_int* ldq, float* z, lapack_int* ldz, lapack_int* m, float* pl, float* pr, float* dif, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dtgsen( lapack_int* ijob, lapack_logical* wantq, lapack_logical* wantz, const lapack_logical* select, lapack_int* n, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* alphar, double* alphai, double* beta, double* q, lapack_int* ldq, double* z, lapack_int* ldz, lapack_int* m, double* pl, double* pr, double* dif, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_ctgsen( lapack_int* ijob, lapack_logical* wantq, lapack_logical* wantz, const lapack_logical* select, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* q, lapack_int* ldq, lapack_complex_float* z, lapack_int* ldz, lapack_int* m, float* pl, float* pr, float* dif, lapack_complex_float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_ztgsen( lapack_int* ijob, lapack_logical* wantq, lapack_logical* wantz, const lapack_logical* select, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* q, lapack_int* ldq, lapack_complex_double* z, lapack_int* ldz, lapack_int* m, double* pl, double* pr, double* dif, lapack_complex_double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_stgsyl( char* trans, lapack_int* ijob, lapack_int* m, lapack_int* n, const float* a, lapack_int* lda, const float* b, lapack_int* ldb, float* c, lapack_int* ldc, const float* d, lapack_int* ldd, const float* e, lapack_int* lde, float* f, lapack_int* ldf, float* scale, float* dif, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_dtgsyl( char* trans, lapack_int* ijob, lapack_int* m, lapack_int* n, const double* a, lapack_int* lda, const double* b, lapack_int* ldb, double* c, lapack_int* ldc, const double* d, lapack_int* ldd, const double* e, lapack_int* lde, double* f, lapack_int* ldf, double* scale, double* dif, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_ctgsyl( char* trans, lapack_int* ijob, lapack_int* m, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* c, lapack_int* ldc, const lapack_complex_float* d, lapack_int* ldd, const lapack_complex_float* e, lapack_int* lde, lapack_complex_float* f, lapack_int* ldf, float* scale, float* dif, lapack_complex_float* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_ztgsyl( char* trans, lapack_int* ijob, lapack_int* m, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* c, lapack_int* ldc, const 
lapack_complex_double* d, lapack_int* ldd, const lapack_complex_double* e, lapack_int* lde, lapack_complex_double* f, lapack_int* ldf, double* scale, double* dif, lapack_complex_double* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_stgsna( char* job, char* howmny, const lapack_logical* select, lapack_int* n, const float* a, lapack_int* lda, const float* b, lapack_int* ldb, const float* vl, lapack_int* ldvl, const float* vr, lapack_int* ldvr, float* s, float* dif, lapack_int* mm, lapack_int* m, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_dtgsna( char* job, char* howmny, const lapack_logical* select, lapack_int* n, const double* a, lapack_int* lda, const double* b, lapack_int* ldb, const double* vl, lapack_int* ldvl, const double* vr, lapack_int* ldvr, double* s, double* dif, lapack_int* mm, lapack_int* m, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_ctgsna( char* job, char* howmny, const lapack_logical* select, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, const lapack_complex_float* b, lapack_int* ldb, const lapack_complex_float* vl, lapack_int* ldvl, const lapack_complex_float* vr, lapack_int* ldvr, float* s, float* dif, lapack_int* mm, lapack_int* m, lapack_complex_float* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_ztgsna( char* job, char* howmny, const lapack_logical* select, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, const lapack_complex_double* b, lapack_int* ldb, const lapack_complex_double* vl, lapack_int* ldvl, const lapack_complex_double* vr, lapack_int* ldvr, double* s, double* dif, lapack_int* mm, lapack_int* m, lapack_complex_double* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_sggsvp( char* jobu, char* jobv, char* jobq, lapack_int* m, lapack_int* p, lapack_int* n, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* tola, float* tolb, lapack_int* k, lapack_int* l, float* u, lapack_int* ldu, float* v, lapack_int* ldv, float* q, lapack_int* ldq, lapack_int* iwork, float* tau, float* work, lapack_int *info ); void LAPACK_dggsvp( char* jobu, char* jobv, char* jobq, lapack_int* m, lapack_int* p, lapack_int* n, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* tola, double* tolb, lapack_int* k, lapack_int* l, double* u, lapack_int* ldu, double* v, lapack_int* ldv, double* q, lapack_int* ldq, lapack_int* iwork, double* tau, double* work, lapack_int *info ); void LAPACK_cggsvp( char* jobu, char* jobv, char* jobq, lapack_int* m, lapack_int* p, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, float* tola, float* tolb, lapack_int* k, lapack_int* l, lapack_complex_float* u, lapack_int* ldu, lapack_complex_float* v, lapack_int* ldv, lapack_complex_float* q, lapack_int* ldq, lapack_int* iwork, float* rwork, lapack_complex_float* tau, lapack_complex_float* work, lapack_int *info ); void LAPACK_zggsvp( char* jobu, char* jobv, char* jobq, lapack_int* m, lapack_int* p, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, double* tola, double* tolb, lapack_int* k, lapack_int* l, lapack_complex_double* u, lapack_int* ldu, lapack_complex_double* v, lapack_int* ldv, lapack_complex_double* q, lapack_int* ldq, lapack_int* iwork, double* rwork, lapack_complex_double* tau, lapack_complex_double* work, lapack_int *info ); void LAPACK_stgsja( char* jobu, char* jobv, char* jobq, 
lapack_int* m, lapack_int* p, lapack_int* n, lapack_int* k, lapack_int* l, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* tola, float* tolb, float* alpha, float* beta, float* u, lapack_int* ldu, float* v, lapack_int* ldv, float* q, lapack_int* ldq, float* work, lapack_int* ncycle, lapack_int *info ); void LAPACK_dtgsja( char* jobu, char* jobv, char* jobq, lapack_int* m, lapack_int* p, lapack_int* n, lapack_int* k, lapack_int* l, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* tola, double* tolb, double* alpha, double* beta, double* u, lapack_int* ldu, double* v, lapack_int* ldv, double* q, lapack_int* ldq, double* work, lapack_int* ncycle, lapack_int *info ); void LAPACK_ctgsja( char* jobu, char* jobv, char* jobq, lapack_int* m, lapack_int* p, lapack_int* n, lapack_int* k, lapack_int* l, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, float* tola, float* tolb, float* alpha, float* beta, lapack_complex_float* u, lapack_int* ldu, lapack_complex_float* v, lapack_int* ldv, lapack_complex_float* q, lapack_int* ldq, lapack_complex_float* work, lapack_int* ncycle, lapack_int *info ); void LAPACK_ztgsja( char* jobu, char* jobv, char* jobq, lapack_int* m, lapack_int* p, lapack_int* n, lapack_int* k, lapack_int* l, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, double* tola, double* tolb, double* alpha, double* beta, lapack_complex_double* u, lapack_int* ldu, lapack_complex_double* v, lapack_int* ldv, lapack_complex_double* q, lapack_int* ldq, lapack_complex_double* work, lapack_int* ncycle, lapack_int *info ); void LAPACK_sgels( char* trans, lapack_int* m, lapack_int* n, lapack_int* nrhs, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgels( char* trans, lapack_int* m, lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgels( char* trans, lapack_int* m, lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zgels( char* trans, lapack_int* m, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sgelsy( lapack_int* m, lapack_int* n, lapack_int* nrhs, float* a, lapack_int* lda, float* b, lapack_int* ldb, lapack_int* jpvt, float* rcond, lapack_int* rank, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgelsy( lapack_int* m, lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, double* b, lapack_int* ldb, lapack_int* jpvt, double* rcond, lapack_int* rank, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgelsy( lapack_int* m, lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_int* jpvt, float* rcond, lapack_int* rank, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int *info ); void LAPACK_zgelsy( lapack_int* m, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_int* jpvt, double* rcond, lapack_int* rank, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int *info ); void LAPACK_sgelss( lapack_int* m, lapack_int* n, 
lapack_int* nrhs, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* s, float* rcond, lapack_int* rank, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgelss( lapack_int* m, lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* s, double* rcond, lapack_int* rank, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgelss( lapack_int* m, lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, float* s, float* rcond, lapack_int* rank, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int *info ); void LAPACK_zgelss( lapack_int* m, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, double* s, double* rcond, lapack_int* rank, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int *info ); void LAPACK_sgelsd( lapack_int* m, lapack_int* n, lapack_int* nrhs, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* s, float* rcond, lapack_int* rank, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_dgelsd( lapack_int* m, lapack_int* n, lapack_int* nrhs, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* s, double* rcond, lapack_int* rank, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_cgelsd( lapack_int* m, lapack_int* n, lapack_int* nrhs, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, float* s, float* rcond, lapack_int* rank, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* iwork, lapack_int *info ); void LAPACK_zgelsd( lapack_int* m, lapack_int* n, lapack_int* nrhs, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, double* s, double* rcond, lapack_int* rank, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* iwork, lapack_int *info ); void LAPACK_sgglse( lapack_int* m, lapack_int* n, lapack_int* p, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* c, float* d, float* x, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgglse( lapack_int* m, lapack_int* n, lapack_int* p, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* c, double* d, double* x, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgglse( lapack_int* m, lapack_int* n, lapack_int* p, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* c, lapack_complex_float* d, lapack_complex_float* x, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zgglse( lapack_int* m, lapack_int* n, lapack_int* p, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* c, lapack_complex_double* d, lapack_complex_double* x, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sggglm( lapack_int* n, lapack_int* m, lapack_int* p, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* d, float* x, float* y, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dggglm( lapack_int* n, lapack_int* m, lapack_int* p, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* d, double* x, double* y, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cggglm( lapack_int* n, lapack_int* m, lapack_int* p, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, 
lapack_int* ldb, lapack_complex_float* d, lapack_complex_float* x, lapack_complex_float* y, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zggglm( lapack_int* n, lapack_int* m, lapack_int* p, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* d, lapack_complex_double* x, lapack_complex_double* y, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_ssyev( char* jobz, char* uplo, lapack_int* n, float* a, lapack_int* lda, float* w, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dsyev( char* jobz, char* uplo, lapack_int* n, double* a, lapack_int* lda, double* w, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cheev( char* jobz, char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, float* w, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int *info ); void LAPACK_zheev( char* jobz, char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, double* w, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int *info ); void LAPACK_ssyevd( char* jobz, char* uplo, lapack_int* n, float* a, lapack_int* lda, float* w, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dsyevd( char* jobz, char* uplo, lapack_int* n, double* a, lapack_int* lda, double* w, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_cheevd( char* jobz, char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, float* w, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_zheevd( char* jobz, char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, double* w, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_ssyevx( char* jobz, char* range, char* uplo, lapack_int* n, float* a, lapack_int* lda, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, float* z, lapack_int* ldz, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_dsyevx( char* jobz, char* range, char* uplo, lapack_int* n, double* a, lapack_int* lda, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, double* z, lapack_int* ldz, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_cheevx( char* jobz, char* range, char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_zheevx( char* jobz, char* range, char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_ssyevr( char* jobz, char* range, char* uplo, lapack_int* n, float* a, lapack_int* lda, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* 
abstol, lapack_int* m, float* w, float* z, lapack_int* ldz, lapack_int* isuppz, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dsyevr( char* jobz, char* range, char* uplo, lapack_int* n, double* a, lapack_int* lda, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, double* z, lapack_int* ldz, lapack_int* isuppz, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_cheevr( char* jobz, char* range, char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_int* isuppz, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_zheevr( char* jobz, char* range, char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_int* isuppz, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_sspev( char* jobz, char* uplo, lapack_int* n, float* ap, float* w, float* z, lapack_int* ldz, float* work, lapack_int *info ); void LAPACK_dspev( char* jobz, char* uplo, lapack_int* n, double* ap, double* w, double* z, lapack_int* ldz, double* work, lapack_int *info ); void LAPACK_chpev( char* jobz, char* uplo, lapack_int* n, lapack_complex_float* ap, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zhpev( char* jobz, char* uplo, lapack_int* n, lapack_complex_double* ap, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sspevd( char* jobz, char* uplo, lapack_int* n, float* ap, float* w, float* z, lapack_int* ldz, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dspevd( char* jobz, char* uplo, lapack_int* n, double* ap, double* w, double* z, lapack_int* ldz, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_chpevd( char* jobz, char* uplo, lapack_int* n, lapack_complex_float* ap, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_zhpevd( char* jobz, char* uplo, lapack_int* n, lapack_complex_double* ap, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_sspevx( char* jobz, char* range, char* uplo, lapack_int* n, float* ap, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, float* z, lapack_int* ldz, float* work, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_dspevx( char* jobz, char* range, char* uplo, lapack_int* n, double* ap, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, double* z, lapack_int* ldz, double* work, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void 
LAPACK_chpevx( char* jobz, char* range, char* uplo, lapack_int* n, lapack_complex_float* ap, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, float* rwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_zhpevx( char* jobz, char* range, char* uplo, lapack_int* n, lapack_complex_double* ap, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, double* rwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_ssbev( char* jobz, char* uplo, lapack_int* n, lapack_int* kd, float* ab, lapack_int* ldab, float* w, float* z, lapack_int* ldz, float* work, lapack_int *info ); void LAPACK_dsbev( char* jobz, char* uplo, lapack_int* n, lapack_int* kd, double* ab, lapack_int* ldab, double* w, double* z, lapack_int* ldz, double* work, lapack_int *info ); void LAPACK_chbev( char* jobz, char* uplo, lapack_int* n, lapack_int* kd, lapack_complex_float* ab, lapack_int* ldab, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zhbev( char* jobz, char* uplo, lapack_int* n, lapack_int* kd, lapack_complex_double* ab, lapack_int* ldab, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_ssbevd( char* jobz, char* uplo, lapack_int* n, lapack_int* kd, float* ab, lapack_int* ldab, float* w, float* z, lapack_int* ldz, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dsbevd( char* jobz, char* uplo, lapack_int* n, lapack_int* kd, double* ab, lapack_int* ldab, double* w, double* z, lapack_int* ldz, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_chbevd( char* jobz, char* uplo, lapack_int* n, lapack_int* kd, lapack_complex_float* ab, lapack_int* ldab, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_zhbevd( char* jobz, char* uplo, lapack_int* n, lapack_int* kd, lapack_complex_double* ab, lapack_int* ldab, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_ssbevx( char* jobz, char* range, char* uplo, lapack_int* n, lapack_int* kd, float* ab, lapack_int* ldab, float* q, lapack_int* ldq, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, float* z, lapack_int* ldz, float* work, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_dsbevx( char* jobz, char* range, char* uplo, lapack_int* n, lapack_int* kd, double* ab, lapack_int* ldab, double* q, lapack_int* ldq, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, double* z, lapack_int* ldz, double* work, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_chbevx( char* jobz, char* range, char* uplo, lapack_int* n, lapack_int* kd, lapack_complex_float* ab, lapack_int* ldab, lapack_complex_float* q, lapack_int* ldq, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, lapack_complex_float* 
z, lapack_int* ldz, lapack_complex_float* work, float* rwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_zhbevx( char* jobz, char* range, char* uplo, lapack_int* n, lapack_int* kd, lapack_complex_double* ab, lapack_int* ldab, lapack_complex_double* q, lapack_int* ldq, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, double* rwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_sstev( char* jobz, lapack_int* n, float* d, float* e, float* z, lapack_int* ldz, float* work, lapack_int *info ); void LAPACK_dstev( char* jobz, lapack_int* n, double* d, double* e, double* z, lapack_int* ldz, double* work, lapack_int *info ); void LAPACK_sstevd( char* jobz, lapack_int* n, float* d, float* e, float* z, lapack_int* ldz, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dstevd( char* jobz, lapack_int* n, double* d, double* e, double* z, lapack_int* ldz, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_sstevx( char* jobz, char* range, lapack_int* n, float* d, float* e, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, float* z, lapack_int* ldz, float* work, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_dstevx( char* jobz, char* range, lapack_int* n, double* d, double* e, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, double* z, lapack_int* ldz, double* work, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_sstevr( char* jobz, char* range, lapack_int* n, float* d, float* e, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, float* z, lapack_int* ldz, lapack_int* isuppz, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dstevr( char* jobz, char* range, lapack_int* n, double* d, double* e, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, double* z, lapack_int* ldz, lapack_int* isuppz, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_sgees( char* jobvs, char* sort, LAPACK_S_SELECT2 select, lapack_int* n, float* a, lapack_int* lda, lapack_int* sdim, float* wr, float* wi, float* vs, lapack_int* ldvs, float* work, lapack_int* lwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_dgees( char* jobvs, char* sort, LAPACK_D_SELECT2 select, lapack_int* n, double* a, lapack_int* lda, lapack_int* sdim, double* wr, double* wi, double* vs, lapack_int* ldvs, double* work, lapack_int* lwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_cgees( char* jobvs, char* sort, LAPACK_C_SELECT1 select, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int* sdim, lapack_complex_float* w, lapack_complex_float* vs, lapack_int* ldvs, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_zgees( char* jobvs, char* sort, LAPACK_Z_SELECT1 select, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int* sdim, lapack_complex_double* w, lapack_complex_double* vs, lapack_int* ldvs, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_sgeesx( char* jobvs, char* sort, 
LAPACK_S_SELECT2 select, char* sense, lapack_int* n, float* a, lapack_int* lda, lapack_int* sdim, float* wr, float* wi, float* vs, lapack_int* ldvs, float* rconde, float* rcondv, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_dgeesx( char* jobvs, char* sort, LAPACK_D_SELECT2 select, char* sense, lapack_int* n, double* a, lapack_int* lda, lapack_int* sdim, double* wr, double* wi, double* vs, lapack_int* ldvs, double* rconde, double* rcondv, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_cgeesx( char* jobvs, char* sort, LAPACK_C_SELECT1 select, char* sense, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int* sdim, lapack_complex_float* w, lapack_complex_float* vs, lapack_int* ldvs, float* rconde, float* rcondv, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_zgeesx( char* jobvs, char* sort, LAPACK_Z_SELECT1 select, char* sense, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int* sdim, lapack_complex_double* w, lapack_complex_double* vs, lapack_int* ldvs, double* rconde, double* rcondv, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_sgeev( char* jobvl, char* jobvr, lapack_int* n, float* a, lapack_int* lda, float* wr, float* wi, float* vl, lapack_int* ldvl, float* vr, lapack_int* ldvr, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgeev( char* jobvl, char* jobvr, lapack_int* n, double* a, lapack_int* lda, double* wr, double* wi, double* vl, lapack_int* ldvl, double* vr, lapack_int* ldvr, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgeev( char* jobvl, char* jobvr, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* w, lapack_complex_float* vl, lapack_int* ldvl, lapack_complex_float* vr, lapack_int* ldvr, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int *info ); void LAPACK_zgeev( char* jobvl, char* jobvr, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* w, lapack_complex_double* vl, lapack_int* ldvl, lapack_complex_double* vr, lapack_int* ldvr, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int *info ); void LAPACK_sgeevx( char* balanc, char* jobvl, char* jobvr, char* sense, lapack_int* n, float* a, lapack_int* lda, float* wr, float* wi, float* vl, lapack_int* ldvl, float* vr, lapack_int* ldvr, lapack_int* ilo, lapack_int* ihi, float* scale, float* abnrm, float* rconde, float* rcondv, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_dgeevx( char* balanc, char* jobvl, char* jobvr, char* sense, lapack_int* n, double* a, lapack_int* lda, double* wr, double* wi, double* vl, lapack_int* ldvl, double* vr, lapack_int* ldvr, lapack_int* ilo, lapack_int* ihi, double* scale, double* abnrm, double* rconde, double* rcondv, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_cgeevx( char* balanc, char* jobvl, char* jobvr, char* sense, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* w, lapack_complex_float* vl, lapack_int* ldvl, lapack_complex_float* vr, lapack_int* ldvr, lapack_int* ilo, lapack_int* ihi, float* scale, float* abnrm, float* rconde, float* rcondv, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int *info ); 
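/*
 * Illustrative sketch (not part of the original header): the prototypes above are
 * thin C declarations of the Fortran LAPACK entry points, so every argument is
 * passed by address, matrices are stored column-major, and workspace sizes are
 * normally obtained with a query call (lwork = -1) before the real call. The
 * example below drives LAPACK_dgeev, declared above, to compute the eigenvalues
 * of a small real matrix; the 3x3 data, the helper name example_dgeev, and the
 * assumption that this header (providing lapack_int) has been included are
 * hypothetical, not part of the library.
 */
#include <stdio.h>
#include <stdlib.h>

static void example_dgeev(void)
{
    char jobvl = 'N', jobvr = 'N';           /* eigenvalues only, no eigenvectors */
    lapack_int n = 3, lda = 3, ldvl = 1, ldvr = 1, lwork = -1, info = 0;
    double a[9] = { 4.0, 1.0, 0.0,           /* column-major 3x3 input matrix */
                    1.0, 3.0, 1.0,
                    0.0, 1.0, 2.0 };
    double wr[3], wi[3], vl[1], vr[1], wkopt;

    /* Workspace query: with lwork = -1 the optimal size is returned in wkopt. */
    LAPACK_dgeev( &jobvl, &jobvr, &n, a, &lda, wr, wi,
                  vl, &ldvl, vr, &ldvr, &wkopt, &lwork, &info );
    if (info != 0) return;

    lwork = (lapack_int)wkopt;
    double* work = (double*)malloc( (size_t)lwork * sizeof(double) );
    if (!work) return;

    /* Actual computation: wr/wi receive the real and imaginary eigenvalue parts. */
    LAPACK_dgeev( &jobvl, &jobvr, &n, a, &lda, wr, wi,
                  vl, &ldvl, vr, &ldvr, work, &lwork, &info );
    if (info == 0)
        for (int i = 0; i < 3; ++i)
            printf( "lambda_%d = %g %+gi\n", i, wr[i], wi[i] );
    free( work );
}
/* End of illustrative sketch; the original prototype listing resumes below. */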
void LAPACK_zgeevx( char* balanc, char* jobvl, char* jobvr, char* sense, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* w, lapack_complex_double* vl, lapack_int* ldvl, lapack_complex_double* vr, lapack_int* ldvr, lapack_int* ilo, lapack_int* ihi, double* scale, double* abnrm, double* rconde, double* rcondv, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int *info ); void LAPACK_sgesvd( char* jobu, char* jobvt, lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* s, float* u, lapack_int* ldu, float* vt, lapack_int* ldvt, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgesvd( char* jobu, char* jobvt, lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* s, double* u, lapack_int* ldu, double* vt, lapack_int* ldvt, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgesvd( char* jobu, char* jobvt, lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, float* s, lapack_complex_float* u, lapack_int* ldu, lapack_complex_float* vt, lapack_int* ldvt, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int *info ); void LAPACK_zgesvd( char* jobu, char* jobvt, lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, double* s, lapack_complex_double* u, lapack_int* ldu, lapack_complex_double* vt, lapack_int* ldvt, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int *info ); void LAPACK_sgesdd( char* jobz, lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* s, float* u, lapack_int* ldu, float* vt, lapack_int* ldvt, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_dgesdd( char* jobz, lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* s, double* u, lapack_int* ldu, double* vt, lapack_int* ldvt, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_cgesdd( char* jobz, lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, float* s, lapack_complex_float* u, lapack_int* ldu, lapack_complex_float* vt, lapack_int* ldvt, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* iwork, lapack_int *info ); void LAPACK_zgesdd( char* jobz, lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, double* s, lapack_complex_double* u, lapack_int* ldu, lapack_complex_double* vt, lapack_int* ldvt, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* iwork, lapack_int *info ); void LAPACK_dgejsv( char* joba, char* jobu, char* jobv, char* jobr, char* jobt, char* jobp, lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* sva, double* u, lapack_int* ldu, double* v, lapack_int* ldv, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_sgejsv( char* joba, char* jobu, char* jobv, char* jobr, char* jobt, char* jobp, lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* sva, float* u, lapack_int* ldu, float* v, lapack_int* ldv, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int *info ); void LAPACK_dgesvj( char* joba, char* jobu, char* jobv, lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* sva, lapack_int* mv, double* v, lapack_int* ldv, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_sgesvj( char* joba, char* jobu, char* jobv, lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* sva, lapack_int* mv, float* v, lapack_int* ldv, float* work, lapack_int* lwork, lapack_int *info ); void 
LAPACK_sggsvd( char* jobu, char* jobv, char* jobq, lapack_int* m, lapack_int* n, lapack_int* p, lapack_int* k, lapack_int* l, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* alpha, float* beta, float* u, lapack_int* ldu, float* v, lapack_int* ldv, float* q, lapack_int* ldq, float* work, lapack_int* iwork, lapack_int *info ); void LAPACK_dggsvd( char* jobu, char* jobv, char* jobq, lapack_int* m, lapack_int* n, lapack_int* p, lapack_int* k, lapack_int* l, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* alpha, double* beta, double* u, lapack_int* ldu, double* v, lapack_int* ldv, double* q, lapack_int* ldq, double* work, lapack_int* iwork, lapack_int *info ); void LAPACK_cggsvd( char* jobu, char* jobv, char* jobq, lapack_int* m, lapack_int* n, lapack_int* p, lapack_int* k, lapack_int* l, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, float* alpha, float* beta, lapack_complex_float* u, lapack_int* ldu, lapack_complex_float* v, lapack_int* ldv, lapack_complex_float* q, lapack_int* ldq, lapack_complex_float* work, float* rwork, lapack_int* iwork, lapack_int *info ); void LAPACK_zggsvd( char* jobu, char* jobv, char* jobq, lapack_int* m, lapack_int* n, lapack_int* p, lapack_int* k, lapack_int* l, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, double* alpha, double* beta, lapack_complex_double* u, lapack_int* ldu, lapack_complex_double* v, lapack_int* ldv, lapack_complex_double* q, lapack_int* ldq, lapack_complex_double* work, double* rwork, lapack_int* iwork, lapack_int *info ); void LAPACK_ssygv( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* w, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dsygv( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* w, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_chegv( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, float* w, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int *info ); void LAPACK_zhegv( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, double* w, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int *info ); void LAPACK_ssygvd( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* w, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dsygvd( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* w, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_chegvd( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, float* w, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_zhegvd( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, double* w, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* lrwork, lapack_int* 
iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_ssygvx( lapack_int* itype, char* jobz, char* range, char* uplo, lapack_int* n, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, float* z, lapack_int* ldz, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_dsygvx( lapack_int* itype, char* jobz, char* range, char* uplo, lapack_int* n, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, double* z, lapack_int* ldz, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_chegvx( lapack_int* itype, char* jobz, char* range, char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_zhegvx( lapack_int* itype, char* jobz, char* range, char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_sspgv( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, float* ap, float* bp, float* w, float* z, lapack_int* ldz, float* work, lapack_int *info ); void LAPACK_dspgv( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, double* ap, double* bp, double* w, double* z, lapack_int* ldz, double* work, lapack_int *info ); void LAPACK_chpgv( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, lapack_complex_float* ap, lapack_complex_float* bp, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zhpgv( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, lapack_complex_double* ap, lapack_complex_double* bp, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_sspgvd( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, float* ap, float* bp, float* w, float* z, lapack_int* ldz, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dspgvd( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, double* ap, double* bp, double* w, double* z, lapack_int* ldz, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_chpgvd( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, lapack_complex_float* ap, lapack_complex_float* bp, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_zhpgvd( lapack_int* itype, char* jobz, char* uplo, lapack_int* n, lapack_complex_double* ap, lapack_complex_double* bp, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* 
liwork, lapack_int *info ); void LAPACK_sspgvx( lapack_int* itype, char* jobz, char* range, char* uplo, lapack_int* n, float* ap, float* bp, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, float* z, lapack_int* ldz, float* work, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_dspgvx( lapack_int* itype, char* jobz, char* range, char* uplo, lapack_int* n, double* ap, double* bp, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, double* z, lapack_int* ldz, double* work, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_chpgvx( lapack_int* itype, char* jobz, char* range, char* uplo, lapack_int* n, lapack_complex_float* ap, lapack_complex_float* bp, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, float* rwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_zhpgvx( lapack_int* itype, char* jobz, char* range, char* uplo, lapack_int* n, lapack_complex_double* ap, lapack_complex_double* bp, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, double* rwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_ssbgv( char* jobz, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, float* ab, lapack_int* ldab, float* bb, lapack_int* ldbb, float* w, float* z, lapack_int* ldz, float* work, lapack_int *info ); void LAPACK_dsbgv( char* jobz, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, double* ab, lapack_int* ldab, double* bb, lapack_int* ldbb, double* w, double* z, lapack_int* ldz, double* work, lapack_int *info ); void LAPACK_chbgv( char* jobz, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, lapack_complex_float* ab, lapack_int* ldab, lapack_complex_float* bb, lapack_int* ldbb, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, float* rwork, lapack_int *info ); void LAPACK_zhbgv( char* jobz, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, lapack_complex_double* ab, lapack_int* ldab, lapack_complex_double* bb, lapack_int* ldbb, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, double* rwork, lapack_int *info ); void LAPACK_ssbgvd( char* jobz, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, float* ab, lapack_int* ldab, float* bb, lapack_int* ldbb, float* w, float* z, lapack_int* ldz, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_dsbgvd( char* jobz, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, double* ab, lapack_int* ldab, double* bb, lapack_int* ldbb, double* w, double* z, lapack_int* ldz, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_chbgvd( char* jobz, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, lapack_complex_float* ab, lapack_int* ldab, lapack_complex_float* bb, lapack_int* ldbb, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_zhbgvd( char* jobz, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, lapack_complex_double* ab, lapack_int* ldab, lapack_complex_double* bb, lapack_int* ldbb, 
double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* lrwork, lapack_int* iwork, lapack_int* liwork, lapack_int *info ); void LAPACK_ssbgvx( char* jobz, char* range, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, float* ab, lapack_int* ldab, float* bb, lapack_int* ldbb, float* q, lapack_int* ldq, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, float* z, lapack_int* ldz, float* work, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_dsbgvx( char* jobz, char* range, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, double* ab, lapack_int* ldab, double* bb, lapack_int* ldbb, double* q, lapack_int* ldq, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, double* z, lapack_int* ldz, double* work, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_chbgvx( char* jobz, char* range, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, lapack_complex_float* ab, lapack_int* ldab, lapack_complex_float* bb, lapack_int* ldbb, lapack_complex_float* q, lapack_int* ldq, float* vl, float* vu, lapack_int* il, lapack_int* iu, float* abstol, lapack_int* m, float* w, lapack_complex_float* z, lapack_int* ldz, lapack_complex_float* work, float* rwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_zhbgvx( char* jobz, char* range, char* uplo, lapack_int* n, lapack_int* ka, lapack_int* kb, lapack_complex_double* ab, lapack_int* ldab, lapack_complex_double* bb, lapack_int* ldbb, lapack_complex_double* q, lapack_int* ldq, double* vl, double* vu, lapack_int* il, lapack_int* iu, double* abstol, lapack_int* m, double* w, lapack_complex_double* z, lapack_int* ldz, lapack_complex_double* work, double* rwork, lapack_int* iwork, lapack_int* ifail, lapack_int *info ); void LAPACK_sgges( char* jobvsl, char* jobvsr, char* sort, LAPACK_S_SELECT3 selctg, lapack_int* n, float* a, lapack_int* lda, float* b, lapack_int* ldb, lapack_int* sdim, float* alphar, float* alphai, float* beta, float* vsl, lapack_int* ldvsl, float* vsr, lapack_int* ldvsr, float* work, lapack_int* lwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_dgges( char* jobvsl, char* jobvsr, char* sort, LAPACK_D_SELECT3 selctg, lapack_int* n, double* a, lapack_int* lda, double* b, lapack_int* ldb, lapack_int* sdim, double* alphar, double* alphai, double* beta, double* vsl, lapack_int* ldvsl, double* vsr, lapack_int* ldvsr, double* work, lapack_int* lwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_cgges( char* jobvsl, char* jobvsr, char* sort, LAPACK_C_SELECT2 selctg, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_int* sdim, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* vsl, lapack_int* ldvsl, lapack_complex_float* vsr, lapack_int* ldvsr, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_zgges( char* jobvsl, char* jobvsr, char* sort, LAPACK_Z_SELECT2 selctg, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_int* sdim, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* vsl, lapack_int* ldvsl, lapack_complex_double* vsr, lapack_int* ldvsr, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_logical* bwork, lapack_int *info ); void 
LAPACK_sggesx( char* jobvsl, char* jobvsr, char* sort, LAPACK_S_SELECT3 selctg, char* sense, lapack_int* n, float* a, lapack_int* lda, float* b, lapack_int* ldb, lapack_int* sdim, float* alphar, float* alphai, float* beta, float* vsl, lapack_int* ldvsl, float* vsr, lapack_int* ldvsr, float* rconde, float* rcondv, float* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_dggesx( char* jobvsl, char* jobvsr, char* sort, LAPACK_D_SELECT3 selctg, char* sense, lapack_int* n, double* a, lapack_int* lda, double* b, lapack_int* ldb, lapack_int* sdim, double* alphar, double* alphai, double* beta, double* vsl, lapack_int* ldvsl, double* vsr, lapack_int* ldvsr, double* rconde, double* rcondv, double* work, lapack_int* lwork, lapack_int* iwork, lapack_int* liwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_cggesx( char* jobvsl, char* jobvsr, char* sort, LAPACK_C_SELECT2 selctg, char* sense, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_int* sdim, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* vsl, lapack_int* ldvsl, lapack_complex_float* vsr, lapack_int* ldvsr, float* rconde, float* rcondv, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* iwork, lapack_int* liwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_zggesx( char* jobvsl, char* jobvsr, char* sort, LAPACK_Z_SELECT2 selctg, char* sense, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_int* sdim, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* vsl, lapack_int* ldvsl, lapack_complex_double* vsr, lapack_int* ldvsr, double* rconde, double* rcondv, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* iwork, lapack_int* liwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_sggev( char* jobvl, char* jobvr, lapack_int* n, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* alphar, float* alphai, float* beta, float* vl, lapack_int* ldvl, float* vr, lapack_int* ldvr, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dggev( char* jobvl, char* jobvr, lapack_int* n, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* alphar, double* alphai, double* beta, double* vl, lapack_int* ldvl, double* vr, lapack_int* ldvr, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cggev( char* jobvl, char* jobvr, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* vl, lapack_int* ldvl, lapack_complex_float* vr, lapack_int* ldvr, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int *info ); void LAPACK_zggev( char* jobvl, char* jobvr, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* vl, lapack_int* ldvl, lapack_complex_double* vr, lapack_int* ldvr, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int *info ); void LAPACK_sggevx( char* balanc, char* jobvl, char* jobvr, char* sense, lapack_int* n, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* alphar, float* alphai, float* beta, float* vl, lapack_int* ldvl, float* vr, lapack_int* ldvr, lapack_int* ilo, lapack_int* ihi, float* lscale, float* rscale, 
float* abnrm, float* bbnrm, float* rconde, float* rcondv, float* work, lapack_int* lwork, lapack_int* iwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_dggevx( char* balanc, char* jobvl, char* jobvr, char* sense, lapack_int* n, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* alphar, double* alphai, double* beta, double* vl, lapack_int* ldvl, double* vr, lapack_int* ldvr, lapack_int* ilo, lapack_int* ihi, double* lscale, double* rscale, double* abnrm, double* bbnrm, double* rconde, double* rcondv, double* work, lapack_int* lwork, lapack_int* iwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_cggevx( char* balanc, char* jobvl, char* jobvr, char* sense, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* vl, lapack_int* ldvl, lapack_complex_float* vr, lapack_int* ldvr, lapack_int* ilo, lapack_int* ihi, float* lscale, float* rscale, float* abnrm, float* bbnrm, float* rconde, float* rcondv, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* iwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_zggevx( char* balanc, char* jobvl, char* jobvr, char* sense, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* vl, lapack_int* ldvl, lapack_complex_double* vr, lapack_int* ldvr, lapack_int* ilo, lapack_int* ihi, double* lscale, double* rscale, double* abnrm, double* bbnrm, double* rconde, double* rcondv, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* iwork, lapack_logical* bwork, lapack_int *info ); void LAPACK_dsfrk( char* transr, char* uplo, char* trans, lapack_int* n, lapack_int* k, double* alpha, const double* a, lapack_int* lda, double* beta, double* c ); void LAPACK_ssfrk( char* transr, char* uplo, char* trans, lapack_int* n, lapack_int* k, float* alpha, const float* a, lapack_int* lda, float* beta, float* c ); void LAPACK_zhfrk( char* transr, char* uplo, char* trans, lapack_int* n, lapack_int* k, double* alpha, const lapack_complex_double* a, lapack_int* lda, double* beta, lapack_complex_double* c ); void LAPACK_chfrk( char* transr, char* uplo, char* trans, lapack_int* n, lapack_int* k, float* alpha, const lapack_complex_float* a, lapack_int* lda, float* beta, lapack_complex_float* c ); void LAPACK_dtfsm( char* transr, char* side, char* uplo, char* trans, char* diag, lapack_int* m, lapack_int* n, double* alpha, const double* a, double* b, lapack_int* ldb ); void LAPACK_stfsm( char* transr, char* side, char* uplo, char* trans, char* diag, lapack_int* m, lapack_int* n, float* alpha, const float* a, float* b, lapack_int* ldb ); void LAPACK_ztfsm( char* transr, char* side, char* uplo, char* trans, char* diag, lapack_int* m, lapack_int* n, lapack_complex_double* alpha, const lapack_complex_double* a, lapack_complex_double* b, lapack_int* ldb ); void LAPACK_ctfsm( char* transr, char* side, char* uplo, char* trans, char* diag, lapack_int* m, lapack_int* n, lapack_complex_float* alpha, const lapack_complex_float* a, lapack_complex_float* b, lapack_int* ldb ); void LAPACK_dtfttp( char* transr, char* uplo, lapack_int* n, const double* arf, double* ap, lapack_int *info ); void LAPACK_stfttp( char* transr, char* uplo, lapack_int* n, const float* arf, float* ap, lapack_int *info ); void LAPACK_ztfttp( char* transr, char* uplo, lapack_int* n, const 
lapack_complex_double* arf, lapack_complex_double* ap, lapack_int *info ); void LAPACK_ctfttp( char* transr, char* uplo, lapack_int* n, const lapack_complex_float* arf, lapack_complex_float* ap, lapack_int *info ); void LAPACK_dtfttr( char* transr, char* uplo, lapack_int* n, const double* arf, double* a, lapack_int* lda, lapack_int *info ); void LAPACK_stfttr( char* transr, char* uplo, lapack_int* n, const float* arf, float* a, lapack_int* lda, lapack_int *info ); void LAPACK_ztfttr( char* transr, char* uplo, lapack_int* n, const lapack_complex_double* arf, lapack_complex_double* a, lapack_int* lda, lapack_int *info ); void LAPACK_ctfttr( char* transr, char* uplo, lapack_int* n, const lapack_complex_float* arf, lapack_complex_float* a, lapack_int* lda, lapack_int *info ); void LAPACK_dtpttf( char* transr, char* uplo, lapack_int* n, const double* ap, double* arf, lapack_int *info ); void LAPACK_stpttf( char* transr, char* uplo, lapack_int* n, const float* ap, float* arf, lapack_int *info ); void LAPACK_ztpttf( char* transr, char* uplo, lapack_int* n, const lapack_complex_double* ap, lapack_complex_double* arf, lapack_int *info ); void LAPACK_ctpttf( char* transr, char* uplo, lapack_int* n, const lapack_complex_float* ap, lapack_complex_float* arf, lapack_int *info ); void LAPACK_dtpttr( char* uplo, lapack_int* n, const double* ap, double* a, lapack_int* lda, lapack_int *info ); void LAPACK_stpttr( char* uplo, lapack_int* n, const float* ap, float* a, lapack_int* lda, lapack_int *info ); void LAPACK_ztpttr( char* uplo, lapack_int* n, const lapack_complex_double* ap, lapack_complex_double* a, lapack_int* lda, lapack_int *info ); void LAPACK_ctpttr( char* uplo, lapack_int* n, const lapack_complex_float* ap, lapack_complex_float* a, lapack_int* lda, lapack_int *info ); void LAPACK_dtrttf( char* transr, char* uplo, lapack_int* n, const double* a, lapack_int* lda, double* arf, lapack_int *info ); void LAPACK_strttf( char* transr, char* uplo, lapack_int* n, const float* a, lapack_int* lda, float* arf, lapack_int *info ); void LAPACK_ztrttf( char* transr, char* uplo, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, lapack_complex_double* arf, lapack_int *info ); void LAPACK_ctrttf( char* transr, char* uplo, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, lapack_complex_float* arf, lapack_int *info ); void LAPACK_dtrttp( char* uplo, lapack_int* n, const double* a, lapack_int* lda, double* ap, lapack_int *info ); void LAPACK_strttp( char* uplo, lapack_int* n, const float* a, lapack_int* lda, float* ap, lapack_int *info ); void LAPACK_ztrttp( char* uplo, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, lapack_complex_double* ap, lapack_int *info ); void LAPACK_ctrttp( char* uplo, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, lapack_complex_float* ap, lapack_int *info ); void LAPACK_sgeqrfp( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* tau, float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_dgeqrfp( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* tau, double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_cgeqrfp( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int* lwork, lapack_int *info ); void LAPACK_zgeqrfp( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int* lwork, lapack_int *info ); void LAPACK_clacgv( 
lapack_int* n, lapack_complex_float* x, lapack_int* incx ); void LAPACK_zlacgv( lapack_int* n, lapack_complex_double* x, lapack_int* incx ); void LAPACK_slarnv( lapack_int* idist, lapack_int* iseed, lapack_int* n, float* x ); void LAPACK_dlarnv( lapack_int* idist, lapack_int* iseed, lapack_int* n, double* x ); void LAPACK_clarnv( lapack_int* idist, lapack_int* iseed, lapack_int* n, lapack_complex_float* x ); void LAPACK_zlarnv( lapack_int* idist, lapack_int* iseed, lapack_int* n, lapack_complex_double* x ); void LAPACK_sgeqr2( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* tau, float* work, lapack_int *info ); void LAPACK_dgeqr2( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* tau, double* work, lapack_int *info ); void LAPACK_cgeqr2( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int *info ); void LAPACK_zgeqr2( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int *info ); void LAPACK_slacpy( char* uplo, lapack_int* m, lapack_int* n, const float* a, lapack_int* lda, float* b, lapack_int* ldb ); void LAPACK_dlacpy( char* uplo, lapack_int* m, lapack_int* n, const double* a, lapack_int* lda, double* b, lapack_int* ldb ); void LAPACK_clacpy( char* uplo, lapack_int* m, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb ); void LAPACK_zlacpy( char* uplo, lapack_int* m, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb ); void LAPACK_sgetf2( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, lapack_int* ipiv, lapack_int *info ); void LAPACK_dgetf2( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, lapack_int* ipiv, lapack_int *info ); void LAPACK_cgetf2( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int* ipiv, lapack_int *info ); void LAPACK_zgetf2( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int* ipiv, lapack_int *info ); void LAPACK_slaswp( lapack_int* n, float* a, lapack_int* lda, lapack_int* k1, lapack_int* k2, const lapack_int* ipiv, lapack_int* incx ); void LAPACK_dlaswp( lapack_int* n, double* a, lapack_int* lda, lapack_int* k1, lapack_int* k2, const lapack_int* ipiv, lapack_int* incx ); void LAPACK_claswp( lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int* k1, lapack_int* k2, const lapack_int* ipiv, lapack_int* incx ); void LAPACK_zlaswp( lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int* k1, lapack_int* k2, const lapack_int* ipiv, lapack_int* incx ); float LAPACK_slange( char* norm, lapack_int* m, lapack_int* n, const float* a, lapack_int* lda, float* work ); double LAPACK_dlange( char* norm, lapack_int* m, lapack_int* n, const double* a, lapack_int* lda, double* work ); float LAPACK_clange( char* norm, lapack_int* m, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, float* work ); double LAPACK_zlange( char* norm, lapack_int* m, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, double* work ); float LAPACK_clanhe( char* norm, char* uplo, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, float* work ); double LAPACK_zlanhe( char* norm, char* uplo, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, double* work ); float LAPACK_slansy( char* norm, char* uplo, lapack_int* n, const float* a, lapack_int* lda, 
float* work ); double LAPACK_dlansy( char* norm, char* uplo, lapack_int* n, const double* a, lapack_int* lda, double* work ); float LAPACK_clansy( char* norm, char* uplo, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, float* work ); double LAPACK_zlansy( char* norm, char* uplo, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, double* work ); float LAPACK_slantr( char* norm, char* uplo, char* diag, lapack_int* m, lapack_int* n, const float* a, lapack_int* lda, float* work ); double LAPACK_dlantr( char* norm, char* uplo, char* diag, lapack_int* m, lapack_int* n, const double* a, lapack_int* lda, double* work ); float LAPACK_clantr( char* norm, char* uplo, char* diag, lapack_int* m, lapack_int* n, const lapack_complex_float* a, lapack_int* lda, float* work ); double LAPACK_zlantr( char* norm, char* uplo, char* diag, lapack_int* m, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, double* work ); float LAPACK_slamch( char* cmach ); double LAPACK_dlamch( char* cmach ); void LAPACK_sgelq2( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* tau, float* work, lapack_int *info ); void LAPACK_dgelq2( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* tau, double* work, lapack_int *info ); void LAPACK_cgelq2( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* tau, lapack_complex_float* work, lapack_int *info ); void LAPACK_zgelq2( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* tau, lapack_complex_double* work, lapack_int *info ); void LAPACK_slarfb( char* side, char* trans, char* direct, char* storev, lapack_int* m, lapack_int* n, lapack_int* k, const float* v, lapack_int* ldv, const float* t, lapack_int* ldt, float* c, lapack_int* ldc, float* work, lapack_int* ldwork ); void LAPACK_dlarfb( char* side, char* trans, char* direct, char* storev, lapack_int* m, lapack_int* n, lapack_int* k, const double* v, lapack_int* ldv, const double* t, lapack_int* ldt, double* c, lapack_int* ldc, double* work, lapack_int* ldwork ); void LAPACK_clarfb( char* side, char* trans, char* direct, char* storev, lapack_int* m, lapack_int* n, lapack_int* k, const lapack_complex_float* v, lapack_int* ldv, const lapack_complex_float* t, lapack_int* ldt, lapack_complex_float* c, lapack_int* ldc, lapack_complex_float* work, lapack_int* ldwork ); void LAPACK_zlarfb( char* side, char* trans, char* direct, char* storev, lapack_int* m, lapack_int* n, lapack_int* k, const lapack_complex_double* v, lapack_int* ldv, const lapack_complex_double* t, lapack_int* ldt, lapack_complex_double* c, lapack_int* ldc, lapack_complex_double* work, lapack_int* ldwork ); void LAPACK_slarfg( lapack_int* n, float* alpha, float* x, lapack_int* incx, float* tau ); void LAPACK_dlarfg( lapack_int* n, double* alpha, double* x, lapack_int* incx, double* tau ); void LAPACK_clarfg( lapack_int* n, lapack_complex_float* alpha, lapack_complex_float* x, lapack_int* incx, lapack_complex_float* tau ); void LAPACK_zlarfg( lapack_int* n, lapack_complex_double* alpha, lapack_complex_double* x, lapack_int* incx, lapack_complex_double* tau ); void LAPACK_slarft( char* direct, char* storev, lapack_int* n, lapack_int* k, const float* v, lapack_int* ldv, const float* tau, float* t, lapack_int* ldt ); void LAPACK_dlarft( char* direct, char* storev, lapack_int* n, lapack_int* k, const double* v, lapack_int* ldv, const double* tau, double* t, lapack_int* ldt ); void LAPACK_clarft( char* direct, char* storev, 
lapack_int* n, lapack_int* k, const lapack_complex_float* v, lapack_int* ldv, const lapack_complex_float* tau, lapack_complex_float* t, lapack_int* ldt ); void LAPACK_zlarft( char* direct, char* storev, lapack_int* n, lapack_int* k, const lapack_complex_double* v, lapack_int* ldv, const lapack_complex_double* tau, lapack_complex_double* t, lapack_int* ldt ); void LAPACK_slarfx( char* side, lapack_int* m, lapack_int* n, const float* v, float* tau, float* c, lapack_int* ldc, float* work ); void LAPACK_dlarfx( char* side, lapack_int* m, lapack_int* n, const double* v, double* tau, double* c, lapack_int* ldc, double* work ); void LAPACK_clarfx( char* side, lapack_int* m, lapack_int* n, const lapack_complex_float* v, lapack_complex_float* tau, lapack_complex_float* c, lapack_int* ldc, lapack_complex_float* work ); void LAPACK_zlarfx( char* side, lapack_int* m, lapack_int* n, const lapack_complex_double* v, lapack_complex_double* tau, lapack_complex_double* c, lapack_int* ldc, lapack_complex_double* work ); void LAPACK_slatms( lapack_int* m, lapack_int* n, char* dist, lapack_int* iseed, char* sym, float* d, lapack_int* mode, float* cond, float* dmax, lapack_int* kl, lapack_int* ku, char* pack, float* a, lapack_int* lda, float* work, lapack_int *info ); void LAPACK_dlatms( lapack_int* m, lapack_int* n, char* dist, lapack_int* iseed, char* sym, double* d, lapack_int* mode, double* cond, double* dmax, lapack_int* kl, lapack_int* ku, char* pack, double* a, lapack_int* lda, double* work, lapack_int *info ); void LAPACK_clatms( lapack_int* m, lapack_int* n, char* dist, lapack_int* iseed, char* sym, float* d, lapack_int* mode, float* cond, float* dmax, lapack_int* kl, lapack_int* ku, char* pack, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* work, lapack_int *info ); void LAPACK_zlatms( lapack_int* m, lapack_int* n, char* dist, lapack_int* iseed, char* sym, double* d, lapack_int* mode, double* cond, double* dmax, lapack_int* kl, lapack_int* ku, char* pack, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* work, lapack_int *info ); void LAPACK_slag2d( lapack_int* m, lapack_int* n, const float* sa, lapack_int* ldsa, double* a, lapack_int* lda, lapack_int *info ); void LAPACK_dlag2s( lapack_int* m, lapack_int* n, const double* a, lapack_int* lda, float* sa, lapack_int* ldsa, lapack_int *info ); void LAPACK_clag2z( lapack_int* m, lapack_int* n, const lapack_complex_float* sa, lapack_int* ldsa, lapack_complex_double* a, lapack_int* lda, lapack_int *info ); void LAPACK_zlag2c( lapack_int* m, lapack_int* n, const lapack_complex_double* a, lapack_int* lda, lapack_complex_float* sa, lapack_int* ldsa, lapack_int *info ); void LAPACK_slauum( char* uplo, lapack_int* n, float* a, lapack_int* lda, lapack_int *info ); void LAPACK_dlauum( char* uplo, lapack_int* n, double* a, lapack_int* lda, lapack_int *info ); void LAPACK_clauum( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_int *info ); void LAPACK_zlauum( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_int *info ); void LAPACK_slagge( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, const float* d, float* a, lapack_int* lda, lapack_int* iseed, float* work, lapack_int *info ); void LAPACK_dlagge( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, const double* d, double* a, lapack_int* lda, lapack_int* iseed, double* work, lapack_int *info ); void LAPACK_clagge( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, const float* d, 
lapack_complex_float* a, lapack_int* lda, lapack_int* iseed, lapack_complex_float* work, lapack_int *info ); void LAPACK_zlagge( lapack_int* m, lapack_int* n, lapack_int* kl, lapack_int* ku, const double* d, lapack_complex_double* a, lapack_int* lda, lapack_int* iseed, lapack_complex_double* work, lapack_int *info ); void LAPACK_slaset( char* uplo, lapack_int* m, lapack_int* n, float* alpha, float* beta, float* a, lapack_int* lda ); void LAPACK_dlaset( char* uplo, lapack_int* m, lapack_int* n, double* alpha, double* beta, double* a, lapack_int* lda ); void LAPACK_claset( char* uplo, lapack_int* m, lapack_int* n, lapack_complex_float* alpha, lapack_complex_float* beta, lapack_complex_float* a, lapack_int* lda ); void LAPACK_zlaset( char* uplo, lapack_int* m, lapack_int* n, lapack_complex_double* alpha, lapack_complex_double* beta, lapack_complex_double* a, lapack_int* lda ); void LAPACK_slasrt( char* id, lapack_int* n, float* d, lapack_int *info ); void LAPACK_dlasrt( char* id, lapack_int* n, double* d, lapack_int *info ); void LAPACK_claghe( lapack_int* n, lapack_int* k, const float* d, lapack_complex_float* a, lapack_int* lda, lapack_int* iseed, lapack_complex_float* work, lapack_int *info ); void LAPACK_zlaghe( lapack_int* n, lapack_int* k, const double* d, lapack_complex_double* a, lapack_int* lda, lapack_int* iseed, lapack_complex_double* work, lapack_int *info ); void LAPACK_slagsy( lapack_int* n, lapack_int* k, const float* d, float* a, lapack_int* lda, lapack_int* iseed, float* work, lapack_int *info ); void LAPACK_dlagsy( lapack_int* n, lapack_int* k, const double* d, double* a, lapack_int* lda, lapack_int* iseed, double* work, lapack_int *info ); void LAPACK_clagsy( lapack_int* n, lapack_int* k, const float* d, lapack_complex_float* a, lapack_int* lda, lapack_int* iseed, lapack_complex_float* work, lapack_int *info ); void LAPACK_zlagsy( lapack_int* n, lapack_int* k, const double* d, lapack_complex_double* a, lapack_int* lda, lapack_int* iseed, lapack_complex_double* work, lapack_int *info ); void LAPACK_slapmr( lapack_logical* forwrd, lapack_int* m, lapack_int* n, float* x, lapack_int* ldx, lapack_int* k ); void LAPACK_dlapmr( lapack_logical* forwrd, lapack_int* m, lapack_int* n, double* x, lapack_int* ldx, lapack_int* k ); void LAPACK_clapmr( lapack_logical* forwrd, lapack_int* m, lapack_int* n, lapack_complex_float* x, lapack_int* ldx, lapack_int* k ); void LAPACK_zlapmr( lapack_logical* forwrd, lapack_int* m, lapack_int* n, lapack_complex_double* x, lapack_int* ldx, lapack_int* k ); float LAPACK_slapy2( float* x, float* y ); double LAPACK_dlapy2( double* x, double* y ); float LAPACK_slapy3( float* x, float* y, float* z ); double LAPACK_dlapy3( double* x, double* y, double* z ); void LAPACK_slartgp( float* f, float* g, float* cs, float* sn, float* r ); void LAPACK_dlartgp( double* f, double* g, double* cs, double* sn, double* r ); void LAPACK_slartgs( float* x, float* y, float* sigma, float* cs, float* sn ); void LAPACK_dlartgs( double* x, double* y, double* sigma, double* cs, double* sn ); // LAPACK 3.3.0 void LAPACK_cbbcsd( char* jobu1, char* jobu2, char* jobv1t, char* jobv2t, char* trans, lapack_int* m, lapack_int* p, lapack_int* q, float* theta, float* phi, lapack_complex_float* u1, lapack_int* ldu1, lapack_complex_float* u2, lapack_int* ldu2, lapack_complex_float* v1t, lapack_int* ldv1t, lapack_complex_float* v2t, lapack_int* ldv2t, float* b11d, float* b11e, float* b12d, float* b12e, float* b21d, float* b21e, float* b22d, float* b22e, float* rwork, lapack_int* lrwork , 
lapack_int *info ); void LAPACK_cheswapr( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* i1, lapack_int* i2 ); void LAPACK_chetri2( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int* lwork , lapack_int *info ); void LAPACK_chetri2x( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int* nb , lapack_int *info ); void LAPACK_chetrs2( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* work , lapack_int *info ); void LAPACK_csyconv( char* uplo, char* way, lapack_int* n, lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* work , lapack_int *info ); void LAPACK_csyswapr( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* i1, lapack_int* i2 ); void LAPACK_csytri2( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int* lwork , lapack_int *info ); void LAPACK_csytri2x( char* uplo, lapack_int* n, lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int* nb , lapack_int *info ); void LAPACK_csytrs2( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* work , lapack_int *info ); void LAPACK_cunbdb( char* trans, char* signs, lapack_int* m, lapack_int* p, lapack_int* q, lapack_complex_float* x11, lapack_int* ldx11, lapack_complex_float* x12, lapack_int* ldx12, lapack_complex_float* x21, lapack_int* ldx21, lapack_complex_float* x22, lapack_int* ldx22, float* theta, float* phi, lapack_complex_float* taup1, lapack_complex_float* taup2, lapack_complex_float* tauq1, lapack_complex_float* tauq2, lapack_complex_float* work, lapack_int* lwork , lapack_int *info ); void LAPACK_cuncsd( char* jobu1, char* jobu2, char* jobv1t, char* jobv2t, char* trans, char* signs, lapack_int* m, lapack_int* p, lapack_int* q, lapack_complex_float* x11, lapack_int* ldx11, lapack_complex_float* x12, lapack_int* ldx12, lapack_complex_float* x21, lapack_int* ldx21, lapack_complex_float* x22, lapack_int* ldx22, float* theta, lapack_complex_float* u1, lapack_int* ldu1, lapack_complex_float* u2, lapack_int* ldu2, lapack_complex_float* v1t, lapack_int* ldv1t, lapack_complex_float* v2t, lapack_int* ldv2t, lapack_complex_float* work, lapack_int* lwork, float* rwork, lapack_int* lrwork, lapack_int* iwork , lapack_int *info ); void LAPACK_dbbcsd( char* jobu1, char* jobu2, char* jobv1t, char* jobv2t, char* trans, lapack_int* m, lapack_int* p, lapack_int* q, double* theta, double* phi, double* u1, lapack_int* ldu1, double* u2, lapack_int* ldu2, double* v1t, lapack_int* ldv1t, double* v2t, lapack_int* ldv2t, double* b11d, double* b11e, double* b12d, double* b12e, double* b21d, double* b21e, double* b22d, double* b22e, double* work, lapack_int* lwork , lapack_int *info ); void LAPACK_dorbdb( char* trans, char* signs, lapack_int* m, lapack_int* p, lapack_int* q, double* x11, lapack_int* ldx11, double* x12, lapack_int* ldx12, double* x21, lapack_int* ldx21, double* x22, lapack_int* ldx22, double* theta, double* phi, double* taup1, double* taup2, double* tauq1, double* tauq2, double* work, lapack_int* lwork , lapack_int *info ); void LAPACK_dorcsd( 
char* jobu1, char* jobu2, char* jobv1t, char* jobv2t, char* trans, char* signs, lapack_int* m, lapack_int* p, lapack_int* q, double* x11, lapack_int* ldx11, double* x12, lapack_int* ldx12, double* x21, lapack_int* ldx21, double* x22, lapack_int* ldx22, double* theta, double* u1, lapack_int* ldu1, double* u2, lapack_int* ldu2, double* v1t, lapack_int* ldv1t, double* v2t, lapack_int* ldv2t, double* work, lapack_int* lwork, lapack_int* iwork , lapack_int *info ); void LAPACK_dsyconv( char* uplo, char* way, lapack_int* n, double* a, lapack_int* lda, const lapack_int* ipiv, double* work , lapack_int *info ); void LAPACK_dsyswapr( char* uplo, lapack_int* n, double* a, lapack_int* i1, lapack_int* i2 ); void LAPACK_dsytri2( char* uplo, lapack_int* n, double* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_double* work, lapack_int* lwork , lapack_int *info ); void LAPACK_dsytri2x( char* uplo, lapack_int* n, double* a, lapack_int* lda, const lapack_int* ipiv, double* work, lapack_int* nb , lapack_int *info ); void LAPACK_dsytrs2( char* uplo, lapack_int* n, lapack_int* nrhs, const double* a, lapack_int* lda, const lapack_int* ipiv, double* b, lapack_int* ldb, double* work , lapack_int *info ); void LAPACK_sbbcsd( char* jobu1, char* jobu2, char* jobv1t, char* jobv2t, char* trans, lapack_int* m, lapack_int* p, lapack_int* q, float* theta, float* phi, float* u1, lapack_int* ldu1, float* u2, lapack_int* ldu2, float* v1t, lapack_int* ldv1t, float* v2t, lapack_int* ldv2t, float* b11d, float* b11e, float* b12d, float* b12e, float* b21d, float* b21e, float* b22d, float* b22e, float* work, lapack_int* lwork , lapack_int *info ); void LAPACK_sorbdb( char* trans, char* signs, lapack_int* m, lapack_int* p, lapack_int* q, float* x11, lapack_int* ldx11, float* x12, lapack_int* ldx12, float* x21, lapack_int* ldx21, float* x22, lapack_int* ldx22, float* theta, float* phi, float* taup1, float* taup2, float* tauq1, float* tauq2, float* work, lapack_int* lwork , lapack_int *info ); void LAPACK_sorcsd( char* jobu1, char* jobu2, char* jobv1t, char* jobv2t, char* trans, char* signs, lapack_int* m, lapack_int* p, lapack_int* q, float* x11, lapack_int* ldx11, float* x12, lapack_int* ldx12, float* x21, lapack_int* ldx21, float* x22, lapack_int* ldx22, float* theta, float* u1, lapack_int* ldu1, float* u2, lapack_int* ldu2, float* v1t, lapack_int* ldv1t, float* v2t, lapack_int* ldv2t, float* work, lapack_int* lwork, lapack_int* iwork , lapack_int *info ); void LAPACK_ssyconv( char* uplo, char* way, lapack_int* n, float* a, lapack_int* lda, const lapack_int* ipiv, float* work , lapack_int *info ); void LAPACK_ssyswapr( char* uplo, lapack_int* n, float* a, lapack_int* i1, lapack_int* i2 ); void LAPACK_ssytri2( char* uplo, lapack_int* n, float* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_float* work, lapack_int* lwork , lapack_int *info ); void LAPACK_ssytri2x( char* uplo, lapack_int* n, float* a, lapack_int* lda, const lapack_int* ipiv, float* work, lapack_int* nb , lapack_int *info ); void LAPACK_ssytrs2( char* uplo, lapack_int* n, lapack_int* nrhs, const float* a, lapack_int* lda, const lapack_int* ipiv, float* b, lapack_int* ldb, float* work , lapack_int *info ); void LAPACK_zbbcsd( char* jobu1, char* jobu2, char* jobv1t, char* jobv2t, char* trans, lapack_int* m, lapack_int* p, lapack_int* q, double* theta, double* phi, lapack_complex_double* u1, lapack_int* ldu1, lapack_complex_double* u2, lapack_int* ldu2, lapack_complex_double* v1t, lapack_int* ldv1t, lapack_complex_double* v2t, lapack_int* ldv2t, 
double* b11d, double* b11e, double* b12d, double* b12e, double* b21d, double* b21e, double* b22d, double* b22e, double* rwork, lapack_int* lrwork , lapack_int *info ); void LAPACK_zheswapr( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* i1, lapack_int* i2 ); void LAPACK_zhetri2( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_double* work, lapack_int* lwork , lapack_int *info ); void LAPACK_zhetri2x( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_double* work, lapack_int* nb , lapack_int *info ); void LAPACK_zhetrs2( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* work , lapack_int *info ); void LAPACK_zsyconv( char* uplo, char* way, lapack_int* n, lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_double* work , lapack_int *info ); void LAPACK_zsyswapr( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* i1, lapack_int* i2 ); void LAPACK_zsytri2( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_double* work, lapack_int* lwork , lapack_int *info ); void LAPACK_zsytri2x( char* uplo, lapack_int* n, lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_double* work, lapack_int* nb , lapack_int *info ); void LAPACK_zsytrs2( char* uplo, lapack_int* n, lapack_int* nrhs, const lapack_complex_double* a, lapack_int* lda, const lapack_int* ipiv, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* work , lapack_int *info ); void LAPACK_zunbdb( char* trans, char* signs, lapack_int* m, lapack_int* p, lapack_int* q, lapack_complex_double* x11, lapack_int* ldx11, lapack_complex_double* x12, lapack_int* ldx12, lapack_complex_double* x21, lapack_int* ldx21, lapack_complex_double* x22, lapack_int* ldx22, double* theta, double* phi, lapack_complex_double* taup1, lapack_complex_double* taup2, lapack_complex_double* tauq1, lapack_complex_double* tauq2, lapack_complex_double* work, lapack_int* lwork , lapack_int *info ); void LAPACK_zuncsd( char* jobu1, char* jobu2, char* jobv1t, char* jobv2t, char* trans, char* signs, lapack_int* m, lapack_int* p, lapack_int* q, lapack_complex_double* x11, lapack_int* ldx11, lapack_complex_double* x12, lapack_int* ldx12, lapack_complex_double* x21, lapack_int* ldx21, lapack_complex_double* x22, lapack_int* ldx22, double* theta, lapack_complex_double* u1, lapack_int* ldu1, lapack_complex_double* u2, lapack_int* ldu2, lapack_complex_double* v1t, lapack_int* ldv1t, lapack_complex_double* v2t, lapack_int* ldv2t, lapack_complex_double* work, lapack_int* lwork, double* rwork, lapack_int* lrwork, lapack_int* iwork , lapack_int *info ); // LAPACK 3.4.0 void LAPACK_sgemqrt( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* nb, const float* v, lapack_int* ldv, const float* t, lapack_int* ldt, float* c, lapack_int* ldc, float* work, lapack_int *info ); void LAPACK_dgemqrt( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* nb, const double* v, lapack_int* ldv, const double* t, lapack_int* ldt, double* c, lapack_int* ldc, double* work, lapack_int *info ); void LAPACK_cgemqrt( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* nb, const lapack_complex_float* v, lapack_int* ldv, 
const lapack_complex_float* t, lapack_int* ldt, lapack_complex_float* c, lapack_int* ldc, lapack_complex_float* work, lapack_int *info ); void LAPACK_zgemqrt( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* nb, const lapack_complex_double* v, lapack_int* ldv, const lapack_complex_double* t, lapack_int* ldt, lapack_complex_double* c, lapack_int* ldc, lapack_complex_double* work, lapack_int *info ); void LAPACK_sgeqrt( lapack_int* m, lapack_int* n, lapack_int* nb, float* a, lapack_int* lda, float* t, lapack_int* ldt, float* work, lapack_int *info ); void LAPACK_dgeqrt( lapack_int* m, lapack_int* n, lapack_int* nb, double* a, lapack_int* lda, double* t, lapack_int* ldt, double* work, lapack_int *info ); void LAPACK_cgeqrt( lapack_int* m, lapack_int* n, lapack_int* nb, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* t, lapack_int* ldt, lapack_complex_float* work, lapack_int *info ); void LAPACK_zgeqrt( lapack_int* m, lapack_int* n, lapack_int* nb, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* t, lapack_int* ldt, lapack_complex_double* work, lapack_int *info ); void LAPACK_sgeqrt2( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* t, lapack_int* ldt, lapack_int *info ); void LAPACK_dgeqrt2( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* t, lapack_int* ldt, lapack_int *info ); void LAPACK_cgeqrt2( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* t, lapack_int* ldt, lapack_int *info ); void LAPACK_zgeqrt2( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* t, lapack_int* ldt, lapack_int *info ); void LAPACK_sgeqrt3( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* t, lapack_int* ldt, lapack_int *info ); void LAPACK_dgeqrt3( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* t, lapack_int* ldt, lapack_int *info ); void LAPACK_cgeqrt3( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* t, lapack_int* ldt, lapack_int *info ); void LAPACK_zgeqrt3( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* t, lapack_int* ldt, lapack_int *info ); void LAPACK_stpmqrt( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* l, lapack_int* nb, const float* v, lapack_int* ldv, const float* t, lapack_int* ldt, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* work, lapack_int *info ); void LAPACK_dtpmqrt( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* l, lapack_int* nb, const double* v, lapack_int* ldv, const double* t, lapack_int* ldt, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* work, lapack_int *info ); void LAPACK_ctpmqrt( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* l, lapack_int* nb, const lapack_complex_float* v, lapack_int* ldv, const lapack_complex_float* t, lapack_int* ldt, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* work, lapack_int *info ); void LAPACK_ztpmqrt( char* side, char* trans, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* l, lapack_int* nb, const lapack_complex_double* v, lapack_int* ldv, const lapack_complex_double* t, lapack_int* ldt, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* work, lapack_int *info ); void LAPACK_dtpqrt( 
lapack_int* m, lapack_int* n, lapack_int* l, lapack_int* nb, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* t, lapack_int* ldt, double* work, lapack_int *info ); void LAPACK_ctpqrt( lapack_int* m, lapack_int* n, lapack_int* l, lapack_int* nb, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* t, lapack_complex_float* b, lapack_int* ldb, lapack_int* ldt, lapack_complex_float* work, lapack_int *info ); void LAPACK_ztpqrt( lapack_int* m, lapack_int* n, lapack_int* l, lapack_int* nb, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* t, lapack_int* ldt, lapack_complex_double* work, lapack_int *info ); void LAPACK_stpqrt2( lapack_int* m, lapack_int* n, float* a, lapack_int* lda, float* b, lapack_int* ldb, float* t, lapack_int* ldt, lapack_int *info ); void LAPACK_dtpqrt2( lapack_int* m, lapack_int* n, double* a, lapack_int* lda, double* b, lapack_int* ldb, double* t, lapack_int* ldt, lapack_int *info ); void LAPACK_ctpqrt2( lapack_int* m, lapack_int* n, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, lapack_complex_float* t, lapack_int* ldt, lapack_int *info ); void LAPACK_ztpqrt2( lapack_int* m, lapack_int* n, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, lapack_complex_double* t, lapack_int* ldt, lapack_int *info ); void LAPACK_stprfb( char* side, char* trans, char* direct, char* storev, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* l, const float* v, lapack_int* ldv, const float* t, lapack_int* ldt, float* a, lapack_int* lda, float* b, lapack_int* ldb, const float* mywork, lapack_int* myldwork ); void LAPACK_dtprfb( char* side, char* trans, char* direct, char* storev, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* l, const double* v, lapack_int* ldv, const double* t, lapack_int* ldt, double* a, lapack_int* lda, double* b, lapack_int* ldb, const double* mywork, lapack_int* myldwork ); void LAPACK_ctprfb( char* side, char* trans, char* direct, char* storev, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* l, const lapack_complex_float* v, lapack_int* ldv, const lapack_complex_float* t, lapack_int* ldt, lapack_complex_float* a, lapack_int* lda, lapack_complex_float* b, lapack_int* ldb, const float* mywork, lapack_int* myldwork ); void LAPACK_ztprfb( char* side, char* trans, char* direct, char* storev, lapack_int* m, lapack_int* n, lapack_int* k, lapack_int* l, const lapack_complex_double* v, lapack_int* ldv, const lapack_complex_double* t, lapack_int* ldt, lapack_complex_double* a, lapack_int* lda, lapack_complex_double* b, lapack_int* ldb, const double* mywork, lapack_int* myldwork ); // LAPACK 3.X.X void LAPACK_csyr( char* uplo, lapack_int* n, lapack_complex_float* alpha, const lapack_complex_float* x, lapack_int* incx, lapack_complex_float* a, lapack_int* lda ); void LAPACK_zsyr( char* uplo, lapack_int* n, lapack_complex_double* alpha, const lapack_complex_double* x, lapack_int* incx, lapack_complex_double* a, lapack_int* lda ); #ifdef __cplusplus } #endif /* __cplusplus */ #endif /* _LAPACKE_H_ */ #endif /* _MKL_LAPACKE_H_ */ piqp-octave/src/eigen/Eigen/src/misc/lapacke_mangling.h0000644000175100001770000000073214107270226022732 0ustar runnerdocker#ifndef LAPACK_HEADER_INCLUDED #define LAPACK_HEADER_INCLUDED #ifndef LAPACK_GLOBAL #if defined(LAPACK_GLOBAL_PATTERN_LC) || defined(ADD_) #define LAPACK_GLOBAL(lcname,UCNAME) lcname##_ #elif defined(LAPACK_GLOBAL_PATTERN_UC) || defined(UPPER) #define 
LAPACK_GLOBAL(lcname,UCNAME) UCNAME #elif defined(LAPACK_GLOBAL_PATTERN_MC) || defined(NOCHANGE) #define LAPACK_GLOBAL(lcname,UCNAME) lcname #else #define LAPACK_GLOBAL(lcname,UCNAME) lcname##_ #endif #endif #endif piqp-octave/src/eigen/Eigen/src/Geometry/0000755000175100001770000000000014107270226020143 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/Geometry/Umeyama.h0000644000175100001770000001405614107270226021720 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009 Hauke Heibel // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_UMEYAMA_H #define EIGEN_UMEYAMA_H // This file requires the user to include // * Eigen/Core // * Eigen/LU // * Eigen/SVD // * Eigen/Array namespace Eigen { #ifndef EIGEN_PARSED_BY_DOXYGEN // These helpers are required since it allows to use mixed types as parameters // for the Umeyama. The problem with mixed parameters is that the return type // cannot trivially be deduced when float and double types are mixed. namespace internal { // Compile time return type deduction for different MatrixBase types. // Different means here different alignment and parameters but the same underlying // real scalar type. template struct umeyama_transform_matrix_type { enum { MinRowsAtCompileTime = EIGEN_SIZE_MIN_PREFER_DYNAMIC(MatrixType::RowsAtCompileTime, OtherMatrixType::RowsAtCompileTime), // When possible we want to choose some small fixed size value since the result // is likely to fit on the stack. So here, EIGEN_SIZE_MIN_PREFER_DYNAMIC is not what we want. HomogeneousDimension = int(MinRowsAtCompileTime) == Dynamic ? Dynamic : int(MinRowsAtCompileTime)+1 }; typedef Matrix::Scalar, HomogeneousDimension, HomogeneousDimension, AutoAlign | (traits::Flags & RowMajorBit ? RowMajor : ColMajor), HomogeneousDimension, HomogeneousDimension > type; }; } #endif /** * \geometry_module \ingroup Geometry_Module * * \brief Returns the transformation between two point sets. * * The algorithm is based on: * "Least-squares estimation of transformation parameters between two point patterns", * Shinji Umeyama, PAMI 1991, DOI: 10.1109/34.88573 * * It estimates parameters \f$ c, \mathbf{R}, \f$ and \f$ \mathbf{t} \f$ such that * \f{align*} * \frac{1}{n} \sum_{i=1}^n \vert\vert y_i - (c\mathbf{R}x_i + \mathbf{t}) \vert\vert_2^2 * \f} * is minimized. * * The algorithm is based on the analysis of the covariance matrix * \f$ \Sigma_{\mathbf{x}\mathbf{y}} \in \mathbb{R}^{d \times d} \f$ * of the input point sets \f$ \mathbf{x} \f$ and \f$ \mathbf{y} \f$ where * \f$d\f$ is corresponding to the dimension (which is typically small). * The analysis is involving the SVD having a complexity of \f$O(d^3)\f$ * though the actual computational effort lies in the covariance * matrix computation which has an asymptotic lower bound of \f$O(dm)\f$ when * the input point sets have dimension \f$d \times m\f$. * * Currently the method is working only for floating point matrices. * * \todo Should the return type of umeyama() become a Transform? * * \param src Source points \f$ \mathbf{x} = \left( x_1, \hdots, x_n \right) \f$. * \param dst Destination points \f$ \mathbf{y} = \left( y_1, \hdots, y_n \right) \f$. * \param with_scaling Sets \f$ c=1 \f$ when false is passed. 
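*
* A minimal usage sketch (editorial illustration; the point count and random
* data below are assumptions, not part of this documentation):
* \code
* const int n = 100;
* Eigen::Matrix3Xd X = Eigen::Matrix3Xd::Random(3, n);  // source point set
* Eigen::Matrix3Xd Y = Eigen::Matrix3Xd::Random(3, n);  // destination point set
* Eigen::Matrix4d T  = Eigen::umeyama(X, Y);            // similarity transform (c*R, t)
* Eigen::Matrix4d T0 = Eigen::umeyama(X, Y, false);     // rigid transform (c = 1)
* \endcode
*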
* \return The homogeneous transformation * \f{align*} * T = \begin{bmatrix} c\mathbf{R} & \mathbf{t} \\ \mathbf{0} & 1 \end{bmatrix} * \f} * minimizing the residual above. This transformation is always returned as an * Eigen::Matrix. */ template typename internal::umeyama_transform_matrix_type::type umeyama(const MatrixBase& src, const MatrixBase& dst, bool with_scaling = true) { typedef typename internal::umeyama_transform_matrix_type::type TransformationMatrixType; typedef typename internal::traits::Scalar Scalar; typedef typename NumTraits::Real RealScalar; EIGEN_STATIC_ASSERT(!NumTraits::IsComplex, NUMERIC_TYPE_MUST_BE_REAL) EIGEN_STATIC_ASSERT((internal::is_same::Scalar>::value), YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY) enum { Dimension = EIGEN_SIZE_MIN_PREFER_DYNAMIC(Derived::RowsAtCompileTime, OtherDerived::RowsAtCompileTime) }; typedef Matrix VectorType; typedef Matrix MatrixType; typedef typename internal::plain_matrix_type_row_major::type RowMajorMatrixType; const Index m = src.rows(); // dimension const Index n = src.cols(); // number of measurements // required for demeaning ... const RealScalar one_over_n = RealScalar(1) / static_cast(n); // computation of mean const VectorType src_mean = src.rowwise().sum() * one_over_n; const VectorType dst_mean = dst.rowwise().sum() * one_over_n; // demeaning of src and dst points const RowMajorMatrixType src_demean = src.colwise() - src_mean; const RowMajorMatrixType dst_demean = dst.colwise() - dst_mean; // Eq. (36)-(37) const Scalar src_var = src_demean.rowwise().squaredNorm().sum() * one_over_n; // Eq. (38) const MatrixType sigma = one_over_n * dst_demean * src_demean.transpose(); JacobiSVD svd(sigma, ComputeFullU | ComputeFullV); // Initialize the resulting transformation with an identity matrix... TransformationMatrixType Rt = TransformationMatrixType::Identity(m+1,m+1); // Eq. (39) VectorType S = VectorType::Ones(m); if ( svd.matrixU().determinant() * svd.matrixV().determinant() < 0 ) S(m-1) = -1; // Eq. (40) and (43) Rt.block(0,0,m,m).noalias() = svd.matrixU() * S.asDiagonal() * svd.matrixV().transpose(); if (with_scaling) { // Eq. (42) const Scalar c = Scalar(1)/src_var * svd.singularValues().dot(S); // Eq. (41) Rt.col(m).head(m) = dst_mean; Rt.col(m).head(m).noalias() -= c*Rt.topLeftCorner(m,m)*src_mean; Rt.block(0,0,m,m) *= c; } else { Rt.col(m).head(m) = dst_mean; Rt.col(m).head(m).noalias() -= Rt.topLeftCorner(m,m)*src_mean; } return Rt; } } // end namespace Eigen #endif // EIGEN_UMEYAMA_H piqp-octave/src/eigen/Eigen/src/Geometry/ParametrizedLine.h0000644000175100001770000002312414107270226023555 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // Copyright (C) 2008 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_PARAMETRIZEDLINE_H #define EIGEN_PARAMETRIZEDLINE_H namespace Eigen { /** \geometry_module \ingroup Geometry_Module * * \class ParametrizedLine * * \brief A parametrized line * * A parametrized line is defined by an origin point \f$ \mathbf{o} \f$ and a unit * direction vector \f$ \mathbf{d} \f$ such that the line corresponds to * the set \f$ l(t) = \mathbf{o} + t \mathbf{d} \f$, \f$ t \in \mathbf{R} \f$. 
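*
* A short illustrative sketch (editorial example; the particular points are
* assumptions made for illustration):
* \code
* Eigen::Vector2d p0(0.0, 0.0), p1(1.0, 1.0), q(1.0, 0.0);
* Eigen::ParametrizedLine<double, 2> line = Eigen::ParametrizedLine<double, 2>::Through(p0, p1);
* double d = line.distance(q);                // distance from q to the line
* Eigen::Vector2d proj = line.projection(q);  // orthogonal projection of q onto the line
* \endcode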
* * \tparam _Scalar the scalar type, i.e., the type of the coefficients * \tparam _AmbientDim the dimension of the ambient space, can be a compile time value or Dynamic. */ template class ParametrizedLine { public: EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF_VECTORIZABLE_FIXED_SIZE(_Scalar,_AmbientDim) enum { AmbientDimAtCompileTime = _AmbientDim, Options = _Options }; typedef _Scalar Scalar; typedef typename NumTraits::Real RealScalar; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 typedef Matrix VectorType; /** Default constructor without initialization */ EIGEN_DEVICE_FUNC inline ParametrizedLine() {} template EIGEN_DEVICE_FUNC ParametrizedLine(const ParametrizedLine& other) : m_origin(other.origin()), m_direction(other.direction()) {} /** Constructs a dynamic-size line with \a _dim the dimension * of the ambient space */ EIGEN_DEVICE_FUNC inline explicit ParametrizedLine(Index _dim) : m_origin(_dim), m_direction(_dim) {} /** Initializes a parametrized line of direction \a direction and origin \a origin. * \warning the vector direction is assumed to be normalized. */ EIGEN_DEVICE_FUNC ParametrizedLine(const VectorType& origin, const VectorType& direction) : m_origin(origin), m_direction(direction) {} template EIGEN_DEVICE_FUNC explicit ParametrizedLine(const Hyperplane<_Scalar, _AmbientDim, OtherOptions>& hyperplane); /** Constructs a parametrized line going from \a p0 to \a p1. */ EIGEN_DEVICE_FUNC static inline ParametrizedLine Through(const VectorType& p0, const VectorType& p1) { return ParametrizedLine(p0, (p1-p0).normalized()); } EIGEN_DEVICE_FUNC ~ParametrizedLine() {} /** \returns the dimension in which the line holds */ EIGEN_DEVICE_FUNC inline Index dim() const { return m_direction.size(); } EIGEN_DEVICE_FUNC const VectorType& origin() const { return m_origin; } EIGEN_DEVICE_FUNC VectorType& origin() { return m_origin; } EIGEN_DEVICE_FUNC const VectorType& direction() const { return m_direction; } EIGEN_DEVICE_FUNC VectorType& direction() { return m_direction; } /** \returns the squared distance of a point \a p to its projection onto the line \c *this. * \sa distance() */ EIGEN_DEVICE_FUNC RealScalar squaredDistance(const VectorType& p) const { VectorType diff = p - origin(); return (diff - direction().dot(diff) * direction()).squaredNorm(); } /** \returns the distance of a point \a p to its projection onto the line \c *this. * \sa squaredDistance() */ EIGEN_DEVICE_FUNC RealScalar distance(const VectorType& p) const { EIGEN_USING_STD(sqrt) return sqrt(squaredDistance(p)); } /** \returns the projection of a point \a p onto the line \c *this. */ EIGEN_DEVICE_FUNC VectorType projection(const VectorType& p) const { return origin() + direction().dot(p-origin()) * direction(); } EIGEN_DEVICE_FUNC VectorType pointAt(const Scalar& t) const; template EIGEN_DEVICE_FUNC Scalar intersectionParameter(const Hyperplane<_Scalar, _AmbientDim, OtherOptions>& hyperplane) const; template EIGEN_DEVICE_FUNC Scalar intersection(const Hyperplane<_Scalar, _AmbientDim, OtherOptions>& hyperplane) const; template EIGEN_DEVICE_FUNC VectorType intersectionPoint(const Hyperplane<_Scalar, _AmbientDim, OtherOptions>& hyperplane) const; /** Applies the transformation matrix \a mat to \c *this and returns a reference to \c *this. * * \param mat the Dim x Dim transformation matrix * \param traits specifies whether the matrix \a mat represents an #Isometry * or a more generic #Affine transformation. The default is #Affine. 
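*
* A hedged sketch of typical use (editorial illustration only):
* \code
* Eigen::ParametrizedLine<double, 3> line(Eigen::Vector3d::Zero(), Eigen::Vector3d::UnitX());
* Eigen::Matrix3d R = Eigen::AngleAxisd(1.0, Eigen::Vector3d::UnitZ()).toRotationMatrix();
* line.transform(R, Eigen::Isometry);  // a pure rotation is an isometry, so the
*                                      // direction does not need re-normalization
* \endcode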
*/ template EIGEN_DEVICE_FUNC inline ParametrizedLine& transform(const MatrixBase& mat, TransformTraits traits = Affine) { if (traits==Affine) direction() = (mat * direction()).normalized(); else if (traits==Isometry) direction() = mat * direction(); else { eigen_assert(0 && "invalid traits value in ParametrizedLine::transform()"); } origin() = mat * origin(); return *this; } /** Applies the transformation \a t to \c *this and returns a reference to \c *this. * * \param t the transformation of dimension Dim * \param traits specifies whether the transformation \a t represents an #Isometry * or a more generic #Affine transformation. The default is #Affine. * Other kind of transformations are not supported. */ template EIGEN_DEVICE_FUNC inline ParametrizedLine& transform(const Transform& t, TransformTraits traits = Affine) { transform(t.linear(), traits); origin() += t.translation(); return *this; } /** \returns \c *this with scalar type casted to \a NewScalarType * * Note that if \a NewScalarType is equal to the current scalar type of \c *this * then this function smartly returns a const reference to \c *this. */ template EIGEN_DEVICE_FUNC inline typename internal::cast_return_type >::type cast() const { return typename internal::cast_return_type >::type(*this); } /** Copy constructor with scalar type conversion */ template EIGEN_DEVICE_FUNC inline explicit ParametrizedLine(const ParametrizedLine& other) { m_origin = other.origin().template cast(); m_direction = other.direction().template cast(); } /** \returns \c true if \c *this is approximately equal to \a other, within the precision * determined by \a prec. * * \sa MatrixBase::isApprox() */ EIGEN_DEVICE_FUNC bool isApprox(const ParametrizedLine& other, const typename NumTraits::Real& prec = NumTraits::dummy_precision()) const { return m_origin.isApprox(other.m_origin, prec) && m_direction.isApprox(other.m_direction, prec); } protected: VectorType m_origin, m_direction; }; /** Constructs a parametrized line from a 2D hyperplane * * \warning the ambient space must have dimension 2 such that the hyperplane actually describes a line */ template template EIGEN_DEVICE_FUNC inline ParametrizedLine<_Scalar, _AmbientDim,_Options>::ParametrizedLine(const Hyperplane<_Scalar, _AmbientDim,OtherOptions>& hyperplane) { EIGEN_STATIC_ASSERT_VECTOR_SPECIFIC_SIZE(VectorType, 2) direction() = hyperplane.normal().unitOrthogonal(); origin() = -hyperplane.normal()*hyperplane.offset(); } /** \returns the point at \a t along this line */ template EIGEN_DEVICE_FUNC inline typename ParametrizedLine<_Scalar, _AmbientDim,_Options>::VectorType ParametrizedLine<_Scalar, _AmbientDim,_Options>::pointAt(const _Scalar& t) const { return origin() + (direction()*t); } /** \returns the parameter value of the intersection between \c *this and the given \a hyperplane */ template template EIGEN_DEVICE_FUNC inline _Scalar ParametrizedLine<_Scalar, _AmbientDim,_Options>::intersectionParameter(const Hyperplane<_Scalar, _AmbientDim, OtherOptions>& hyperplane) const { return -(hyperplane.offset()+hyperplane.normal().dot(origin())) / hyperplane.normal().dot(direction()); } /** \deprecated use intersectionParameter() * \returns the parameter value of the intersection between \c *this and the given \a hyperplane */ template template EIGEN_DEVICE_FUNC inline _Scalar ParametrizedLine<_Scalar, _AmbientDim,_Options>::intersection(const Hyperplane<_Scalar, _AmbientDim, OtherOptions>& hyperplane) const { return intersectionParameter(hyperplane); } /** \returns the point of the intersection 
between \c *this and the given hyperplane */ template template EIGEN_DEVICE_FUNC inline typename ParametrizedLine<_Scalar, _AmbientDim,_Options>::VectorType ParametrizedLine<_Scalar, _AmbientDim,_Options>::intersectionPoint(const Hyperplane<_Scalar, _AmbientDim, OtherOptions>& hyperplane) const { return pointAt(intersectionParameter(hyperplane)); } } // end namespace Eigen #endif // EIGEN_PARAMETRIZEDLINE_H piqp-octave/src/eigen/Eigen/src/Geometry/AngleAxis.h0000644000175100001770000002032314107270226022167 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_ANGLEAXIS_H #define EIGEN_ANGLEAXIS_H namespace Eigen { /** \geometry_module \ingroup Geometry_Module * * \class AngleAxis * * \brief Represents a 3D rotation as a rotation angle around an arbitrary 3D axis * * \param _Scalar the scalar type, i.e., the type of the coefficients. * * \warning When setting up an AngleAxis object, the axis vector \b must \b be \b normalized. * * The following two typedefs are provided for convenience: * \li \c AngleAxisf for \c float * \li \c AngleAxisd for \c double * * Combined with MatrixBase::Unit{X,Y,Z}, AngleAxis can be used to easily * mimic Euler-angles. Here is an example: * \include AngleAxis_mimic_euler.cpp * Output: \verbinclude AngleAxis_mimic_euler.out * * \note This class is not aimed to be used to store a rotation transformation, * but rather to make easier the creation of other rotation (Quaternion, rotation Matrix) * and transformation objects. * * \sa class Quaternion, class Transform, MatrixBase::UnitX() */ namespace internal { template struct traits > { typedef _Scalar Scalar; }; } template class AngleAxis : public RotationBase,3> { typedef RotationBase,3> Base; public: using Base::operator*; enum { Dim = 3 }; /** the scalar type of the coefficients */ typedef _Scalar Scalar; typedef Matrix Matrix3; typedef Matrix Vector3; typedef Quaternion QuaternionType; protected: Vector3 m_axis; Scalar m_angle; public: /** Default constructor without initialization. */ EIGEN_DEVICE_FUNC AngleAxis() {} /** Constructs and initialize the angle-axis rotation from an \a angle in radian * and an \a axis which \b must \b be \b normalized. * * \warning If the \a axis vector is not normalized, then the angle-axis object * represents an invalid rotation. */ template EIGEN_DEVICE_FUNC inline AngleAxis(const Scalar& angle, const MatrixBase& axis) : m_axis(axis), m_angle(angle) {} /** Constructs and initialize the angle-axis rotation from a quaternion \a q. * This function implicitly normalizes the quaternion \a q. */ template EIGEN_DEVICE_FUNC inline explicit AngleAxis(const QuaternionBase& q) { *this = q; } /** Constructs and initialize the angle-axis rotation from a 3x3 rotation matrix. */ template EIGEN_DEVICE_FUNC inline explicit AngleAxis(const MatrixBase& m) { *this = m; } /** \returns the value of the rotation angle in radian */ EIGEN_DEVICE_FUNC Scalar angle() const { return m_angle; } /** \returns a read-write reference to the stored angle in radian */ EIGEN_DEVICE_FUNC Scalar& angle() { return m_angle; } /** \returns the rotation axis */ EIGEN_DEVICE_FUNC const Vector3& axis() const { return m_axis; } /** \returns a read-write reference to the stored rotation axis. 
* * \warning The rotation axis must remain a \b unit vector. */ EIGEN_DEVICE_FUNC Vector3& axis() { return m_axis; } /** Concatenates two rotations */ EIGEN_DEVICE_FUNC inline QuaternionType operator* (const AngleAxis& other) const { return QuaternionType(*this) * QuaternionType(other); } /** Concatenates two rotations */ EIGEN_DEVICE_FUNC inline QuaternionType operator* (const QuaternionType& other) const { return QuaternionType(*this) * other; } /** Concatenates two rotations */ friend EIGEN_DEVICE_FUNC inline QuaternionType operator* (const QuaternionType& a, const AngleAxis& b) { return a * QuaternionType(b); } /** \returns the inverse rotation, i.e., an angle-axis with opposite rotation angle */ EIGEN_DEVICE_FUNC AngleAxis inverse() const { return AngleAxis(-m_angle, m_axis); } template EIGEN_DEVICE_FUNC AngleAxis& operator=(const QuaternionBase& q); template EIGEN_DEVICE_FUNC AngleAxis& operator=(const MatrixBase& m); template EIGEN_DEVICE_FUNC AngleAxis& fromRotationMatrix(const MatrixBase& m); EIGEN_DEVICE_FUNC Matrix3 toRotationMatrix(void) const; /** \returns \c *this with scalar type casted to \a NewScalarType * * Note that if \a NewScalarType is equal to the current scalar type of \c *this * then this function smartly returns a const reference to \c *this. */ template EIGEN_DEVICE_FUNC inline typename internal::cast_return_type >::type cast() const { return typename internal::cast_return_type >::type(*this); } /** Copy constructor with scalar type conversion */ template EIGEN_DEVICE_FUNC inline explicit AngleAxis(const AngleAxis& other) { m_axis = other.axis().template cast(); m_angle = Scalar(other.angle()); } EIGEN_DEVICE_FUNC static inline const AngleAxis Identity() { return AngleAxis(Scalar(0), Vector3::UnitX()); } /** \returns \c true if \c *this is approximately equal to \a other, within the precision * determined by \a prec. * * \sa MatrixBase::isApprox() */ EIGEN_DEVICE_FUNC bool isApprox(const AngleAxis& other, const typename NumTraits::Real& prec = NumTraits::dummy_precision()) const { return m_axis.isApprox(other.m_axis, prec) && internal::isApprox(m_angle,other.m_angle, prec); } }; /** \ingroup Geometry_Module * single precision angle-axis type */ typedef AngleAxis AngleAxisf; /** \ingroup Geometry_Module * double precision angle-axis type */ typedef AngleAxis AngleAxisd; /** Set \c *this from a \b unit quaternion. * * The resulting axis is normalized, and the computed angle is in the [0,pi] range. * * This function implicitly normalizes the quaternion \a q. */ template template EIGEN_DEVICE_FUNC AngleAxis& AngleAxis::operator=(const QuaternionBase& q) { EIGEN_USING_STD(atan2) EIGEN_USING_STD(abs) Scalar n = q.vec().norm(); if(n::epsilon()) n = q.vec().stableNorm(); if (n != Scalar(0)) { m_angle = Scalar(2)*atan2(n, abs(q.w())); if(q.w() < Scalar(0)) n = -n; m_axis = q.vec() / n; } else { m_angle = Scalar(0); m_axis << Scalar(1), Scalar(0), Scalar(0); } return *this; } /** Set \c *this from a 3x3 rotation matrix \a mat. */ template template EIGEN_DEVICE_FUNC AngleAxis& AngleAxis::operator=(const MatrixBase& mat) { // Since a direct conversion would not be really faster, // let's use the robust Quaternion implementation: return *this = QuaternionType(mat); } /** * \brief Sets \c *this from a 3x3 rotation matrix. **/ template template EIGEN_DEVICE_FUNC AngleAxis& AngleAxis::fromRotationMatrix(const MatrixBase& mat) { return *this = QuaternionType(mat); } /** Constructs and \returns an equivalent 3x3 rotation matrix. 
*/ template typename AngleAxis::Matrix3 EIGEN_DEVICE_FUNC AngleAxis::toRotationMatrix(void) const { EIGEN_USING_STD(sin) EIGEN_USING_STD(cos) Matrix3 res; Vector3 sin_axis = sin(m_angle) * m_axis; Scalar c = cos(m_angle); Vector3 cos1_axis = (Scalar(1)-c) * m_axis; Scalar tmp; tmp = cos1_axis.x() * m_axis.y(); res.coeffRef(0,1) = tmp - sin_axis.z(); res.coeffRef(1,0) = tmp + sin_axis.z(); tmp = cos1_axis.x() * m_axis.z(); res.coeffRef(0,2) = tmp + sin_axis.y(); res.coeffRef(2,0) = tmp - sin_axis.y(); tmp = cos1_axis.y() * m_axis.z(); res.coeffRef(1,2) = tmp - sin_axis.x(); res.coeffRef(2,1) = tmp + sin_axis.x(); res.diagonal() = (cos1_axis.cwiseProduct(m_axis)).array() + c; return res; } } // end namespace Eigen #endif // EIGEN_ANGLEAXIS_H piqp-octave/src/eigen/Eigen/src/Geometry/Scaling.h0000644000175100001770000001510414107270226021675 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SCALING_H #define EIGEN_SCALING_H namespace Eigen { /** \geometry_module \ingroup Geometry_Module * * \class UniformScaling * * \brief Represents a generic uniform scaling transformation * * \tparam _Scalar the scalar type, i.e., the type of the coefficients. * * This class represent a uniform scaling transformation. It is the return * type of Scaling(Scalar), and most of the time this is the only way it * is used. In particular, this class is not aimed to be used to store a scaling transformation, * but rather to make easier the constructions and updates of Transform objects. * * To represent an axis aligned scaling, use the DiagonalMatrix class. * * \sa Scaling(), class DiagonalMatrix, MatrixBase::asDiagonal(), class Translation, class Transform */ namespace internal { // This helper helps nvcc+MSVC to properly parse this file. // See bug 1412. template struct uniformscaling_times_affine_returntype { enum { NewMode = int(Mode) == int(Isometry) ? Affine : Mode }; typedef Transform type; }; } template class UniformScaling { public: /** the scalar type of the coefficients */ typedef _Scalar Scalar; protected: Scalar m_factor; public: /** Default constructor without initialization. 
*/ UniformScaling() {} /** Constructs and initialize a uniform scaling transformation */ explicit inline UniformScaling(const Scalar& s) : m_factor(s) {} inline const Scalar& factor() const { return m_factor; } inline Scalar& factor() { return m_factor; } /** Concatenates two uniform scaling */ inline UniformScaling operator* (const UniformScaling& other) const { return UniformScaling(m_factor * other.factor()); } /** Concatenates a uniform scaling and a translation */ template inline Transform operator* (const Translation& t) const; /** Concatenates a uniform scaling and an affine transformation */ template inline typename internal::uniformscaling_times_affine_returntype::type operator* (const Transform& t) const { typename internal::uniformscaling_times_affine_returntype::type res = t; res.prescale(factor()); return res; } /** Concatenates a uniform scaling and a linear transformation matrix */ // TODO returns an expression template inline typename Eigen::internal::plain_matrix_type::type operator* (const MatrixBase& other) const { return other * m_factor; } template inline Matrix operator*(const RotationBase& r) const { return r.toRotationMatrix() * m_factor; } /** \returns the inverse scaling */ inline UniformScaling inverse() const { return UniformScaling(Scalar(1)/m_factor); } /** \returns \c *this with scalar type casted to \a NewScalarType * * Note that if \a NewScalarType is equal to the current scalar type of \c *this * then this function smartly returns a const reference to \c *this. */ template inline UniformScaling cast() const { return UniformScaling(NewScalarType(m_factor)); } /** Copy constructor with scalar type conversion */ template inline explicit UniformScaling(const UniformScaling& other) { m_factor = Scalar(other.factor()); } /** \returns \c true if \c *this is approximately equal to \a other, within the precision * determined by \a prec. 
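 *
 * A short usage sketch (illustrative factors; assumes the Eigen/Geometry header is included):
 * \code
 * Eigen::UniformScaling<double> s1 = Eigen::Scaling(2.0);
 * Eigen::UniformScaling<double> s2 = s1 * Eigen::Scaling(0.5);  // concatenation, factor 1.0
 * bool isIdentity = s2.isApprox(Eigen::Scaling(1.0));           // true
 * \endcode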
* * \sa MatrixBase::isApprox() */ bool isApprox(const UniformScaling& other, const typename NumTraits::Real& prec = NumTraits::dummy_precision()) const { return internal::isApprox(m_factor, other.factor(), prec); } }; /** \addtogroup Geometry_Module */ //@{ /** Concatenates a linear transformation matrix and a uniform scaling * \relates UniformScaling */ // NOTE this operator is defined in MatrixBase and not as a friend function // of UniformScaling to fix an internal crash of Intel's ICC template EIGEN_EXPR_BINARYOP_SCALAR_RETURN_TYPE(Derived,Scalar,product) operator*(const MatrixBase& matrix, const UniformScaling& s) { return matrix.derived() * s.factor(); } /** Constructs a uniform scaling from scale factor \a s */ inline UniformScaling Scaling(float s) { return UniformScaling(s); } /** Constructs a uniform scaling from scale factor \a s */ inline UniformScaling Scaling(double s) { return UniformScaling(s); } /** Constructs a uniform scaling from scale factor \a s */ template inline UniformScaling > Scaling(const std::complex& s) { return UniformScaling >(s); } /** Constructs a 2D axis aligned scaling */ template inline DiagonalMatrix Scaling(const Scalar& sx, const Scalar& sy) { return DiagonalMatrix(sx, sy); } /** Constructs a 3D axis aligned scaling */ template inline DiagonalMatrix Scaling(const Scalar& sx, const Scalar& sy, const Scalar& sz) { return DiagonalMatrix(sx, sy, sz); } /** Constructs an axis aligned scaling expression from vector expression \a coeffs * This is an alias for coeffs.asDiagonal() */ template inline const DiagonalWrapper Scaling(const MatrixBase& coeffs) { return coeffs.asDiagonal(); } /** \deprecated */ typedef DiagonalMatrix AlignedScaling2f; /** \deprecated */ typedef DiagonalMatrix AlignedScaling2d; /** \deprecated */ typedef DiagonalMatrix AlignedScaling3f; /** \deprecated */ typedef DiagonalMatrix AlignedScaling3d; //@} template template inline Transform UniformScaling::operator* (const Translation& t) const { Transform res; res.matrix().setZero(); res.linear().diagonal().fill(factor()); res.translation() = factor() * t.vector(); res(Dim,Dim) = Scalar(1); return res; } } // end namespace Eigen #endif // EIGEN_SCALING_H piqp-octave/src/eigen/Eigen/src/Geometry/arch/0000755000175100001770000000000014107270226021060 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/Geometry/arch/Geometry_SIMD.h0000644000175100001770000001347114107270226023646 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009 Rohit Garg // Copyright (C) 2009-2010 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
#ifndef EIGEN_GEOMETRY_SIMD_H #define EIGEN_GEOMETRY_SIMD_H namespace Eigen { namespace internal { template struct quat_product { enum { AAlignment = traits::Alignment, BAlignment = traits::Alignment, ResAlignment = traits >::Alignment }; static inline Quaternion run(const QuaternionBase& _a, const QuaternionBase& _b) { evaluator ae(_a.coeffs()); evaluator be(_b.coeffs()); Quaternion res; const float neg_zero = numext::bit_cast(0x80000000u); const float arr[4] = {0.f, 0.f, 0.f, neg_zero}; const Packet4f mask = ploadu(arr); Packet4f a = ae.template packet(0); Packet4f b = be.template packet(0); Packet4f s1 = pmul(vec4f_swizzle1(a,1,2,0,2),vec4f_swizzle1(b,2,0,1,2)); Packet4f s2 = pmul(vec4f_swizzle1(a,3,3,3,1),vec4f_swizzle1(b,0,1,2,1)); pstoret( &res.x(), padd(psub(pmul(a,vec4f_swizzle1(b,3,3,3,3)), pmul(vec4f_swizzle1(a,2,0,1,0), vec4f_swizzle1(b,1,2,0,0))), pxor(mask,padd(s1,s2)))); return res; } }; template struct quat_conj { enum { ResAlignment = traits >::Alignment }; static inline Quaternion run(const QuaternionBase& q) { evaluator qe(q.coeffs()); Quaternion res; const float neg_zero = numext::bit_cast(0x80000000u); const float arr[4] = {neg_zero, neg_zero, neg_zero,0.f}; const Packet4f mask = ploadu(arr); pstoret(&res.x(), pxor(mask, qe.template packet::Alignment,Packet4f>(0))); return res; } }; template struct cross3_impl { enum { ResAlignment = traits::type>::Alignment }; static inline typename plain_matrix_type::type run(const VectorLhs& lhs, const VectorRhs& rhs) { evaluator lhs_eval(lhs); evaluator rhs_eval(rhs); Packet4f a = lhs_eval.template packet::Alignment,Packet4f>(0); Packet4f b = rhs_eval.template packet::Alignment,Packet4f>(0); Packet4f mul1 = pmul(vec4f_swizzle1(a,1,2,0,3),vec4f_swizzle1(b,2,0,1,3)); Packet4f mul2 = pmul(vec4f_swizzle1(a,2,0,1,3),vec4f_swizzle1(b,1,2,0,3)); typename plain_matrix_type::type res; pstoret(&res.x(),psub(mul1,mul2)); return res; } }; #if (defined EIGEN_VECTORIZE_SSE) || (EIGEN_ARCH_ARM64) template struct quat_product { enum { BAlignment = traits::Alignment, ResAlignment = traits >::Alignment }; static inline Quaternion run(const QuaternionBase& _a, const QuaternionBase& _b) { Quaternion res; evaluator ae(_a.coeffs()); evaluator be(_b.coeffs()); const double* a = _a.coeffs().data(); Packet2d b_xy = be.template packet(0); Packet2d b_zw = be.template packet(2); Packet2d a_xx = pset1(a[0]); Packet2d a_yy = pset1(a[1]); Packet2d a_zz = pset1(a[2]); Packet2d a_ww = pset1(a[3]); // two temporaries: Packet2d t1, t2; /* * t1 = ww*xy + yy*zw * t2 = zz*xy - xx*zw * res.xy = t1 +/- swap(t2) */ t1 = padd(pmul(a_ww, b_xy), pmul(a_yy, b_zw)); t2 = psub(pmul(a_zz, b_xy), pmul(a_xx, b_zw)); pstoret(&res.x(), paddsub(t1, preverse(t2))); /* * t1 = ww*zw - yy*xy * t2 = zz*zw + xx*xy * res.zw = t1 -/+ swap(t2) = swap( swap(t1) +/- t2) */ t1 = psub(pmul(a_ww, b_zw), pmul(a_yy, b_xy)); t2 = padd(pmul(a_zz, b_zw), pmul(a_xx, b_xy)); pstoret(&res.z(), preverse(paddsub(preverse(t1), t2))); return res; } }; template struct quat_conj { enum { ResAlignment = traits >::Alignment }; static inline Quaternion run(const QuaternionBase& q) { evaluator qe(q.coeffs()); Quaternion res; const double neg_zero = numext::bit_cast(0x8000000000000000ull); const double arr1[2] = {neg_zero, neg_zero}; const double arr2[2] = {neg_zero, 0.0}; const Packet2d mask0 = ploadu(arr1); const Packet2d mask2 = ploadu(arr2); pstoret(&res.x(), pxor(mask0, qe.template packet::Alignment,Packet2d>(0))); pstoret(&res.z(), pxor(mask2, qe.template packet::Alignment,Packet2d>(2))); return res; } }; 
#endif // end EIGEN_VECTORIZE_SSE_OR_EIGEN_ARCH_ARM64 } // end namespace internal } // end namespace Eigen #endif // EIGEN_GEOMETRY_SIMD_H piqp-octave/src/eigen/Eigen/src/Geometry/Hyperplane.h0000644000175100001770000002727214107270226022435 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // Copyright (C) 2008 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_HYPERPLANE_H #define EIGEN_HYPERPLANE_H namespace Eigen { /** \geometry_module \ingroup Geometry_Module * * \class Hyperplane * * \brief A hyperplane * * A hyperplane is an affine subspace of dimension n-1 in a space of dimension n. * For example, a hyperplane in a plane is a line; a hyperplane in 3-space is a plane. * * \tparam _Scalar the scalar type, i.e., the type of the coefficients * \tparam _AmbientDim the dimension of the ambient space, can be a compile time value or Dynamic. * Notice that the dimension of the hyperplane is _AmbientDim-1. * * This class represents an hyperplane as the zero set of the implicit equation * \f$ n \cdot x + d = 0 \f$ where \f$ n \f$ is a unit normal vector of the plane (linear part) * and \f$ d \f$ is the distance (offset) to the origin. */ template class Hyperplane { public: EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF_VECTORIZABLE_FIXED_SIZE(_Scalar,_AmbientDim==Dynamic ? Dynamic : _AmbientDim+1) enum { AmbientDimAtCompileTime = _AmbientDim, Options = _Options }; typedef _Scalar Scalar; typedef typename NumTraits::Real RealScalar; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 typedef Matrix VectorType; typedef Matrix Coefficients; typedef Block NormalReturnType; typedef const Block ConstNormalReturnType; /** Default constructor without initialization */ EIGEN_DEVICE_FUNC inline Hyperplane() {} template EIGEN_DEVICE_FUNC Hyperplane(const Hyperplane& other) : m_coeffs(other.coeffs()) {} /** Constructs a dynamic-size hyperplane with \a _dim the dimension * of the ambient space */ EIGEN_DEVICE_FUNC inline explicit Hyperplane(Index _dim) : m_coeffs(_dim+1) {} /** Construct a plane from its normal \a n and a point \a e onto the plane. * \warning the vector normal is assumed to be normalized. */ EIGEN_DEVICE_FUNC inline Hyperplane(const VectorType& n, const VectorType& e) : m_coeffs(n.size()+1) { normal() = n; offset() = -n.dot(e); } /** Constructs a plane from its normal \a n and distance to the origin \a d * such that the algebraic equation of the plane is \f$ n \cdot x + d = 0 \f$. * \warning the vector normal is assumed to be normalized. */ EIGEN_DEVICE_FUNC inline Hyperplane(const VectorType& n, const Scalar& d) : m_coeffs(n.size()+1) { normal() = n; offset() = d; } /** Constructs a hyperplane passing through the two points. If the dimension of the ambient space * is greater than 2, then there isn't uniqueness, so an arbitrary choice is made. */ EIGEN_DEVICE_FUNC static inline Hyperplane Through(const VectorType& p0, const VectorType& p1) { Hyperplane result(p0.size()); result.normal() = (p1 - p0).unitOrthogonal(); result.offset() = -p0.dot(result.normal()); return result; } /** Constructs a hyperplane passing through the three points. The dimension of the ambient space * is required to be exactly 3. 
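 *
 * A small sketch of this construction (illustrative points; assumes the Eigen/Geometry header is included):
 * \code
 * typedef Eigen::Hyperplane<double,3> Plane3d;
 * Plane3d plane = Plane3d::Through(Eigen::Vector3d(0,0,1),
 *                                  Eigen::Vector3d(1,0,1),
 *                                  Eigen::Vector3d(0,1,1));       // the plane z = 1
 * double d = plane.absDistance(Eigen::Vector3d(0.0, 0.0, 3.0));   // 2
 * Eigen::ParametrizedLine<double,3> ray(Eigen::Vector3d::Zero(), Eigen::Vector3d::UnitZ());
 * Eigen::Vector3d hit = ray.intersectionPoint(plane);             // (0, 0, 1)
 * \endcode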
*/ EIGEN_DEVICE_FUNC static inline Hyperplane Through(const VectorType& p0, const VectorType& p1, const VectorType& p2) { EIGEN_STATIC_ASSERT_VECTOR_SPECIFIC_SIZE(VectorType, 3) Hyperplane result(p0.size()); VectorType v0(p2 - p0), v1(p1 - p0); result.normal() = v0.cross(v1); RealScalar norm = result.normal().norm(); if(norm <= v0.norm() * v1.norm() * NumTraits::epsilon()) { Matrix m; m << v0.transpose(), v1.transpose(); JacobiSVD > svd(m, ComputeFullV); result.normal() = svd.matrixV().col(2); } else result.normal() /= norm; result.offset() = -p0.dot(result.normal()); return result; } /** Constructs a hyperplane passing through the parametrized line \a parametrized. * If the dimension of the ambient space is greater than 2, then there isn't uniqueness, * so an arbitrary choice is made. */ // FIXME to be consistent with the rest this could be implemented as a static Through function ?? EIGEN_DEVICE_FUNC explicit Hyperplane(const ParametrizedLine& parametrized) { normal() = parametrized.direction().unitOrthogonal(); offset() = -parametrized.origin().dot(normal()); } EIGEN_DEVICE_FUNC ~Hyperplane() {} /** \returns the dimension in which the plane holds */ EIGEN_DEVICE_FUNC inline Index dim() const { return AmbientDimAtCompileTime==Dynamic ? m_coeffs.size()-1 : Index(AmbientDimAtCompileTime); } /** normalizes \c *this */ EIGEN_DEVICE_FUNC void normalize(void) { m_coeffs /= normal().norm(); } /** \returns the signed distance between the plane \c *this and a point \a p. * \sa absDistance() */ EIGEN_DEVICE_FUNC inline Scalar signedDistance(const VectorType& p) const { return normal().dot(p) + offset(); } /** \returns the absolute distance between the plane \c *this and a point \a p. * \sa signedDistance() */ EIGEN_DEVICE_FUNC inline Scalar absDistance(const VectorType& p) const { return numext::abs(signedDistance(p)); } /** \returns the projection of a point \a p onto the plane \c *this. */ EIGEN_DEVICE_FUNC inline VectorType projection(const VectorType& p) const { return p - signedDistance(p) * normal(); } /** \returns a constant reference to the unit normal vector of the plane, which corresponds * to the linear part of the implicit equation. */ EIGEN_DEVICE_FUNC inline ConstNormalReturnType normal() const { return ConstNormalReturnType(m_coeffs,0,0,dim(),1); } /** \returns a non-constant reference to the unit normal vector of the plane, which corresponds * to the linear part of the implicit equation. */ EIGEN_DEVICE_FUNC inline NormalReturnType normal() { return NormalReturnType(m_coeffs,0,0,dim(),1); } /** \returns the distance to the origin, which is also the "constant term" of the implicit equation * \warning the vector normal is assumed to be normalized. */ EIGEN_DEVICE_FUNC inline const Scalar& offset() const { return m_coeffs.coeff(dim()); } /** \returns a non-constant reference to the distance to the origin, which is also the constant part * of the implicit equation */ EIGEN_DEVICE_FUNC inline Scalar& offset() { return m_coeffs(dim()); } /** \returns a constant reference to the coefficients c_i of the plane equation: * \f$ c_0*x_0 + ... + c_{d-1}*x_{d-1} + c_d = 0 \f$ */ EIGEN_DEVICE_FUNC inline const Coefficients& coeffs() const { return m_coeffs; } /** \returns a non-constant reference to the coefficients c_i of the plane equation: * \f$ c_0*x_0 + ... + c_{d-1}*x_{d-1} + c_d = 0 \f$ */ EIGEN_DEVICE_FUNC inline Coefficients& coeffs() { return m_coeffs; } /** \returns the intersection of *this with \a other. * * \warning The ambient space must be a plane, i.e. 
have dimension 2, so that \c *this and \a other are lines. * * \note If \a other is approximately parallel to *this, this method will return any point on *this. */ EIGEN_DEVICE_FUNC VectorType intersection(const Hyperplane& other) const { EIGEN_STATIC_ASSERT_VECTOR_SPECIFIC_SIZE(VectorType, 2) Scalar det = coeffs().coeff(0) * other.coeffs().coeff(1) - coeffs().coeff(1) * other.coeffs().coeff(0); // since the line equations ax+by=c are normalized with a^2+b^2=1, the following tests // whether the two lines are approximately parallel. if(internal::isMuchSmallerThan(det, Scalar(1))) { // special case where the two lines are approximately parallel. Pick any point on the first line. if(numext::abs(coeffs().coeff(1))>numext::abs(coeffs().coeff(0))) return VectorType(coeffs().coeff(1), -coeffs().coeff(2)/coeffs().coeff(1)-coeffs().coeff(0)); else return VectorType(-coeffs().coeff(2)/coeffs().coeff(0)-coeffs().coeff(1), coeffs().coeff(0)); } else { // general case Scalar invdet = Scalar(1) / det; return VectorType(invdet*(coeffs().coeff(1)*other.coeffs().coeff(2)-other.coeffs().coeff(1)*coeffs().coeff(2)), invdet*(other.coeffs().coeff(0)*coeffs().coeff(2)-coeffs().coeff(0)*other.coeffs().coeff(2))); } } /** Applies the transformation matrix \a mat to \c *this and returns a reference to \c *this. * * \param mat the Dim x Dim transformation matrix * \param traits specifies whether the matrix \a mat represents an #Isometry * or a more generic #Affine transformation. The default is #Affine. */ template EIGEN_DEVICE_FUNC inline Hyperplane& transform(const MatrixBase& mat, TransformTraits traits = Affine) { if (traits==Affine) { normal() = mat.inverse().transpose() * normal(); m_coeffs /= normal().norm(); } else if (traits==Isometry) normal() = mat * normal(); else { eigen_assert(0 && "invalid traits value in Hyperplane::transform()"); } return *this; } /** Applies the transformation \a t to \c *this and returns a reference to \c *this. * * \param t the transformation of dimension Dim * \param traits specifies whether the transformation \a t represents an #Isometry * or a more generic #Affine transformation. The default is #Affine. * Other kind of transformations are not supported. */ template EIGEN_DEVICE_FUNC inline Hyperplane& transform(const Transform& t, TransformTraits traits = Affine) { transform(t.linear(), traits); offset() -= normal().dot(t.translation()); return *this; } /** \returns \c *this with scalar type casted to \a NewScalarType * * Note that if \a NewScalarType is equal to the current scalar type of \c *this * then this function smartly returns a const reference to \c *this. */ template EIGEN_DEVICE_FUNC inline typename internal::cast_return_type >::type cast() const { return typename internal::cast_return_type >::type(*this); } /** Copy constructor with scalar type conversion */ template EIGEN_DEVICE_FUNC inline explicit Hyperplane(const Hyperplane& other) { m_coeffs = other.coeffs().template cast(); } /** \returns \c true if \c *this is approximately equal to \a other, within the precision * determined by \a prec. 
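 *
 * A comparison sketch (illustrative values; assumes the Eigen/Geometry header is included):
 * \code
 * Eigen::Hyperplane<double,3> p1(Eigen::Vector3d::UnitZ(), -1.0);                           // plane z = 1, from normal and offset
 * Eigen::Hyperplane<double,3> p2(Eigen::Vector3d::UnitZ(), Eigen::Vector3d(0.0, 0.0, 1.0)); // same plane, from normal and a point
 * bool same = p1.isApprox(p2);                                                              // true
 * \endcode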
* * \sa MatrixBase::isApprox() */ template EIGEN_DEVICE_FUNC bool isApprox(const Hyperplane& other, const typename NumTraits::Real& prec = NumTraits::dummy_precision()) const { return m_coeffs.isApprox(other.m_coeffs, prec); } protected: Coefficients m_coeffs; }; } // end namespace Eigen #endif // EIGEN_HYPERPLANE_H piqp-octave/src/eigen/Eigen/src/Geometry/AlignedBox.h0000644000175100001770000004477314107270226022347 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. // Function void Eigen::AlignedBox::transform(const Transform& transform) // is provided under the following license agreement: // // Software License Agreement (BSD License) // // Copyright (c) 2011-2014, Willow Garage, Inc. // Copyright (c) 2014-2015, Open Source Robotics Foundation // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions // are met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following // disclaimer in the documentation and/or other materials provided // with the distribution. // * Neither the name of Open Source Robotics Foundation nor the names of its // contributors may be used to endorse or promote products derived // from this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS // FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE // COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, // INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, // BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; // LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER // CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT // LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN // ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE // POSSIBILITY OF SUCH DAMAGE. #ifndef EIGEN_ALIGNEDBOX_H #define EIGEN_ALIGNEDBOX_H namespace Eigen { /** \geometry_module \ingroup Geometry_Module * * * \class AlignedBox * * \brief An axis aligned box * * \tparam _Scalar the type of the scalar coefficients * \tparam _AmbientDim the dimension of the ambient space, can be a compile time value or Dynamic. * * This class represents an axis aligned box as a pair of the minimal and maximal corners. * \warning The result of most methods is undefined when applied to an empty box. You can check for empty boxes using isEmpty(). 
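 *
 * A minimal usage sketch (illustrative coordinates; assumes the Eigen/Geometry header is included):
 * \code
 * Eigen::AlignedBox3d box;                                      // starts empty
 * box.extend(Eigen::Vector3d(0.0, 0.0, 0.0));
 * box.extend(Eigen::Vector3d(1.0, 2.0, 3.0));
 * bool inside = box.contains(Eigen::Vector3d(0.5, 1.0, 1.5));   // true
 * Eigen::Vector3d ext = box.sizes();                            // (1, 2, 3)
 * \endcode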
* \sa alignedboxtypedefs */ template class AlignedBox { public: EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF_VECTORIZABLE_FIXED_SIZE(_Scalar,_AmbientDim) enum { AmbientDimAtCompileTime = _AmbientDim }; typedef _Scalar Scalar; typedef NumTraits ScalarTraits; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 typedef typename ScalarTraits::Real RealScalar; typedef typename ScalarTraits::NonInteger NonInteger; typedef Matrix VectorType; typedef CwiseBinaryOp, const VectorType, const VectorType> VectorTypeSum; /** Define constants to name the corners of a 1D, 2D or 3D axis aligned bounding box */ enum CornerType { /** 1D names @{ */ Min=0, Max=1, /** @} */ /** Identifier for 2D corner @{ */ BottomLeft=0, BottomRight=1, TopLeft=2, TopRight=3, /** @} */ /** Identifier for 3D corner @{ */ BottomLeftFloor=0, BottomRightFloor=1, TopLeftFloor=2, TopRightFloor=3, BottomLeftCeil=4, BottomRightCeil=5, TopLeftCeil=6, TopRightCeil=7 /** @} */ }; /** Default constructor initializing a null box. */ EIGEN_DEVICE_FUNC inline AlignedBox() { if (EIGEN_CONST_CONDITIONAL(AmbientDimAtCompileTime!=Dynamic)) setEmpty(); } /** Constructs a null box with \a _dim the dimension of the ambient space. */ EIGEN_DEVICE_FUNC inline explicit AlignedBox(Index _dim) : m_min(_dim), m_max(_dim) { setEmpty(); } /** Constructs a box with extremities \a _min and \a _max. * \warning If either component of \a _min is larger than the same component of \a _max, the constructed box is empty. */ template EIGEN_DEVICE_FUNC inline AlignedBox(const OtherVectorType1& _min, const OtherVectorType2& _max) : m_min(_min), m_max(_max) {} /** Constructs a box containing a single point \a p. */ template EIGEN_DEVICE_FUNC inline explicit AlignedBox(const MatrixBase& p) : m_min(p), m_max(m_min) { } EIGEN_DEVICE_FUNC ~AlignedBox() {} /** \returns the dimension in which the box holds */ EIGEN_DEVICE_FUNC inline Index dim() const { return AmbientDimAtCompileTime==Dynamic ? m_min.size() : Index(AmbientDimAtCompileTime); } /** \deprecated use isEmpty() */ EIGEN_DEVICE_FUNC inline bool isNull() const { return isEmpty(); } /** \deprecated use setEmpty() */ EIGEN_DEVICE_FUNC inline void setNull() { setEmpty(); } /** \returns true if the box is empty. * \sa setEmpty */ EIGEN_DEVICE_FUNC inline bool isEmpty() const { return (m_min.array() > m_max.array()).any(); } /** Makes \c *this an empty box. * \sa isEmpty */ EIGEN_DEVICE_FUNC inline void setEmpty() { m_min.setConstant( ScalarTraits::highest() ); m_max.setConstant( ScalarTraits::lowest() ); } /** \returns the minimal corner */ EIGEN_DEVICE_FUNC inline const VectorType& (min)() const { return m_min; } /** \returns a non const reference to the minimal corner */ EIGEN_DEVICE_FUNC inline VectorType& (min)() { return m_min; } /** \returns the maximal corner */ EIGEN_DEVICE_FUNC inline const VectorType& (max)() const { return m_max; } /** \returns a non const reference to the maximal corner */ EIGEN_DEVICE_FUNC inline VectorType& (max)() { return m_max; } /** \returns the center of the box */ EIGEN_DEVICE_FUNC inline const EIGEN_EXPR_BINARYOP_SCALAR_RETURN_TYPE(VectorTypeSum, RealScalar, quotient) center() const { return (m_min+m_max)/RealScalar(2); } /** \returns the lengths of the sides of the bounding box. 
* Note that this function does not get the same * result for integral or floating scalar types: see */ EIGEN_DEVICE_FUNC inline const CwiseBinaryOp< internal::scalar_difference_op, const VectorType, const VectorType> sizes() const { return m_max - m_min; } /** \returns the volume of the bounding box */ EIGEN_DEVICE_FUNC inline Scalar volume() const { return sizes().prod(); } /** \returns an expression for the bounding box diagonal vector * if the length of the diagonal is needed: diagonal().norm() * will provide it. */ EIGEN_DEVICE_FUNC inline CwiseBinaryOp< internal::scalar_difference_op, const VectorType, const VectorType> diagonal() const { return sizes(); } /** \returns the vertex of the bounding box at the corner defined by * the corner-id corner. It works only for a 1D, 2D or 3D bounding box. * For 1D bounding boxes corners are named by 2 enum constants: * BottomLeft and BottomRight. * For 2D bounding boxes, corners are named by 4 enum constants: * BottomLeft, BottomRight, TopLeft, TopRight. * For 3D bounding boxes, the following names are added: * BottomLeftCeil, BottomRightCeil, TopLeftCeil, TopRightCeil. */ EIGEN_DEVICE_FUNC inline VectorType corner(CornerType corner) const { EIGEN_STATIC_ASSERT(_AmbientDim <= 3, THIS_METHOD_IS_ONLY_FOR_VECTORS_OF_A_SPECIFIC_SIZE); VectorType res; Index mult = 1; for(Index d=0; d(Scalar(0), Scalar(1)); } else r[d] = internal::random(m_min[d], m_max[d]); } return r; } /** \returns true if the point \a p is inside the box \c *this. */ template EIGEN_DEVICE_FUNC inline bool contains(const MatrixBase& p) const { typename internal::nested_eval::type p_n(p.derived()); return (m_min.array()<=p_n.array()).all() && (p_n.array()<=m_max.array()).all(); } /** \returns true if the box \a b is entirely inside the box \c *this. */ EIGEN_DEVICE_FUNC inline bool contains(const AlignedBox& b) const { return (m_min.array()<=(b.min)().array()).all() && ((b.max)().array()<=m_max.array()).all(); } /** \returns true if the box \a b is intersecting the box \c *this. * \sa intersection, clamp */ EIGEN_DEVICE_FUNC inline bool intersects(const AlignedBox& b) const { return (m_min.array()<=(b.max)().array()).all() && ((b.min)().array()<=m_max.array()).all(); } /** Extends \c *this such that it contains the point \a p and returns a reference to \c *this. * \sa extend(const AlignedBox&) */ template EIGEN_DEVICE_FUNC inline AlignedBox& extend(const MatrixBase& p) { typename internal::nested_eval::type p_n(p.derived()); m_min = m_min.cwiseMin(p_n); m_max = m_max.cwiseMax(p_n); return *this; } /** Extends \c *this such that it contains the box \a b and returns a reference to \c *this. * \sa merged, extend(const MatrixBase&) */ EIGEN_DEVICE_FUNC inline AlignedBox& extend(const AlignedBox& b) { m_min = m_min.cwiseMin(b.m_min); m_max = m_max.cwiseMax(b.m_max); return *this; } /** Clamps \c *this by the box \a b and returns a reference to \c *this. * \note If the boxes don't intersect, the resulting box is empty. * \sa intersection(), intersects() */ EIGEN_DEVICE_FUNC inline AlignedBox& clamp(const AlignedBox& b) { m_min = m_min.cwiseMax(b.m_min); m_max = m_max.cwiseMin(b.m_max); return *this; } /** Returns an AlignedBox that is the intersection of \a b and \c *this * \note If the boxes don't intersect, the resulting box is empty. * \sa intersects(), clamp, contains() */ EIGEN_DEVICE_FUNC inline AlignedBox intersection(const AlignedBox& b) const {return AlignedBox(m_min.cwiseMax(b.m_min), m_max.cwiseMin(b.m_max)); } /** Returns an AlignedBox that is the union of \a b and \c *this. 
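 *
 * A small sketch contrasting merged() and intersection() (illustrative boxes; assumes the Eigen/Geometry header is included):
 * \code
 * Eigen::AlignedBox2d a(Eigen::Vector2d(0.0, 0.0), Eigen::Vector2d(2.0, 2.0));
 * Eigen::AlignedBox2d b(Eigen::Vector2d(1.0, 1.0), Eigen::Vector2d(3.0, 3.0));
 * Eigen::AlignedBox2d u = a.merged(b);          // corners (0,0) and (3,3)
 * Eigen::AlignedBox2d i = a.intersection(b);    // corners (1,1) and (2,2)
 * \endcode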
* \note Merging with an empty box may result in a box bigger than \c *this. * \sa extend(const AlignedBox&) */ EIGEN_DEVICE_FUNC inline AlignedBox merged(const AlignedBox& b) const { return AlignedBox(m_min.cwiseMin(b.m_min), m_max.cwiseMax(b.m_max)); } /** Translate \c *this by the vector \a t and returns a reference to \c *this. */ template EIGEN_DEVICE_FUNC inline AlignedBox& translate(const MatrixBase& a_t) { const typename internal::nested_eval::type t(a_t.derived()); m_min += t; m_max += t; return *this; } /** \returns a copy of \c *this translated by the vector \a t. */ template EIGEN_DEVICE_FUNC inline AlignedBox translated(const MatrixBase& a_t) const { AlignedBox result(m_min, m_max); result.translate(a_t); return result; } /** \returns the squared distance between the point \a p and the box \c *this, * and zero if \a p is inside the box. * \sa exteriorDistance(const MatrixBase&), squaredExteriorDistance(const AlignedBox&) */ template EIGEN_DEVICE_FUNC inline Scalar squaredExteriorDistance(const MatrixBase& p) const; /** \returns the squared distance between the boxes \a b and \c *this, * and zero if the boxes intersect. * \sa exteriorDistance(const AlignedBox&), squaredExteriorDistance(const MatrixBase&) */ EIGEN_DEVICE_FUNC inline Scalar squaredExteriorDistance(const AlignedBox& b) const; /** \returns the distance between the point \a p and the box \c *this, * and zero if \a p is inside the box. * \sa squaredExteriorDistance(const MatrixBase&), exteriorDistance(const AlignedBox&) */ template EIGEN_DEVICE_FUNC inline NonInteger exteriorDistance(const MatrixBase& p) const { EIGEN_USING_STD(sqrt) return sqrt(NonInteger(squaredExteriorDistance(p))); } /** \returns the distance between the boxes \a b and \c *this, * and zero if the boxes intersect. * \sa squaredExteriorDistance(const AlignedBox&), exteriorDistance(const MatrixBase&) */ EIGEN_DEVICE_FUNC inline NonInteger exteriorDistance(const AlignedBox& b) const { EIGEN_USING_STD(sqrt) return sqrt(NonInteger(squaredExteriorDistance(b))); } /** * Specialization of transform for pure translation. */ template EIGEN_DEVICE_FUNC inline void transform( const typename Transform::TranslationType& translation) { this->translate(translation); } /** * Transforms this box by \a transform and recomputes it to * still be an axis-aligned box. * * \note This method is provided under BSD license (see the top of this file). */ template EIGEN_DEVICE_FUNC inline void transform(const Transform& transform) { // Only Affine and Isometry transforms are currently supported. EIGEN_STATIC_ASSERT(Mode == Affine || Mode == AffineCompact || Mode == Isometry, THIS_METHOD_IS_ONLY_FOR_SPECIFIC_TRANSFORMATIONS); // Method adapted from FCL src/shape/geometric_shapes_utility.cpp#computeBV(...) // https://github.com/flexible-collision-library/fcl/blob/fcl-0.4/src/shape/geometric_shapes_utility.cpp#L292 // // Here's a nice explanation why it works: https://zeuxcg.org/2010/10/17/aabb-from-obb-with-component-wise-abs/ // two times rotated extent const VectorType rotated_extent_2 = transform.linear().cwiseAbs() * sizes(); // two times new center const VectorType rotated_center_2 = transform.linear() * (this->m_max + this->m_min) + Scalar(2) * transform.translation(); this->m_max = (rotated_center_2 + rotated_extent_2) / Scalar(2); this->m_min = (rotated_center_2 - rotated_extent_2) / Scalar(2); } /** * \returns a copy of \c *this transformed by \a transform and recomputed to * still be an axis-aligned box. 
*/ template EIGEN_DEVICE_FUNC AlignedBox transformed(const Transform& transform) const { AlignedBox result(m_min, m_max); result.transform(transform); return result; } /** \returns \c *this with scalar type casted to \a NewScalarType * * Note that if \a NewScalarType is equal to the current scalar type of \c *this * then this function smartly returns a const reference to \c *this. */ template EIGEN_DEVICE_FUNC inline typename internal::cast_return_type >::type cast() const { return typename internal::cast_return_type >::type(*this); } /** Copy constructor with scalar type conversion */ template EIGEN_DEVICE_FUNC inline explicit AlignedBox(const AlignedBox& other) { m_min = (other.min)().template cast(); m_max = (other.max)().template cast(); } /** \returns \c true if \c *this is approximately equal to \a other, within the precision * determined by \a prec. * * \sa MatrixBase::isApprox() */ EIGEN_DEVICE_FUNC bool isApprox(const AlignedBox& other, const RealScalar& prec = ScalarTraits::dummy_precision()) const { return m_min.isApprox(other.m_min, prec) && m_max.isApprox(other.m_max, prec); } protected: VectorType m_min, m_max; }; template template EIGEN_DEVICE_FUNC inline Scalar AlignedBox::squaredExteriorDistance(const MatrixBase& a_p) const { typename internal::nested_eval::type p(a_p.derived()); Scalar dist2(0); Scalar aux; for (Index k=0; k p[k] ) { aux = m_min[k] - p[k]; dist2 += aux*aux; } else if( p[k] > m_max[k] ) { aux = p[k] - m_max[k]; dist2 += aux*aux; } } return dist2; } template EIGEN_DEVICE_FUNC inline Scalar AlignedBox::squaredExteriorDistance(const AlignedBox& b) const { Scalar dist2(0); Scalar aux; for (Index k=0; k b.m_max[k] ) { aux = m_min[k] - b.m_max[k]; dist2 += aux*aux; } else if( b.m_min[k] > m_max[k] ) { aux = b.m_min[k] - m_max[k]; dist2 += aux*aux; } } return dist2; } /** \defgroup alignedboxtypedefs Global aligned box typedefs * * \ingroup Geometry_Module * * Eigen defines several typedef shortcuts for most common aligned box types. * * The general patterns are the following: * * \c AlignedBoxSizeType where \c Size can be \c 1, \c 2,\c 3,\c 4 for fixed size boxes or \c X for dynamic size, * and where \c Type can be \c i for integer, \c f for float, \c d for double. * * For example, \c AlignedBox3d is a fixed-size 3x3 aligned box type of doubles, and \c AlignedBoxXf is a dynamic-size aligned box of floats. * * \sa class AlignedBox */ #define EIGEN_MAKE_TYPEDEFS(Type, TypeSuffix, Size, SizeSuffix) \ /** \ingroup alignedboxtypedefs */ \ typedef AlignedBox AlignedBox##SizeSuffix##TypeSuffix; #define EIGEN_MAKE_TYPEDEFS_ALL_SIZES(Type, TypeSuffix) \ EIGEN_MAKE_TYPEDEFS(Type, TypeSuffix, 1, 1) \ EIGEN_MAKE_TYPEDEFS(Type, TypeSuffix, 2, 2) \ EIGEN_MAKE_TYPEDEFS(Type, TypeSuffix, 3, 3) \ EIGEN_MAKE_TYPEDEFS(Type, TypeSuffix, 4, 4) \ EIGEN_MAKE_TYPEDEFS(Type, TypeSuffix, Dynamic, X) EIGEN_MAKE_TYPEDEFS_ALL_SIZES(int, i) EIGEN_MAKE_TYPEDEFS_ALL_SIZES(float, f) EIGEN_MAKE_TYPEDEFS_ALL_SIZES(double, d) #undef EIGEN_MAKE_TYPEDEFS_ALL_SIZES #undef EIGEN_MAKE_TYPEDEFS } // end namespace Eigen #endif // EIGEN_ALIGNEDBOX_H piqp-octave/src/eigen/Eigen/src/Geometry/Translation.h0000644000175100001770000001676014107270226022624 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
#ifndef EIGEN_TRANSLATION_H #define EIGEN_TRANSLATION_H namespace Eigen { /** \geometry_module \ingroup Geometry_Module * * \class Translation * * \brief Represents a translation transformation * * \tparam _Scalar the scalar type, i.e., the type of the coefficients. * \tparam _Dim the dimension of the space, can be a compile time value or Dynamic * * \note This class is not aimed to be used to store a translation transformation, * but rather to make easier the constructions and updates of Transform objects. * * \sa class Scaling, class Transform */ template class Translation { public: EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF_VECTORIZABLE_FIXED_SIZE(_Scalar,_Dim) /** dimension of the space */ enum { Dim = _Dim }; /** the scalar type of the coefficients */ typedef _Scalar Scalar; /** corresponding vector type */ typedef Matrix VectorType; /** corresponding linear transformation matrix type */ typedef Matrix LinearMatrixType; /** corresponding affine transformation type */ typedef Transform AffineTransformType; /** corresponding isometric transformation type */ typedef Transform IsometryTransformType; protected: VectorType m_coeffs; public: /** Default constructor without initialization. */ EIGEN_DEVICE_FUNC Translation() {} /** */ EIGEN_DEVICE_FUNC inline Translation(const Scalar& sx, const Scalar& sy) { eigen_assert(Dim==2); m_coeffs.x() = sx; m_coeffs.y() = sy; } /** */ EIGEN_DEVICE_FUNC inline Translation(const Scalar& sx, const Scalar& sy, const Scalar& sz) { eigen_assert(Dim==3); m_coeffs.x() = sx; m_coeffs.y() = sy; m_coeffs.z() = sz; } /** Constructs and initialize the translation transformation from a vector of translation coefficients */ EIGEN_DEVICE_FUNC explicit inline Translation(const VectorType& vector) : m_coeffs(vector) {} /** \brief Returns the x-translation by value. **/ EIGEN_DEVICE_FUNC inline Scalar x() const { return m_coeffs.x(); } /** \brief Returns the y-translation by value. **/ EIGEN_DEVICE_FUNC inline Scalar y() const { return m_coeffs.y(); } /** \brief Returns the z-translation by value. **/ EIGEN_DEVICE_FUNC inline Scalar z() const { return m_coeffs.z(); } /** \brief Returns the x-translation as a reference. **/ EIGEN_DEVICE_FUNC inline Scalar& x() { return m_coeffs.x(); } /** \brief Returns the y-translation as a reference. **/ EIGEN_DEVICE_FUNC inline Scalar& y() { return m_coeffs.y(); } /** \brief Returns the z-translation as a reference. 
**/ EIGEN_DEVICE_FUNC inline Scalar& z() { return m_coeffs.z(); } EIGEN_DEVICE_FUNC const VectorType& vector() const { return m_coeffs; } EIGEN_DEVICE_FUNC VectorType& vector() { return m_coeffs; } EIGEN_DEVICE_FUNC const VectorType& translation() const { return m_coeffs; } EIGEN_DEVICE_FUNC VectorType& translation() { return m_coeffs; } /** Concatenates two translation */ EIGEN_DEVICE_FUNC inline Translation operator* (const Translation& other) const { return Translation(m_coeffs + other.m_coeffs); } /** Concatenates a translation and a uniform scaling */ EIGEN_DEVICE_FUNC inline AffineTransformType operator* (const UniformScaling& other) const; /** Concatenates a translation and a linear transformation */ template EIGEN_DEVICE_FUNC inline AffineTransformType operator* (const EigenBase& linear) const; /** Concatenates a translation and a rotation */ template EIGEN_DEVICE_FUNC inline IsometryTransformType operator*(const RotationBase& r) const { return *this * IsometryTransformType(r); } /** \returns the concatenation of a linear transformation \a l with the translation \a t */ // its a nightmare to define a templated friend function outside its declaration template friend EIGEN_DEVICE_FUNC inline AffineTransformType operator*(const EigenBase& linear, const Translation& t) { AffineTransformType res; res.matrix().setZero(); res.linear() = linear.derived(); res.translation() = linear.derived() * t.m_coeffs; res.matrix().row(Dim).setZero(); res(Dim,Dim) = Scalar(1); return res; } /** Concatenates a translation and a transformation */ template EIGEN_DEVICE_FUNC inline Transform operator* (const Transform& t) const { Transform res = t; res.pretranslate(m_coeffs); return res; } /** Applies translation to vector */ template inline typename internal::enable_if::type operator* (const MatrixBase& vec) const { return m_coeffs + vec.derived(); } /** \returns the inverse translation (opposite) */ Translation inverse() const { return Translation(-m_coeffs); } static const Translation Identity() { return Translation(VectorType::Zero()); } /** \returns \c *this with scalar type casted to \a NewScalarType * * Note that if \a NewScalarType is equal to the current scalar type of \c *this * then this function smartly returns a const reference to \c *this. */ template EIGEN_DEVICE_FUNC inline typename internal::cast_return_type >::type cast() const { return typename internal::cast_return_type >::type(*this); } /** Copy constructor with scalar type conversion */ template EIGEN_DEVICE_FUNC inline explicit Translation(const Translation& other) { m_coeffs = other.vector().template cast(); } /** \returns \c true if \c *this is approximately equal to \a other, within the precision * determined by \a prec. 
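 *
 * A concatenation sketch (illustrative offsets; assumes the Eigen/Geometry header is included):
 * \code
 * Eigen::Translation3d t1(1.0, 0.0, 0.0), t2(0.0, 2.0, 0.0);
 * Eigen::Translation3d t12 = t1 * t2;                              // translation by (1, 2, 0)
 * bool same = t12.isApprox(Eigen::Translation3d(1.0, 2.0, 0.0));   // true
 * \endcode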
* * \sa MatrixBase::isApprox() */ EIGEN_DEVICE_FUNC bool isApprox(const Translation& other, const typename NumTraits::Real& prec = NumTraits::dummy_precision()) const { return m_coeffs.isApprox(other.m_coeffs, prec); } }; /** \addtogroup Geometry_Module */ //@{ typedef Translation Translation2f; typedef Translation Translation2d; typedef Translation Translation3f; typedef Translation Translation3d; //@} template EIGEN_DEVICE_FUNC inline typename Translation::AffineTransformType Translation::operator* (const UniformScaling& other) const { AffineTransformType res; res.matrix().setZero(); res.linear().diagonal().fill(other.factor()); res.translation() = m_coeffs; res(Dim,Dim) = Scalar(1); return res; } template template EIGEN_DEVICE_FUNC inline typename Translation::AffineTransformType Translation::operator* (const EigenBase& linear) const { AffineTransformType res; res.matrix().setZero(); res.linear() = linear.derived(); res.translation() = m_coeffs; res.matrix().row(Dim).setZero(); res(Dim,Dim) = Scalar(1); return res; } } // end namespace Eigen #endif // EIGEN_TRANSLATION_H piqp-octave/src/eigen/Eigen/src/Geometry/RotationBase.h0000644000175100001770000001757714107270226022727 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_ROTATIONBASE_H #define EIGEN_ROTATIONBASE_H namespace Eigen { // forward declaration namespace internal { template struct rotation_base_generic_product_selector; } /** \class RotationBase * * \brief Common base class for compact rotation representations * * \tparam Derived is the derived type, i.e., a rotation type * \tparam _Dim the dimension of the space */ template class RotationBase { public: enum { Dim = _Dim }; /** the scalar type of the coefficients */ typedef typename internal::traits::Scalar Scalar; /** corresponding linear transformation matrix type */ typedef Matrix RotationMatrixType; typedef Matrix VectorType; public: EIGEN_DEVICE_FUNC inline const Derived& derived() const { return *static_cast(this); } EIGEN_DEVICE_FUNC inline Derived& derived() { return *static_cast(this); } /** \returns an equivalent rotation matrix */ EIGEN_DEVICE_FUNC inline RotationMatrixType toRotationMatrix() const { return derived().toRotationMatrix(); } /** \returns an equivalent rotation matrix * This function is added to be conform with the Transform class' naming scheme. 
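 *
 * A short sketch with a derived rotation type (illustrative angle; assumes the Eigen/Geometry header is included):
 * \code
 * Eigen::AngleAxisd aa(0.1, Eigen::Vector3d::UnitY());
 * Eigen::Matrix3d R1 = aa.toRotationMatrix();
 * Eigen::Matrix3d R2 = aa.matrix();   // same matrix, Transform-style spelling
 * \endcode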
*/ EIGEN_DEVICE_FUNC inline RotationMatrixType matrix() const { return derived().toRotationMatrix(); } /** \returns the inverse rotation */ EIGEN_DEVICE_FUNC inline Derived inverse() const { return derived().inverse(); } /** \returns the concatenation of the rotation \c *this with a translation \a t */ EIGEN_DEVICE_FUNC inline Transform operator*(const Translation& t) const { return Transform(*this) * t; } /** \returns the concatenation of the rotation \c *this with a uniform scaling \a s */ EIGEN_DEVICE_FUNC inline RotationMatrixType operator*(const UniformScaling& s) const { return toRotationMatrix() * s.factor(); } /** \returns the concatenation of the rotation \c *this with a generic expression \a e * \a e can be: * - a DimxDim linear transformation matrix * - a DimxDim diagonal matrix (axis aligned scaling) * - a vector of size Dim */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename internal::rotation_base_generic_product_selector::ReturnType operator*(const EigenBase& e) const { return internal::rotation_base_generic_product_selector::run(derived(), e.derived()); } /** \returns the concatenation of a linear transformation \a l with the rotation \a r */ template friend EIGEN_DEVICE_FUNC inline RotationMatrixType operator*(const EigenBase& l, const Derived& r) { return l.derived() * r.toRotationMatrix(); } /** \returns the concatenation of a scaling \a l with the rotation \a r */ EIGEN_DEVICE_FUNC friend inline Transform operator*(const DiagonalMatrix& l, const Derived& r) { Transform res(r); res.linear().applyOnTheLeft(l); return res; } /** \returns the concatenation of the rotation \c *this with a transformation \a t */ template EIGEN_DEVICE_FUNC inline Transform operator*(const Transform& t) const { return toRotationMatrix() * t; } template EIGEN_DEVICE_FUNC inline VectorType _transformVector(const OtherVectorType& v) const { return toRotationMatrix() * v; } }; namespace internal { // implementation of the generic product rotation * matrix template struct rotation_base_generic_product_selector { enum { Dim = RotationDerived::Dim }; typedef Matrix ReturnType; EIGEN_DEVICE_FUNC static inline ReturnType run(const RotationDerived& r, const MatrixType& m) { return r.toRotationMatrix() * m; } }; template struct rotation_base_generic_product_selector< RotationDerived, DiagonalMatrix, false > { typedef Transform ReturnType; EIGEN_DEVICE_FUNC static inline ReturnType run(const RotationDerived& r, const DiagonalMatrix& m) { ReturnType res(r); res.linear() *= m; return res; } }; template struct rotation_base_generic_product_selector { enum { Dim = RotationDerived::Dim }; typedef Matrix ReturnType; EIGEN_DEVICE_FUNC static EIGEN_STRONG_INLINE ReturnType run(const RotationDerived& r, const OtherVectorType& v) { return r._transformVector(v); } }; } // end namespace internal /** \geometry_module * * \brief Constructs a Dim x Dim rotation matrix from the rotation \a r */ template template EIGEN_DEVICE_FUNC Matrix<_Scalar, _Rows, _Cols, _Storage, _MaxRows, _MaxCols> ::Matrix(const RotationBase& r) { EIGEN_STATIC_ASSERT_MATRIX_SPECIFIC_SIZE(Matrix,int(OtherDerived::Dim),int(OtherDerived::Dim)) *this = r.toRotationMatrix(); } /** \geometry_module * * \brief Set a Dim x Dim rotation matrix from the rotation \a r */ template template EIGEN_DEVICE_FUNC Matrix<_Scalar, _Rows, _Cols, _Storage, _MaxRows, _MaxCols>& Matrix<_Scalar, _Rows, _Cols, _Storage, _MaxRows, _MaxCols> ::operator=(const RotationBase& r) { 
EIGEN_STATIC_ASSERT_MATRIX_SPECIFIC_SIZE(Matrix,int(OtherDerived::Dim),int(OtherDerived::Dim)) return *this = r.toRotationMatrix(); } namespace internal { /** \internal * * Helper function to return an arbitrary rotation object to a rotation matrix. * * \tparam Scalar the numeric type of the matrix coefficients * \tparam Dim the dimension of the current space * * It returns a Dim x Dim fixed size matrix. * * Default specializations are provided for: * - any scalar type (2D), * - any matrix expression, * - any type based on RotationBase (e.g., Quaternion, AngleAxis, Rotation2D) * * Currently toRotationMatrix is only used by Transform. * * \sa class Transform, class Rotation2D, class Quaternion, class AngleAxis */ template EIGEN_DEVICE_FUNC static inline Matrix toRotationMatrix(const Scalar& s) { EIGEN_STATIC_ASSERT(Dim==2,YOU_MADE_A_PROGRAMMING_MISTAKE) return Rotation2D(s).toRotationMatrix(); } template EIGEN_DEVICE_FUNC static inline Matrix toRotationMatrix(const RotationBase& r) { return r.toRotationMatrix(); } template EIGEN_DEVICE_FUNC static inline const MatrixBase& toRotationMatrix(const MatrixBase& mat) { EIGEN_STATIC_ASSERT(OtherDerived::RowsAtCompileTime==Dim && OtherDerived::ColsAtCompileTime==Dim, YOU_MADE_A_PROGRAMMING_MISTAKE) return mat; } } // end namespace internal } // end namespace Eigen #endif // EIGEN_ROTATIONBASE_H piqp-octave/src/eigen/Eigen/src/Geometry/Rotation2D.h0000644000175100001770000001531614107270226022307 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_ROTATION2D_H #define EIGEN_ROTATION2D_H namespace Eigen { /** \geometry_module \ingroup Geometry_Module * * \class Rotation2D * * \brief Represents a rotation/orientation in a 2 dimensional space. * * \tparam _Scalar the scalar type, i.e., the type of the coefficients * * This class is equivalent to a single scalar representing a counter clock wise rotation * as a single angle in radian. It provides some additional features such as the automatic * conversion from/to a 2x2 rotation matrix. Moreover this class aims to provide a similar * interface to Quaternion in order to facilitate the writing of generic algorithms * dealing with rotations. * * \sa class Quaternion, class Transform */ namespace internal { template struct traits > { typedef _Scalar Scalar; }; } // end namespace internal template class Rotation2D : public RotationBase,2> { typedef RotationBase,2> Base; public: using Base::operator*; enum { Dim = 2 }; /** the scalar type of the coefficients */ typedef _Scalar Scalar; typedef Matrix Vector2; typedef Matrix Matrix2; protected: Scalar m_angle; public: /** Construct a 2D counter clock wise rotation from the angle \a a in radian. */ EIGEN_DEVICE_FUNC explicit inline Rotation2D(const Scalar& a) : m_angle(a) {} /** Default constructor wihtout initialization. The represented rotation is undefined. */ EIGEN_DEVICE_FUNC Rotation2D() {} /** Construct a 2D rotation from a 2x2 rotation matrix \a mat. 
* * \sa fromRotationMatrix() */ template EIGEN_DEVICE_FUNC explicit Rotation2D(const MatrixBase& m) { fromRotationMatrix(m.derived()); } /** \returns the rotation angle */ EIGEN_DEVICE_FUNC inline Scalar angle() const { return m_angle; } /** \returns a read-write reference to the rotation angle */ EIGEN_DEVICE_FUNC inline Scalar& angle() { return m_angle; } /** \returns the rotation angle in [0,2pi] */ EIGEN_DEVICE_FUNC inline Scalar smallestPositiveAngle() const { Scalar tmp = numext::fmod(m_angle,Scalar(2*EIGEN_PI)); return tmpScalar(EIGEN_PI)) tmp -= Scalar(2*EIGEN_PI); else if(tmp<-Scalar(EIGEN_PI)) tmp += Scalar(2*EIGEN_PI); return tmp; } /** \returns the inverse rotation */ EIGEN_DEVICE_FUNC inline Rotation2D inverse() const { return Rotation2D(-m_angle); } /** Concatenates two rotations */ EIGEN_DEVICE_FUNC inline Rotation2D operator*(const Rotation2D& other) const { return Rotation2D(m_angle + other.m_angle); } /** Concatenates two rotations */ EIGEN_DEVICE_FUNC inline Rotation2D& operator*=(const Rotation2D& other) { m_angle += other.m_angle; return *this; } /** Applies the rotation to a 2D vector */ EIGEN_DEVICE_FUNC Vector2 operator* (const Vector2& vec) const { return toRotationMatrix() * vec; } template EIGEN_DEVICE_FUNC Rotation2D& fromRotationMatrix(const MatrixBase& m); EIGEN_DEVICE_FUNC Matrix2 toRotationMatrix() const; /** Set \c *this from a 2x2 rotation matrix \a mat. * In other words, this function extract the rotation angle from the rotation matrix. * * This method is an alias for fromRotationMatrix() * * \sa fromRotationMatrix() */ template EIGEN_DEVICE_FUNC Rotation2D& operator=(const MatrixBase& m) { return fromRotationMatrix(m.derived()); } /** \returns the spherical interpolation between \c *this and \a other using * parameter \a t. It is in fact equivalent to a linear interpolation. */ EIGEN_DEVICE_FUNC inline Rotation2D slerp(const Scalar& t, const Rotation2D& other) const { Scalar dist = Rotation2D(other.m_angle-m_angle).smallestAngle(); return Rotation2D(m_angle + dist*t); } /** \returns \c *this with scalar type casted to \a NewScalarType * * Note that if \a NewScalarType is equal to the current scalar type of \c *this * then this function smartly returns a const reference to \c *this. */ template EIGEN_DEVICE_FUNC inline typename internal::cast_return_type >::type cast() const { return typename internal::cast_return_type >::type(*this); } /** Copy constructor with scalar type conversion */ template EIGEN_DEVICE_FUNC inline explicit Rotation2D(const Rotation2D& other) { m_angle = Scalar(other.angle()); } EIGEN_DEVICE_FUNC static inline Rotation2D Identity() { return Rotation2D(0); } /** \returns \c true if \c *this is approximately equal to \a other, within the precision * determined by \a prec. * * \sa MatrixBase::isApprox() */ EIGEN_DEVICE_FUNC bool isApprox(const Rotation2D& other, const typename NumTraits::Real& prec = NumTraits::dummy_precision()) const { return internal::isApprox(m_angle,other.m_angle, prec); } }; /** \ingroup Geometry_Module * single precision 2D rotation type */ typedef Rotation2D Rotation2Df; /** \ingroup Geometry_Module * double precision 2D rotation type */ typedef Rotation2D Rotation2Dd; /** Set \c *this from a 2x2 rotation matrix \a mat. * In other words, this function extract the rotation angle * from the rotation matrix. 
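 *
 * A round-trip sketch (illustrative angle; assumes the Eigen/Geometry header is included):
 * \code
 * Eigen::Rotation2Dd r(0.3);
 * Eigen::Matrix2d R = r.toRotationMatrix();
 * Eigen::Rotation2Dd r2;
 * r2.fromRotationMatrix(R);            // r2.angle() ~= 0.3
 * \endcode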
*/ template template EIGEN_DEVICE_FUNC Rotation2D& Rotation2D::fromRotationMatrix(const MatrixBase& mat) { EIGEN_USING_STD(atan2) EIGEN_STATIC_ASSERT(Derived::RowsAtCompileTime==2 && Derived::ColsAtCompileTime==2,YOU_MADE_A_PROGRAMMING_MISTAKE) m_angle = atan2(mat.coeff(1,0), mat.coeff(0,0)); return *this; } /** Constructs and \returns an equivalent 2x2 rotation matrix. */ template typename Rotation2D::Matrix2 EIGEN_DEVICE_FUNC Rotation2D::toRotationMatrix(void) const { EIGEN_USING_STD(sin) EIGEN_USING_STD(cos) Scalar sinA = sin(m_angle); Scalar cosA = cos(m_angle); return (Matrix2() << cosA, -sinA, sinA, cosA).finished(); } } // end namespace Eigen #endif // EIGEN_ROTATION2D_H piqp-octave/src/eigen/Eigen/src/Geometry/Quaternion.h0000644000175100001770000010307714107270226022451 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2010 Gael Guennebaud // Copyright (C) 2009 Mathieu Gautier // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_QUATERNION_H #define EIGEN_QUATERNION_H namespace Eigen { /*************************************************************************** * Definition of QuaternionBase * The implementation is at the end of the file ***************************************************************************/ namespace internal { template struct quaternionbase_assign_impl; } /** \geometry_module \ingroup Geometry_Module * \class QuaternionBase * \brief Base class for quaternion expressions * \tparam Derived derived type (CRTP) * \sa class Quaternion */ template class QuaternionBase : public RotationBase { public: typedef RotationBase Base; using Base::operator*; using Base::derived; typedef typename internal::traits::Scalar Scalar; typedef typename NumTraits::Real RealScalar; typedef typename internal::traits::Coefficients Coefficients; typedef typename Coefficients::CoeffReturnType CoeffReturnType; typedef typename internal::conditional::Flags&LvalueBit), Scalar&, CoeffReturnType>::type NonConstCoeffReturnType; enum { Flags = Eigen::internal::traits::Flags }; // typedef typename Matrix Coefficients; /** the type of a 3D vector */ typedef Matrix Vector3; /** the equivalent rotation matrix type */ typedef Matrix Matrix3; /** the equivalent angle-axis type */ typedef AngleAxis AngleAxisType; /** \returns the \c x coefficient */ EIGEN_DEVICE_FUNC inline CoeffReturnType x() const { return this->derived().coeffs().coeff(0); } /** \returns the \c y coefficient */ EIGEN_DEVICE_FUNC inline CoeffReturnType y() const { return this->derived().coeffs().coeff(1); } /** \returns the \c z coefficient */ EIGEN_DEVICE_FUNC inline CoeffReturnType z() const { return this->derived().coeffs().coeff(2); } /** \returns the \c w coefficient */ EIGEN_DEVICE_FUNC inline CoeffReturnType w() const { return this->derived().coeffs().coeff(3); } /** \returns a reference to the \c x coefficient (if Derived is a non-const lvalue) */ EIGEN_DEVICE_FUNC inline NonConstCoeffReturnType x() { return this->derived().coeffs().x(); } /** \returns a reference to the \c y coefficient (if Derived is a non-const lvalue) */ EIGEN_DEVICE_FUNC inline NonConstCoeffReturnType y() { return this->derived().coeffs().y(); } /** \returns a reference to the \c z coefficient (if Derived is a non-const lvalue) */ EIGEN_DEVICE_FUNC inline NonConstCoeffReturnType z() { return 
this->derived().coeffs().z(); } /** \returns a reference to the \c w coefficient (if Derived is a non-const lvalue) */ EIGEN_DEVICE_FUNC inline NonConstCoeffReturnType w() { return this->derived().coeffs().w(); } /** \returns a read-only vector expression of the imaginary part (x,y,z) */ EIGEN_DEVICE_FUNC inline const VectorBlock vec() const { return coeffs().template head<3>(); } /** \returns a vector expression of the imaginary part (x,y,z) */ EIGEN_DEVICE_FUNC inline VectorBlock vec() { return coeffs().template head<3>(); } /** \returns a read-only vector expression of the coefficients (x,y,z,w) */ EIGEN_DEVICE_FUNC inline const typename internal::traits::Coefficients& coeffs() const { return derived().coeffs(); } /** \returns a vector expression of the coefficients (x,y,z,w) */ EIGEN_DEVICE_FUNC inline typename internal::traits::Coefficients& coeffs() { return derived().coeffs(); } EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE QuaternionBase& operator=(const QuaternionBase& other); template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Derived& operator=(const QuaternionBase& other); // disabled this copy operator as it is giving very strange compilation errors when compiling // test_stdvector with GCC 4.4.2. This looks like a GCC bug though, so feel free to re-enable it if it's // useful; however notice that we already have the templated operator= above and e.g. in MatrixBase // we didn't have to add, in addition to templated operator=, such a non-templated copy operator. // Derived& operator=(const QuaternionBase& other) // { return operator=(other); } EIGEN_DEVICE_FUNC Derived& operator=(const AngleAxisType& aa); template EIGEN_DEVICE_FUNC Derived& operator=(const MatrixBase& m); /** \returns a quaternion representing an identity rotation * \sa MatrixBase::Identity() */ EIGEN_DEVICE_FUNC static inline Quaternion Identity() { return Quaternion(Scalar(1), Scalar(0), Scalar(0), Scalar(0)); } /** \sa QuaternionBase::Identity(), MatrixBase::setIdentity() */ EIGEN_DEVICE_FUNC inline QuaternionBase& setIdentity() { coeffs() << Scalar(0), Scalar(0), Scalar(0), Scalar(1); return *this; } /** \returns the squared norm of the quaternion's coefficients * \sa QuaternionBase::norm(), MatrixBase::squaredNorm() */ EIGEN_DEVICE_FUNC inline Scalar squaredNorm() const { return coeffs().squaredNorm(); } /** \returns the norm of the quaternion's coefficients * \sa QuaternionBase::squaredNorm(), MatrixBase::norm() */ EIGEN_DEVICE_FUNC inline Scalar norm() const { return coeffs().norm(); } /** Normalizes the quaternion \c *this * \sa normalized(), MatrixBase::normalize() */ EIGEN_DEVICE_FUNC inline void normalize() { coeffs().normalize(); } /** \returns a normalized copy of \c *this * \sa normalize(), MatrixBase::normalized() */ EIGEN_DEVICE_FUNC inline Quaternion normalized() const { return Quaternion(coeffs().normalized()); } /** \returns the dot product of \c *this and \a other * Geometrically speaking, the dot product of two unit quaternions * corresponds to the cosine of half the angle between the two rotations. 
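 *
 * For instance (an illustrative sketch with hypothetical values), two unit quaternions
 * one radian apart have a dot product of approximately cos(0.5):
 * \code
 * Eigen::Quaternionf qa = Eigen::Quaternionf::Identity();
 * Eigen::Quaternionf qb(Eigen::AngleAxisf(1.f, Eigen::Vector3f::UnitZ()));
 * float c = qa.dot(qb);   // roughly std::cos(0.5f)
 * \endcode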
* \sa angularDistance() */ template EIGEN_DEVICE_FUNC inline Scalar dot(const QuaternionBase& other) const { return coeffs().dot(other.coeffs()); } template EIGEN_DEVICE_FUNC Scalar angularDistance(const QuaternionBase& other) const; /** \returns an equivalent 3x3 rotation matrix */ EIGEN_DEVICE_FUNC inline Matrix3 toRotationMatrix() const; /** \returns the quaternion which transform \a a into \a b through a rotation */ template EIGEN_DEVICE_FUNC Derived& setFromTwoVectors(const MatrixBase& a, const MatrixBase& b); template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Quaternion operator* (const QuaternionBase& q) const; template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Derived& operator*= (const QuaternionBase& q); /** \returns the quaternion describing the inverse rotation */ EIGEN_DEVICE_FUNC Quaternion inverse() const; /** \returns the conjugated quaternion */ EIGEN_DEVICE_FUNC Quaternion conjugate() const; template EIGEN_DEVICE_FUNC Quaternion slerp(const Scalar& t, const QuaternionBase& other) const; /** \returns true if each coefficients of \c *this and \a other are all exactly equal. * \warning When using floating point scalar values you probably should rather use a * fuzzy comparison such as isApprox() * \sa isApprox(), operator!= */ template EIGEN_DEVICE_FUNC inline bool operator==(const QuaternionBase& other) const { return coeffs() == other.coeffs(); } /** \returns true if at least one pair of coefficients of \c *this and \a other are not exactly equal to each other. * \warning When using floating point scalar values you probably should rather use a * fuzzy comparison such as isApprox() * \sa isApprox(), operator== */ template EIGEN_DEVICE_FUNC inline bool operator!=(const QuaternionBase& other) const { return coeffs() != other.coeffs(); } /** \returns \c true if \c *this is approximately equal to \a other, within the precision * determined by \a prec. * * \sa MatrixBase::isApprox() */ template EIGEN_DEVICE_FUNC bool isApprox(const QuaternionBase& other, const RealScalar& prec = NumTraits::dummy_precision()) const { return coeffs().isApprox(other.coeffs(), prec); } /** return the result vector of \a v through the rotation*/ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Vector3 _transformVector(const Vector3& v) const; #ifdef EIGEN_PARSED_BY_DOXYGEN /** \returns \c *this with scalar type casted to \a NewScalarType * * Note that if \a NewScalarType is equal to the current scalar type of \c *this * then this function smartly returns a const reference to \c *this. 
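 *
 * A minimal sketch:
 * \code
 * Eigen::Quaternionf qf = Eigen::Quaternionf::Identity();
 * Eigen::Quaterniond qd = qf.cast<double>();   // all four coefficients are converted
 * \endcode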
*/ template EIGEN_DEVICE_FUNC inline typename internal::cast_return_type >::type cast() const; #else template EIGEN_DEVICE_FUNC inline typename internal::enable_if::value,const Derived&>::type cast() const { return derived(); } template EIGEN_DEVICE_FUNC inline typename internal::enable_if::value,Quaternion >::type cast() const { return Quaternion(coeffs().template cast()); } #endif #ifndef EIGEN_NO_IO friend std::ostream& operator<<(std::ostream& s, const QuaternionBase& q) { s << q.x() << "i + " << q.y() << "j + " << q.z() << "k" << " + " << q.w(); return s; } #endif #ifdef EIGEN_QUATERNIONBASE_PLUGIN # include EIGEN_QUATERNIONBASE_PLUGIN #endif protected: EIGEN_DEFAULT_COPY_CONSTRUCTOR(QuaternionBase) EIGEN_DEFAULT_EMPTY_CONSTRUCTOR_AND_DESTRUCTOR(QuaternionBase) }; /*************************************************************************** * Definition/implementation of Quaternion ***************************************************************************/ /** \geometry_module \ingroup Geometry_Module * * \class Quaternion * * \brief The quaternion class used to represent 3D orientations and rotations * * \tparam _Scalar the scalar type, i.e., the type of the coefficients * \tparam _Options controls the memory alignment of the coefficients. Can be \# AutoAlign or \# DontAlign. Default is AutoAlign. * * This class represents a quaternion \f$ w+xi+yj+zk \f$ that is a convenient representation of * orientations and rotations of objects in three dimensions. Compared to other representations * like Euler angles or 3x3 matrices, quaternions offer the following advantages: * \li \b compact storage (4 scalars) * \li \b efficient to compose (28 flops), * \li \b stable spherical interpolation * * The following two typedefs are provided for convenience: * \li \c Quaternionf for \c float * \li \c Quaterniond for \c double * * \warning Operations interpreting the quaternion as rotation have undefined behavior if the quaternion is not normalized. * * \sa class AngleAxis, class Transform */ namespace internal { template struct traits > { typedef Quaternion<_Scalar,_Options> PlainObject; typedef _Scalar Scalar; typedef Matrix<_Scalar,4,1,_Options> Coefficients; enum{ Alignment = internal::traits::Alignment, Flags = LvalueBit }; }; } template class Quaternion : public QuaternionBase > { public: typedef QuaternionBase > Base; enum { NeedsAlignment = internal::traits::Alignment>0 }; typedef _Scalar Scalar; EIGEN_INHERIT_ASSIGNMENT_OPERATORS(Quaternion) using Base::operator*=; typedef typename internal::traits::Coefficients Coefficients; typedef typename Base::AngleAxisType AngleAxisType; /** Default constructor leaving the quaternion uninitialized. */ EIGEN_DEVICE_FUNC inline Quaternion() {} /** Constructs and initializes the quaternion \f$ w+xi+yj+zk \f$ from * its four coefficients \a w, \a x, \a y and \a z. 
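 *
 * For example (a small sketch with hypothetical values), the identity rotation can be built as:
 * \code
 * Eigen::Quaternionf q(1.f, 0.f, 0.f, 0.f);   // arguments are w, x, y, z
 * // q.w() == 1.f while q.coeffs() is (0, 0, 0, 1), i.e. stored as x, y, z, w
 * \endcode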
* * \warning Note the order of the arguments: the real \a w coefficient first, * while internally the coefficients are stored in the following order: * [\c x, \c y, \c z, \c w] */ EIGEN_DEVICE_FUNC inline Quaternion(const Scalar& w, const Scalar& x, const Scalar& y, const Scalar& z) : m_coeffs(x, y, z, w){} /** Constructs and initialize a quaternion from the array data */ EIGEN_DEVICE_FUNC explicit inline Quaternion(const Scalar* data) : m_coeffs(data) {} /** Copy constructor */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Quaternion(const QuaternionBase& other) { this->Base::operator=(other); } /** Constructs and initializes a quaternion from the angle-axis \a aa */ EIGEN_DEVICE_FUNC explicit inline Quaternion(const AngleAxisType& aa) { *this = aa; } /** Constructs and initializes a quaternion from either: * - a rotation matrix expression, * - a 4D vector expression representing quaternion coefficients. */ template EIGEN_DEVICE_FUNC explicit inline Quaternion(const MatrixBase& other) { *this = other; } /** Explicit copy constructor with scalar conversion */ template EIGEN_DEVICE_FUNC explicit inline Quaternion(const Quaternion& other) { m_coeffs = other.coeffs().template cast(); } #if EIGEN_HAS_RVALUE_REFERENCES // We define a copy constructor, which means we don't get an implicit move constructor or assignment operator. /** Default move constructor */ EIGEN_DEVICE_FUNC inline Quaternion(Quaternion&& other) EIGEN_NOEXCEPT_IF(std::is_nothrow_move_constructible::value) : m_coeffs(std::move(other.coeffs())) {} /** Default move assignment operator */ EIGEN_DEVICE_FUNC Quaternion& operator=(Quaternion&& other) EIGEN_NOEXCEPT_IF(std::is_nothrow_move_assignable::value) { m_coeffs = std::move(other.coeffs()); return *this; } #endif EIGEN_DEVICE_FUNC static Quaternion UnitRandom(); template EIGEN_DEVICE_FUNC static Quaternion FromTwoVectors(const MatrixBase& a, const MatrixBase& b); EIGEN_DEVICE_FUNC inline Coefficients& coeffs() { return m_coeffs;} EIGEN_DEVICE_FUNC inline const Coefficients& coeffs() const { return m_coeffs;} EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF(bool(NeedsAlignment)) #ifdef EIGEN_QUATERNION_PLUGIN # include EIGEN_QUATERNION_PLUGIN #endif protected: Coefficients m_coeffs; #ifndef EIGEN_PARSED_BY_DOXYGEN static EIGEN_STRONG_INLINE void _check_template_params() { EIGEN_STATIC_ASSERT( (_Options & DontAlign) == _Options, INVALID_MATRIX_TEMPLATE_PARAMETERS) } #endif }; /** \ingroup Geometry_Module * single precision quaternion type */ typedef Quaternion Quaternionf; /** \ingroup Geometry_Module * double precision quaternion type */ typedef Quaternion Quaterniond; /*************************************************************************** * Specialization of Map> ***************************************************************************/ namespace internal { template struct traits, _Options> > : traits > { typedef Map, _Options> Coefficients; }; } namespace internal { template struct traits, _Options> > : traits > { typedef Map, _Options> Coefficients; typedef traits > TraitsBase; enum { Flags = TraitsBase::Flags & ~LvalueBit }; }; } /** \ingroup Geometry_Module * \brief Quaternion expression mapping a constant memory buffer * * \tparam _Scalar the type of the Quaternion coefficients * \tparam _Options see class Map * * This is a specialization of class Map for Quaternion. This class allows to view * a 4 scalar memory buffer as an Eigen's Quaternion object. 
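 *
 * A hedged usage sketch mapping an existing read-only buffer (hypothetical data):
 * \code
 * const float buf[4] = {0.f, 0.f, 0.f, 1.f};           // stored as x, y, z, w
 * Eigen::Map<const Eigen::Quaternionf> q(buf);         // no copy of the coefficients
 * Eigen::Vector3f v = q * Eigen::Vector3f::UnitX();    // identity rotation here
 * \endcode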
* * \sa class Map, class Quaternion, class QuaternionBase */ template class Map, _Options > : public QuaternionBase, _Options> > { public: typedef QuaternionBase, _Options> > Base; typedef _Scalar Scalar; typedef typename internal::traits::Coefficients Coefficients; EIGEN_INHERIT_ASSIGNMENT_OPERATORS(Map) using Base::operator*=; /** Constructs a Mapped Quaternion object from the pointer \a coeffs * * The pointer \a coeffs must reference the four coefficients of Quaternion in the following order: * \code *coeffs == {x, y, z, w} \endcode * * If the template parameter _Options is set to #Aligned, then the pointer coeffs must be aligned. */ EIGEN_DEVICE_FUNC explicit EIGEN_STRONG_INLINE Map(const Scalar* coeffs) : m_coeffs(coeffs) {} EIGEN_DEVICE_FUNC inline const Coefficients& coeffs() const { return m_coeffs;} protected: const Coefficients m_coeffs; }; /** \ingroup Geometry_Module * \brief Expression of a quaternion from a memory buffer * * \tparam _Scalar the type of the Quaternion coefficients * \tparam _Options see class Map * * This is a specialization of class Map for Quaternion. This class allows to view * a 4 scalar memory buffer as an Eigen's Quaternion object. * * \sa class Map, class Quaternion, class QuaternionBase */ template class Map, _Options > : public QuaternionBase, _Options> > { public: typedef QuaternionBase, _Options> > Base; typedef _Scalar Scalar; typedef typename internal::traits::Coefficients Coefficients; EIGEN_INHERIT_ASSIGNMENT_OPERATORS(Map) using Base::operator*=; /** Constructs a Mapped Quaternion object from the pointer \a coeffs * * The pointer \a coeffs must reference the four coefficients of Quaternion in the following order: * \code *coeffs == {x, y, z, w} \endcode * * If the template parameter _Options is set to #Aligned, then the pointer coeffs must be aligned. */ EIGEN_DEVICE_FUNC explicit EIGEN_STRONG_INLINE Map(Scalar* coeffs) : m_coeffs(coeffs) {} EIGEN_DEVICE_FUNC inline Coefficients& coeffs() { return m_coeffs; } EIGEN_DEVICE_FUNC inline const Coefficients& coeffs() const { return m_coeffs; } protected: Coefficients m_coeffs; }; /** \ingroup Geometry_Module * Map an unaligned array of single precision scalars as a quaternion */ typedef Map, 0> QuaternionMapf; /** \ingroup Geometry_Module * Map an unaligned array of double precision scalars as a quaternion */ typedef Map, 0> QuaternionMapd; /** \ingroup Geometry_Module * Map a 16-byte aligned array of single precision scalars as a quaternion */ typedef Map, Aligned> QuaternionMapAlignedf; /** \ingroup Geometry_Module * Map a 16-byte aligned array of double precision scalars as a quaternion */ typedef Map, Aligned> QuaternionMapAlignedd; /*************************************************************************** * Implementation of QuaternionBase methods ***************************************************************************/ // Generic Quaternion * Quaternion product // This product can be specialized for a given architecture via the Arch template argument. 
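// For reference, the generic implementation below is the Hamilton product:
//   (a*b).w   = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z
//   (a*b).vec = a.w*b.vec + b.w*a.vec + a.vec x b.vec
// so that (a*b) applied to a vector rotates by b first and then by a.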
namespace internal { template struct quat_product { EIGEN_DEVICE_FUNC static EIGEN_STRONG_INLINE Quaternion run(const QuaternionBase& a, const QuaternionBase& b){ return Quaternion ( a.w() * b.w() - a.x() * b.x() - a.y() * b.y() - a.z() * b.z(), a.w() * b.x() + a.x() * b.w() + a.y() * b.z() - a.z() * b.y(), a.w() * b.y() + a.y() * b.w() + a.z() * b.x() - a.x() * b.z(), a.w() * b.z() + a.z() * b.w() + a.x() * b.y() - a.y() * b.x() ); } }; } /** \returns the concatenation of two rotations as a quaternion-quaternion product */ template template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Quaternion::Scalar> QuaternionBase::operator* (const QuaternionBase& other) const { EIGEN_STATIC_ASSERT((internal::is_same::value), YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY) return internal::quat_product::Scalar>::run(*this, other); } /** \sa operator*(Quaternion) */ template template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Derived& QuaternionBase::operator*= (const QuaternionBase& other) { derived() = derived() * other.derived(); return derived(); } /** Rotation of a vector by a quaternion. * \remarks If the quaternion is used to rotate several points (>1) * then it is much more efficient to first convert it to a 3x3 Matrix. * Comparison of the operation cost for n transformations: * - Quaternion2: 30n * - Via a Matrix3: 24 + 15n */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename QuaternionBase::Vector3 QuaternionBase::_transformVector(const Vector3& v) const { // Note that this algorithm comes from the optimization by hand // of the conversion to a Matrix followed by a Matrix/Vector product. // It appears to be much faster than the common algorithm found // in the literature (30 versus 39 flops). It also requires two // Vector3 as temporaries. Vector3 uv = this->vec().cross(v); uv += uv; return v + this->w() * uv + this->vec().cross(uv); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE QuaternionBase& QuaternionBase::operator=(const QuaternionBase& other) { coeffs() = other.coeffs(); return derived(); } template template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Derived& QuaternionBase::operator=(const QuaternionBase& other) { coeffs() = other.coeffs(); return derived(); } /** Set \c *this from an angle-axis \a aa and returns a reference to \c *this */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Derived& QuaternionBase::operator=(const AngleAxisType& aa) { EIGEN_USING_STD(cos) EIGEN_USING_STD(sin) Scalar ha = Scalar(0.5)*aa.angle(); // Scalar(0.5) to suppress precision loss warnings this->w() = cos(ha); this->vec() = sin(ha) * aa.axis(); return derived(); } /** Set \c *this from the expression \a xpr: * - if \a xpr is a 4x1 vector, then \a xpr is assumed to be a quaternion * - if \a xpr is a 3x3 matrix, then \a xpr is assumed to be rotation matrix * and \a xpr is converted to a quaternion */ template template EIGEN_DEVICE_FUNC inline Derived& QuaternionBase::operator=(const MatrixBase& xpr) { EIGEN_STATIC_ASSERT((internal::is_same::value), YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY) internal::quaternionbase_assign_impl::run(*this, xpr.derived()); return derived(); } /** Convert the quaternion to a 3x3 rotation matrix. The quaternion is required to * be normalized, otherwise the result is undefined. 
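 *
 * A minimal sketch:
 * \code
 * Eigen::Quaternionf q(Eigen::AngleAxisf(0.1f, Eigen::Vector3f::UnitY()));
 * Eigen::Matrix3f R = q.normalized().toRotationMatrix();
 * // for a unit quaternion, R * v is equivalent to q * v for any Vector3f v
 * \endcode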
*/ template EIGEN_DEVICE_FUNC inline typename QuaternionBase::Matrix3 QuaternionBase::toRotationMatrix(void) const { // NOTE if inlined, then gcc 4.2 and 4.4 get rid of the temporary (not gcc 4.3 !!) // if not inlined then the cost of the return by value is huge ~ +35%, // however, not inlining this function is an order of magnitude slower, so // it has to be inlined, and so the return by value is not an issue Matrix3 res; const Scalar tx = Scalar(2)*this->x(); const Scalar ty = Scalar(2)*this->y(); const Scalar tz = Scalar(2)*this->z(); const Scalar twx = tx*this->w(); const Scalar twy = ty*this->w(); const Scalar twz = tz*this->w(); const Scalar txx = tx*this->x(); const Scalar txy = ty*this->x(); const Scalar txz = tz*this->x(); const Scalar tyy = ty*this->y(); const Scalar tyz = tz*this->y(); const Scalar tzz = tz*this->z(); res.coeffRef(0,0) = Scalar(1)-(tyy+tzz); res.coeffRef(0,1) = txy-twz; res.coeffRef(0,2) = txz+twy; res.coeffRef(1,0) = txy+twz; res.coeffRef(1,1) = Scalar(1)-(txx+tzz); res.coeffRef(1,2) = tyz-twx; res.coeffRef(2,0) = txz-twy; res.coeffRef(2,1) = tyz+twx; res.coeffRef(2,2) = Scalar(1)-(txx+tyy); return res; } /** Sets \c *this to be a quaternion representing a rotation between * the two arbitrary vectors \a a and \a b. In other words, the built * rotation represent a rotation sending the line of direction \a a * to the line of direction \a b, both lines passing through the origin. * * \returns a reference to \c *this. * * Note that the two input vectors do \b not have to be normalized, and * do not need to have the same norm. */ template template EIGEN_DEVICE_FUNC inline Derived& QuaternionBase::setFromTwoVectors(const MatrixBase& a, const MatrixBase& b) { EIGEN_USING_STD(sqrt) Vector3 v0 = a.normalized(); Vector3 v1 = b.normalized(); Scalar c = v1.dot(v0); // if dot == -1, vectors are nearly opposites // => accurately compute the rotation axis by computing the // intersection of the two planes. This is done by solving: // x^T v0 = 0 // x^T v1 = 0 // under the constraint: // ||x|| = 1 // which yields a singular value problem if (c < Scalar(-1)+NumTraits::dummy_precision()) { c = numext::maxi(c,Scalar(-1)); Matrix m; m << v0.transpose(), v1.transpose(); JacobiSVD > svd(m, ComputeFullV); Vector3 axis = svd.matrixV().col(2); Scalar w2 = (Scalar(1)+c)*Scalar(0.5); this->w() = sqrt(w2); this->vec() = axis * sqrt(Scalar(1) - w2); return derived(); } Vector3 axis = v0.cross(v1); Scalar s = sqrt((Scalar(1)+c)*Scalar(2)); Scalar invs = Scalar(1)/s; this->vec() = axis * invs; this->w() = s * Scalar(0.5); return derived(); } /** \returns a random unit quaternion following a uniform distribution law on SO(3) * * \note The implementation is based on http://planning.cs.uiuc.edu/node198.html */ template EIGEN_DEVICE_FUNC Quaternion Quaternion::UnitRandom() { EIGEN_USING_STD(sqrt) EIGEN_USING_STD(sin) EIGEN_USING_STD(cos) const Scalar u1 = internal::random(0, 1), u2 = internal::random(0, 2*EIGEN_PI), u3 = internal::random(0, 2*EIGEN_PI); const Scalar a = sqrt(Scalar(1) - u1), b = sqrt(u1); return Quaternion (a * sin(u2), a * cos(u2), b * sin(u3), b * cos(u3)); } /** Returns a quaternion representing a rotation between * the two arbitrary vectors \a a and \a b. In other words, the built * rotation represent a rotation sending the line of direction \a a * to the line of direction \a b, both lines passing through the origin. * * \returns resulting quaternion * * Note that the two input vectors do \b not have to be normalized, and * do not need to have the same norm. 
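 *
 * For instance (a hedged sketch with hypothetical vectors):
 * \code
 * Eigen::Vector3f a(2.f, 0.f, 0.f), b(0.f, 3.f, 0.f);
 * Eigen::Quaternionf q = Eigen::Quaternionf::FromTwoVectors(a, b);
 * // q * a is parallel to b (a quarter turn about the z axis here)
 * \endcode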
*/ template template EIGEN_DEVICE_FUNC Quaternion Quaternion::FromTwoVectors(const MatrixBase& a, const MatrixBase& b) { Quaternion quat; quat.setFromTwoVectors(a, b); return quat; } /** \returns the multiplicative inverse of \c *this * Note that in most cases, i.e., if you simply want the opposite rotation, * and/or the quaternion is normalized, then it is enough to use the conjugate. * * \sa QuaternionBase::conjugate() */ template EIGEN_DEVICE_FUNC inline Quaternion::Scalar> QuaternionBase::inverse() const { // FIXME should this function be called multiplicativeInverse and conjugate() be called inverse() or opposite() ?? Scalar n2 = this->squaredNorm(); if (n2 > Scalar(0)) return Quaternion(conjugate().coeffs() / n2); else { // return an invalid result to flag the error return Quaternion(Coefficients::Zero()); } } // Generic conjugate of a Quaternion namespace internal { template struct quat_conj { EIGEN_DEVICE_FUNC static EIGEN_STRONG_INLINE Quaternion run(const QuaternionBase& q){ return Quaternion(q.w(),-q.x(),-q.y(),-q.z()); } }; } /** \returns the conjugate of the \c *this which is equal to the multiplicative inverse * if the quaternion is normalized. * The conjugate of a quaternion represents the opposite rotation. * * \sa Quaternion2::inverse() */ template EIGEN_DEVICE_FUNC inline Quaternion::Scalar> QuaternionBase::conjugate() const { return internal::quat_conj::Scalar>::run(*this); } /** \returns the angle (in radian) between two rotations * \sa dot() */ template template EIGEN_DEVICE_FUNC inline typename internal::traits::Scalar QuaternionBase::angularDistance(const QuaternionBase& other) const { EIGEN_USING_STD(atan2) Quaternion d = (*this) * other.conjugate(); return Scalar(2) * atan2( d.vec().norm(), numext::abs(d.w()) ); } /** \returns the spherical linear interpolation between the two quaternions * \c *this and \a other at the parameter \a t in [0;1]. * * This represents an interpolation for a constant motion between \c *this and \a other, * see also http://en.wikipedia.org/wiki/Slerp. 
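 *
 * A minimal sketch:
 * \code
 * Eigen::Quaternionf qa = Eigen::Quaternionf::Identity();
 * Eigen::Quaternionf qb(Eigen::AngleAxisf(1.f, Eigen::Vector3f::UnitZ()));
 * Eigen::Quaternionf qm = qa.slerp(0.5f, qb);   // about half a radian about the z axis
 * \endcode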
*/ template template EIGEN_DEVICE_FUNC Quaternion::Scalar> QuaternionBase::slerp(const Scalar& t, const QuaternionBase& other) const { EIGEN_USING_STD(acos) EIGEN_USING_STD(sin) const Scalar one = Scalar(1) - NumTraits::epsilon(); Scalar d = this->dot(other); Scalar absD = numext::abs(d); Scalar scale0; Scalar scale1; if(absD>=one) { scale0 = Scalar(1) - t; scale1 = t; } else { // theta is the angle between the 2 quaternions Scalar theta = acos(absD); Scalar sinTheta = sin(theta); scale0 = sin( ( Scalar(1) - t ) * theta) / sinTheta; scale1 = sin( ( t * theta) ) / sinTheta; } if(d(scale0 * coeffs() + scale1 * other.coeffs()); } namespace internal { // set from a rotation matrix template struct quaternionbase_assign_impl { typedef typename Other::Scalar Scalar; template EIGEN_DEVICE_FUNC static inline void run(QuaternionBase& q, const Other& a_mat) { const typename internal::nested_eval::type mat(a_mat); EIGEN_USING_STD(sqrt) // This algorithm comes from "Quaternion Calculus and Fast Animation", // Ken Shoemake, 1987 SIGGRAPH course notes Scalar t = mat.trace(); if (t > Scalar(0)) { t = sqrt(t + Scalar(1.0)); q.w() = Scalar(0.5)*t; t = Scalar(0.5)/t; q.x() = (mat.coeff(2,1) - mat.coeff(1,2)) * t; q.y() = (mat.coeff(0,2) - mat.coeff(2,0)) * t; q.z() = (mat.coeff(1,0) - mat.coeff(0,1)) * t; } else { Index i = 0; if (mat.coeff(1,1) > mat.coeff(0,0)) i = 1; if (mat.coeff(2,2) > mat.coeff(i,i)) i = 2; Index j = (i+1)%3; Index k = (j+1)%3; t = sqrt(mat.coeff(i,i)-mat.coeff(j,j)-mat.coeff(k,k) + Scalar(1.0)); q.coeffs().coeffRef(i) = Scalar(0.5) * t; t = Scalar(0.5)/t; q.w() = (mat.coeff(k,j)-mat.coeff(j,k))*t; q.coeffs().coeffRef(j) = (mat.coeff(j,i)+mat.coeff(i,j))*t; q.coeffs().coeffRef(k) = (mat.coeff(k,i)+mat.coeff(i,k))*t; } } }; // set from a vector of coefficients assumed to be a quaternion template struct quaternionbase_assign_impl { typedef typename Other::Scalar Scalar; template EIGEN_DEVICE_FUNC static inline void run(QuaternionBase& q, const Other& vec) { q.coeffs() = vec; } }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_QUATERNION_H piqp-octave/src/eigen/Eigen/src/Geometry/OrthoMethods.h0000644000175100001770000002137314107270226022741 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2009 Gael Guennebaud // Copyright (C) 2006-2008 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
#ifndef EIGEN_ORTHOMETHODS_H #define EIGEN_ORTHOMETHODS_H namespace Eigen { /** \geometry_module \ingroup Geometry_Module * * \returns the cross product of \c *this and \a other * * Here is a very good explanation of cross-product: http://xkcd.com/199/ * * With complex numbers, the cross product is implemented as * \f$ (\mathbf{a}+i\mathbf{b}) \times (\mathbf{c}+i\mathbf{d}) = (\mathbf{a} \times \mathbf{c} - \mathbf{b} \times \mathbf{d}) - i(\mathbf{a} \times \mathbf{d} - \mathbf{b} \times \mathbf{c})\f$ * * \sa MatrixBase::cross3() */ template template #ifndef EIGEN_PARSED_BY_DOXYGEN EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename MatrixBase::template cross_product_return_type::type #else typename MatrixBase::PlainObject #endif MatrixBase::cross(const MatrixBase& other) const { EIGEN_STATIC_ASSERT_VECTOR_SPECIFIC_SIZE(Derived,3) EIGEN_STATIC_ASSERT_VECTOR_SPECIFIC_SIZE(OtherDerived,3) // Note that there is no need for an expression here since the compiler // optimize such a small temporary very well (even within a complex expression) typename internal::nested_eval::type lhs(derived()); typename internal::nested_eval::type rhs(other.derived()); return typename cross_product_return_type::type( numext::conj(lhs.coeff(1) * rhs.coeff(2) - lhs.coeff(2) * rhs.coeff(1)), numext::conj(lhs.coeff(2) * rhs.coeff(0) - lhs.coeff(0) * rhs.coeff(2)), numext::conj(lhs.coeff(0) * rhs.coeff(1) - lhs.coeff(1) * rhs.coeff(0)) ); } namespace internal { template< int Arch,typename VectorLhs,typename VectorRhs, typename Scalar = typename VectorLhs::Scalar, bool Vectorizable = bool((VectorLhs::Flags&VectorRhs::Flags)&PacketAccessBit)> struct cross3_impl { EIGEN_DEVICE_FUNC static inline typename internal::plain_matrix_type::type run(const VectorLhs& lhs, const VectorRhs& rhs) { return typename internal::plain_matrix_type::type( numext::conj(lhs.coeff(1) * rhs.coeff(2) - lhs.coeff(2) * rhs.coeff(1)), numext::conj(lhs.coeff(2) * rhs.coeff(0) - lhs.coeff(0) * rhs.coeff(2)), numext::conj(lhs.coeff(0) * rhs.coeff(1) - lhs.coeff(1) * rhs.coeff(0)), 0 ); } }; } /** \geometry_module \ingroup Geometry_Module * * \returns the cross product of \c *this and \a other using only the x, y, and z coefficients * * The size of \c *this and \a other must be four. This function is especially useful * when using 4D vectors instead of 3D ones to get advantage of SSE/AltiVec vectorization. * * \sa MatrixBase::cross() */ template template EIGEN_DEVICE_FUNC inline typename MatrixBase::PlainObject MatrixBase::cross3(const MatrixBase& other) const { EIGEN_STATIC_ASSERT_VECTOR_SPECIFIC_SIZE(Derived,4) EIGEN_STATIC_ASSERT_VECTOR_SPECIFIC_SIZE(OtherDerived,4) typedef typename internal::nested_eval::type DerivedNested; typedef typename internal::nested_eval::type OtherDerivedNested; DerivedNested lhs(derived()); OtherDerivedNested rhs(other.derived()); return internal::cross3_impl::type, typename internal::remove_all::type>::run(lhs,rhs); } /** \geometry_module \ingroup Geometry_Module * * \returns a matrix expression of the cross product of each column or row * of the referenced expression with the \a other vector. * * The referenced matrix must have one dimension equal to 3. * The result matrix has the same dimensions than the referenced one. 
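 *
 * A hedged sketch, crossing every column of a 3xN matrix with a fixed vector:
 * \code
 * Eigen::Matrix3Xf pts = Eigen::Matrix3Xf::Random(3, 5);
 * Eigen::Vector3f axis = Eigen::Vector3f::UnitZ();
 * Eigen::Matrix3Xf c = pts.colwise().cross(axis);   // same 3x5 shape as pts
 * \endcode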
* * \sa MatrixBase::cross() */ template template EIGEN_DEVICE_FUNC const typename VectorwiseOp::CrossReturnType VectorwiseOp::cross(const MatrixBase& other) const { EIGEN_STATIC_ASSERT_VECTOR_SPECIFIC_SIZE(OtherDerived,3) EIGEN_STATIC_ASSERT((internal::is_same::value), YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY) typename internal::nested_eval::type mat(_expression()); typename internal::nested_eval::type vec(other.derived()); CrossReturnType res(_expression().rows(),_expression().cols()); if(Direction==Vertical) { eigen_assert(CrossReturnType::RowsAtCompileTime==3 && "the matrix must have exactly 3 rows"); res.row(0) = (mat.row(1) * vec.coeff(2) - mat.row(2) * vec.coeff(1)).conjugate(); res.row(1) = (mat.row(2) * vec.coeff(0) - mat.row(0) * vec.coeff(2)).conjugate(); res.row(2) = (mat.row(0) * vec.coeff(1) - mat.row(1) * vec.coeff(0)).conjugate(); } else { eigen_assert(CrossReturnType::ColsAtCompileTime==3 && "the matrix must have exactly 3 columns"); res.col(0) = (mat.col(1) * vec.coeff(2) - mat.col(2) * vec.coeff(1)).conjugate(); res.col(1) = (mat.col(2) * vec.coeff(0) - mat.col(0) * vec.coeff(2)).conjugate(); res.col(2) = (mat.col(0) * vec.coeff(1) - mat.col(1) * vec.coeff(0)).conjugate(); } return res; } namespace internal { template struct unitOrthogonal_selector { typedef typename plain_matrix_type::type VectorType; typedef typename traits::Scalar Scalar; typedef typename NumTraits::Real RealScalar; typedef Matrix Vector2; EIGEN_DEVICE_FUNC static inline VectorType run(const Derived& src) { VectorType perp = VectorType::Zero(src.size()); Index maxi = 0; Index sndi = 0; src.cwiseAbs().maxCoeff(&maxi); if (maxi==0) sndi = 1; RealScalar invnm = RealScalar(1)/(Vector2() << src.coeff(sndi),src.coeff(maxi)).finished().norm(); perp.coeffRef(maxi) = -numext::conj(src.coeff(sndi)) * invnm; perp.coeffRef(sndi) = numext::conj(src.coeff(maxi)) * invnm; return perp; } }; template struct unitOrthogonal_selector { typedef typename plain_matrix_type::type VectorType; typedef typename traits::Scalar Scalar; typedef typename NumTraits::Real RealScalar; EIGEN_DEVICE_FUNC static inline VectorType run(const Derived& src) { VectorType perp; /* Let us compute the crossed product of *this with a vector * that is not too close to being colinear to *this. */ /* unless the x and y coords are both close to zero, we can * simply take ( -y, x, 0 ) and normalize it. */ if((!isMuchSmallerThan(src.x(), src.z())) || (!isMuchSmallerThan(src.y(), src.z()))) { RealScalar invnm = RealScalar(1)/src.template head<2>().norm(); perp.coeffRef(0) = -numext::conj(src.y())*invnm; perp.coeffRef(1) = numext::conj(src.x())*invnm; perp.coeffRef(2) = 0; } /* if both x and y are close to zero, then the vector is close * to the z-axis, so it's far from colinear to the x-axis for instance. * So we take the crossed product with (1,0,0) and normalize it. 
*/ else { RealScalar invnm = RealScalar(1)/src.template tail<2>().norm(); perp.coeffRef(0) = 0; perp.coeffRef(1) = -numext::conj(src.z())*invnm; perp.coeffRef(2) = numext::conj(src.y())*invnm; } return perp; } }; template struct unitOrthogonal_selector { typedef typename plain_matrix_type::type VectorType; EIGEN_DEVICE_FUNC static inline VectorType run(const Derived& src) { return VectorType(-numext::conj(src.y()), numext::conj(src.x())).normalized(); } }; } // end namespace internal /** \geometry_module \ingroup Geometry_Module * * \returns a unit vector which is orthogonal to \c *this * * The size of \c *this must be at least 2. If the size is exactly 2, * then the returned vector is a counter clock wise rotation of \c *this, i.e., (-y,x).normalized(). * * \sa cross() */ template EIGEN_DEVICE_FUNC typename MatrixBase::PlainObject MatrixBase::unitOrthogonal() const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return internal::unitOrthogonal_selector::run(derived()); } } // end namespace Eigen #endif // EIGEN_ORTHOMETHODS_H piqp-octave/src/eigen/Eigen/src/Geometry/Homogeneous.h0000644000175100001770000005036614107270226022616 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009-2010 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_HOMOGENEOUS_H #define EIGEN_HOMOGENEOUS_H namespace Eigen { /** \geometry_module \ingroup Geometry_Module * * \class Homogeneous * * \brief Expression of one (or a set of) homogeneous vector(s) * * \param MatrixType the type of the object in which we are making homogeneous * * This class represents an expression of one (or a set of) homogeneous vector(s). * It is the return type of MatrixBase::homogeneous() and most of the time * this is the only way it is used. * * \sa MatrixBase::homogeneous() */ namespace internal { template struct traits > : traits { typedef typename traits::StorageKind StorageKind; typedef typename ref_selector::type MatrixTypeNested; typedef typename remove_reference::type _MatrixTypeNested; enum { RowsPlusOne = (MatrixType::RowsAtCompileTime != Dynamic) ? int(MatrixType::RowsAtCompileTime) + 1 : Dynamic, ColsPlusOne = (MatrixType::ColsAtCompileTime != Dynamic) ? int(MatrixType::ColsAtCompileTime) + 1 : Dynamic, RowsAtCompileTime = Direction==Vertical ? RowsPlusOne : MatrixType::RowsAtCompileTime, ColsAtCompileTime = Direction==Horizontal ? ColsPlusOne : MatrixType::ColsAtCompileTime, MaxRowsAtCompileTime = RowsAtCompileTime, MaxColsAtCompileTime = ColsAtCompileTime, TmpFlags = _MatrixTypeNested::Flags & HereditaryBits, Flags = ColsAtCompileTime==1 ? (TmpFlags & ~RowMajorBit) : RowsAtCompileTime==1 ? (TmpFlags | RowMajorBit) : TmpFlags }; }; template struct homogeneous_left_product_impl; template struct homogeneous_right_product_impl; } // end namespace internal template class Homogeneous : public MatrixBase >, internal::no_assignment_operator { public: typedef MatrixType NestedExpression; enum { Direction = _Direction }; typedef MatrixBase Base; EIGEN_DENSE_PUBLIC_INTERFACE(Homogeneous) EIGEN_DEVICE_FUNC explicit inline Homogeneous(const MatrixType& matrix) : m_matrix(matrix) {} EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index rows() const EIGEN_NOEXCEPT { return m_matrix.rows() + (int(Direction)==Vertical ? 
1 : 0); } EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index cols() const EIGEN_NOEXCEPT { return m_matrix.cols() + (int(Direction)==Horizontal ? 1 : 0); } EIGEN_DEVICE_FUNC const NestedExpression& nestedExpression() const { return m_matrix; } template EIGEN_DEVICE_FUNC inline const Product operator* (const MatrixBase& rhs) const { eigen_assert(int(Direction)==Horizontal); return Product(*this,rhs.derived()); } template friend EIGEN_DEVICE_FUNC inline const Product operator* (const MatrixBase& lhs, const Homogeneous& rhs) { eigen_assert(int(Direction)==Vertical); return Product(lhs.derived(),rhs); } template friend EIGEN_DEVICE_FUNC inline const Product, Homogeneous > operator* (const Transform& lhs, const Homogeneous& rhs) { eigen_assert(int(Direction)==Vertical); return Product, Homogeneous>(lhs,rhs); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename internal::result_of::type redux(const Func& func) const { return func(m_matrix.redux(func), Scalar(1)); } protected: typename MatrixType::Nested m_matrix; }; /** \geometry_module \ingroup Geometry_Module * * \returns a vector expression that is one longer than the vector argument, with the value 1 symbolically appended as the last coefficient. * * This can be used to convert affine coordinates to homogeneous coordinates. * * \only_for_vectors * * Example: \include MatrixBase_homogeneous.cpp * Output: \verbinclude MatrixBase_homogeneous.out * * \sa VectorwiseOp::homogeneous(), class Homogeneous */ template EIGEN_DEVICE_FUNC inline typename MatrixBase::HomogeneousReturnType MatrixBase::homogeneous() const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived); return HomogeneousReturnType(derived()); } /** \geometry_module \ingroup Geometry_Module * * \returns an expression where the value 1 is symbolically appended as the final coefficient to each column (or row) of the matrix. * * This can be used to convert affine coordinates to homogeneous coordinates. * * Example: \include VectorwiseOp_homogeneous.cpp * Output: \verbinclude VectorwiseOp_homogeneous.out * * \sa MatrixBase::homogeneous(), class Homogeneous */ template EIGEN_DEVICE_FUNC inline Homogeneous VectorwiseOp::homogeneous() const { return HomogeneousReturnType(_expression()); } /** \geometry_module \ingroup Geometry_Module * * \brief homogeneous normalization * * \returns a vector expression of the N-1 first coefficients of \c *this divided by that last coefficient. * * This can be used to convert homogeneous coordinates to affine coordinates. * * It is essentially a shortcut for: * \code this->head(this->size()-1)/this->coeff(this->size()-1); \endcode * * Example: \include MatrixBase_hnormalized.cpp * Output: \verbinclude MatrixBase_hnormalized.out * * \sa VectorwiseOp::hnormalized() */ template EIGEN_DEVICE_FUNC inline const typename MatrixBase::HNormalizedReturnType MatrixBase::hnormalized() const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived); return ConstStartMinusOne(derived(),0,0, ColsAtCompileTime==1?size()-1:1, ColsAtCompileTime==1?1:size()-1) / coeff(size()-1); } /** \geometry_module \ingroup Geometry_Module * * \brief column or row-wise homogeneous normalization * * \returns an expression of the first N-1 coefficients of each column (or row) of \c *this divided by the last coefficient of each column (or row). * * This can be used to convert homogeneous coordinates to affine coordinates. * * It is conceptually equivalent to calling MatrixBase::hnormalized() to each column (or row) of \c *this. 
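 *
 * A small inline sketch (hypothetical 2D points stored as homogeneous columns):
 * \code
 * Eigen::Matrix3Xf hpts(3, 2);
 * hpts << 2.f, 4.f,
 *         4.f, 8.f,
 *         2.f, 4.f;                                      // last row holds the scale factors
 * Eigen::Matrix2Xf pts = hpts.colwise().hnormalized();   // both columns become (1, 2)
 * \endcode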
* * Example: \include DirectionWise_hnormalized.cpp * Output: \verbinclude DirectionWise_hnormalized.out * * \sa MatrixBase::hnormalized() */ template EIGEN_DEVICE_FUNC inline const typename VectorwiseOp::HNormalizedReturnType VectorwiseOp::hnormalized() const { return HNormalized_Block(_expression(),0,0, Direction==Vertical ? _expression().rows()-1 : _expression().rows(), Direction==Horizontal ? _expression().cols()-1 : _expression().cols()).cwiseQuotient( Replicate (HNormalized_Factors(_expression(), Direction==Vertical ? _expression().rows()-1:0, Direction==Horizontal ? _expression().cols()-1:0, Direction==Vertical ? 1 : _expression().rows(), Direction==Horizontal ? 1 : _expression().cols()), Direction==Vertical ? _expression().rows()-1 : 1, Direction==Horizontal ? _expression().cols()-1 : 1)); } namespace internal { template struct take_matrix_for_product { typedef MatrixOrTransformType type; EIGEN_DEVICE_FUNC static const type& run(const type &x) { return x; } }; template struct take_matrix_for_product > { typedef Transform TransformType; typedef typename internal::add_const::type type; EIGEN_DEVICE_FUNC static type run (const TransformType& x) { return x.affine(); } }; template struct take_matrix_for_product > { typedef Transform TransformType; typedef typename TransformType::MatrixType type; EIGEN_DEVICE_FUNC static const type& run (const TransformType& x) { return x.matrix(); } }; template struct traits,Lhs> > { typedef typename take_matrix_for_product::type LhsMatrixType; typedef typename remove_all::type MatrixTypeCleaned; typedef typename remove_all::type LhsMatrixTypeCleaned; typedef typename make_proper_matrix_type< typename traits::Scalar, LhsMatrixTypeCleaned::RowsAtCompileTime, MatrixTypeCleaned::ColsAtCompileTime, MatrixTypeCleaned::PlainObject::Options, LhsMatrixTypeCleaned::MaxRowsAtCompileTime, MatrixTypeCleaned::MaxColsAtCompileTime>::type ReturnType; }; template struct homogeneous_left_product_impl,Lhs> : public ReturnByValue,Lhs> > { typedef typename traits::LhsMatrixType LhsMatrixType; typedef typename remove_all::type LhsMatrixTypeCleaned; typedef typename remove_all::type LhsMatrixTypeNested; EIGEN_DEVICE_FUNC homogeneous_left_product_impl(const Lhs& lhs, const MatrixType& rhs) : m_lhs(take_matrix_for_product::run(lhs)), m_rhs(rhs) {} EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index rows() const EIGEN_NOEXCEPT { return m_lhs.rows(); } EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index cols() const EIGEN_NOEXCEPT { return m_rhs.cols(); } template EIGEN_DEVICE_FUNC void evalTo(Dest& dst) const { // FIXME investigate how to allow lazy evaluation of this product when possible dst = Block (m_lhs,0,0,m_lhs.rows(),m_lhs.cols()-1) * m_rhs; dst += m_lhs.col(m_lhs.cols()-1).rowwise() .template replicate(m_rhs.cols()); } typename LhsMatrixTypeCleaned::Nested m_lhs; typename MatrixType::Nested m_rhs; }; template struct traits,Rhs> > { typedef typename make_proper_matrix_type::Scalar, MatrixType::RowsAtCompileTime, Rhs::ColsAtCompileTime, MatrixType::PlainObject::Options, MatrixType::MaxRowsAtCompileTime, Rhs::MaxColsAtCompileTime>::type ReturnType; }; template struct homogeneous_right_product_impl,Rhs> : public ReturnByValue,Rhs> > { typedef typename remove_all::type RhsNested; EIGEN_DEVICE_FUNC homogeneous_right_product_impl(const MatrixType& lhs, const Rhs& rhs) : m_lhs(lhs), m_rhs(rhs) {} EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index rows() const EIGEN_NOEXCEPT { return m_lhs.rows(); } EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index cols() const EIGEN_NOEXCEPT { return 
m_rhs.cols(); } template EIGEN_DEVICE_FUNC void evalTo(Dest& dst) const { // FIXME investigate how to allow lazy evaluation of this product when possible dst = m_lhs * Block (m_rhs,0,0,m_rhs.rows()-1,m_rhs.cols()); dst += m_rhs.row(m_rhs.rows()-1).colwise() .template replicate(m_lhs.rows()); } typename MatrixType::Nested m_lhs; typename Rhs::Nested m_rhs; }; template struct evaluator_traits > { typedef typename storage_kind_to_evaluator_kind::Kind Kind; typedef HomogeneousShape Shape; }; template<> struct AssignmentKind { typedef Dense2Dense Kind; }; template struct unary_evaluator, IndexBased> : evaluator::PlainObject > { typedef Homogeneous XprType; typedef typename XprType::PlainObject PlainObject; typedef evaluator Base; EIGEN_DEVICE_FUNC explicit unary_evaluator(const XprType& op) : Base(), m_temp(op) { ::new (static_cast(this)) Base(m_temp); } protected: PlainObject m_temp; }; // dense = homogeneous template< typename DstXprType, typename ArgType, typename Scalar> struct Assignment, internal::assign_op, Dense2Dense> { typedef Homogeneous SrcXprType; EIGEN_DEVICE_FUNC static void run(DstXprType &dst, const SrcXprType &src, const internal::assign_op &) { Index dstRows = src.rows(); Index dstCols = src.cols(); if((dst.rows()!=dstRows) || (dst.cols()!=dstCols)) dst.resize(dstRows, dstCols); dst.template topRows(src.nestedExpression().rows()) = src.nestedExpression(); dst.row(dst.rows()-1).setOnes(); } }; // dense = homogeneous template< typename DstXprType, typename ArgType, typename Scalar> struct Assignment, internal::assign_op, Dense2Dense> { typedef Homogeneous SrcXprType; EIGEN_DEVICE_FUNC static void run(DstXprType &dst, const SrcXprType &src, const internal::assign_op &) { Index dstRows = src.rows(); Index dstCols = src.cols(); if((dst.rows()!=dstRows) || (dst.cols()!=dstCols)) dst.resize(dstRows, dstCols); dst.template leftCols(src.nestedExpression().cols()) = src.nestedExpression(); dst.col(dst.cols()-1).setOnes(); } }; template struct generic_product_impl, Rhs, HomogeneousShape, DenseShape, ProductTag> { template EIGEN_DEVICE_FUNC static void evalTo(Dest& dst, const Homogeneous& lhs, const Rhs& rhs) { homogeneous_right_product_impl, Rhs>(lhs.nestedExpression(), rhs).evalTo(dst); } }; template struct homogeneous_right_product_refactoring_helper { enum { Dim = Lhs::ColsAtCompileTime, Rows = Lhs::RowsAtCompileTime }; typedef typename Rhs::template ConstNRowsBlockXpr::Type LinearBlockConst; typedef typename remove_const::type LinearBlock; typedef typename Rhs::ConstRowXpr ConstantColumn; typedef Replicate ConstantBlock; typedef Product LinearProduct; typedef CwiseBinaryOp, const LinearProduct, const ConstantBlock> Xpr; }; template struct product_evaluator, ProductTag, HomogeneousShape, DenseShape> : public evaluator::Xpr> { typedef Product XprType; typedef homogeneous_right_product_refactoring_helper helper; typedef typename helper::ConstantBlock ConstantBlock; typedef typename helper::Xpr RefactoredXpr; typedef evaluator Base; EIGEN_DEVICE_FUNC explicit product_evaluator(const XprType& xpr) : Base( xpr.lhs().nestedExpression() .lazyProduct( xpr.rhs().template topRows(xpr.lhs().nestedExpression().cols()) ) + ConstantBlock(xpr.rhs().row(xpr.rhs().rows()-1),xpr.lhs().rows(), 1) ) {} }; template struct generic_product_impl, DenseShape, HomogeneousShape, ProductTag> { template EIGEN_DEVICE_FUNC static void evalTo(Dest& dst, const Lhs& lhs, const Homogeneous& rhs) { homogeneous_left_product_impl, Lhs>(lhs, rhs.nestedExpression()).evalTo(dst); } }; // TODO: the following specialization 
is to address a regression from 3.2 to 3.3 // In the future, this path should be optimized. template struct generic_product_impl, TriangularShape, HomogeneousShape, ProductTag> { template static void evalTo(Dest& dst, const Lhs& lhs, const Homogeneous& rhs) { dst.noalias() = lhs * rhs.eval(); } }; template struct homogeneous_left_product_refactoring_helper { enum { Dim = Rhs::RowsAtCompileTime, Cols = Rhs::ColsAtCompileTime }; typedef typename Lhs::template ConstNColsBlockXpr::Type LinearBlockConst; typedef typename remove_const::type LinearBlock; typedef typename Lhs::ConstColXpr ConstantColumn; typedef Replicate ConstantBlock; typedef Product LinearProduct; typedef CwiseBinaryOp, const LinearProduct, const ConstantBlock> Xpr; }; template struct product_evaluator, ProductTag, DenseShape, HomogeneousShape> : public evaluator::Xpr> { typedef Product XprType; typedef homogeneous_left_product_refactoring_helper helper; typedef typename helper::ConstantBlock ConstantBlock; typedef typename helper::Xpr RefactoredXpr; typedef evaluator Base; EIGEN_DEVICE_FUNC explicit product_evaluator(const XprType& xpr) : Base( xpr.lhs().template leftCols(xpr.rhs().nestedExpression().rows()) .lazyProduct( xpr.rhs().nestedExpression() ) + ConstantBlock(xpr.lhs().col(xpr.lhs().cols()-1),1,xpr.rhs().cols()) ) {} }; template struct generic_product_impl, Homogeneous, DenseShape, HomogeneousShape, ProductTag> { typedef Transform TransformType; template EIGEN_DEVICE_FUNC static void evalTo(Dest& dst, const TransformType& lhs, const Homogeneous& rhs) { homogeneous_left_product_impl, TransformType>(lhs, rhs.nestedExpression()).evalTo(dst); } }; template struct permutation_matrix_product : public permutation_matrix_product {}; } // end namespace internal } // end namespace Eigen #endif // EIGEN_HOMOGENEOUS_H piqp-octave/src/eigen/Eigen/src/Geometry/EulerAngles.h0000644000175100001770000000705014107270226022524 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_EULERANGLES_H #define EIGEN_EULERANGLES_H namespace Eigen { /** \geometry_module \ingroup Geometry_Module * * * \returns the Euler-angles of the rotation matrix \c *this using the convention defined by the triplet (\a a0,\a a1,\a a2) * * Each of the three parameters \a a0,\a a1,\a a2 represents the respective rotation axis as an integer in {0,1,2}. * For instance, in: * \code Vector3f ea = mat.eulerAngles(2, 0, 2); \endcode * "2" represents the z axis and "0" the x axis, etc. The returned angles are such that * we have the following equality: * \code * mat == AngleAxisf(ea[0], Vector3f::UnitZ()) * * AngleAxisf(ea[1], Vector3f::UnitX()) * * AngleAxisf(ea[2], Vector3f::UnitZ()); \endcode * This corresponds to the right-multiply conventions (with right hand side frames). * * The returned angles are in the ranges [0:pi]x[-pi:pi]x[-pi:pi]. * * \sa class AngleAxis */ template EIGEN_DEVICE_FUNC inline Matrix::Scalar,3,1> MatrixBase::eulerAngles(Index a0, Index a1, Index a2) const { EIGEN_USING_STD(atan2) EIGEN_USING_STD(sin) EIGEN_USING_STD(cos) /* Implemented from Graphics Gems IV */ EIGEN_STATIC_ASSERT_MATRIX_SPECIFIC_SIZE(Derived,3,3) Matrix res; typedef Matrix Vector2; const Index odd = ((a0+1)%3 == a1) ? 
0 : 1; const Index i = a0; const Index j = (a0 + 1 + odd)%3; const Index k = (a0 + 2 - odd)%3; if (a0==a2) { res[0] = atan2(coeff(j,i), coeff(k,i)); if((odd && res[0]Scalar(0))) { if(res[0] > Scalar(0)) { res[0] -= Scalar(EIGEN_PI); } else { res[0] += Scalar(EIGEN_PI); } Scalar s2 = Vector2(coeff(j,i), coeff(k,i)).norm(); res[1] = -atan2(s2, coeff(i,i)); } else { Scalar s2 = Vector2(coeff(j,i), coeff(k,i)).norm(); res[1] = atan2(s2, coeff(i,i)); } // With a=(0,1,0), we have i=0; j=1; k=2, and after computing the first two angles, // we can compute their respective rotation, and apply its inverse to M. Since the result must // be a rotation around x, we have: // // c2 s1.s2 c1.s2 1 0 0 // 0 c1 -s1 * M = 0 c3 s3 // -s2 s1.c2 c1.c2 0 -s3 c3 // // Thus: m11.c1 - m21.s1 = c3 & m12.c1 - m22.s1 = s3 Scalar s1 = sin(res[0]); Scalar c1 = cos(res[0]); res[2] = atan2(c1*coeff(j,k)-s1*coeff(k,k), c1*coeff(j,j) - s1 * coeff(k,j)); } else { res[0] = atan2(coeff(j,k), coeff(k,k)); Scalar c2 = Vector2(coeff(i,i), coeff(i,j)).norm(); if((odd && res[0]Scalar(0))) { if(res[0] > Scalar(0)) { res[0] -= Scalar(EIGEN_PI); } else { res[0] += Scalar(EIGEN_PI); } res[1] = atan2(-coeff(i,k), -c2); } else res[1] = atan2(-coeff(i,k), c2); Scalar s1 = sin(res[0]); Scalar c1 = cos(res[0]); res[2] = atan2(s1*coeff(k,i)-c1*coeff(j,i), c1*coeff(j,j) - s1 * coeff(k,j)); } if (!odd) res = -res; return res; } } // end namespace Eigen #endif // EIGEN_EULERANGLES_H piqp-octave/src/eigen/Eigen/src/Geometry/Transform.h0000644000175100001770000017075214107270226022303 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // Copyright (C) 2009 Benoit Jacob // Copyright (C) 2010 Hauke Heibel // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_TRANSFORM_H #define EIGEN_TRANSFORM_H namespace Eigen { namespace internal { template struct transform_traits { enum { Dim = Transform::Dim, HDim = Transform::HDim, Mode = Transform::Mode, IsProjective = (int(Mode)==int(Projective)) }; }; template< typename TransformType, typename MatrixType, int Case = transform_traits::IsProjective ? 0 : int(MatrixType::RowsAtCompileTime) == int(transform_traits::HDim) ? 1 : 2, int RhsCols = MatrixType::ColsAtCompileTime> struct transform_right_product_impl; template< typename Other, int Mode, int Options, int Dim, int HDim, int OtherRows=Other::RowsAtCompileTime, int OtherCols=Other::ColsAtCompileTime> struct transform_left_product_impl; template< typename Lhs, typename Rhs, bool AnyProjective = transform_traits::IsProjective || transform_traits::IsProjective> struct transform_transform_product_impl; template< typename Other, int Mode, int Options, int Dim, int HDim, int OtherRows=Other::RowsAtCompileTime, int OtherCols=Other::ColsAtCompileTime> struct transform_construct_from_matrix; template struct transform_take_affine_part; template struct traits > { typedef _Scalar Scalar; typedef Eigen::Index StorageIndex; typedef Dense StorageKind; enum { Dim1 = _Dim==Dynamic ? _Dim : _Dim + 1, RowsAtCompileTime = _Mode==Projective ? 
Dim1 : _Dim, ColsAtCompileTime = Dim1, MaxRowsAtCompileTime = RowsAtCompileTime, MaxColsAtCompileTime = ColsAtCompileTime, Flags = 0 }; }; template struct transform_make_affine; } // end namespace internal /** \geometry_module \ingroup Geometry_Module * * \class Transform * * \brief Represents an homogeneous transformation in a N dimensional space * * \tparam _Scalar the scalar type, i.e., the type of the coefficients * \tparam _Dim the dimension of the space * \tparam _Mode the type of the transformation. Can be: * - #Affine: the transformation is stored as a (Dim+1)^2 matrix, * where the last row is assumed to be [0 ... 0 1]. * - #AffineCompact: the transformation is stored as a (Dim)x(Dim+1) matrix. * - #Projective: the transformation is stored as a (Dim+1)^2 matrix * without any assumption. * - #Isometry: same as #Affine with the additional assumption that * the linear part represents a rotation. This assumption is exploited * to speed up some functions such as inverse() and rotation(). * \tparam _Options has the same meaning as in class Matrix. It allows to specify DontAlign and/or RowMajor. * These Options are passed directly to the underlying matrix type. * * The homography is internally represented and stored by a matrix which * is available through the matrix() method. To understand the behavior of * this class you have to think a Transform object as its internal * matrix representation. The chosen convention is right multiply: * * \code v' = T * v \endcode * * Therefore, an affine transformation matrix M is shaped like this: * * \f$ \left( \begin{array}{cc} * linear & translation\\ * 0 ... 0 & 1 * \end{array} \right) \f$ * * Note that for a projective transformation the last row can be anything, * and then the interpretation of different parts might be slightly different. * * However, unlike a plain matrix, the Transform class provides many features * simplifying both its assembly and usage. In particular, it can be composed * with any other transformations (Transform,Translation,RotationBase,DiagonalMatrix) * and can be directly used to transform implicit homogeneous vectors. All these * operations are handled via the operator*. For the composition of transformations, * its principle consists to first convert the right/left hand sides of the product * to a compatible (Dim+1)^2 matrix and then perform a pure matrix product. * Of course, internally, operator* tries to perform the minimal number of operations * according to the nature of each terms. Likewise, when applying the transform * to points, the latters are automatically promoted to homogeneous vectors * before doing the matrix product. The conventions to homogeneous representations * are performed as follow: * * \b Translation t (Dim)x(1): * \f$ \left( \begin{array}{cc} * I & t \\ * 0\,...\,0 & 1 * \end{array} \right) \f$ * * \b Rotation R (Dim)x(Dim): * \f$ \left( \begin{array}{cc} * R & 0\\ * 0\,...\,0 & 1 * \end{array} \right) \f$ * * \b Scaling \b DiagonalMatrix S (Dim)x(Dim): * \f$ \left( \begin{array}{cc} * S & 0\\ * 0\,...\,0 & 1 * \end{array} \right) \f$ * * \b Column \b point v (Dim)x(1): * \f$ \left( \begin{array}{c} * v\\ * 1 * \end{array} \right) \f$ * * \b Set \b of \b column \b points V1...Vn (Dim)x(n): * \f$ \left( \begin{array}{ccc} * v_1 & ... & v_n\\ * 1 & ... & 1 * \end{array} \right) \f$ * * The concatenation of a Transform object with any kind of other transformation * always returns a Transform object. 
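*
* For illustration only, a minimal sketch of these conventions in action (the names
* \c T, \c p and \c q below are placeholders, not part of the API):
* \code
* Affine3f T = Affine3f::Identity();
* T.translate(Vector3f(1,2,3));                      // compose a translation ...
* T.rotate(AngleAxisf(0.785f, Vector3f::UnitZ()));   // ... then a rotation, right-multiplied
* Vector3f p(1,0,0);
* Vector3f q = T * p;  // p is promoted to homogeneous coordinates, transformed, and converted back
* \endcode
*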
* * A little exception to the "as pure matrix product" rule is the case of the * transformation of non homogeneous vectors by an affine transformation. In * that case the last matrix row can be ignored, and the product returns non * homogeneous vectors. * * Since, for instance, a Dim x Dim matrix is interpreted as a linear transformation, * it is not possible to directly transform Dim vectors stored in a Dim x Dim matrix. * The solution is either to use a Dim x Dynamic matrix or explicitly request a * vector transformation by making the vector homogeneous: * \code * m' = T * m.colwise().homogeneous(); * \endcode * Note that there is zero overhead. * * Conversion methods from/to Qt's QMatrix and QTransform are available if the * preprocessor token EIGEN_QT_SUPPORT is defined. * * This class can be extended with the help of the plugin mechanism described on the page * \ref TopicCustomizing_Plugins by defining the preprocessor symbol \c EIGEN_TRANSFORM_PLUGIN. * * \sa class Matrix, class Quaternion */ template class Transform { public: EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF_VECTORIZABLE_FIXED_SIZE(_Scalar,_Dim==Dynamic ? Dynamic : (_Dim+1)*(_Dim+1)) enum { Mode = _Mode, Options = _Options, Dim = _Dim, ///< space dimension in which the transformation holds HDim = _Dim+1, ///< size of a respective homogeneous vector Rows = int(Mode)==(AffineCompact) ? Dim : HDim }; /** the scalar type of the coefficients */ typedef _Scalar Scalar; typedef Eigen::Index StorageIndex; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 /** type of the matrix used to represent the transformation */ typedef typename internal::make_proper_matrix_type::type MatrixType; /** constified MatrixType */ typedef const MatrixType ConstMatrixType; /** type of the matrix used to represent the linear part of the transformation */ typedef Matrix LinearMatrixType; /** type of read/write reference to the linear part of the transformation */ typedef Block LinearPart; /** type of read reference to the linear part of the transformation */ typedef const Block ConstLinearPart; /** type of read/write reference to the affine part of the transformation */ typedef typename internal::conditional >::type AffinePart; /** type of read reference to the affine part of the transformation */ typedef typename internal::conditional >::type ConstAffinePart; /** type of a vector */ typedef Matrix VectorType; /** type of a read/write reference to the translation part of the rotation */ typedef Block::Flags & RowMajorBit)> TranslationPart; /** type of a read reference to the translation part of the rotation */ typedef const Block::Flags & RowMajorBit)> ConstTranslationPart; /** corresponding translation type */ typedef Translation TranslationType; // this intermediate enum is needed to avoid an ICE with gcc 3.4 and 4.0 enum { TransformTimeDiagonalMode = ((Mode==int(Isometry))?Affine:int(Mode)) }; /** The return type of the product between a diagonal matrix and a transform */ typedef Transform TransformTimeDiagonalReturnType; protected: MatrixType m_matrix; public: /** Default constructor without initialization of the meaningful coefficients. * If Mode==Affine or Mode==Isometry, then the last row is set to [0 ... 0 1] */ EIGEN_DEVICE_FUNC inline Transform() { check_template_params(); internal::transform_make_affine<(int(Mode)==Affine || int(Mode)==Isometry) ? 
Affine : AffineCompact>::run(m_matrix); } EIGEN_DEVICE_FUNC inline explicit Transform(const TranslationType& t) { check_template_params(); *this = t; } EIGEN_DEVICE_FUNC inline explicit Transform(const UniformScaling& s) { check_template_params(); *this = s; } template EIGEN_DEVICE_FUNC inline explicit Transform(const RotationBase& r) { check_template_params(); *this = r; } typedef internal::transform_take_affine_part take_affine_part; /** Constructs and initializes a transformation from a Dim^2 or a (Dim+1)^2 matrix. */ template EIGEN_DEVICE_FUNC inline explicit Transform(const EigenBase& other) { EIGEN_STATIC_ASSERT((internal::is_same::value), YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY); check_template_params(); internal::transform_construct_from_matrix::run(this, other.derived()); } /** Set \c *this from a Dim^2 or (Dim+1)^2 matrix. */ template EIGEN_DEVICE_FUNC inline Transform& operator=(const EigenBase& other) { EIGEN_STATIC_ASSERT((internal::is_same::value), YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY); internal::transform_construct_from_matrix::run(this, other.derived()); return *this; } template EIGEN_DEVICE_FUNC inline Transform(const Transform& other) { check_template_params(); // only the options change, we can directly copy the matrices m_matrix = other.matrix(); } template EIGEN_DEVICE_FUNC inline Transform(const Transform& other) { check_template_params(); // prevent conversions as: // Affine | AffineCompact | Isometry = Projective EIGEN_STATIC_ASSERT(EIGEN_IMPLIES(OtherMode==int(Projective), Mode==int(Projective)), YOU_PERFORMED_AN_INVALID_TRANSFORMATION_CONVERSION) // prevent conversions as: // Isometry = Affine | AffineCompact EIGEN_STATIC_ASSERT(EIGEN_IMPLIES(OtherMode==int(Affine)||OtherMode==int(AffineCompact), Mode!=int(Isometry)), YOU_PERFORMED_AN_INVALID_TRANSFORMATION_CONVERSION) enum { ModeIsAffineCompact = Mode == int(AffineCompact), OtherModeIsAffineCompact = OtherMode == int(AffineCompact) }; if(EIGEN_CONST_CONDITIONAL(ModeIsAffineCompact == OtherModeIsAffineCompact)) { // We need the block expression because the code is compiled for all // combinations of transformations and will trigger a compile time error // if one tries to assign the matrices directly m_matrix.template block(0,0) = other.matrix().template block(0,0); makeAffine(); } else if(EIGEN_CONST_CONDITIONAL(OtherModeIsAffineCompact)) { typedef typename Transform::MatrixType OtherMatrixType; internal::transform_construct_from_matrix::run(this, other.matrix()); } else { // here we know that Mode == AffineCompact and OtherMode != AffineCompact. // if OtherMode were Projective, the static assert above would already have caught it. 
// So the only possibility is that OtherMode == Affine linear() = other.linear(); translation() = other.translation(); } } template EIGEN_DEVICE_FUNC Transform(const ReturnByValue& other) { check_template_params(); other.evalTo(*this); } template EIGEN_DEVICE_FUNC Transform& operator=(const ReturnByValue& other) { other.evalTo(*this); return *this; } #ifdef EIGEN_QT_SUPPORT inline Transform(const QMatrix& other); inline Transform& operator=(const QMatrix& other); inline QMatrix toQMatrix(void) const; inline Transform(const QTransform& other); inline Transform& operator=(const QTransform& other); inline QTransform toQTransform(void) const; #endif EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR Index rows() const EIGEN_NOEXCEPT { return int(Mode)==int(Projective) ? m_matrix.cols() : (m_matrix.cols()-1); } EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR Index cols() const EIGEN_NOEXCEPT { return m_matrix.cols(); } /** shortcut for m_matrix(row,col); * \sa MatrixBase::operator(Index,Index) const */ EIGEN_DEVICE_FUNC inline Scalar operator() (Index row, Index col) const { return m_matrix(row,col); } /** shortcut for m_matrix(row,col); * \sa MatrixBase::operator(Index,Index) */ EIGEN_DEVICE_FUNC inline Scalar& operator() (Index row, Index col) { return m_matrix(row,col); } /** \returns a read-only expression of the transformation matrix */ EIGEN_DEVICE_FUNC inline const MatrixType& matrix() const { return m_matrix; } /** \returns a writable expression of the transformation matrix */ EIGEN_DEVICE_FUNC inline MatrixType& matrix() { return m_matrix; } /** \returns a read-only expression of the linear part of the transformation */ EIGEN_DEVICE_FUNC inline ConstLinearPart linear() const { return ConstLinearPart(m_matrix,0,0); } /** \returns a writable expression of the linear part of the transformation */ EIGEN_DEVICE_FUNC inline LinearPart linear() { return LinearPart(m_matrix,0,0); } /** \returns a read-only expression of the Dim x HDim affine part of the transformation */ EIGEN_DEVICE_FUNC inline ConstAffinePart affine() const { return take_affine_part::run(m_matrix); } /** \returns a writable expression of the Dim x HDim affine part of the transformation */ EIGEN_DEVICE_FUNC inline AffinePart affine() { return take_affine_part::run(m_matrix); } /** \returns a read-only expression of the translation vector of the transformation */ EIGEN_DEVICE_FUNC inline ConstTranslationPart translation() const { return ConstTranslationPart(m_matrix,0,Dim); } /** \returns a writable expression of the translation vector of the transformation */ EIGEN_DEVICE_FUNC inline TranslationPart translation() { return TranslationPart(m_matrix,0,Dim); } /** \returns an expression of the product between the transform \c *this and a matrix expression \a other. * * The right-hand-side \a other can be either: * \li an homogeneous vector of size Dim+1, * \li a set of homogeneous vectors of size Dim+1 x N, * \li a transformation matrix of size Dim+1 x Dim+1. * * Moreover, if \c *this represents an affine transformation (i.e., Mode!=Projective), then \a other can also be: * \li a point of size Dim (computes: \code this->linear() * other + this->translation()\endcode), * \li a set of N points as a Dim x N matrix (computes: \code (this->linear() * other).colwise() + this->translation()\endcode), * * In all cases, the return type is a matrix or vector of same sizes as the right-hand-side \a other. * * If you want to interpret \a other as a linear or affine transformation, then first convert it to a Transform<> type, * or do your own cooking. 
* * Finally, if you want to apply Affine transformations to vectors, then explicitly apply the linear part only: * \code * Affine3f A; * Vector3f v1, v2; * v2 = A.linear() * v1; * \endcode * */ // note: this function is defined here because some compilers cannot find the respective declaration template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const typename internal::transform_right_product_impl::ResultType operator * (const EigenBase &other) const { return internal::transform_right_product_impl::run(*this,other.derived()); } /** \returns the product expression of a transformation matrix \a a times a transform \a b * * The left hand side \a a can be either: * \li a linear transformation matrix of size Dim x Dim, * \li an affine transformation matrix of size Dim x Dim+1, * \li a general transformation matrix of size Dim+1 x Dim+1. */ template friend EIGEN_DEVICE_FUNC inline const typename internal::transform_left_product_impl::ResultType operator * (const EigenBase &a, const Transform &b) { return internal::transform_left_product_impl::run(a.derived(),b); } /** \returns The product expression of a transform \a a times a diagonal matrix \a b * * The rhs diagonal matrix is interpreted as an affine scaling transformation. The * product results in a Transform of the same type (mode) as the lhs only if the lhs * mode is not an isometry. Otherwise, the returned transform is an affinity. */ template EIGEN_DEVICE_FUNC inline const TransformTimeDiagonalReturnType operator * (const DiagonalBase &b) const { TransformTimeDiagonalReturnType res(*this); res.linearExt() *= b; return res; } /** \returns The product expression of a diagonal matrix \a a times a transform \a b * * The lhs diagonal matrix is interpreted as an affine scaling transformation. The * product results in a Transform of the same type (mode) as the rhs only if the rhs * mode is not an isometry. Otherwise, the returned transform is an affinity.
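*
* As a small, purely illustrative sketch (the names \c T, \c S, \c TS and \c ST are placeholders):
* \code
* Affine3f T = Affine3f::Identity();
* DiagonalMatrix<float,3> S(1.0f, 2.0f, 3.0f);   // anisotropic scaling
* Affine3f TS = T * S;                           // scaling applied on the right
* Affine3f ST = S * T;                           // scaling applied on the left
* \endcode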
*/ template EIGEN_DEVICE_FUNC friend inline TransformTimeDiagonalReturnType operator * (const DiagonalBase &a, const Transform &b) { TransformTimeDiagonalReturnType res; res.linear().noalias() = a*b.linear(); res.translation().noalias() = a*b.translation(); if (EIGEN_CONST_CONDITIONAL(Mode!=int(AffineCompact))) res.matrix().row(Dim) = b.matrix().row(Dim); return res; } template EIGEN_DEVICE_FUNC inline Transform& operator*=(const EigenBase& other) { return *this = *this * other; } /** Concatenates two transformations */ EIGEN_DEVICE_FUNC inline const Transform operator * (const Transform& other) const { return internal::transform_transform_product_impl::run(*this,other); } #if EIGEN_COMP_ICC private: // this intermediate structure permits to workaround a bug in ICC 11: // error: template instantiation resulted in unexpected function type of "Eigen::Transform // (const Eigen::Transform &) const" // (the meaning of a name may have changed since the template declaration -- the type of the template is: // "Eigen::internal::transform_transform_product_impl, // Eigen::Transform, >::ResultType (const Eigen::Transform &) const") // template struct icc_11_workaround { typedef internal::transform_transform_product_impl > ProductType; typedef typename ProductType::ResultType ResultType; }; public: /** Concatenates two different transformations */ template inline typename icc_11_workaround::ResultType operator * (const Transform& other) const { typedef typename icc_11_workaround::ProductType ProductType; return ProductType::run(*this,other); } #else /** Concatenates two different transformations */ template EIGEN_DEVICE_FUNC inline typename internal::transform_transform_product_impl >::ResultType operator * (const Transform& other) const { return internal::transform_transform_product_impl >::run(*this,other); } #endif /** \sa MatrixBase::setIdentity() */ EIGEN_DEVICE_FUNC void setIdentity() { m_matrix.setIdentity(); } /** * \brief Returns an identity transformation. * \todo In the future this function should be returning a Transform expression. 
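*
* For example (a minimal sketch; \c T and \c U are placeholder names), both of the
* following yield the identity transform:
* \code
* Affine3f T;  T.setIdentity();
* Affine3f U = Affine3f::Identity();
* \endcode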
*/ EIGEN_DEVICE_FUNC static const Transform Identity() { return Transform(MatrixType::Identity()); } template EIGEN_DEVICE_FUNC inline Transform& scale(const MatrixBase &other); template EIGEN_DEVICE_FUNC inline Transform& prescale(const MatrixBase &other); EIGEN_DEVICE_FUNC inline Transform& scale(const Scalar& s); EIGEN_DEVICE_FUNC inline Transform& prescale(const Scalar& s); template EIGEN_DEVICE_FUNC inline Transform& translate(const MatrixBase &other); template EIGEN_DEVICE_FUNC inline Transform& pretranslate(const MatrixBase &other); template EIGEN_DEVICE_FUNC inline Transform& rotate(const RotationType& rotation); template EIGEN_DEVICE_FUNC inline Transform& prerotate(const RotationType& rotation); EIGEN_DEVICE_FUNC Transform& shear(const Scalar& sx, const Scalar& sy); EIGEN_DEVICE_FUNC Transform& preshear(const Scalar& sx, const Scalar& sy); EIGEN_DEVICE_FUNC inline Transform& operator=(const TranslationType& t); EIGEN_DEVICE_FUNC inline Transform& operator*=(const TranslationType& t) { return translate(t.vector()); } EIGEN_DEVICE_FUNC inline Transform operator*(const TranslationType& t) const; EIGEN_DEVICE_FUNC inline Transform& operator=(const UniformScaling& t); EIGEN_DEVICE_FUNC inline Transform& operator*=(const UniformScaling& s) { return scale(s.factor()); } EIGEN_DEVICE_FUNC inline TransformTimeDiagonalReturnType operator*(const UniformScaling& s) const { TransformTimeDiagonalReturnType res = *this; res.scale(s.factor()); return res; } EIGEN_DEVICE_FUNC inline Transform& operator*=(const DiagonalMatrix& s) { linearExt() *= s; return *this; } template EIGEN_DEVICE_FUNC inline Transform& operator=(const RotationBase& r); template EIGEN_DEVICE_FUNC inline Transform& operator*=(const RotationBase& r) { return rotate(r.toRotationMatrix()); } template EIGEN_DEVICE_FUNC inline Transform operator*(const RotationBase& r) const; typedef typename internal::conditional::type RotationReturnType; EIGEN_DEVICE_FUNC RotationReturnType rotation() const; template EIGEN_DEVICE_FUNC void computeRotationScaling(RotationMatrixType *rotation, ScalingMatrixType *scaling) const; template EIGEN_DEVICE_FUNC void computeScalingRotation(ScalingMatrixType *scaling, RotationMatrixType *rotation) const; template EIGEN_DEVICE_FUNC Transform& fromPositionOrientationScale(const MatrixBase &position, const OrientationType& orientation, const MatrixBase &scale); EIGEN_DEVICE_FUNC inline Transform inverse(TransformTraits traits = (TransformTraits)Mode) const; /** \returns a const pointer to the column major internal matrix */ EIGEN_DEVICE_FUNC const Scalar* data() const { return m_matrix.data(); } /** \returns a non-const pointer to the column major internal matrix */ EIGEN_DEVICE_FUNC Scalar* data() { return m_matrix.data(); } /** \returns \c *this with scalar type casted to \a NewScalarType * * Note that if \a NewScalarType is equal to the current scalar type of \c *this * then this function smartly returns a const reference to \c *this. */ template EIGEN_DEVICE_FUNC inline typename internal::cast_return_type >::type cast() const { return typename internal::cast_return_type >::type(*this); } /** Copy constructor with scalar type conversion */ template EIGEN_DEVICE_FUNC inline explicit Transform(const Transform& other) { check_template_params(); m_matrix = other.matrix().template cast(); } /** \returns \c true if \c *this is approximately equal to \a other, within the precision * determined by \a prec. 
* * \sa MatrixBase::isApprox() */ EIGEN_DEVICE_FUNC bool isApprox(const Transform& other, const typename NumTraits::Real& prec = NumTraits::dummy_precision()) const { return m_matrix.isApprox(other.m_matrix, prec); } /** Sets the last row to [0 ... 0 1] */ EIGEN_DEVICE_FUNC void makeAffine() { internal::transform_make_affine::run(m_matrix); } /** \internal * \returns the Dim x Dim linear part if the transformation is affine, * and the HDim x Dim part for projective transformations. */ EIGEN_DEVICE_FUNC inline Block linearExt() { return m_matrix.template block(0,0); } /** \internal * \returns the Dim x Dim linear part if the transformation is affine, * and the HDim x Dim part for projective transformations. */ EIGEN_DEVICE_FUNC inline const Block linearExt() const { return m_matrix.template block(0,0); } /** \internal * \returns the translation part if the transformation is affine, * and the last column for projective transformations. */ EIGEN_DEVICE_FUNC inline Block translationExt() { return m_matrix.template block(0,Dim); } /** \internal * \returns the translation part if the transformation is affine, * and the last column for projective transformations. */ EIGEN_DEVICE_FUNC inline const Block translationExt() const { return m_matrix.template block(0,Dim); } #ifdef EIGEN_TRANSFORM_PLUGIN #include EIGEN_TRANSFORM_PLUGIN #endif protected: #ifndef EIGEN_PARSED_BY_DOXYGEN EIGEN_DEVICE_FUNC static EIGEN_STRONG_INLINE void check_template_params() { EIGEN_STATIC_ASSERT((Options & (DontAlign|RowMajor)) == Options, INVALID_MATRIX_TEMPLATE_PARAMETERS) } #endif }; /** \ingroup Geometry_Module */ typedef Transform Isometry2f; /** \ingroup Geometry_Module */ typedef Transform Isometry3f; /** \ingroup Geometry_Module */ typedef Transform Isometry2d; /** \ingroup Geometry_Module */ typedef Transform Isometry3d; /** \ingroup Geometry_Module */ typedef Transform Affine2f; /** \ingroup Geometry_Module */ typedef Transform Affine3f; /** \ingroup Geometry_Module */ typedef Transform Affine2d; /** \ingroup Geometry_Module */ typedef Transform Affine3d; /** \ingroup Geometry_Module */ typedef Transform AffineCompact2f; /** \ingroup Geometry_Module */ typedef Transform AffineCompact3f; /** \ingroup Geometry_Module */ typedef Transform AffineCompact2d; /** \ingroup Geometry_Module */ typedef Transform AffineCompact3d; /** \ingroup Geometry_Module */ typedef Transform Projective2f; /** \ingroup Geometry_Module */ typedef Transform Projective3f; /** \ingroup Geometry_Module */ typedef Transform Projective2d; /** \ingroup Geometry_Module */ typedef Transform Projective3d; /************************** *** Optional QT support *** **************************/ #ifdef EIGEN_QT_SUPPORT /** Initializes \c *this from a QMatrix assuming the dimension is 2. * * This function is available only if the token EIGEN_QT_SUPPORT is defined. */ template Transform::Transform(const QMatrix& other) { check_template_params(); *this = other; } /** Set \c *this from a QMatrix assuming the dimension is 2. * * This function is available only if the token EIGEN_QT_SUPPORT is defined. */ template Transform& Transform::operator=(const QMatrix& other) { EIGEN_STATIC_ASSERT(Dim==2, YOU_MADE_A_PROGRAMMING_MISTAKE) if (EIGEN_CONST_CONDITIONAL(Mode == int(AffineCompact))) m_matrix << other.m11(), other.m21(), other.dx(), other.m12(), other.m22(), other.dy(); else m_matrix << other.m11(), other.m21(), other.dx(), other.m12(), other.m22(), other.dy(), 0, 0, 1; return *this; } /** \returns a QMatrix from \c *this assuming the dimension is 2. 
* * \warning this conversion might loss data if \c *this is not affine * * This function is available only if the token EIGEN_QT_SUPPORT is defined. */ template QMatrix Transform::toQMatrix(void) const { check_template_params(); EIGEN_STATIC_ASSERT(Dim==2, YOU_MADE_A_PROGRAMMING_MISTAKE) return QMatrix(m_matrix.coeff(0,0), m_matrix.coeff(1,0), m_matrix.coeff(0,1), m_matrix.coeff(1,1), m_matrix.coeff(0,2), m_matrix.coeff(1,2)); } /** Initializes \c *this from a QTransform assuming the dimension is 2. * * This function is available only if the token EIGEN_QT_SUPPORT is defined. */ template Transform::Transform(const QTransform& other) { check_template_params(); *this = other; } /** Set \c *this from a QTransform assuming the dimension is 2. * * This function is available only if the token EIGEN_QT_SUPPORT is defined. */ template Transform& Transform::operator=(const QTransform& other) { check_template_params(); EIGEN_STATIC_ASSERT(Dim==2, YOU_MADE_A_PROGRAMMING_MISTAKE) if (EIGEN_CONST_CONDITIONAL(Mode == int(AffineCompact))) m_matrix << other.m11(), other.m21(), other.dx(), other.m12(), other.m22(), other.dy(); else m_matrix << other.m11(), other.m21(), other.dx(), other.m12(), other.m22(), other.dy(), other.m13(), other.m23(), other.m33(); return *this; } /** \returns a QTransform from \c *this assuming the dimension is 2. * * This function is available only if the token EIGEN_QT_SUPPORT is defined. */ template QTransform Transform::toQTransform(void) const { EIGEN_STATIC_ASSERT(Dim==2, YOU_MADE_A_PROGRAMMING_MISTAKE) if (EIGEN_CONST_CONDITIONAL(Mode == int(AffineCompact))) return QTransform(m_matrix.coeff(0,0), m_matrix.coeff(1,0), m_matrix.coeff(0,1), m_matrix.coeff(1,1), m_matrix.coeff(0,2), m_matrix.coeff(1,2)); else return QTransform(m_matrix.coeff(0,0), m_matrix.coeff(1,0), m_matrix.coeff(2,0), m_matrix.coeff(0,1), m_matrix.coeff(1,1), m_matrix.coeff(2,1), m_matrix.coeff(0,2), m_matrix.coeff(1,2), m_matrix.coeff(2,2)); } #endif /********************* *** Procedural API *** *********************/ /** Applies on the right the non uniform scale transformation represented * by the vector \a other to \c *this and returns a reference to \c *this. * \sa prescale() */ template template EIGEN_DEVICE_FUNC Transform& Transform::scale(const MatrixBase &other) { EIGEN_STATIC_ASSERT_VECTOR_SPECIFIC_SIZE(OtherDerived,int(Dim)) EIGEN_STATIC_ASSERT(Mode!=int(Isometry), THIS_METHOD_IS_ONLY_FOR_SPECIFIC_TRANSFORMATIONS) linearExt().noalias() = (linearExt() * other.asDiagonal()); return *this; } /** Applies on the right a uniform scale of a factor \a c to \c *this * and returns a reference to \c *this. * \sa prescale(Scalar) */ template EIGEN_DEVICE_FUNC inline Transform& Transform::scale(const Scalar& s) { EIGEN_STATIC_ASSERT(Mode!=int(Isometry), THIS_METHOD_IS_ONLY_FOR_SPECIFIC_TRANSFORMATIONS) linearExt() *= s; return *this; } /** Applies on the left the non uniform scale transformation represented * by the vector \a other to \c *this and returns a reference to \c *this. * \sa scale() */ template template EIGEN_DEVICE_FUNC Transform& Transform::prescale(const MatrixBase &other) { EIGEN_STATIC_ASSERT_VECTOR_SPECIFIC_SIZE(OtherDerived,int(Dim)) EIGEN_STATIC_ASSERT(Mode!=int(Isometry), THIS_METHOD_IS_ONLY_FOR_SPECIFIC_TRANSFORMATIONS) affine().noalias() = (other.asDiagonal() * affine()); return *this; } /** Applies on the left a uniform scale of a factor \a c to \c *this * and returns a reference to \c *this. 
* \sa scale(Scalar) */ template EIGEN_DEVICE_FUNC inline Transform& Transform::prescale(const Scalar& s) { EIGEN_STATIC_ASSERT(Mode!=int(Isometry), THIS_METHOD_IS_ONLY_FOR_SPECIFIC_TRANSFORMATIONS) m_matrix.template topRows() *= s; return *this; } /** Applies on the right the translation matrix represented by the vector \a other * to \c *this and returns a reference to \c *this. * \sa pretranslate() */ template template EIGEN_DEVICE_FUNC Transform& Transform::translate(const MatrixBase &other) { EIGEN_STATIC_ASSERT_VECTOR_SPECIFIC_SIZE(OtherDerived,int(Dim)) translationExt() += linearExt() * other; return *this; } /** Applies on the left the translation matrix represented by the vector \a other * to \c *this and returns a reference to \c *this. * \sa translate() */ template template EIGEN_DEVICE_FUNC Transform& Transform::pretranslate(const MatrixBase &other) { EIGEN_STATIC_ASSERT_VECTOR_SPECIFIC_SIZE(OtherDerived,int(Dim)) if(EIGEN_CONST_CONDITIONAL(int(Mode)==int(Projective))) affine() += other * m_matrix.row(Dim); else translation() += other; return *this; } /** Applies on the right the rotation represented by the rotation \a rotation * to \c *this and returns a reference to \c *this. * * The template parameter \a RotationType is the type of the rotation which * must be known by internal::toRotationMatrix<>. * * Natively supported types includes: * - any scalar (2D), * - a Dim x Dim matrix expression, * - a Quaternion (3D), * - a AngleAxis (3D) * * This mechanism is easily extendable to support user types such as Euler angles, * or a pair of Quaternion for 4D rotations. * * \sa rotate(Scalar), class Quaternion, class AngleAxis, prerotate(RotationType) */ template template EIGEN_DEVICE_FUNC Transform& Transform::rotate(const RotationType& rotation) { linearExt() *= internal::toRotationMatrix(rotation); return *this; } /** Applies on the left the rotation represented by the rotation \a rotation * to \c *this and returns a reference to \c *this. * * See rotate() for further details. * * \sa rotate() */ template template EIGEN_DEVICE_FUNC Transform& Transform::prerotate(const RotationType& rotation) { m_matrix.template block(0,0) = internal::toRotationMatrix(rotation) * m_matrix.template block(0,0); return *this; } /** Applies on the right the shear transformation represented * by the vector \a other to \c *this and returns a reference to \c *this. * \warning 2D only. * \sa preshear() */ template EIGEN_DEVICE_FUNC Transform& Transform::shear(const Scalar& sx, const Scalar& sy) { EIGEN_STATIC_ASSERT(int(Dim)==2, YOU_MADE_A_PROGRAMMING_MISTAKE) EIGEN_STATIC_ASSERT(Mode!=int(Isometry), THIS_METHOD_IS_ONLY_FOR_SPECIFIC_TRANSFORMATIONS) VectorType tmp = linear().col(0)*sy + linear().col(1); linear() << linear().col(0) + linear().col(1)*sx, tmp; return *this; } /** Applies on the left the shear transformation represented * by the vector \a other to \c *this and returns a reference to \c *this. * \warning 2D only. 
* \sa shear() */ template EIGEN_DEVICE_FUNC Transform& Transform::preshear(const Scalar& sx, const Scalar& sy) { EIGEN_STATIC_ASSERT(int(Dim)==2, YOU_MADE_A_PROGRAMMING_MISTAKE) EIGEN_STATIC_ASSERT(Mode!=int(Isometry), THIS_METHOD_IS_ONLY_FOR_SPECIFIC_TRANSFORMATIONS) m_matrix.template block(0,0) = LinearMatrixType(1, sx, sy, 1) * m_matrix.template block(0,0); return *this; } /****************************************************** *** Scaling, Translation and Rotation compatibility *** ******************************************************/ template EIGEN_DEVICE_FUNC inline Transform& Transform::operator=(const TranslationType& t) { linear().setIdentity(); translation() = t.vector(); makeAffine(); return *this; } template EIGEN_DEVICE_FUNC inline Transform Transform::operator*(const TranslationType& t) const { Transform res = *this; res.translate(t.vector()); return res; } template EIGEN_DEVICE_FUNC inline Transform& Transform::operator=(const UniformScaling& s) { m_matrix.setZero(); linear().diagonal().fill(s.factor()); makeAffine(); return *this; } template template EIGEN_DEVICE_FUNC inline Transform& Transform::operator=(const RotationBase& r) { linear() = internal::toRotationMatrix(r); translation().setZero(); makeAffine(); return *this; } template template EIGEN_DEVICE_FUNC inline Transform Transform::operator*(const RotationBase& r) const { Transform res = *this; res.rotate(r.derived()); return res; } /************************ *** Special functions *** ************************/ namespace internal { template struct transform_rotation_impl { template EIGEN_DEVICE_FUNC static inline const typename TransformType::LinearMatrixType run(const TransformType& t) { typedef typename TransformType::LinearMatrixType LinearMatrixType; LinearMatrixType result; t.computeRotationScaling(&result, (LinearMatrixType*)0); return result; } }; template<> struct transform_rotation_impl { template EIGEN_DEVICE_FUNC static inline typename TransformType::ConstLinearPart run(const TransformType& t) { return t.linear(); } }; } /** \returns the rotation part of the transformation * * If Mode==Isometry, then this method is an alias for linear(), * otherwise it calls computeRotationScaling() to extract the rotation * through a SVD decomposition. * * \svd_module * * \sa computeRotationScaling(), computeScalingRotation(), class SVD */ template EIGEN_DEVICE_FUNC typename Transform::RotationReturnType Transform::rotation() const { return internal::transform_rotation_impl::run(*this); } /** decomposes the linear part of the transformation as a product rotation x scaling, the scaling being * not necessarily positive. * * If either pointer is zero, the corresponding computation is skipped. * * * * \svd_module * * \sa computeScalingRotation(), rotation(), class SVD */ template template EIGEN_DEVICE_FUNC void Transform::computeRotationScaling(RotationMatrixType *rotation, ScalingMatrixType *scaling) const { // Note that JacobiSVD is faster than BDCSVD for small matrices. JacobiSVD svd(linear(), ComputeFullU | ComputeFullV); Scalar x = (svd.matrixU() * svd.matrixV().adjoint()).determinant() < Scalar(0) ? 
Scalar(-1) : Scalar(1); // so x has absolute value 1 VectorType sv(svd.singularValues()); sv.coeffRef(Dim-1) *= x; if(scaling) *scaling = svd.matrixV() * sv.asDiagonal() * svd.matrixV().adjoint(); if(rotation) { LinearMatrixType m(svd.matrixU()); m.col(Dim-1) *= x; *rotation = m * svd.matrixV().adjoint(); } } /** decomposes the linear part of the transformation as a product scaling x rotation, the scaling being * not necessarily positive. * * If either pointer is zero, the corresponding computation is skipped. * * * * \svd_module * * \sa computeRotationScaling(), rotation(), class SVD */ template template EIGEN_DEVICE_FUNC void Transform::computeScalingRotation(ScalingMatrixType *scaling, RotationMatrixType *rotation) const { // Note that JacobiSVD is faster than BDCSVD for small matrices. JacobiSVD svd(linear(), ComputeFullU | ComputeFullV); Scalar x = (svd.matrixU() * svd.matrixV().adjoint()).determinant() < Scalar(0) ? Scalar(-1) : Scalar(1); // so x has absolute value 1 VectorType sv(svd.singularValues()); sv.coeffRef(Dim-1) *= x; if(scaling) *scaling = svd.matrixU() * sv.asDiagonal() * svd.matrixU().adjoint(); if(rotation) { LinearMatrixType m(svd.matrixU()); m.col(Dim-1) *= x; *rotation = m * svd.matrixV().adjoint(); } } /** Convenient method to set \c *this from a position, orientation and scale * of a 3D object. */ template template EIGEN_DEVICE_FUNC Transform& Transform::fromPositionOrientationScale(const MatrixBase &position, const OrientationType& orientation, const MatrixBase &scale) { linear() = internal::toRotationMatrix(orientation); linear() *= scale.asDiagonal(); translation() = position; makeAffine(); return *this; } namespace internal { template struct transform_make_affine { template EIGEN_DEVICE_FUNC static void run(MatrixType &mat) { static const int Dim = MatrixType::ColsAtCompileTime-1; mat.template block<1,Dim>(Dim,0).setZero(); mat.coeffRef(Dim,Dim) = typename MatrixType::Scalar(1); } }; template<> struct transform_make_affine { template EIGEN_DEVICE_FUNC static void run(MatrixType &) { } }; // selector needed to avoid taking the inverse of a 3x4 matrix template struct projective_transform_inverse { EIGEN_DEVICE_FUNC static inline void run(const TransformType&, TransformType&) {} }; template struct projective_transform_inverse { EIGEN_DEVICE_FUNC static inline void run(const TransformType& m, TransformType& res) { res.matrix() = m.matrix().inverse(); } }; } // end namespace internal /** * * \returns the inverse transformation according to some given knowledge * on \c *this. * * \param hint allows to optimize the inversion process when the transformation * is known to be not a general transformation (optional). The possible values are: * - #Projective if the transformation is not necessarily affine, i.e., if the * last row is not guaranteed to be [0 ... 0 1] * - #Affine if the last row can be assumed to be [0 ... 0 1] * - #Isometry if the transformation is only a concatenations of translations * and rotations. * The default is the template class parameter \c Mode. * * \warning unless \a traits is always set to NoShear or NoScaling, this function * requires the generic inverse method of MatrixBase defined in the LU module. If * you forget to include this module, then you will get hard to debug linking errors. 
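*
* A minimal sketch of the intended usage (the transform below is a placeholder built from
* a translation and a rotation, so the \c Isometry hint is valid for it):
* \code
* Affine3f T = Translation3f(1,2,3) * AngleAxisf(0.5f, Vector3f::UnitY());
* Affine3f Tinv = T.inverse(Eigen::Isometry);   // cheap inverse: R^T and -R^T*t
* \endcode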
* * \sa MatrixBase::inverse() */ template EIGEN_DEVICE_FUNC Transform Transform::inverse(TransformTraits hint) const { Transform res; if (hint == Projective) { internal::projective_transform_inverse::run(*this, res); } else { if (hint == Isometry) { res.matrix().template topLeftCorner() = linear().transpose(); } else if(hint&Affine) { res.matrix().template topLeftCorner() = linear().inverse(); } else { eigen_assert(false && "Invalid transform traits in Transform::Inverse"); } // translation and remaining parts res.matrix().template topRightCorner() = - res.matrix().template topLeftCorner() * translation(); res.makeAffine(); // we do need this, because in the beginning res is uninitialized } return res; } namespace internal { /***************************************************** *** Specializations of take affine part *** *****************************************************/ template struct transform_take_affine_part { typedef typename TransformType::MatrixType MatrixType; typedef typename TransformType::AffinePart AffinePart; typedef typename TransformType::ConstAffinePart ConstAffinePart; static inline AffinePart run(MatrixType& m) { return m.template block(0,0); } static inline ConstAffinePart run(const MatrixType& m) { return m.template block(0,0); } }; template struct transform_take_affine_part > { typedef typename Transform::MatrixType MatrixType; static inline MatrixType& run(MatrixType& m) { return m; } static inline const MatrixType& run(const MatrixType& m) { return m; } }; /***************************************************** *** Specializations of construct from matrix *** *****************************************************/ template struct transform_construct_from_matrix { static inline void run(Transform *transform, const Other& other) { transform->linear() = other; transform->translation().setZero(); transform->makeAffine(); } }; template struct transform_construct_from_matrix { static inline void run(Transform *transform, const Other& other) { transform->affine() = other; transform->makeAffine(); } }; template struct transform_construct_from_matrix { static inline void run(Transform *transform, const Other& other) { transform->matrix() = other; } }; template struct transform_construct_from_matrix { static inline void run(Transform *transform, const Other& other) { transform->matrix() = other.template block(0,0); } }; /********************************************************** *** Specializations of operator* with rhs EigenBase *** **********************************************************/ template struct transform_product_result { enum { Mode = (LhsMode == (int)Projective || RhsMode == (int)Projective ) ? Projective : (LhsMode == (int)Affine || RhsMode == (int)Affine ) ? Affine : (LhsMode == (int)AffineCompact || RhsMode == (int)AffineCompact ) ? AffineCompact : (LhsMode == (int)Isometry || RhsMode == (int)Isometry ) ? 
Isometry : Projective }; }; template< typename TransformType, typename MatrixType, int RhsCols> struct transform_right_product_impl< TransformType, MatrixType, 0, RhsCols> { typedef typename MatrixType::PlainObject ResultType; static EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE ResultType run(const TransformType& T, const MatrixType& other) { return T.matrix() * other; } }; template< typename TransformType, typename MatrixType, int RhsCols> struct transform_right_product_impl< TransformType, MatrixType, 1, RhsCols> { enum { Dim = TransformType::Dim, HDim = TransformType::HDim, OtherRows = MatrixType::RowsAtCompileTime, OtherCols = MatrixType::ColsAtCompileTime }; typedef typename MatrixType::PlainObject ResultType; static EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE ResultType run(const TransformType& T, const MatrixType& other) { EIGEN_STATIC_ASSERT(OtherRows==HDim, YOU_MIXED_MATRICES_OF_DIFFERENT_SIZES); typedef Block TopLeftLhs; ResultType res(other.rows(),other.cols()); TopLeftLhs(res, 0, 0, Dim, other.cols()).noalias() = T.affine() * other; res.row(OtherRows-1) = other.row(OtherRows-1); return res; } }; template< typename TransformType, typename MatrixType, int RhsCols> struct transform_right_product_impl< TransformType, MatrixType, 2, RhsCols> { enum { Dim = TransformType::Dim, HDim = TransformType::HDim, OtherRows = MatrixType::RowsAtCompileTime, OtherCols = MatrixType::ColsAtCompileTime }; typedef typename MatrixType::PlainObject ResultType; static EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE ResultType run(const TransformType& T, const MatrixType& other) { EIGEN_STATIC_ASSERT(OtherRows==Dim, YOU_MIXED_MATRICES_OF_DIFFERENT_SIZES); typedef Block TopLeftLhs; ResultType res(Replicate(T.translation(),1,other.cols())); TopLeftLhs(res, 0, 0, Dim, other.cols()).noalias() += T.linear() * other; return res; } }; template< typename TransformType, typename MatrixType > struct transform_right_product_impl< TransformType, MatrixType, 2, 1> // rhs is a vector of size Dim { typedef typename TransformType::MatrixType TransformMatrix; enum { Dim = TransformType::Dim, HDim = TransformType::HDim, OtherRows = MatrixType::RowsAtCompileTime, WorkingRows = EIGEN_PLAIN_ENUM_MIN(TransformMatrix::RowsAtCompileTime,HDim) }; typedef typename MatrixType::PlainObject ResultType; static EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE ResultType run(const TransformType& T, const MatrixType& other) { EIGEN_STATIC_ASSERT(OtherRows==Dim, YOU_MIXED_MATRICES_OF_DIFFERENT_SIZES); Matrix rhs; rhs.template head() = other; rhs[Dim] = typename ResultType::Scalar(1); Matrix res(T.matrix() * rhs); return res.template head(); } }; /********************************************************** *** Specializations of operator* with lhs EigenBase *** **********************************************************/ // generic HDim x HDim matrix * T => Projective template struct transform_left_product_impl { typedef Transform TransformType; typedef typename TransformType::MatrixType MatrixType; typedef Transform ResultType; static ResultType run(const Other& other,const TransformType& tr) { return ResultType(other * tr.matrix()); } }; // generic HDim x HDim matrix * AffineCompact => Projective template struct transform_left_product_impl { typedef Transform TransformType; typedef typename TransformType::MatrixType MatrixType; typedef Transform ResultType; static ResultType run(const Other& other,const TransformType& tr) { ResultType res; res.matrix().noalias() = other.template block(0,0) * tr.matrix(); res.matrix().col(Dim) += other.col(Dim); return res; } }; // affine 
matrix * T template struct transform_left_product_impl { typedef Transform TransformType; typedef typename TransformType::MatrixType MatrixType; typedef TransformType ResultType; static ResultType run(const Other& other,const TransformType& tr) { ResultType res; res.affine().noalias() = other * tr.matrix(); res.matrix().row(Dim) = tr.matrix().row(Dim); return res; } }; // affine matrix * AffineCompact template struct transform_left_product_impl { typedef Transform TransformType; typedef typename TransformType::MatrixType MatrixType; typedef TransformType ResultType; static ResultType run(const Other& other,const TransformType& tr) { ResultType res; res.matrix().noalias() = other.template block(0,0) * tr.matrix(); res.translation() += other.col(Dim); return res; } }; // linear matrix * T template struct transform_left_product_impl { typedef Transform TransformType; typedef typename TransformType::MatrixType MatrixType; typedef TransformType ResultType; static ResultType run(const Other& other, const TransformType& tr) { TransformType res; if(Mode!=int(AffineCompact)) res.matrix().row(Dim) = tr.matrix().row(Dim); res.matrix().template topRows().noalias() = other * tr.matrix().template topRows(); return res; } }; /********************************************************** *** Specializations of operator* with another Transform *** **********************************************************/ template struct transform_transform_product_impl,Transform,false > { enum { ResultMode = transform_product_result::Mode }; typedef Transform Lhs; typedef Transform Rhs; typedef Transform ResultType; static ResultType run(const Lhs& lhs, const Rhs& rhs) { ResultType res; res.linear() = lhs.linear() * rhs.linear(); res.translation() = lhs.linear() * rhs.translation() + lhs.translation(); res.makeAffine(); return res; } }; template struct transform_transform_product_impl,Transform,true > { typedef Transform Lhs; typedef Transform Rhs; typedef Transform ResultType; static ResultType run(const Lhs& lhs, const Rhs& rhs) { return ResultType( lhs.matrix() * rhs.matrix() ); } }; template struct transform_transform_product_impl,Transform,true > { typedef Transform Lhs; typedef Transform Rhs; typedef Transform ResultType; static ResultType run(const Lhs& lhs, const Rhs& rhs) { ResultType res; res.matrix().template topRows() = lhs.matrix() * rhs.matrix(); res.matrix().row(Dim) = rhs.matrix().row(Dim); return res; } }; template struct transform_transform_product_impl,Transform,true > { typedef Transform Lhs; typedef Transform Rhs; typedef Transform ResultType; static ResultType run(const Lhs& lhs, const Rhs& rhs) { ResultType res(lhs.matrix().template leftCols() * rhs.matrix()); res.matrix().col(Dim) += lhs.matrix().col(Dim); return res; } }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_TRANSFORM_H piqp-octave/src/eigen/Eigen/src/IterativeLinearSolvers/0000755000175100001770000000000014107270226023015 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/IterativeLinearSolvers/IncompleteCholesky.h0000644000175100001770000003527414107270226027002 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // Copyright (C) 2015 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
#ifndef EIGEN_INCOMPLETE_CHOLESKY_H #define EIGEN_INCOMPLETE_CHOLESKY_H #include #include namespace Eigen { /** * \brief Modified Incomplete Cholesky with dual threshold * * References : C-J. Lin and J. J. Moré, Incomplete Cholesky Factorizations with * Limited memory, SIAM J. Sci. Comput. 21(1), pp. 24-45, 1999 * * \tparam Scalar the scalar type of the input matrices * \tparam _UpLo The triangular part that will be used for the computations. It can be Lower * or Upper. Default is Lower. * \tparam _OrderingType The ordering method to use, either AMDOrdering<> or NaturalOrdering<>. Default is AMDOrdering, * unless EIGEN_MPL2_ONLY is defined, in which case the default is NaturalOrdering. * * \implsparsesolverconcept * * It performs the following incomplete factorization: \f$ S P A P' S \approx L L' \f$ * where L is a lower triangular factor, S is a diagonal scaling matrix, and P is a * fill-in reducing permutation as computed by the ordering method. * * \b Shifting \b strategy: Let \f$ B = S P A P' S \f$ be the scaled matrix on which the factorization is carried out, * and \f$ \beta \f$ be the minimum value of the diagonal. If \f$ \beta > 0 \f$ then the factorization is directly performed * on the matrix B. Otherwise, the factorization is performed on the shifted matrix \f$ B + (\sigma+|\beta|) I \f$ where * \f$ \sigma \f$ is the initial shift value as returned and set by the setInitialShift() method. The default value is \f$ \sigma = 10^{-3} \f$. * If the factorization fails, then the shift is doubled until it succeeds or a maximum of ten attempts is reached. If it still fails, as reported by * the info() method, then you can either increase the initial shift, or better use another preconditioning technique. * */ template > class IncompleteCholesky : public SparseSolverBase > { protected: typedef SparseSolverBase > Base; using Base::m_isInitialized; public: typedef typename NumTraits::Real RealScalar; typedef _OrderingType OrderingType; typedef typename OrderingType::PermutationType PermutationType; typedef typename PermutationType::StorageIndex StorageIndex; typedef SparseMatrix FactorType; typedef Matrix VectorSx; typedef Matrix VectorRx; typedef Matrix VectorIx; typedef std::vector > VectorList; enum { UpLo = _UpLo }; enum { ColsAtCompileTime = Dynamic, MaxColsAtCompileTime = Dynamic }; public: /** Default constructor leaving the object in a partly non-initialized stage. * * You must call compute() or the pair analyzePattern()/factorize() to make it valid. * * \sa IncompleteCholesky(const MatrixType&) */ IncompleteCholesky() : m_initialShift(1e-3),m_analysisIsOk(false),m_factorizationIsOk(false) {} /** Constructor computing the incomplete factorization for the given matrix \a matrix. */ template IncompleteCholesky(const MatrixType& matrix) : m_initialShift(1e-3),m_analysisIsOk(false),m_factorizationIsOk(false) { compute(matrix); } /** \returns number of rows of the factored matrix */ EIGEN_CONSTEXPR Index rows() const EIGEN_NOEXCEPT { return m_L.rows(); } /** \returns number of columns of the factored matrix */ EIGEN_CONSTEXPR Index cols() const EIGEN_NOEXCEPT { return m_L.cols(); } /** \brief Reports whether previous computation was successful. * * It triggers an assertion if \c *this has not been initialized through the respective constructor, * or a call to compute() or analyzePattern(). * * \returns \c Success if computation was successful, * \c NumericalIssue if the matrix appears to be negative.
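*
* A minimal usage sketch (the sparse matrix \c A is a placeholder for user data, and the
* shift value below is purely illustrative):
* \code
* SparseMatrix<double> A;                // fill A with a sparse self-adjoint matrix ...
* IncompleteCholesky<double> ichol(A);   // ... then factorize it
* if(ichol.info()!=Success)
* {
*   ichol.setInitialShift(1e-1);         // larger initial shift, then factor again
*   ichol.compute(A);
* }
* \endcode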
*/ ComputationInfo info() const { eigen_assert(m_isInitialized && "IncompleteCholesky is not initialized."); return m_info; } /** \brief Set the initial shift parameter \f$ \sigma \f$. */ void setInitialShift(RealScalar shift) { m_initialShift = shift; } /** \brief Computes the fill reducing permutation vector using the sparsity pattern of \a mat */ template void analyzePattern(const MatrixType& mat) { OrderingType ord; PermutationType pinv; ord(mat.template selfadjointView(), pinv); if(pinv.size()>0) m_perm = pinv.inverse(); else m_perm.resize(0); m_L.resize(mat.rows(), mat.cols()); m_analysisIsOk = true; m_isInitialized = true; m_info = Success; } /** \brief Performs the numerical factorization of the input matrix \a mat * * The method analyzePattern() or compute() must have been called beforehand * with a matrix having the same pattern. * * \sa compute(), analyzePattern() */ template void factorize(const MatrixType& mat); /** Computes or re-computes the incomplete Cholesky factorization of the input matrix \a mat * * It is a shortcut for a sequential call to the analyzePattern() and factorize() methods. * * \sa analyzePattern(), factorize() */ template void compute(const MatrixType& mat) { analyzePattern(mat); factorize(mat); } // internal template void _solve_impl(const Rhs& b, Dest& x) const { eigen_assert(m_factorizationIsOk && "factorize() should be called first"); if (m_perm.rows() == b.rows()) x = m_perm * b; else x = b; x = m_scale.asDiagonal() * x; x = m_L.template triangularView().solve(x); x = m_L.adjoint().template triangularView().solve(x); x = m_scale.asDiagonal() * x; if (m_perm.rows() == b.rows()) x = m_perm.inverse() * x; } /** \returns the sparse lower triangular factor L */ const FactorType& matrixL() const { eigen_assert("m_factorizationIsOk"); return m_L; } /** \returns a vector representing the scaling factor S */ const VectorRx& scalingS() const { eigen_assert("m_factorizationIsOk"); return m_scale; } /** \returns the fill-in reducing permutation P (can be empty for a natural ordering) */ const PermutationType& permutationP() const { eigen_assert("m_analysisIsOk"); return m_perm; } protected: FactorType m_L; // The lower part stored in CSC VectorRx m_scale; // The vector for scaling the matrix RealScalar m_initialShift; // The initial shift parameter bool m_analysisIsOk; bool m_factorizationIsOk; ComputationInfo m_info; PermutationType m_perm; private: inline void updateList(Ref colPtr, Ref rowIdx, Ref vals, const Index& col, const Index& jk, VectorIx& firstElt, VectorList& listCol); }; // Based on the following paper: // C-J. Lin and J. J. Moré, Incomplete Cholesky Factorizations with // Limited memory, SIAM J. Sci. Comput. 21(1), pp. 24-45, 1999 // http://ftp.mcs.anl.gov/pub/tech_reports/reports/P682.pdf template template void IncompleteCholesky::factorize(const _MatrixType& mat) { using std::sqrt; eigen_assert(m_analysisIsOk && "analyzePattern() should be called first"); // Dropping strategy : Keep only the p largest elements per column, where p is the number of elements in the column of the original matrix. 
Other strategies will be added // Apply the fill-reducing permutation computed in analyzePattern() if (m_perm.rows() == mat.rows() ) // To detect the null permutation { // The temporary is needed to make sure that the diagonal entry is properly sorted FactorType tmp(mat.rows(), mat.cols()); tmp = mat.template selfadjointView<_UpLo>().twistedBy(m_perm); m_L.template selfadjointView() = tmp.template selfadjointView(); } else { m_L.template selfadjointView() = mat.template selfadjointView<_UpLo>(); } Index n = m_L.cols(); Index nnz = m_L.nonZeros(); Map vals(m_L.valuePtr(), nnz); //values Map rowIdx(m_L.innerIndexPtr(), nnz); //Row indices Map colPtr( m_L.outerIndexPtr(), n+1); // Pointer to the beginning of each row VectorIx firstElt(n-1); // for each j, points to the next entry in vals that will be used in the factorization VectorList listCol(n); // listCol(j) is a linked list of columns to update column j VectorSx col_vals(n); // Store a nonzero values in each column VectorIx col_irow(n); // Row indices of nonzero elements in each column VectorIx col_pattern(n); col_pattern.fill(-1); StorageIndex col_nnz; // Computes the scaling factors m_scale.resize(n); m_scale.setZero(); for (Index j = 0; j < n; j++) for (Index k = colPtr[j]; k < colPtr[j+1]; k++) { m_scale(j) += numext::abs2(vals(k)); if(rowIdx[k]!=j) m_scale(rowIdx[k]) += numext::abs2(vals(k)); } m_scale = m_scale.cwiseSqrt().cwiseSqrt(); for (Index j = 0; j < n; ++j) if(m_scale(j)>(std::numeric_limits::min)()) m_scale(j) = RealScalar(1)/m_scale(j); else m_scale(j) = 1; // TODO disable scaling if not needed, i.e., if it is roughly uniform? (this will make solve() faster) // Scale and compute the shift for the matrix RealScalar mindiag = NumTraits::highest(); for (Index j = 0; j < n; j++) { for (Index k = colPtr[j]; k < colPtr[j+1]; k++) vals[k] *= (m_scale(j)*m_scale(rowIdx[k])); eigen_internal_assert(rowIdx[colPtr[j]]==j && "IncompleteCholesky: only the lower triangular part must be stored"); mindiag = numext::mini(numext::real(vals[colPtr[j]]), mindiag); } FactorType L_save = m_L; RealScalar shift = 0; if(mindiag <= RealScalar(0.)) shift = m_initialShift - mindiag; m_info = NumericalIssue; // Try to perform the incomplete factorization using the current shift int iter = 0; do { // Apply the shift to the diagonal elements of the matrix for (Index j = 0; j < n; j++) vals[colPtr[j]] += shift; // jki version of the Cholesky factorization Index j=0; for (; j < n; ++j) { // Left-looking factorization of the j-th column // First, load the j-th column into col_vals Scalar diag = vals[colPtr[j]]; // It is assumed that only the lower part is stored col_nnz = 0; for (Index i = colPtr[j] + 1; i < colPtr[j+1]; i++) { StorageIndex l = rowIdx[i]; col_vals(col_nnz) = vals[i]; col_irow(col_nnz) = l; col_pattern(l) = col_nnz; col_nnz++; } { typename std::list::iterator k; // Browse all previous columns that will update column j for(k = listCol[j].begin(); k != listCol[j].end(); k++) { Index jk = firstElt(*k); // First element to use in the column eigen_internal_assert(rowIdx[jk]==j); Scalar v_j_jk = numext::conj(vals[jk]); jk += 1; for (Index i = jk; i < colPtr[*k+1]; i++) { StorageIndex l = rowIdx[i]; if(col_pattern[l]<0) { col_vals(col_nnz) = vals[i] * v_j_jk; col_irow[col_nnz] = l; col_pattern(l) = col_nnz; col_nnz++; } else col_vals(col_pattern[l]) -= vals[i] * v_j_jk; } updateList(colPtr,rowIdx,vals, *k, jk, firstElt, listCol); } } // Scale the current column if(numext::real(diag) <= 0) { if(++iter>=10) return; // increase shift shift = 
numext::maxi(m_initialShift,RealScalar(2)*shift); // restore m_L, col_pattern, and listCol vals = Map(L_save.valuePtr(), nnz); rowIdx = Map(L_save.innerIndexPtr(), nnz); colPtr = Map(L_save.outerIndexPtr(), n+1); col_pattern.fill(-1); for(Index i=0; i cvals = col_vals.head(col_nnz); Ref cirow = col_irow.head(col_nnz); internal::QuickSplit(cvals,cirow, p); // Insert the largest p elements in the matrix Index cpt = 0; for (Index i = colPtr[j]+1; i < colPtr[j+1]; i++) { vals[i] = col_vals(cpt); rowIdx[i] = col_irow(cpt); // restore col_pattern: col_pattern(col_irow(cpt)) = -1; cpt++; } // Get the first smallest row index and put it after the diagonal element Index jk = colPtr(j)+1; updateList(colPtr,rowIdx,vals,j,jk,firstElt,listCol); } if(j==n) { m_factorizationIsOk = true; m_info = Success; } } while(m_info!=Success); } template inline void IncompleteCholesky::updateList(Ref colPtr, Ref rowIdx, Ref vals, const Index& col, const Index& jk, VectorIx& firstElt, VectorList& listCol) { if (jk < colPtr(col+1) ) { Index p = colPtr(col+1) - jk; Index minpos; rowIdx.segment(jk,p).minCoeff(&minpos); minpos += jk; if (rowIdx(minpos) != rowIdx(jk)) { //Swap std::swap(rowIdx(jk),rowIdx(minpos)); std::swap(vals(jk),vals(minpos)); } firstElt(col) = internal::convert_index(jk); listCol[rowIdx(jk)].push_back(internal::convert_index(col)); } } } // end namespace Eigen #endif piqp-octave/src/eigen/Eigen/src/IterativeLinearSolvers/ConjugateGradient.h0000644000175100001770000002126714107270226026573 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2011-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_CONJUGATE_GRADIENT_H #define EIGEN_CONJUGATE_GRADIENT_H namespace Eigen { namespace internal { /** \internal Low-level conjugate gradient algorithm * \param mat The matrix A * \param rhs The right hand side vector b * \param x On input and initial solution, on output the computed solution. * \param precond A preconditioner being able to efficiently solve for an * approximation of Ax=b (regardless of b) * \param iters On input the max number of iteration, on output the number of performed iterations. * \param tol_error On input the tolerance error, on output an estimation of the relative error. 
*/ template EIGEN_DONT_INLINE void conjugate_gradient(const MatrixType& mat, const Rhs& rhs, Dest& x, const Preconditioner& precond, Index& iters, typename Dest::RealScalar& tol_error) { using std::sqrt; using std::abs; typedef typename Dest::RealScalar RealScalar; typedef typename Dest::Scalar Scalar; typedef Matrix VectorType; RealScalar tol = tol_error; Index maxIters = iters; Index n = mat.cols(); VectorType residual = rhs - mat * x; //initial residual RealScalar rhsNorm2 = rhs.squaredNorm(); if(rhsNorm2 == 0) { x.setZero(); iters = 0; tol_error = 0; return; } const RealScalar considerAsZero = (std::numeric_limits::min)(); RealScalar threshold = numext::maxi(RealScalar(tol*tol*rhsNorm2),considerAsZero); RealScalar residualNorm2 = residual.squaredNorm(); if (residualNorm2 < threshold) { iters = 0; tol_error = sqrt(residualNorm2 / rhsNorm2); return; } VectorType p(n); p = precond.solve(residual); // initial search direction VectorType z(n), tmp(n); RealScalar absNew = numext::real(residual.dot(p)); // the square of the absolute value of r scaled by invM Index i = 0; while(i < maxIters) { tmp.noalias() = mat * p; // the bottleneck of the algorithm Scalar alpha = absNew / p.dot(tmp); // the amount we travel on dir x += alpha * p; // update solution residual -= alpha * tmp; // update residual residualNorm2 = residual.squaredNorm(); if(residualNorm2 < threshold) break; z = precond.solve(residual); // approximately solve for "A z = residual" RealScalar absOld = absNew; absNew = numext::real(residual.dot(z)); // update the absolute value of r RealScalar beta = absNew / absOld; // calculate the Gram-Schmidt value used to create the new search direction p = z + beta * p; // update search direction i++; } tol_error = sqrt(residualNorm2 / rhsNorm2); iters = i; } } template< typename _MatrixType, int _UpLo=Lower, typename _Preconditioner = DiagonalPreconditioner > class ConjugateGradient; namespace internal { template< typename _MatrixType, int _UpLo, typename _Preconditioner> struct traits > { typedef _MatrixType MatrixType; typedef _Preconditioner Preconditioner; }; } /** \ingroup IterativeLinearSolvers_Module * \brief A conjugate gradient solver for sparse (or dense) self-adjoint problems * * This class allows to solve for A.x = b linear problems using an iterative conjugate gradient algorithm. * The matrix A must be selfadjoint. The matrix A and the vectors x and b can be either dense or sparse. * * \tparam _MatrixType the type of the matrix A, can be a dense or a sparse matrix. * \tparam _UpLo the triangular part that will be used for the computations. It can be Lower, * \c Upper, or \c Lower|Upper in which the full matrix entries will be considered. * Default is \c Lower, best performance is \c Lower|Upper. * \tparam _Preconditioner the type of the preconditioner. Default is DiagonalPreconditioner * * \implsparsesolverconcept * * The maximal number of iterations and tolerance value can be controlled via the setMaxIterations() * and setTolerance() methods. The defaults are the size of the problem for the maximal number of iterations * and NumTraits::epsilon() for the tolerance. * * The tolerance corresponds to the relative residual error: |Ax-b|/|b| * * \b Performance: Even though the default value of \c _UpLo is \c Lower, significantly higher performance is * achieved when using a complete matrix and \b Lower|Upper as the \a _UpLo template parameter. Moreover, in this * case multi-threading can be exploited if the user code is compiled with OpenMP enabled. 
* See \ref TopicMultiThreading for details. * * This class can be used as the direct solver classes. Here is a typical usage example: \code int n = 10000; VectorXd x(n), b(n); SparseMatrix A(n,n); // fill A and b ConjugateGradient, Lower|Upper> cg; cg.compute(A); x = cg.solve(b); std::cout << "#iterations: " << cg.iterations() << std::endl; std::cout << "estimated error: " << cg.error() << std::endl; // update b, and solve again x = cg.solve(b); \endcode * * By default the iterations start with x=0 as an initial guess of the solution. * One can control the start using the solveWithGuess() method. * * ConjugateGradient can also be used in a matrix-free context, see the following \link MatrixfreeSolverExample example \endlink. * * \sa class LeastSquaresConjugateGradient, class SimplicialCholesky, DiagonalPreconditioner, IdentityPreconditioner */ template< typename _MatrixType, int _UpLo, typename _Preconditioner> class ConjugateGradient : public IterativeSolverBase > { typedef IterativeSolverBase Base; using Base::matrix; using Base::m_error; using Base::m_iterations; using Base::m_info; using Base::m_isInitialized; public: typedef _MatrixType MatrixType; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef _Preconditioner Preconditioner; enum { UpLo = _UpLo }; public: /** Default constructor. */ ConjugateGradient() : Base() {} /** Initialize the solver with matrix \a A for further \c Ax=b solving. * * This constructor is a shortcut for the default constructor followed * by a call to compute(). * * \warning this class stores a reference to the matrix A as well as some * precomputed values that depend on it. Therefore, if \a A is changed * this class becomes invalid. Call compute() to update it with the new * matrix A, or modify a copy of A. */ template explicit ConjugateGradient(const EigenBase& A) : Base(A.derived()) {} ~ConjugateGradient() {} /** \internal */ template void _solve_vector_with_guess_impl(const Rhs& b, Dest& x) const { typedef typename Base::MatrixWrapper MatrixWrapper; typedef typename Base::ActualMatrixType ActualMatrixType; enum { TransposeInput = (!MatrixWrapper::MatrixFree) && (UpLo==(Lower|Upper)) && (!MatrixType::IsRowMajor) && (!NumTraits::IsComplex) }; typedef typename internal::conditional, ActualMatrixType const&>::type RowMajorWrapper; EIGEN_STATIC_ASSERT(EIGEN_IMPLIES(MatrixWrapper::MatrixFree,UpLo==(Lower|Upper)),MATRIX_FREE_CONJUGATE_GRADIENT_IS_COMPATIBLE_WITH_UPPER_UNION_LOWER_MODE_ONLY); typedef typename internal::conditional::Type >::type SelfAdjointWrapper; m_iterations = Base::maxIterations(); m_error = Base::m_tolerance; RowMajorWrapper row_mat(matrix()); internal::conjugate_gradient(SelfAdjointWrapper(row_mat), b, x, Base::m_preconditioner, m_iterations, m_error); m_info = m_error <= Base::m_tolerance ? Success : NoConvergence; } protected: }; } // end namespace Eigen #endif // EIGEN_CONJUGATE_GRADIENT_H piqp-octave/src/eigen/Eigen/src/IterativeLinearSolvers/BiCGSTAB.h0000644000175100001770000001530214107270226024405 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2011-2014 Gael Guennebaud // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
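// Editor's note (illustrative sketch, not part of the original Eigen sources):
// the ConjugateGradient documentation above ends by pointing at solveWithGuess()
// for warm-started solves. A minimal usage sketch, assuming a filled symmetric
// positive-definite SparseMatrix<double> A and a matching VectorXd b:
//
//   Eigen::ConjugateGradient<Eigen::SparseMatrix<double>, Eigen::Lower|Eigen::Upper> cg;
//   cg.setMaxIterations(500);
//   cg.setTolerance(1e-8);
//   cg.compute(A);
//   Eigen::VectorXd x = cg.solve(b);   // first solve starts from x = 0
//   // ... update b ...
//   x = cg.solveWithGuess(b, x);       // reuse the previous solution as the initial guess
//
// Using Lower|Upper with a fully stored matrix follows the performance note in the
// class documentation above.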
#ifndef EIGEN_BICGSTAB_H #define EIGEN_BICGSTAB_H namespace Eigen { namespace internal { /** \internal Low-level bi conjugate gradient stabilized algorithm * \param mat The matrix A * \param rhs The right hand side vector b * \param x On input and initial solution, on output the computed solution. * \param precond A preconditioner being able to efficiently solve for an * approximation of Ax=b (regardless of b) * \param iters On input the max number of iteration, on output the number of performed iterations. * \param tol_error On input the tolerance error, on output an estimation of the relative error. * \return false in the case of numerical issue, for example a break down of BiCGSTAB. */ template bool bicgstab(const MatrixType& mat, const Rhs& rhs, Dest& x, const Preconditioner& precond, Index& iters, typename Dest::RealScalar& tol_error) { using std::sqrt; using std::abs; typedef typename Dest::RealScalar RealScalar; typedef typename Dest::Scalar Scalar; typedef Matrix VectorType; RealScalar tol = tol_error; Index maxIters = iters; Index n = mat.cols(); VectorType r = rhs - mat * x; VectorType r0 = r; RealScalar r0_sqnorm = r0.squaredNorm(); RealScalar rhs_sqnorm = rhs.squaredNorm(); if(rhs_sqnorm == 0) { x.setZero(); return true; } Scalar rho = 1; Scalar alpha = 1; Scalar w = 1; VectorType v = VectorType::Zero(n), p = VectorType::Zero(n); VectorType y(n), z(n); VectorType kt(n), ks(n); VectorType s(n), t(n); RealScalar tol2 = tol*tol*rhs_sqnorm; RealScalar eps2 = NumTraits::epsilon()*NumTraits::epsilon(); Index i = 0; Index restarts = 0; while ( r.squaredNorm() > tol2 && iRealScalar(0)) w = t.dot(s) / tmp; else w = Scalar(0); x += alpha * y + w * z; r = s - w * t; ++i; } tol_error = sqrt(r.squaredNorm()/rhs_sqnorm); iters = i; return true; } } template< typename _MatrixType, typename _Preconditioner = DiagonalPreconditioner > class BiCGSTAB; namespace internal { template< typename _MatrixType, typename _Preconditioner> struct traits > { typedef _MatrixType MatrixType; typedef _Preconditioner Preconditioner; }; } /** \ingroup IterativeLinearSolvers_Module * \brief A bi conjugate gradient stabilized solver for sparse square problems * * This class allows to solve for A.x = b sparse linear problems using a bi conjugate gradient * stabilized algorithm. The vectors x and b can be either dense or sparse. * * \tparam _MatrixType the type of the sparse matrix A, can be a dense or a sparse matrix. * \tparam _Preconditioner the type of the preconditioner. Default is DiagonalPreconditioner * * \implsparsesolverconcept * * The maximal number of iterations and tolerance value can be controlled via the setMaxIterations() * and setTolerance() methods. The defaults are the size of the problem for the maximal number of iterations * and NumTraits::epsilon() for the tolerance. * * The tolerance corresponds to the relative residual error: |Ax-b|/|b| * * \b Performance: when using sparse matrices, best performance is achied for a row-major sparse matrix format. * Moreover, in this case multi-threading can be exploited if the user code is compiled with OpenMP enabled. * See \ref TopicMultiThreading for details. * * This class can be used as the direct solver classes. Here is a typical usage example: * \include BiCGSTAB_simple.cpp * * By default the iterations start with x=0 as an initial guess of the solution. * One can control the start using the solveWithGuess() method. * * BiCGSTAB can also be used in a matrix-free context, see the following \link MatrixfreeSolverExample example \endlink. 
* * \sa class SimplicialCholesky, DiagonalPreconditioner, IdentityPreconditioner */ template< typename _MatrixType, typename _Preconditioner> class BiCGSTAB : public IterativeSolverBase > { typedef IterativeSolverBase Base; using Base::matrix; using Base::m_error; using Base::m_iterations; using Base::m_info; using Base::m_isInitialized; public: typedef _MatrixType MatrixType; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef _Preconditioner Preconditioner; public: /** Default constructor. */ BiCGSTAB() : Base() {} /** Initialize the solver with matrix \a A for further \c Ax=b solving. * * This constructor is a shortcut for the default constructor followed * by a call to compute(). * * \warning this class stores a reference to the matrix A as well as some * precomputed values that depend on it. Therefore, if \a A is changed * this class becomes invalid. Call compute() to update it with the new * matrix A, or modify a copy of A. */ template explicit BiCGSTAB(const EigenBase& A) : Base(A.derived()) {} ~BiCGSTAB() {} /** \internal */ template void _solve_vector_with_guess_impl(const Rhs& b, Dest& x) const { m_iterations = Base::maxIterations(); m_error = Base::m_tolerance; bool ret = internal::bicgstab(matrix(), b, x, Base::m_preconditioner, m_iterations, m_error); m_info = (!ret) ? NumericalIssue : m_error <= Base::m_tolerance ? Success : NoConvergence; } protected: }; } // end namespace Eigen #endif // EIGEN_BICGSTAB_H piqp-octave/src/eigen/Eigen/src/IterativeLinearSolvers/IncompleteLUT.h0000644000175100001770000003513414107270226025660 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // Copyright (C) 2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_INCOMPLETE_LUT_H #define EIGEN_INCOMPLETE_LUT_H namespace Eigen { namespace internal { /** \internal * Compute a quick-sort split of a vector * On output, the vector row is permuted such that its elements satisfy * abs(row(i)) >= abs(row(ncut)) if incut * \param row The vector of values * \param ind The array of index for the elements in @p row * \param ncut The number of largest elements to keep **/ template Index QuickSplit(VectorV &row, VectorI &ind, Index ncut) { typedef typename VectorV::RealScalar RealScalar; using std::swap; using std::abs; Index mid; Index n = row.size(); /* length of the vector */ Index first, last ; ncut--; /* to fit the zero-based indices */ first = 0; last = n-1; if (ncut < first || ncut > last ) return 0; do { mid = first; RealScalar abskey = abs(row(mid)); for (Index j = first + 1; j <= last; j++) { if ( abs(row(j)) > abskey) { ++mid; swap(row(mid), row(j)); swap(ind(mid), ind(j)); } } /* Interchange for the pivot element */ swap(row(mid), row(first)); swap(ind(mid), ind(first)); if (mid > ncut) last = mid - 1; else if (mid < ncut ) first = mid + 1; } while (mid != ncut ); return 0; /* mid is equal to ncut */ } }// end namespace internal /** \ingroup IterativeLinearSolvers_Module * \class IncompleteLUT * \brief Incomplete LU factorization with dual-threshold strategy * * \implsparsesolverconcept * * During the numerical factorization, two dropping rules are used : * 1) any element whose magnitude is less than some tolerance is dropped. 
* This tolerance is obtained by multiplying the input tolerance @p droptol * by the average magnitude of all the original elements in the current row. * 2) After the elimination of the row, only the @p fill largest elements in * the L part and the @p fill largest elements in the U part are kept * (in addition to the diagonal element ). Note that @p fill is computed from * the input parameter @p fillfactor which is used the ratio to control the fill_in * relatively to the initial number of nonzero elements. * * The two extreme cases are when @p droptol=0 (to keep all the @p fill*2 largest elements) * and when @p fill=n/2 with @p droptol being different to zero. * * References : Yousef Saad, ILUT: A dual threshold incomplete LU factorization, * Numerical Linear Algebra with Applications, 1(4), pp 387-402, 1994. * * NOTE : The following implementation is derived from the ILUT implementation * in the SPARSKIT package, Copyright (C) 2005, the Regents of the University of Minnesota * released under the terms of the GNU LGPL: * http://www-users.cs.umn.edu/~saad/software/SPARSKIT/README * However, Yousef Saad gave us permission to relicense his ILUT code to MPL2. * See the Eigen mailing list archive, thread: ILUT, date: July 8, 2012: * http://listengine.tuxfamily.org/lists.tuxfamily.org/eigen/2012/07/msg00064.html * alternatively, on GMANE: * http://comments.gmane.org/gmane.comp.lib.eigen/3302 */ template class IncompleteLUT : public SparseSolverBase > { protected: typedef SparseSolverBase Base; using Base::m_isInitialized; public: typedef _Scalar Scalar; typedef _StorageIndex StorageIndex; typedef typename NumTraits::Real RealScalar; typedef Matrix Vector; typedef Matrix VectorI; typedef SparseMatrix FactorType; enum { ColsAtCompileTime = Dynamic, MaxColsAtCompileTime = Dynamic }; public: IncompleteLUT() : m_droptol(NumTraits::dummy_precision()), m_fillfactor(10), m_analysisIsOk(false), m_factorizationIsOk(false) {} template explicit IncompleteLUT(const MatrixType& mat, const RealScalar& droptol=NumTraits::dummy_precision(), int fillfactor = 10) : m_droptol(droptol),m_fillfactor(fillfactor), m_analysisIsOk(false),m_factorizationIsOk(false) { eigen_assert(fillfactor != 0); compute(mat); } EIGEN_CONSTEXPR Index rows() const EIGEN_NOEXCEPT { return m_lu.rows(); } EIGEN_CONSTEXPR Index cols() const EIGEN_NOEXCEPT { return m_lu.cols(); } /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful, * \c NumericalIssue if the matrix.appears to be negative. 
*/ ComputationInfo info() const { eigen_assert(m_isInitialized && "IncompleteLUT is not initialized."); return m_info; } template void analyzePattern(const MatrixType& amat); template void factorize(const MatrixType& amat); /** * Compute an incomplete LU factorization with dual threshold on the matrix mat * No pivoting is done in this version * **/ template IncompleteLUT& compute(const MatrixType& amat) { analyzePattern(amat); factorize(amat); return *this; } void setDroptol(const RealScalar& droptol); void setFillfactor(int fillfactor); template void _solve_impl(const Rhs& b, Dest& x) const { x = m_Pinv * b; x = m_lu.template triangularView().solve(x); x = m_lu.template triangularView().solve(x); x = m_P * x; } protected: /** keeps off-diagonal entries; drops diagonal entries */ struct keep_diag { inline bool operator() (const Index& row, const Index& col, const Scalar&) const { return row!=col; } }; protected: FactorType m_lu; RealScalar m_droptol; int m_fillfactor; bool m_analysisIsOk; bool m_factorizationIsOk; ComputationInfo m_info; PermutationMatrix m_P; // Fill-reducing permutation PermutationMatrix m_Pinv; // Inverse permutation }; /** * Set control parameter droptol * \param droptol Drop any element whose magnitude is less than this tolerance **/ template void IncompleteLUT::setDroptol(const RealScalar& droptol) { this->m_droptol = droptol; } /** * Set control parameter fillfactor * \param fillfactor This is used to compute the number @p fill_in of largest elements to keep on each row. **/ template void IncompleteLUT::setFillfactor(int fillfactor) { this->m_fillfactor = fillfactor; } template template void IncompleteLUT::analyzePattern(const _MatrixType& amat) { // Compute the Fill-reducing permutation // Since ILUT does not perform any numerical pivoting, // it is highly preferable to keep the diagonal through symmetric permutations. // To this end, let's symmetrize the pattern and perform AMD on it. SparseMatrix mat1 = amat; SparseMatrix mat2 = amat.transpose(); // FIXME for a matrix with nearly symmetric pattern, mat2+mat1 is the appropriate choice. // on the other hand for a really non-symmetric pattern, mat2*mat1 should be preferred... 
SparseMatrix AtA = mat2 + mat1; AMDOrdering ordering; ordering(AtA,m_P); m_Pinv = m_P.inverse(); // cache the inverse permutation m_analysisIsOk = true; m_factorizationIsOk = false; m_isInitialized = true; } template template void IncompleteLUT::factorize(const _MatrixType& amat) { using std::sqrt; using std::swap; using std::abs; using internal::convert_index; eigen_assert((amat.rows() == amat.cols()) && "The factorization should be done on a square matrix"); Index n = amat.cols(); // Size of the matrix m_lu.resize(n,n); // Declare Working vectors and variables Vector u(n) ; // real values of the row -- maximum size is n -- VectorI ju(n); // column position of the values in u -- maximum size is n VectorI jr(n); // Indicate the position of the nonzero elements in the vector u -- A zero location is indicated by -1 // Apply the fill-reducing permutation eigen_assert(m_analysisIsOk && "You must first call analyzePattern()"); SparseMatrix mat; mat = amat.twistedBy(m_Pinv); // Initialization jr.fill(-1); ju.fill(0); u.fill(0); // number of largest elements to keep in each row: Index fill_in = (amat.nonZeros()*m_fillfactor)/n + 1; if (fill_in > n) fill_in = n; // number of largest nonzero elements to keep in the L and the U part of the current row: Index nnzL = fill_in/2; Index nnzU = nnzL; m_lu.reserve(n * (nnzL + nnzU + 1)); // global loop over the rows of the sparse matrix for (Index ii = 0; ii < n; ii++) { // 1 - copy the lower and the upper part of the row i of mat in the working vector u Index sizeu = 1; // number of nonzero elements in the upper part of the current row Index sizel = 0; // number of nonzero elements in the lower part of the current row ju(ii) = convert_index(ii); u(ii) = 0; jr(ii) = convert_index(ii); RealScalar rownorm = 0; typename FactorType::InnerIterator j_it(mat, ii); // Iterate through the current row ii for (; j_it; ++j_it) { Index k = j_it.index(); if (k < ii) { // copy the lower part ju(sizel) = convert_index(k); u(sizel) = j_it.value(); jr(k) = convert_index(sizel); ++sizel; } else if (k == ii) { u(ii) = j_it.value(); } else { // copy the upper part Index jpos = ii + sizeu; ju(jpos) = convert_index(k); u(jpos) = j_it.value(); jr(k) = convert_index(jpos); ++sizeu; } rownorm += numext::abs2(j_it.value()); } // 2 - detect possible zero row if(rownorm==0) { m_info = NumericalIssue; return; } // Take the 2-norm of the current row as a relative tolerance rownorm = sqrt(rownorm); // 3 - eliminate the previous nonzero rows Index jj = 0; Index len = 0; while (jj < sizel) { // In order to eliminate in the correct order, // we must select first the smallest column index among ju(jj:sizel) Index k; Index minrow = ju.segment(jj,sizel-jj).minCoeff(&k); // k is relative to the segment k += jj; if (minrow != ju(jj)) { // swap the two locations Index j = ju(jj); swap(ju(jj), ju(k)); jr(minrow) = convert_index(jj); jr(j) = convert_index(k); swap(u(jj), u(k)); } // Reset this location jr(minrow) = -1; // Start elimination typename FactorType::InnerIterator ki_it(m_lu, minrow); while (ki_it && ki_it.index() < minrow) ++ki_it; eigen_internal_assert(ki_it && ki_it.col()==minrow); Scalar fact = u(jj) / ki_it.value(); // drop too small elements if(abs(fact) <= m_droptol) { jj++; continue; } // linear combination of the current row ii and the row minrow ++ki_it; for (; ki_it; ++ki_it) { Scalar prod = fact * ki_it.value(); Index j = ki_it.index(); Index jpos = jr(j); if (jpos == -1) // fill-in element { Index newpos; if (j >= ii) // dealing with the upper part { newpos = ii + sizeu; 
sizeu++; eigen_internal_assert(sizeu<=n); } else // dealing with the lower part { newpos = sizel; sizel++; eigen_internal_assert(sizel<=ii); } ju(newpos) = convert_index(j); u(newpos) = -prod; jr(j) = convert_index(newpos); } else u(jpos) -= prod; } // store the pivot element u(len) = fact; ju(len) = convert_index(minrow); ++len; jj++; } // end of the elimination on the row ii // reset the upper part of the pointer jr to zero for(Index k = 0; k m_droptol * rownorm ) { ++len; u(ii + len) = u(ii + k); ju(ii + len) = ju(ii + k); } } sizeu = len + 1; // +1 to take into account the diagonal element len = (std::min)(sizeu, nnzU); typename Vector::SegmentReturnType uu(u.segment(ii+1, sizeu-1)); typename VectorI::SegmentReturnType juu(ju.segment(ii+1, sizeu-1)); internal::QuickSplit(uu, juu, len); // store the largest elements of the U part for(Index k = ii + 1; k < ii + len; k++) m_lu.insertBackByOuterInnerUnordered(ii,ju(k)) = u(k); } m_lu.finalize(); m_lu.makeCompressed(); m_factorizationIsOk = true; m_info = Success; } } // end namespace Eigen #endif // EIGEN_INCOMPLETE_LUT_H piqp-octave/src/eigen/Eigen/src/IterativeLinearSolvers/SolveWithGuess.h0000644000175100001770000001016414107270226026123 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SOLVEWITHGUESS_H #define EIGEN_SOLVEWITHGUESS_H namespace Eigen { template class SolveWithGuess; /** \class SolveWithGuess * \ingroup IterativeLinearSolvers_Module * * \brief Pseudo expression representing a solving operation * * \tparam Decomposition the type of the matrix or decomposion object * \tparam Rhstype the type of the right-hand side * * This class represents an expression of A.solve(B) * and most of the time this is the only way it is used. 
* */ namespace internal { template struct traits > : traits > {}; } template class SolveWithGuess : public internal::generic_xpr_base, MatrixXpr, typename internal::traits::StorageKind>::type { public: typedef typename internal::traits::Scalar Scalar; typedef typename internal::traits::PlainObject PlainObject; typedef typename internal::generic_xpr_base, MatrixXpr, typename internal::traits::StorageKind>::type Base; typedef typename internal::ref_selector::type Nested; SolveWithGuess(const Decomposition &dec, const RhsType &rhs, const GuessType &guess) : m_dec(dec), m_rhs(rhs), m_guess(guess) {} EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR Index rows() const EIGEN_NOEXCEPT { return m_dec.cols(); } EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR Index cols() const EIGEN_NOEXCEPT { return m_rhs.cols(); } EIGEN_DEVICE_FUNC const Decomposition& dec() const { return m_dec; } EIGEN_DEVICE_FUNC const RhsType& rhs() const { return m_rhs; } EIGEN_DEVICE_FUNC const GuessType& guess() const { return m_guess; } protected: const Decomposition &m_dec; const RhsType &m_rhs; const GuessType &m_guess; private: Scalar coeff(Index row, Index col) const; Scalar coeff(Index i) const; }; namespace internal { // Evaluator of SolveWithGuess -> eval into a temporary template struct evaluator > : public evaluator::PlainObject> { typedef SolveWithGuess SolveType; typedef typename SolveType::PlainObject PlainObject; typedef evaluator Base; evaluator(const SolveType& solve) : m_result(solve.rows(), solve.cols()) { ::new (static_cast(this)) Base(m_result); m_result = solve.guess(); solve.dec()._solve_with_guess_impl(solve.rhs(), m_result); } protected: PlainObject m_result; }; // Specialization for "dst = dec.solveWithGuess(rhs)" // NOTE we need to specialize it for Dense2Dense to avoid ambiguous specialization error and a Sparse2Sparse specialization must exist somewhere template struct Assignment, internal::assign_op, Dense2Dense> { typedef SolveWithGuess SrcXprType; static void run(DstXprType &dst, const SrcXprType &src, const internal::assign_op &) { Index dstRows = src.rows(); Index dstCols = src.cols(); if((dst.rows()!=dstRows) || (dst.cols()!=dstCols)) dst.resize(dstRows, dstCols); dst = src.guess(); src.dec()._solve_with_guess_impl(src.rhs(), dst/*, src.guess()*/); } }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_SOLVEWITHGUESS_H piqp-octave/src/eigen/Eigen/src/IterativeLinearSolvers/IterativeSolverBase.h0000644000175100001770000003210314107270226027107 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2011-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
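// Editor's note (illustrative sketch, not part of the original Eigen sources):
// IterativeSolverBase, defined below, exposes the analyzePattern()/factorize()
// split so that the structural analysis can be reused when only the numerical
// values of A change, plus iterations(), error() and info() for diagnostics.
// A minimal usage sketch, assuming a square SparseMatrix<double> A with a fixed
// sparsity pattern and a matching VectorXd b:
//
//   Eigen::BiCGSTAB<Eigen::SparseMatrix<double> > solver;
//   solver.analyzePattern(A);              // once per sparsity pattern
//   solver.factorize(A);                   // redo whenever the values of A change
//   Eigen::VectorXd x = solver.solve(b);
//   if (solver.info() != Eigen::Success) {
//     // NoConvergence or NumericalIssue: inspect solver.error() and solver.iterations()
//   }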
#ifndef EIGEN_ITERATIVE_SOLVER_BASE_H #define EIGEN_ITERATIVE_SOLVER_BASE_H namespace Eigen { namespace internal { template struct is_ref_compatible_impl { private: template struct any_conversion { template any_conversion(const volatile T&); template any_conversion(T&); }; struct yes {int a[1];}; struct no {int a[2];}; template static yes test(const Ref&, int); template static no test(any_conversion, ...); public: static MatrixType ms_from; enum { value = sizeof(test(ms_from, 0))==sizeof(yes) }; }; template struct is_ref_compatible { enum { value = is_ref_compatible_impl::type>::value }; }; template::value> class generic_matrix_wrapper; // We have an explicit matrix at hand, compatible with Ref<> template class generic_matrix_wrapper { public: typedef Ref ActualMatrixType; template struct ConstSelfAdjointViewReturnType { typedef typename ActualMatrixType::template ConstSelfAdjointViewReturnType::Type Type; }; enum { MatrixFree = false }; generic_matrix_wrapper() : m_dummy(0,0), m_matrix(m_dummy) {} template generic_matrix_wrapper(const InputType &mat) : m_matrix(mat) {} const ActualMatrixType& matrix() const { return m_matrix; } template void grab(const EigenBase &mat) { m_matrix.~Ref(); ::new (&m_matrix) Ref(mat.derived()); } void grab(const Ref &mat) { if(&(mat.derived()) != &m_matrix) { m_matrix.~Ref(); ::new (&m_matrix) Ref(mat); } } protected: MatrixType m_dummy; // used to default initialize the Ref<> object ActualMatrixType m_matrix; }; // MatrixType is not compatible with Ref<> -> matrix-free wrapper template class generic_matrix_wrapper { public: typedef MatrixType ActualMatrixType; template struct ConstSelfAdjointViewReturnType { typedef ActualMatrixType Type; }; enum { MatrixFree = true }; generic_matrix_wrapper() : mp_matrix(0) {} generic_matrix_wrapper(const MatrixType &mat) : mp_matrix(&mat) {} const ActualMatrixType& matrix() const { return *mp_matrix; } void grab(const MatrixType &mat) { mp_matrix = &mat; } protected: const ActualMatrixType *mp_matrix; }; } /** \ingroup IterativeLinearSolvers_Module * \brief Base class for linear iterative solvers * * \sa class SimplicialCholesky, DiagonalPreconditioner, IdentityPreconditioner */ template< typename Derived> class IterativeSolverBase : public SparseSolverBase { protected: typedef SparseSolverBase Base; using Base::m_isInitialized; public: typedef typename internal::traits::MatrixType MatrixType; typedef typename internal::traits::Preconditioner Preconditioner; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::StorageIndex StorageIndex; typedef typename MatrixType::RealScalar RealScalar; enum { ColsAtCompileTime = MatrixType::ColsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; public: using Base::derived; /** Default constructor. */ IterativeSolverBase() { init(); } /** Initialize the solver with matrix \a A for further \c Ax=b solving. * * This constructor is a shortcut for the default constructor followed * by a call to compute(). * * \warning this class stores a reference to the matrix A as well as some * precomputed values that depend on it. Therefore, if \a A is changed * this class becomes invalid. Call compute() to update it with the new * matrix A, or modify a copy of A. */ template explicit IterativeSolverBase(const EigenBase& A) : m_matrixWrapper(A.derived()) { init(); compute(matrix()); } ~IterativeSolverBase() {} /** Initializes the iterative solver for the sparsity pattern of the matrix \a A for further solving \c Ax=b problems. 
* * Currently, this function mostly calls analyzePattern on the preconditioner. In the future * we might, for instance, implement column reordering for faster matrix vector products. */ template Derived& analyzePattern(const EigenBase& A) { grab(A.derived()); m_preconditioner.analyzePattern(matrix()); m_isInitialized = true; m_analysisIsOk = true; m_info = m_preconditioner.info(); return derived(); } /** Initializes the iterative solver with the numerical values of the matrix \a A for further solving \c Ax=b problems. * * Currently, this function mostly calls factorize on the preconditioner. * * \warning this class stores a reference to the matrix A as well as some * precomputed values that depend on it. Therefore, if \a A is changed * this class becomes invalid. Call compute() to update it with the new * matrix A, or modify a copy of A. */ template Derived& factorize(const EigenBase& A) { eigen_assert(m_analysisIsOk && "You must first call analyzePattern()"); grab(A.derived()); m_preconditioner.factorize(matrix()); m_factorizationIsOk = true; m_info = m_preconditioner.info(); return derived(); } /** Initializes the iterative solver with the matrix \a A for further solving \c Ax=b problems. * * Currently, this function mostly initializes/computes the preconditioner. In the future * we might, for instance, implement column reordering for faster matrix vector products. * * \warning this class stores a reference to the matrix A as well as some * precomputed values that depend on it. Therefore, if \a A is changed * this class becomes invalid. Call compute() to update it with the new * matrix A, or modify a copy of A. */ template Derived& compute(const EigenBase& A) { grab(A.derived()); m_preconditioner.compute(matrix()); m_isInitialized = true; m_analysisIsOk = true; m_factorizationIsOk = true; m_info = m_preconditioner.info(); return derived(); } /** \internal */ EIGEN_CONSTEXPR Index rows() const EIGEN_NOEXCEPT { return matrix().rows(); } /** \internal */ EIGEN_CONSTEXPR Index cols() const EIGEN_NOEXCEPT { return matrix().cols(); } /** \returns the tolerance threshold used by the stopping criteria. * \sa setTolerance() */ RealScalar tolerance() const { return m_tolerance; } /** Sets the tolerance threshold used by the stopping criteria. * * This value is used as an upper bound to the relative residual error: |Ax-b|/|b|. * The default value is the machine precision given by NumTraits::epsilon() */ Derived& setTolerance(const RealScalar& tolerance) { m_tolerance = tolerance; return derived(); } /** \returns a read-write reference to the preconditioner for custom configuration. */ Preconditioner& preconditioner() { return m_preconditioner; } /** \returns a read-only reference to the preconditioner. */ const Preconditioner& preconditioner() const { return m_preconditioner; } /** \returns the max number of iterations. * It is either the value set by setMaxIterations or, by default, * twice the number of columns of the matrix. */ Index maxIterations() const { return (m_maxIterations<0) ? 2*matrix().cols() : m_maxIterations; } /** Sets the max number of iterations. * Default is twice the number of columns of the matrix. */ Derived& setMaxIterations(Index maxIters) { m_maxIterations = maxIters; return derived(); } /** \returns the number of iterations performed during the last solve */ Index iterations() const { eigen_assert(m_isInitialized && "ConjugateGradient is not initialized."); return m_iterations; } /** \returns the tolerance error reached during the last solve. 
* It is a close approximation of the true relative residual error |Ax-b|/|b|. */ RealScalar error() const { eigen_assert(m_isInitialized && "ConjugateGradient is not initialized."); return m_error; } /** \returns the solution x of \f$ A x = b \f$ using the current decomposition of A * and \a x0 as an initial solution. * * \sa solve(), compute() */ template inline const SolveWithGuess solveWithGuess(const MatrixBase& b, const Guess& x0) const { eigen_assert(m_isInitialized && "Solver is not initialized."); eigen_assert(derived().rows()==b.rows() && "solve(): invalid number of rows of the right hand side matrix b"); return SolveWithGuess(derived(), b.derived(), x0); } /** \returns Success if the iterations converged, and NoConvergence otherwise. */ ComputationInfo info() const { eigen_assert(m_isInitialized && "IterativeSolverBase is not initialized."); return m_info; } /** \internal */ template void _solve_with_guess_impl(const Rhs& b, SparseMatrixBase &aDest) const { eigen_assert(rows()==b.rows()); Index rhsCols = b.cols(); Index size = b.rows(); DestDerived& dest(aDest.derived()); typedef typename DestDerived::Scalar DestScalar; Eigen::Matrix tb(size); Eigen::Matrix tx(cols()); // We do not directly fill dest because sparse expressions have to be free of aliasing issue. // For non square least-square problems, b and dest might not have the same size whereas they might alias each-other. typename DestDerived::PlainObject tmp(cols(),rhsCols); ComputationInfo global_info = Success; for(Index k=0; k typename internal::enable_if::type _solve_with_guess_impl(const Rhs& b, MatrixBase &aDest) const { eigen_assert(rows()==b.rows()); Index rhsCols = b.cols(); DestDerived& dest(aDest.derived()); ComputationInfo global_info = Success; for(Index k=0; k typename internal::enable_if::type _solve_with_guess_impl(const Rhs& b, MatrixBase &dest) const { derived()._solve_vector_with_guess_impl(b,dest.derived()); } /** \internal default initial guess = 0 */ template void _solve_impl(const Rhs& b, Dest& x) const { x.setZero(); derived()._solve_with_guess_impl(b,x); } protected: void init() { m_isInitialized = false; m_analysisIsOk = false; m_factorizationIsOk = false; m_maxIterations = -1; m_tolerance = NumTraits::epsilon(); } typedef internal::generic_matrix_wrapper MatrixWrapper; typedef typename MatrixWrapper::ActualMatrixType ActualMatrixType; const ActualMatrixType& matrix() const { return m_matrixWrapper.matrix(); } template void grab(const InputType &A) { m_matrixWrapper.grab(A); } MatrixWrapper m_matrixWrapper; Preconditioner m_preconditioner; Index m_maxIterations; RealScalar m_tolerance; mutable RealScalar m_error; mutable Index m_iterations; mutable ComputationInfo m_info; mutable bool m_analysisIsOk, m_factorizationIsOk; }; } // end namespace Eigen #endif // EIGEN_ITERATIVE_SOLVER_BASE_H piqp-octave/src/eigen/Eigen/src/IterativeLinearSolvers/LeastSquareConjugateGradient.h0000644000175100001770000001626514107270226030747 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2015 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
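// Editor's note (illustrative sketch, not part of the original Eigen sources):
// the _Preconditioner template parameter of the solvers in this module can be
// swapped for a stronger preconditioner such as the IncompleteLUT defined earlier,
// configured through the preconditioner() accessor of IterativeSolverBase.
// A minimal sketch, assuming a filled square SparseMatrix<double> A and a
// matching VectorXd b:
//
//   Eigen::BiCGSTAB<Eigen::SparseMatrix<double>, Eigen::IncompleteLUT<double> > solver;
//   solver.preconditioner().setDroptol(1e-4);   // drop tolerance, relative to the row magnitude
//   solver.preconditioner().setFillfactor(20);  // ratio controlling the fill-in kept per row
//   solver.compute(A);
//   Eigen::VectorXd x = solver.solve(b);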
#ifndef EIGEN_LEAST_SQUARE_CONJUGATE_GRADIENT_H #define EIGEN_LEAST_SQUARE_CONJUGATE_GRADIENT_H namespace Eigen { namespace internal { /** \internal Low-level conjugate gradient algorithm for least-square problems * \param mat The matrix A * \param rhs The right hand side vector b * \param x On input and initial solution, on output the computed solution. * \param precond A preconditioner being able to efficiently solve for an * approximation of A'Ax=b (regardless of b) * \param iters On input the max number of iteration, on output the number of performed iterations. * \param tol_error On input the tolerance error, on output an estimation of the relative error. */ template EIGEN_DONT_INLINE void least_square_conjugate_gradient(const MatrixType& mat, const Rhs& rhs, Dest& x, const Preconditioner& precond, Index& iters, typename Dest::RealScalar& tol_error) { using std::sqrt; using std::abs; typedef typename Dest::RealScalar RealScalar; typedef typename Dest::Scalar Scalar; typedef Matrix VectorType; RealScalar tol = tol_error; Index maxIters = iters; Index m = mat.rows(), n = mat.cols(); VectorType residual = rhs - mat * x; VectorType normal_residual = mat.adjoint() * residual; RealScalar rhsNorm2 = (mat.adjoint()*rhs).squaredNorm(); if(rhsNorm2 == 0) { x.setZero(); iters = 0; tol_error = 0; return; } RealScalar threshold = tol*tol*rhsNorm2; RealScalar residualNorm2 = normal_residual.squaredNorm(); if (residualNorm2 < threshold) { iters = 0; tol_error = sqrt(residualNorm2 / rhsNorm2); return; } VectorType p(n); p = precond.solve(normal_residual); // initial search direction VectorType z(n), tmp(m); RealScalar absNew = numext::real(normal_residual.dot(p)); // the square of the absolute value of r scaled by invM Index i = 0; while(i < maxIters) { tmp.noalias() = mat * p; Scalar alpha = absNew / tmp.squaredNorm(); // the amount we travel on dir x += alpha * p; // update solution residual -= alpha * tmp; // update residual normal_residual = mat.adjoint() * residual; // update residual of the normal equation residualNorm2 = normal_residual.squaredNorm(); if(residualNorm2 < threshold) break; z = precond.solve(normal_residual); // approximately solve for "A'A z = normal_residual" RealScalar absOld = absNew; absNew = numext::real(normal_residual.dot(z)); // update the absolute value of r RealScalar beta = absNew / absOld; // calculate the Gram-Schmidt value used to create the new search direction p = z + beta * p; // update search direction i++; } tol_error = sqrt(residualNorm2 / rhsNorm2); iters = i; } } template< typename _MatrixType, typename _Preconditioner = LeastSquareDiagonalPreconditioner > class LeastSquaresConjugateGradient; namespace internal { template< typename _MatrixType, typename _Preconditioner> struct traits > { typedef _MatrixType MatrixType; typedef _Preconditioner Preconditioner; }; } /** \ingroup IterativeLinearSolvers_Module * \brief A conjugate gradient solver for sparse (or dense) least-square problems * * This class allows to solve for A x = b linear problems using an iterative conjugate gradient algorithm. * The matrix A can be non symmetric and rectangular, but the matrix A' A should be positive-definite to guaranty stability. * Otherwise, the SparseLU or SparseQR classes might be preferable. * The matrix A and the vectors x and b can be either dense or sparse. * * \tparam _MatrixType the type of the matrix A, can be a dense or a sparse matrix. * \tparam _Preconditioner the type of the preconditioner. 
Default is LeastSquareDiagonalPreconditioner * * \implsparsesolverconcept * * The maximal number of iterations and tolerance value can be controlled via the setMaxIterations() * and setTolerance() methods. The defaults are the size of the problem for the maximal number of iterations * and NumTraits::epsilon() for the tolerance. * * This class can be used as the direct solver classes. Here is a typical usage example: \code int m=1000000, n = 10000; VectorXd x(n), b(m); SparseMatrix A(m,n); // fill A and b LeastSquaresConjugateGradient > lscg; lscg.compute(A); x = lscg.solve(b); std::cout << "#iterations: " << lscg.iterations() << std::endl; std::cout << "estimated error: " << lscg.error() << std::endl; // update b, and solve again x = lscg.solve(b); \endcode * * By default the iterations start with x=0 as an initial guess of the solution. * One can control the start using the solveWithGuess() method. * * \sa class ConjugateGradient, SparseLU, SparseQR */ template< typename _MatrixType, typename _Preconditioner> class LeastSquaresConjugateGradient : public IterativeSolverBase > { typedef IterativeSolverBase Base; using Base::matrix; using Base::m_error; using Base::m_iterations; using Base::m_info; using Base::m_isInitialized; public: typedef _MatrixType MatrixType; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef _Preconditioner Preconditioner; public: /** Default constructor. */ LeastSquaresConjugateGradient() : Base() {} /** Initialize the solver with matrix \a A for further \c Ax=b solving. * * This constructor is a shortcut for the default constructor followed * by a call to compute(). * * \warning this class stores a reference to the matrix A as well as some * precomputed values that depend on it. Therefore, if \a A is changed * this class becomes invalid. Call compute() to update it with the new * matrix A, or modify a copy of A. */ template explicit LeastSquaresConjugateGradient(const EigenBase& A) : Base(A.derived()) {} ~LeastSquaresConjugateGradient() {} /** \internal */ template void _solve_vector_with_guess_impl(const Rhs& b, Dest& x) const { m_iterations = Base::maxIterations(); m_error = Base::m_tolerance; internal::least_square_conjugate_gradient(matrix(), b, x, Base::m_preconditioner, m_iterations, m_error); m_info = m_error <= Base::m_tolerance ? Success : NoConvergence; } }; } // end namespace Eigen #endif // EIGEN_LEAST_SQUARE_CONJUGATE_GRADIENT_H piqp-octave/src/eigen/Eigen/src/IterativeLinearSolvers/BasicPreconditioners.h0000644000175100001770000001516314107270226027305 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2011-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_BASIC_PRECONDITIONERS_H #define EIGEN_BASIC_PRECONDITIONERS_H namespace Eigen { /** \ingroup IterativeLinearSolvers_Module * \brief A preconditioner based on the digonal entries * * This class allows to approximately solve for A.x = b problems assuming A is a diagonal matrix. * In other words, this preconditioner neglects all off diagonal entries and, in Eigen's language, solves for: \code A.diagonal().asDiagonal() . x = b \endcode * * \tparam _Scalar the type of the scalar. * * \implsparsesolverconcept * * This preconditioner is suitable for both selfadjoint and general problems. 
* The diagonal entries are pre-inverted and stored into a dense vector. * * \note A variant that has yet to be implemented would attempt to preserve the norm of each column. * * \sa class LeastSquareDiagonalPreconditioner, class ConjugateGradient */ template class DiagonalPreconditioner { typedef _Scalar Scalar; typedef Matrix Vector; public: typedef typename Vector::StorageIndex StorageIndex; enum { ColsAtCompileTime = Dynamic, MaxColsAtCompileTime = Dynamic }; DiagonalPreconditioner() : m_isInitialized(false) {} template explicit DiagonalPreconditioner(const MatType& mat) : m_invdiag(mat.cols()) { compute(mat); } EIGEN_CONSTEXPR Index rows() const EIGEN_NOEXCEPT { return m_invdiag.size(); } EIGEN_CONSTEXPR Index cols() const EIGEN_NOEXCEPT { return m_invdiag.size(); } template DiagonalPreconditioner& analyzePattern(const MatType& ) { return *this; } template DiagonalPreconditioner& factorize(const MatType& mat) { m_invdiag.resize(mat.cols()); for(int j=0; j DiagonalPreconditioner& compute(const MatType& mat) { return factorize(mat); } /** \internal */ template void _solve_impl(const Rhs& b, Dest& x) const { x = m_invdiag.array() * b.array() ; } template inline const Solve solve(const MatrixBase& b) const { eigen_assert(m_isInitialized && "DiagonalPreconditioner is not initialized."); eigen_assert(m_invdiag.size()==b.rows() && "DiagonalPreconditioner::solve(): invalid number of rows of the right hand side matrix b"); return Solve(*this, b.derived()); } ComputationInfo info() { return Success; } protected: Vector m_invdiag; bool m_isInitialized; }; /** \ingroup IterativeLinearSolvers_Module * \brief Jacobi preconditioner for LeastSquaresConjugateGradient * * This class allows to approximately solve for A' A x = A' b problems assuming A' A is a diagonal matrix. * In other words, this preconditioner neglects all off diagonal entries and, in Eigen's language, solves for: \code (A.adjoint() * A).diagonal().asDiagonal() * x = b \endcode * * \tparam _Scalar the type of the scalar. * * \implsparsesolverconcept * * The diagonal entries are pre-inverted and stored into a dense vector. 
* * \sa class LeastSquaresConjugateGradient, class DiagonalPreconditioner */ template class LeastSquareDiagonalPreconditioner : public DiagonalPreconditioner<_Scalar> { typedef _Scalar Scalar; typedef typename NumTraits::Real RealScalar; typedef DiagonalPreconditioner<_Scalar> Base; using Base::m_invdiag; public: LeastSquareDiagonalPreconditioner() : Base() {} template explicit LeastSquareDiagonalPreconditioner(const MatType& mat) : Base() { compute(mat); } template LeastSquareDiagonalPreconditioner& analyzePattern(const MatType& ) { return *this; } template LeastSquareDiagonalPreconditioner& factorize(const MatType& mat) { // Compute the inverse squared-norm of each column of mat m_invdiag.resize(mat.cols()); if(MatType::IsRowMajor) { m_invdiag.setZero(); for(Index j=0; jRealScalar(0)) m_invdiag(j) = RealScalar(1)/numext::real(m_invdiag(j)); } else { for(Index j=0; jRealScalar(0)) m_invdiag(j) = RealScalar(1)/sum; else m_invdiag(j) = RealScalar(1); } } Base::m_isInitialized = true; return *this; } template LeastSquareDiagonalPreconditioner& compute(const MatType& mat) { return factorize(mat); } ComputationInfo info() { return Success; } protected: }; /** \ingroup IterativeLinearSolvers_Module * \brief A naive preconditioner which approximates any matrix as the identity matrix * * \implsparsesolverconcept * * \sa class DiagonalPreconditioner */ class IdentityPreconditioner { public: IdentityPreconditioner() {} template explicit IdentityPreconditioner(const MatrixType& ) {} template IdentityPreconditioner& analyzePattern(const MatrixType& ) { return *this; } template IdentityPreconditioner& factorize(const MatrixType& ) { return *this; } template IdentityPreconditioner& compute(const MatrixType& ) { return *this; } template inline const Rhs& solve(const Rhs& b) const { return b; } ComputationInfo info() { return Success; } }; } // end namespace Eigen #endif // EIGEN_BASIC_PRECONDITIONERS_H piqp-octave/src/eigen/Eigen/src/SparseQR/0000755000175100001770000000000014107270226020050 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/SparseQR/SparseQR.h0000644000175100001770000007075714107270226021741 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012-2013 Desire Nuentsa // Copyright (C) 2012-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SPARSE_QR_H #define EIGEN_SPARSE_QR_H namespace Eigen { template class SparseQR; template struct SparseQRMatrixQReturnType; template struct SparseQRMatrixQTransposeReturnType; template struct SparseQR_QProduct; namespace internal { template struct traits > { typedef typename SparseQRType::MatrixType ReturnType; typedef typename ReturnType::StorageIndex StorageIndex; typedef typename ReturnType::StorageKind StorageKind; enum { RowsAtCompileTime = Dynamic, ColsAtCompileTime = Dynamic }; }; template struct traits > { typedef typename SparseQRType::MatrixType ReturnType; }; template struct traits > { typedef typename Derived::PlainObject ReturnType; }; } // End namespace internal /** * \ingroup SparseQR_Module * \class SparseQR * \brief Sparse left-looking QR factorization with numerical column pivoting * * This class implements a left-looking QR decomposition of sparse matrices * with numerical column pivoting. 
* When a column has a norm less than a given tolerance * it is implicitly permuted to the end. The QR factorization thus obtained is * given by A*P = Q*R where R is upper triangular or trapezoidal. * * P is the column permutation which is the product of the fill-reducing and the * numerical permutations. Use colsPermutation() to get it. * * Q is the orthogonal matrix represented as products of Householder reflectors. * Use matrixQ() to get an expression and matrixQ().adjoint() to get the adjoint. * You can then apply it to a vector. * * R is the sparse triangular or trapezoidal matrix. The later occurs when A is rank-deficient. * matrixR().topLeftCorner(rank(), rank()) always returns a triangular factor of full rank. * * \tparam _MatrixType The type of the sparse matrix A, must be a column-major SparseMatrix<> * \tparam _OrderingType The fill-reducing ordering method. See the \link OrderingMethods_Module * OrderingMethods \endlink module for the list of built-in and external ordering methods. * * \implsparsesolverconcept * * The numerical pivoting strategy and default threshold are the same as in SuiteSparse QR, and * detailed in the following paper: * * Tim Davis, "Algorithm 915, SuiteSparseQR: Multifrontal Multithreaded Rank-Revealing * Sparse QR Factorization, ACM Trans. on Math. Soft. 38(1), 2011. * * Even though it is qualified as "rank-revealing", this strategy might fail for some * rank deficient problems. When this class is used to solve linear or least-square problems * it is thus strongly recommended to check the accuracy of the computed solution. If it * failed, it usually helps to increase the threshold with setPivotThreshold. * * \warning The input sparse matrix A must be in compressed mode (see SparseMatrix::makeCompressed()). * \warning For complex matrices matrixQ().transpose() will actually return the adjoint matrix. * */ template class SparseQR : public SparseSolverBase > { protected: typedef SparseSolverBase > Base; using Base::m_isInitialized; public: using Base::_solve_impl; typedef _MatrixType MatrixType; typedef _OrderingType OrderingType; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef typename MatrixType::StorageIndex StorageIndex; typedef SparseMatrix QRMatrixType; typedef Matrix IndexVector; typedef Matrix ScalarVector; typedef PermutationMatrix PermutationType; enum { ColsAtCompileTime = MatrixType::ColsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; public: SparseQR () : m_analysisIsok(false), m_lastError(""), m_useDefaultThreshold(true),m_isQSorted(false),m_isEtreeOk(false) { } /** Construct a QR factorization of the matrix \a mat. * * \warning The matrix \a mat must be in compressed mode (see SparseMatrix::makeCompressed()). * * \sa compute() */ explicit SparseQR(const MatrixType& mat) : m_analysisIsok(false), m_lastError(""), m_useDefaultThreshold(true),m_isQSorted(false),m_isEtreeOk(false) { compute(mat); } /** Computes the QR factorization of the sparse matrix \a mat. * * \warning The matrix \a mat must be in compressed mode (see SparseMatrix::makeCompressed()). * * \sa analyzePattern(), factorize() */ void compute(const MatrixType& mat) { analyzePattern(mat); factorize(mat); } void analyzePattern(const MatrixType& mat); void factorize(const MatrixType& mat); /** \returns the number of rows of the represented matrix. */ inline Index rows() const { return m_pmat.rows(); } /** \returns the number of columns of the represented matrix. 
*/ inline Index cols() const { return m_pmat.cols();} /** \returns a const reference to the \b sparse upper triangular matrix R of the QR factorization. * \warning The entries of the returned matrix are not sorted. This means that using it in algorithms * expecting sorted entries will fail. This include random coefficient accesses (SpaseMatrix::coeff()), * and coefficient-wise operations. Matrix products and triangular solves are fine though. * * To sort the entries, you can assign it to a row-major matrix, and if a column-major matrix * is required, you can copy it again: * \code * SparseMatrix R = qr.matrixR(); // column-major, not sorted! * SparseMatrix Rr = qr.matrixR(); // row-major, sorted * SparseMatrix Rc = Rr; // column-major, sorted * \endcode */ const QRMatrixType& matrixR() const { return m_R; } /** \returns the number of non linearly dependent columns as determined by the pivoting threshold. * * \sa setPivotThreshold() */ Index rank() const { eigen_assert(m_isInitialized && "The factorization should be called first, use compute()"); return m_nonzeropivots; } /** \returns an expression of the matrix Q as products of sparse Householder reflectors. * The common usage of this function is to apply it to a dense matrix or vector * \code * VectorXd B1, B2; * // Initialize B1 * B2 = matrixQ() * B1; * \endcode * * To get a plain SparseMatrix representation of Q: * \code * SparseMatrix Q; * Q = SparseQR >(A).matrixQ(); * \endcode * Internally, this call simply performs a sparse product between the matrix Q * and a sparse identity matrix. However, due to the fact that the sparse * reflectors are stored unsorted, two transpositions are needed to sort * them before performing the product. */ SparseQRMatrixQReturnType matrixQ() const { return SparseQRMatrixQReturnType(*this); } /** \returns a const reference to the column permutation P that was applied to A such that A*P = Q*R * It is the combination of the fill-in reducing permutation and numerical column pivoting. */ const PermutationType& colsPermutation() const { eigen_assert(m_isInitialized && "Decomposition is not initialized."); return m_outputPerm_c; } /** \returns A string describing the type of error. * This method is provided to ease debugging, not to handle errors. */ std::string lastErrorMessage() const { return m_lastError; } /** \internal */ template bool _solve_impl(const MatrixBase &B, MatrixBase &dest) const { eigen_assert(m_isInitialized && "The factorization should be called first, use compute()"); eigen_assert(this->rows() == B.rows() && "SparseQR::solve() : invalid number of rows in the right hand side matrix"); Index rank = this->rank(); // Compute Q^* * b; typename Dest::PlainObject y, b; y = this->matrixQ().adjoint() * B; b = y; // Solve with the triangular matrix R y.resize((std::max)(cols(),y.rows()),y.cols()); y.topRows(rank) = this->matrixR().topLeftCorner(rank, rank).template triangularView().solve(b.topRows(rank)); y.bottomRows(y.rows()-rank).setZero(); // Apply the column permutation if (m_perm_c.size()) dest = colsPermutation() * y.topRows(cols()); else dest = y.topRows(cols()); m_info = Success; return true; } /** Sets the threshold that is used to determine linearly dependent columns during the factorization. * * In practice, if during the factorization the norm of the column that has to be eliminated is below * this threshold, then the entire column is treated as zero, and it is moved at the end. 
*/ void setPivotThreshold(const RealScalar& threshold) { m_useDefaultThreshold = false; m_threshold = threshold; } /** \returns the solution X of \f$ A X = B \f$ using the current decomposition of A. * * \sa compute() */ template inline const Solve solve(const MatrixBase& B) const { eigen_assert(m_isInitialized && "The factorization should be called first, use compute()"); eigen_assert(this->rows() == B.rows() && "SparseQR::solve() : invalid number of rows in the right hand side matrix"); return Solve(*this, B.derived()); } template inline const Solve solve(const SparseMatrixBase& B) const { eigen_assert(m_isInitialized && "The factorization should be called first, use compute()"); eigen_assert(this->rows() == B.rows() && "SparseQR::solve() : invalid number of rows in the right hand side matrix"); return Solve(*this, B.derived()); } /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful, * \c NumericalIssue if the QR factorization reports a numerical problem * \c InvalidInput if the input matrix is invalid * * \sa iparm() */ ComputationInfo info() const { eigen_assert(m_isInitialized && "Decomposition is not initialized."); return m_info; } /** \internal */ inline void _sort_matrix_Q() { if(this->m_isQSorted) return; // The matrix Q is sorted during the transposition SparseMatrix mQrm(this->m_Q); this->m_Q = mQrm; this->m_isQSorted = true; } protected: bool m_analysisIsok; bool m_factorizationIsok; mutable ComputationInfo m_info; std::string m_lastError; QRMatrixType m_pmat; // Temporary matrix QRMatrixType m_R; // The triangular factor matrix QRMatrixType m_Q; // The orthogonal reflectors ScalarVector m_hcoeffs; // The Householder coefficients PermutationType m_perm_c; // Fill-reducing Column permutation PermutationType m_pivotperm; // The permutation for rank revealing PermutationType m_outputPerm_c; // The final column permutation RealScalar m_threshold; // Threshold to determine null Householder reflections bool m_useDefaultThreshold; // Use default threshold Index m_nonzeropivots; // Number of non zero pivots found IndexVector m_etree; // Column elimination tree IndexVector m_firstRowElt; // First element in each row bool m_isQSorted; // whether Q is sorted or not bool m_isEtreeOk; // whether the elimination tree match the initial input matrix template friend struct SparseQR_QProduct; }; /** \brief Preprocessing step of a QR factorization * * \warning The matrix \a mat must be in compressed mode (see SparseMatrix::makeCompressed()). * * In this step, the fill-reducing permutation is computed and applied to the columns of A * and the column elimination tree is computed as well. Only the sparsity pattern of \a mat is exploited. * * \note In this step it is assumed that there is no empty row in the matrix \a mat. */ template void SparseQR::analyzePattern(const MatrixType& mat) { eigen_assert(mat.isCompressed() && "SparseQR requires a sparse matrix in compressed mode. 
Call .makeCompressed() before passing it to SparseQR"); // Copy to a column major matrix if the input is rowmajor typename internal::conditional::type matCpy(mat); // Compute the column fill reducing ordering OrderingType ord; ord(matCpy, m_perm_c); Index n = mat.cols(); Index m = mat.rows(); Index diagSize = (std::min)(m,n); if (!m_perm_c.size()) { m_perm_c.resize(n); m_perm_c.indices().setLinSpaced(n, 0,StorageIndex(n-1)); } // Compute the column elimination tree of the permuted matrix m_outputPerm_c = m_perm_c.inverse(); internal::coletree(matCpy, m_etree, m_firstRowElt, m_outputPerm_c.indices().data()); m_isEtreeOk = true; m_R.resize(m, n); m_Q.resize(m, diagSize); // Allocate space for nonzero elements: rough estimation m_R.reserve(2*mat.nonZeros()); //FIXME Get a more accurate estimation through symbolic factorization with the etree m_Q.reserve(2*mat.nonZeros()); m_hcoeffs.resize(diagSize); m_analysisIsok = true; } /** \brief Performs the numerical QR factorization of the input matrix * * The function SparseQR::analyzePattern(const MatrixType&) must have been called beforehand with * a matrix having the same sparsity pattern than \a mat. * * \param mat The sparse column-major matrix */ template void SparseQR::factorize(const MatrixType& mat) { using std::abs; eigen_assert(m_analysisIsok && "analyzePattern() should be called before this step"); StorageIndex m = StorageIndex(mat.rows()); StorageIndex n = StorageIndex(mat.cols()); StorageIndex diagSize = (std::min)(m,n); IndexVector mark((std::max)(m,n)); mark.setConstant(-1); // Record the visited nodes IndexVector Ridx(n), Qidx(m); // Store temporarily the row indexes for the current column of R and Q Index nzcolR, nzcolQ; // Number of nonzero for the current column of R and Q ScalarVector tval(m); // The dense vector used to compute the current column RealScalar pivotThreshold = m_threshold; m_R.setZero(); m_Q.setZero(); m_pmat = mat; if(!m_isEtreeOk) { m_outputPerm_c = m_perm_c.inverse(); internal::coletree(m_pmat, m_etree, m_firstRowElt, m_outputPerm_c.indices().data()); m_isEtreeOk = true; } m_pmat.uncompress(); // To have the innerNonZeroPtr allocated // Apply the fill-in reducing permutation lazily: { // If the input is row major, copy the original column indices, // otherwise directly use the input matrix // IndexVector originalOuterIndicesCpy; const StorageIndex *originalOuterIndices = mat.outerIndexPtr(); if(MatrixType::IsRowMajor) { originalOuterIndicesCpy = IndexVector::Map(m_pmat.outerIndexPtr(),n+1); originalOuterIndices = originalOuterIndicesCpy.data(); } for (int i = 0; i < n; i++) { Index p = m_perm_c.size() ? m_perm_c.indices()(i) : i; m_pmat.outerIndexPtr()[p] = originalOuterIndices[i]; m_pmat.innerNonZeroPtr()[p] = originalOuterIndices[i+1] - originalOuterIndices[i]; } } /* Compute the default threshold as in MatLab, see: * Tim Davis, "Algorithm 915, SuiteSparseQR: Multifrontal Multithreaded Rank-Revealing * Sparse QR Factorization, ACM Trans. on Math. Soft. 
38(1), 2011, Page 8:3 */ if(m_useDefaultThreshold) { RealScalar max2Norm = 0.0; for (int j = 0; j < n; j++) max2Norm = numext::maxi(max2Norm, m_pmat.col(j).norm()); if(max2Norm==RealScalar(0)) max2Norm = RealScalar(1); pivotThreshold = 20 * (m + n) * max2Norm * NumTraits::epsilon(); } // Initialize the numerical permutation m_pivotperm.setIdentity(n); StorageIndex nonzeroCol = 0; // Record the number of valid pivots m_Q.startVec(0); // Left looking rank-revealing QR factorization: compute a column of R and Q at a time for (StorageIndex col = 0; col < n; ++col) { mark.setConstant(-1); m_R.startVec(col); mark(nonzeroCol) = col; Qidx(0) = nonzeroCol; nzcolR = 0; nzcolQ = 1; bool found_diag = nonzeroCol>=m; tval.setZero(); // Symbolic factorization: find the nonzero locations of the column k of the factors R and Q, i.e., // all the nodes (with indexes lower than rank) reachable through the column elimination tree (etree) rooted at node k. // Note: if the diagonal entry does not exist, then its contribution must be explicitly added, // thus the trick with found_diag that permits to do one more iteration on the diagonal element if this one has not been found. for (typename QRMatrixType::InnerIterator itp(m_pmat, col); itp || !found_diag; ++itp) { StorageIndex curIdx = nonzeroCol; if(itp) curIdx = StorageIndex(itp.row()); if(curIdx == nonzeroCol) found_diag = true; // Get the nonzeros indexes of the current column of R StorageIndex st = m_firstRowElt(curIdx); // The traversal of the etree starts here if (st < 0 ) { m_lastError = "Empty row found during numerical factorization"; m_info = InvalidInput; return; } // Traverse the etree Index bi = nzcolR; for (; mark(st) != col; st = m_etree(st)) { Ridx(nzcolR) = st; // Add this row to the list, mark(st) = col; // and mark this row as visited nzcolR++; } // Reverse the list to get the topological ordering Index nt = nzcolR-bi; for(Index i = 0; i < nt/2; i++) std::swap(Ridx(bi+i), Ridx(nzcolR-i-1)); // Copy the current (curIdx,pcol) value of the input matrix if(itp) tval(curIdx) = itp.value(); else tval(curIdx) = Scalar(0); // Compute the pattern of Q(:,k) if(curIdx > nonzeroCol && mark(curIdx) != col ) { Qidx(nzcolQ) = curIdx; // Add this row to the pattern of Q, mark(curIdx) = col; // and mark it as visited nzcolQ++; } } // Browse all the indexes of R(:,col) in reverse order for (Index i = nzcolR-1; i >= 0; i--) { Index curIdx = Ridx(i); // Apply the curIdx-th householder vector to the current column (temporarily stored into tval) Scalar tdot(0); // First compute q' * tval tdot = m_Q.col(curIdx).dot(tval); tdot *= m_hcoeffs(curIdx); // Then update tval = tval - q * tau // FIXME: tval -= tdot * m_Q.col(curIdx) should amount to the same (need to check/add support for efficient "dense ?= sparse") for (typename QRMatrixType::InnerIterator itq(m_Q, curIdx); itq; ++itq) tval(itq.row()) -= itq.value() * tdot; // Detect fill-in for the current column of Q if(m_etree(Ridx(i)) == nonzeroCol) { for (typename QRMatrixType::InnerIterator itq(m_Q, curIdx); itq; ++itq) { StorageIndex iQ = StorageIndex(itq.row()); if (mark(iQ) != col) { Qidx(nzcolQ++) = iQ; // Add this row to the pattern of Q, mark(iQ) = col; // and mark it as visited } } } } // End update current column Scalar tau = RealScalar(0); RealScalar beta = 0; if(nonzeroCol < diagSize) { // Compute the Householder reflection that eliminate the current column // FIXME this step should call the Householder module. Scalar c0 = nzcolQ ? 
tval(Qidx(0)) : Scalar(0); // First, the squared norm of Q((col+1):m, col) RealScalar sqrNorm = 0.; for (Index itq = 1; itq < nzcolQ; ++itq) sqrNorm += numext::abs2(tval(Qidx(itq))); if(sqrNorm == RealScalar(0) && numext::imag(c0) == RealScalar(0)) { beta = numext::real(c0); tval(Qidx(0)) = 1; } else { using std::sqrt; beta = sqrt(numext::abs2(c0) + sqrNorm); if(numext::real(c0) >= RealScalar(0)) beta = -beta; tval(Qidx(0)) = 1; for (Index itq = 1; itq < nzcolQ; ++itq) tval(Qidx(itq)) /= (c0 - beta); tau = numext::conj((beta-c0) / beta); } } // Insert values in R for (Index i = nzcolR-1; i >= 0; i--) { Index curIdx = Ridx(i); if(curIdx < nonzeroCol) { m_R.insertBackByOuterInnerUnordered(col, curIdx) = tval(curIdx); tval(curIdx) = Scalar(0.); } } if(nonzeroCol < diagSize && abs(beta) >= pivotThreshold) { m_R.insertBackByOuterInner(col, nonzeroCol) = beta; // The householder coefficient m_hcoeffs(nonzeroCol) = tau; // Record the householder reflections for (Index itq = 0; itq < nzcolQ; ++itq) { Index iQ = Qidx(itq); m_Q.insertBackByOuterInnerUnordered(nonzeroCol,iQ) = tval(iQ); tval(iQ) = Scalar(0.); } nonzeroCol++; if(nonzeroCol struct SparseQR_QProduct : ReturnByValue > { typedef typename SparseQRType::QRMatrixType MatrixType; typedef typename SparseQRType::Scalar Scalar; // Get the references SparseQR_QProduct(const SparseQRType& qr, const Derived& other, bool transpose) : m_qr(qr),m_other(other),m_transpose(transpose) {} inline Index rows() const { return m_qr.matrixQ().rows(); } inline Index cols() const { return m_other.cols(); } // Assign to a vector template void evalTo(DesType& res) const { Index m = m_qr.rows(); Index n = m_qr.cols(); Index diagSize = (std::min)(m,n); res = m_other; if (m_transpose) { eigen_assert(m_qr.m_Q.rows() == m_other.rows() && "Non conforming object sizes"); //Compute res = Q' * other column by column for(Index j = 0; j < res.cols(); j++){ for (Index k = 0; k < diagSize; k++) { Scalar tau = Scalar(0); tau = m_qr.m_Q.col(k).dot(res.col(j)); if(tau==Scalar(0)) continue; tau = tau * m_qr.m_hcoeffs(k); res.col(j) -= tau * m_qr.m_Q.col(k); } } } else { eigen_assert(m_qr.matrixQ().cols() == m_other.rows() && "Non conforming object sizes"); res.conservativeResize(rows(), cols()); // Compute res = Q * other column by column for(Index j = 0; j < res.cols(); j++) { Index start_k = internal::is_identity::value ? 
numext::mini(j,diagSize-1) : diagSize-1; for (Index k = start_k; k >=0; k--) { Scalar tau = Scalar(0); tau = m_qr.m_Q.col(k).dot(res.col(j)); if(tau==Scalar(0)) continue; tau = tau * numext::conj(m_qr.m_hcoeffs(k)); res.col(j) -= tau * m_qr.m_Q.col(k); } } } } const SparseQRType& m_qr; const Derived& m_other; bool m_transpose; // TODO this actually means adjoint }; template struct SparseQRMatrixQReturnType : public EigenBase > { typedef typename SparseQRType::Scalar Scalar; typedef Matrix DenseMatrix; enum { RowsAtCompileTime = Dynamic, ColsAtCompileTime = Dynamic }; explicit SparseQRMatrixQReturnType(const SparseQRType& qr) : m_qr(qr) {} template SparseQR_QProduct operator*(const MatrixBase& other) { return SparseQR_QProduct(m_qr,other.derived(),false); } // To use for operations with the adjoint of Q SparseQRMatrixQTransposeReturnType adjoint() const { return SparseQRMatrixQTransposeReturnType(m_qr); } inline Index rows() const { return m_qr.rows(); } inline Index cols() const { return m_qr.rows(); } // To use for operations with the transpose of Q FIXME this is the same as adjoint at the moment SparseQRMatrixQTransposeReturnType transpose() const { return SparseQRMatrixQTransposeReturnType(m_qr); } const SparseQRType& m_qr; }; // TODO this actually represents the adjoint of Q template struct SparseQRMatrixQTransposeReturnType { explicit SparseQRMatrixQTransposeReturnType(const SparseQRType& qr) : m_qr(qr) {} template SparseQR_QProduct operator*(const MatrixBase& other) { return SparseQR_QProduct(m_qr,other.derived(), true); } const SparseQRType& m_qr; }; namespace internal { template struct evaluator_traits > { typedef typename SparseQRType::MatrixType MatrixType; typedef typename storage_kind_to_evaluator_kind::Kind Kind; typedef SparseShape Shape; }; template< typename DstXprType, typename SparseQRType> struct Assignment, internal::assign_op, Sparse2Sparse> { typedef SparseQRMatrixQReturnType SrcXprType; typedef typename DstXprType::Scalar Scalar; typedef typename DstXprType::StorageIndex StorageIndex; static void run(DstXprType &dst, const SrcXprType &src, const internal::assign_op &/*func*/) { typename DstXprType::PlainObject idMat(src.rows(), src.cols()); idMat.setIdentity(); // Sort the sparse householder reflectors if needed const_cast(&src.m_qr)->_sort_matrix_Q(); dst = SparseQR_QProduct(src.m_qr, idMat, false); } }; template< typename DstXprType, typename SparseQRType> struct Assignment, internal::assign_op, Sparse2Dense> { typedef SparseQRMatrixQReturnType SrcXprType; typedef typename DstXprType::Scalar Scalar; typedef typename DstXprType::StorageIndex StorageIndex; static void run(DstXprType &dst, const SrcXprType &src, const internal::assign_op &/*func*/) { dst = src.m_qr.matrixQ() * DstXprType::Identity(src.m_qr.rows(), src.m_qr.rows()); } }; } // end namespace internal } // end namespace Eigen #endif piqp-octave/src/eigen/Eigen/src/plugins/0000755000175100001770000000000014107270226020031 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/plugins/IndexedViewMethods.h0000644000175100001770000002777314107270226023761 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2017 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
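// A hedged usage sketch, not part of the original file: the generic
// operator()(rowIndices, colIndices) declared below is typically used as in the
// following example. Names assume `using namespace Eigen;` and the values are
// illustrative assumptions.
//
//   MatrixXd A = MatrixXd::Random(6, 6);
//   std::vector<int> picked_cols{1, 3, 4};
//   MatrixXd B = A(seq(0, 2), picked_cols);   // generic IndexedView: rows 0..2, listed columns
//   MatrixXd C = A(seqN(1, 3), all);          // unit-increment arguments resolve to a Block internally
//   VectorXd v = VectorXd::LinSpaced(6, 0, 5);
//   VectorXd w = v(seq(1, 4));                // 1D overload for vectors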
#if !defined(EIGEN_PARSED_BY_DOXYGEN) // This file is automatically included twice to generate const and non-const versions #ifndef EIGEN_INDEXED_VIEW_METHOD_2ND_PASS #define EIGEN_INDEXED_VIEW_METHOD_CONST const #define EIGEN_INDEXED_VIEW_METHOD_TYPE ConstIndexedViewType #else #define EIGEN_INDEXED_VIEW_METHOD_CONST #define EIGEN_INDEXED_VIEW_METHOD_TYPE IndexedViewType #endif #ifndef EIGEN_INDEXED_VIEW_METHOD_2ND_PASS protected: // define some aliases to ease readability template struct IvcRowType : public internal::IndexedViewCompatibleType {}; template struct IvcColType : public internal::IndexedViewCompatibleType {}; template struct IvcType : public internal::IndexedViewCompatibleType {}; typedef typename internal::IndexedViewCompatibleType::type IvcIndex; template typename IvcRowType::type ivcRow(const Indices& indices) const { return internal::makeIndexedViewCompatible(indices, internal::variable_if_dynamic(derived().rows()),Specialized); } template typename IvcColType::type ivcCol(const Indices& indices) const { return internal::makeIndexedViewCompatible(indices, internal::variable_if_dynamic(derived().cols()),Specialized); } template typename IvcColType::type ivcSize(const Indices& indices) const { return internal::makeIndexedViewCompatible(indices, internal::variable_if_dynamic(derived().size()),Specialized); } public: #endif template struct EIGEN_INDEXED_VIEW_METHOD_TYPE { typedef IndexedView::type, typename IvcColType::type> type; }; // This is the generic version template typename internal::enable_if::value && internal::traits::type>::ReturnAsIndexedView, typename EIGEN_INDEXED_VIEW_METHOD_TYPE::type >::type operator()(const RowIndices& rowIndices, const ColIndices& colIndices) EIGEN_INDEXED_VIEW_METHOD_CONST { return typename EIGEN_INDEXED_VIEW_METHOD_TYPE::type (derived(), ivcRow(rowIndices), ivcCol(colIndices)); } // The following overload returns a Block<> object template typename internal::enable_if::value && internal::traits::type>::ReturnAsBlock, typename internal::traits::type>::BlockType>::type operator()(const RowIndices& rowIndices, const ColIndices& colIndices) EIGEN_INDEXED_VIEW_METHOD_CONST { typedef typename internal::traits::type>::BlockType BlockType; typename IvcRowType::type actualRowIndices = ivcRow(rowIndices); typename IvcColType::type actualColIndices = ivcCol(colIndices); return BlockType(derived(), internal::first(actualRowIndices), internal::first(actualColIndices), internal::size(actualRowIndices), internal::size(actualColIndices)); } // The following overload returns a Scalar template typename internal::enable_if::value && internal::traits::type>::ReturnAsScalar, CoeffReturnType >::type operator()(const RowIndices& rowIndices, const ColIndices& colIndices) EIGEN_INDEXED_VIEW_METHOD_CONST { return Base::operator()(internal::eval_expr_given_size(rowIndices,rows()),internal::eval_expr_given_size(colIndices,cols())); } #if EIGEN_HAS_STATIC_ARRAY_TEMPLATE // The following three overloads are needed to handle raw Index[N] arrays. 
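// A hedged illustration (names are assumptions, not from the original file): a plain
// C array of indices can be passed directly to operator(), e.g.
//   int selected_rows[] = {0, 2, 4};
//   int selected_cols[] = {1, 3};
//   auto sub = A(selected_rows, selected_cols);   // IndexedView over the listed rows/columns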
template IndexedView::type> operator()(const RowIndicesT (&rowIndices)[RowIndicesN], const ColIndices& colIndices) EIGEN_INDEXED_VIEW_METHOD_CONST { return IndexedView::type> (derived(), rowIndices, ivcCol(colIndices)); } template IndexedView::type, const ColIndicesT (&)[ColIndicesN]> operator()(const RowIndices& rowIndices, const ColIndicesT (&colIndices)[ColIndicesN]) EIGEN_INDEXED_VIEW_METHOD_CONST { return IndexedView::type,const ColIndicesT (&)[ColIndicesN]> (derived(), ivcRow(rowIndices), colIndices); } template IndexedView operator()(const RowIndicesT (&rowIndices)[RowIndicesN], const ColIndicesT (&colIndices)[ColIndicesN]) EIGEN_INDEXED_VIEW_METHOD_CONST { return IndexedView (derived(), rowIndices, colIndices); } #endif // EIGEN_HAS_STATIC_ARRAY_TEMPLATE // Overloads for 1D vectors/arrays template typename internal::enable_if< IsRowMajor && (!(internal::get_compile_time_incr::type>::value==1 || internal::is_valid_index_type::value)), IndexedView::type> >::type operator()(const Indices& indices) EIGEN_INDEXED_VIEW_METHOD_CONST { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return IndexedView::type> (derived(), IvcIndex(0), ivcCol(indices)); } template typename internal::enable_if< (!IsRowMajor) && (!(internal::get_compile_time_incr::type>::value==1 || internal::is_valid_index_type::value)), IndexedView::type,IvcIndex> >::type operator()(const Indices& indices) EIGEN_INDEXED_VIEW_METHOD_CONST { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return IndexedView::type,IvcIndex> (derived(), ivcRow(indices), IvcIndex(0)); } template typename internal::enable_if< (internal::get_compile_time_incr::type>::value==1) && (!internal::is_valid_index_type::value) && (!symbolic::is_symbolic::value), VectorBlock::value> >::type operator()(const Indices& indices) EIGEN_INDEXED_VIEW_METHOD_CONST { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) typename IvcType::type actualIndices = ivcSize(indices); return VectorBlock::value> (derived(), internal::first(actualIndices), internal::size(actualIndices)); } template typename internal::enable_if::value, CoeffReturnType >::type operator()(const IndexType& id) EIGEN_INDEXED_VIEW_METHOD_CONST { return Base::operator()(internal::eval_expr_given_size(id,size())); } #if EIGEN_HAS_STATIC_ARRAY_TEMPLATE template typename internal::enable_if >::type operator()(const IndicesT (&indices)[IndicesN]) EIGEN_INDEXED_VIEW_METHOD_CONST { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return IndexedView (derived(), IvcIndex(0), indices); } template typename internal::enable_if >::type operator()(const IndicesT (&indices)[IndicesN]) EIGEN_INDEXED_VIEW_METHOD_CONST { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return IndexedView (derived(), indices, IvcIndex(0)); } #endif // EIGEN_HAS_STATIC_ARRAY_TEMPLATE #undef EIGEN_INDEXED_VIEW_METHOD_CONST #undef EIGEN_INDEXED_VIEW_METHOD_TYPE #ifndef EIGEN_INDEXED_VIEW_METHOD_2ND_PASS #define EIGEN_INDEXED_VIEW_METHOD_2ND_PASS #include "IndexedViewMethods.h" #undef EIGEN_INDEXED_VIEW_METHOD_2ND_PASS #endif #else // EIGEN_PARSED_BY_DOXYGEN /** * \returns a generic submatrix view defined by the rows and columns indexed \a rowIndices and \a colIndices respectively. 
* * Each parameter must either be: * - An integer indexing a single row or column * - Eigen::all indexing the full set of respective rows or columns in increasing order * - An ArithmeticSequence as returned by the Eigen::seq and Eigen::seqN functions * - Any %Eigen's vector/array of integers or expressions * - Plain C arrays: \c int[N] * - And more generally any type exposing the following two member functions: * \code * operator[]() const; * size() const; * \endcode * where \c stands for any integer type compatible with Eigen::Index (i.e. \c std::ptrdiff_t). * * The last statement implies compatibility with \c std::vector, \c std::valarray, \c std::array, many of the Range-v3's ranges, etc. * * If the submatrix can be represented using a starting position \c (i,j) and positive sizes \c (rows,columns), then this * method will returns a Block object after extraction of the relevant information from the passed arguments. This is the case * when all arguments are either: * - An integer * - Eigen::all * - An ArithmeticSequence with compile-time increment strictly equal to 1, as returned by Eigen::seq(a,b), and Eigen::seqN(a,N). * * Otherwise a more general IndexedView object will be returned, after conversion of the inputs * to more suitable types \c RowIndices' and \c ColIndices'. * * For 1D vectors and arrays, you better use the operator()(const Indices&) overload, which behave the same way but taking a single parameter. * * See also this question and its answer for an example of how to duplicate coefficients. * * \sa operator()(const Indices&), class Block, class IndexedView, DenseBase::block(Index,Index,Index,Index) */ template IndexedView_or_Block operator()(const RowIndices& rowIndices, const ColIndices& colIndices); /** This is an overload of operator()(const RowIndices&, const ColIndices&) for 1D vectors or arrays * * \only_for_vectors */ template IndexedView_or_VectorBlock operator()(const Indices& indices); #endif // EIGEN_PARSED_BY_DOXYGEN piqp-octave/src/eigen/Eigen/src/plugins/CommonCwiseUnaryOps.h0000644000175100001770000001371114107270226024131 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2009 Gael Guennebaud // Copyright (C) 2006-2008 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. // This file is a base class plugin containing common coefficient wise functions. 
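//
// A hedged usage sketch (not part of the original file; matrix names and values are
// assumptions): the coefficient-wise unary helpers declared below are typically used
// as follows.
//   MatrixXf  M = MatrixXf::Random(3, 3);
//   MatrixXd  D = M.cast<double>();                            // cast<NewScalar>()
//   MatrixXf  N = -M;                                          // operator-()
//   MatrixXf  S = M.unaryExpr([](float x) { return x * x; });  // custom coefficient-wise functor
//   MatrixXcf C = MatrixXcf::Random(2, 2);
//   MatrixXcf G = C.conjugate();                               // complex conjugate
//   MatrixXf  R = C.real();                                    // read-only real part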
#ifndef EIGEN_PARSED_BY_DOXYGEN /** \internal the return type of conjugate() */ typedef typename internal::conditional::IsComplex, const CwiseUnaryOp, const Derived>, const Derived& >::type ConjugateReturnType; /** \internal the return type of real() const */ typedef typename internal::conditional::IsComplex, const CwiseUnaryOp, const Derived>, const Derived& >::type RealReturnType; /** \internal the return type of real() */ typedef typename internal::conditional::IsComplex, CwiseUnaryView, Derived>, Derived& >::type NonConstRealReturnType; /** \internal the return type of imag() const */ typedef CwiseUnaryOp, const Derived> ImagReturnType; /** \internal the return type of imag() */ typedef CwiseUnaryView, Derived> NonConstImagReturnType; typedef CwiseUnaryOp, const Derived> NegativeReturnType; #endif // not EIGEN_PARSED_BY_DOXYGEN /// \returns an expression of the opposite of \c *this /// EIGEN_DOC_UNARY_ADDONS(operator-,opposite) /// EIGEN_DEVICE_FUNC inline const NegativeReturnType operator-() const { return NegativeReturnType(derived()); } template struct CastXpr { typedef typename internal::cast_return_type, const Derived> >::type Type; }; /// \returns an expression of \c *this with the \a Scalar type casted to /// \a NewScalar. /// /// The template parameter \a NewScalar is the type we are casting the scalars to. /// EIGEN_DOC_UNARY_ADDONS(cast,conversion function) /// /// \sa class CwiseUnaryOp /// template EIGEN_DEVICE_FUNC typename CastXpr::Type cast() const { return typename CastXpr::Type(derived()); } /// \returns an expression of the complex conjugate of \c *this. /// EIGEN_DOC_UNARY_ADDONS(conjugate,complex conjugate) /// /// \sa Math functions, MatrixBase::adjoint() EIGEN_DEVICE_FUNC inline ConjugateReturnType conjugate() const { return ConjugateReturnType(derived()); } /// \returns an expression of the complex conjugate of \c *this if Cond==true, returns derived() otherwise. /// EIGEN_DOC_UNARY_ADDONS(conjugate,complex conjugate) /// /// \sa conjugate() template EIGEN_DEVICE_FUNC inline typename internal::conditional::type conjugateIf() const { typedef typename internal::conditional::type ReturnType; return ReturnType(derived()); } /// \returns a read-only expression of the real part of \c *this. /// EIGEN_DOC_UNARY_ADDONS(real,real part function) /// /// \sa imag() EIGEN_DEVICE_FUNC inline RealReturnType real() const { return RealReturnType(derived()); } /// \returns an read-only expression of the imaginary part of \c *this. /// EIGEN_DOC_UNARY_ADDONS(imag,imaginary part function) /// /// \sa real() EIGEN_DEVICE_FUNC inline const ImagReturnType imag() const { return ImagReturnType(derived()); } /// \brief Apply a unary operator coefficient-wise /// \param[in] func Functor implementing the unary operator /// \tparam CustomUnaryOp Type of \a func /// \returns An expression of a custom coefficient-wise unary operator \a func of *this /// /// The function \c ptr_fun() from the C++ standard library can be used to make functors out of normal functions. /// /// Example: /// \include class_CwiseUnaryOp_ptrfun.cpp /// Output: \verbinclude class_CwiseUnaryOp_ptrfun.out /// /// Genuine functors allow for more possibilities, for instance it may contain a state. 
/// /// Example: /// \include class_CwiseUnaryOp.cpp /// Output: \verbinclude class_CwiseUnaryOp.out /// EIGEN_DOC_UNARY_ADDONS(unaryExpr,unary function) /// /// \sa unaryViewExpr, binaryExpr, class CwiseUnaryOp /// template EIGEN_DEVICE_FUNC inline const CwiseUnaryOp unaryExpr(const CustomUnaryOp& func = CustomUnaryOp()) const { return CwiseUnaryOp(derived(), func); } /// \returns an expression of a custom coefficient-wise unary operator \a func of *this /// /// The template parameter \a CustomUnaryOp is the type of the functor /// of the custom unary operator. /// /// Example: /// \include class_CwiseUnaryOp.cpp /// Output: \verbinclude class_CwiseUnaryOp.out /// EIGEN_DOC_UNARY_ADDONS(unaryViewExpr,unary function) /// /// \sa unaryExpr, binaryExpr class CwiseUnaryOp /// template EIGEN_DEVICE_FUNC inline const CwiseUnaryView unaryViewExpr(const CustomViewOp& func = CustomViewOp()) const { return CwiseUnaryView(derived(), func); } /// \returns a non const expression of the real part of \c *this. /// EIGEN_DOC_UNARY_ADDONS(real,real part function) /// /// \sa imag() EIGEN_DEVICE_FUNC inline NonConstRealReturnType real() { return NonConstRealReturnType(derived()); } /// \returns a non const expression of the imaginary part of \c *this. /// EIGEN_DOC_UNARY_ADDONS(imag,imaginary part function) /// /// \sa real() EIGEN_DEVICE_FUNC inline NonConstImagReturnType imag() { return NonConstImagReturnType(derived()); } piqp-octave/src/eigen/Eigen/src/plugins/CommonCwiseBinaryOps.h0000644000175100001770000001133414107270226024256 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2016 Gael Guennebaud // Copyright (C) 2006-2008 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. // This file is a base class plugin containing common coefficient wise functions. /** \returns an expression of the difference of \c *this and \a other * * \note If you want to substract a given scalar from all coefficients, see Cwise::operator-(). * * \sa class CwiseBinaryOp, operator-=() */ EIGEN_MAKE_CWISE_BINARY_OP(operator-,difference) /** \returns an expression of the sum of \c *this and \a other * * \note If you want to add a given scalar to all coefficients, see Cwise::operator+(). * * \sa class CwiseBinaryOp, operator+=() */ EIGEN_MAKE_CWISE_BINARY_OP(operator+,sum) /** \returns an expression of a custom coefficient-wise operator \a func of *this and \a other * * The template parameter \a CustomBinaryOp is the type of the functor * of the custom operator (see class CwiseBinaryOp for an example) * * Here is an example illustrating the use of custom functors: * \include class_CwiseBinaryOp.cpp * Output: \verbinclude class_CwiseBinaryOp.out * * \sa class CwiseBinaryOp, operator+(), operator-(), cwiseProduct() */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseBinaryOp binaryExpr(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other, const CustomBinaryOp& func = CustomBinaryOp()) const { return CwiseBinaryOp(derived(), other.derived(), func); } #ifndef EIGEN_PARSED_BY_DOXYGEN EIGEN_MAKE_SCALAR_BINARY_OP(operator*,product) #else /** \returns an expression of \c *this scaled by the scalar factor \a scalar * * \tparam T is the scalar type of \a scalar. It must be compatible with the scalar type of the given expression. 
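 *
 * A minimal hedged sketch (the names below are assumptions, not from the original docs):
 * \code
 * VectorXd v = VectorXd::LinSpaced(4, 0.0, 3.0);
 * VectorXd w = v * 2.0;        // every coefficient scaled by 2
 * VectorXd u = 0.5 * v;        // the scalar factor may also appear on the left
 * \endcode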
*/ template const CwiseBinaryOp,Derived,Constant > operator*(const T& scalar) const; /** \returns an expression of \a expr scaled by the scalar factor \a scalar * * \tparam T is the scalar type of \a scalar. It must be compatible with the scalar type of the given expression. */ template friend const CwiseBinaryOp,Constant,Derived> operator*(const T& scalar, const StorageBaseType& expr); #endif #ifndef EIGEN_PARSED_BY_DOXYGEN EIGEN_MAKE_SCALAR_BINARY_OP_ONTHERIGHT(operator/,quotient) #else /** \returns an expression of \c *this divided by the scalar value \a scalar * * \tparam T is the scalar type of \a scalar. It must be compatible with the scalar type of the given expression. */ template const CwiseBinaryOp,Derived,Constant > operator/(const T& scalar) const; #endif /** \returns an expression of the coefficient-wise boolean \b and operator of \c *this and \a other * * \warning this operator is for expression of bool only. * * Example: \include Cwise_boolean_and.cpp * Output: \verbinclude Cwise_boolean_and.out * * \sa operator||(), select() */ template EIGEN_DEVICE_FUNC inline const CwiseBinaryOp operator&&(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const { EIGEN_STATIC_ASSERT((internal::is_same::value && internal::is_same::value), THIS_METHOD_IS_ONLY_FOR_EXPRESSIONS_OF_BOOL); return CwiseBinaryOp(derived(),other.derived()); } /** \returns an expression of the coefficient-wise boolean \b or operator of \c *this and \a other * * \warning this operator is for expression of bool only. * * Example: \include Cwise_boolean_or.cpp * Output: \verbinclude Cwise_boolean_or.out * * \sa operator&&(), select() */ template EIGEN_DEVICE_FUNC inline const CwiseBinaryOp operator||(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const { EIGEN_STATIC_ASSERT((internal::is_same::value && internal::is_same::value), THIS_METHOD_IS_ONLY_FOR_EXPRESSIONS_OF_BOOL); return CwiseBinaryOp(derived(),other.derived()); } piqp-octave/src/eigen/Eigen/src/plugins/MatrixCwiseBinaryOps.h0000644000175100001770000001436314107270226024277 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2009 Gael Guennebaud // Copyright (C) 2006-2008 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. // This file is a base class plugin containing matrix specifics coefficient wise functions. /** \returns an expression of the Schur product (coefficient wise product) of *this and \a other * * Example: \include MatrixBase_cwiseProduct.cpp * Output: \verbinclude MatrixBase_cwiseProduct.out * * \sa class CwiseBinaryOp, cwiseAbs2 */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const EIGEN_CWISE_BINARY_RETURN_TYPE(Derived,OtherDerived,product) cwiseProduct(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const { return EIGEN_CWISE_BINARY_RETURN_TYPE(Derived,OtherDerived,product)(derived(), other.derived()); } /** \returns an expression of the coefficient-wise == operator of *this and \a other * * \warning this performs an exact comparison, which is generally a bad idea with floating-point types. * In order to check for equality between two vectors or matrices with floating-point coefficients, it is * generally a far better idea to use a fuzzy comparison as provided by isApprox() and * isMuchSmallerThan(). 
* * Example: \include MatrixBase_cwiseEqual.cpp * Output: \verbinclude MatrixBase_cwiseEqual.out * * \sa cwiseNotEqual(), isApprox(), isMuchSmallerThan() */ template EIGEN_DEVICE_FUNC inline const CwiseBinaryOp, const Derived, const OtherDerived> cwiseEqual(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const { return CwiseBinaryOp, const Derived, const OtherDerived>(derived(), other.derived()); } /** \returns an expression of the coefficient-wise != operator of *this and \a other * * \warning this performs an exact comparison, which is generally a bad idea with floating-point types. * In order to check for equality between two vectors or matrices with floating-point coefficients, it is * generally a far better idea to use a fuzzy comparison as provided by isApprox() and * isMuchSmallerThan(). * * Example: \include MatrixBase_cwiseNotEqual.cpp * Output: \verbinclude MatrixBase_cwiseNotEqual.out * * \sa cwiseEqual(), isApprox(), isMuchSmallerThan() */ template EIGEN_DEVICE_FUNC inline const CwiseBinaryOp, const Derived, const OtherDerived> cwiseNotEqual(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const { return CwiseBinaryOp, const Derived, const OtherDerived>(derived(), other.derived()); } /** \returns an expression of the coefficient-wise min of *this and \a other * * Example: \include MatrixBase_cwiseMin.cpp * Output: \verbinclude MatrixBase_cwiseMin.out * * \sa class CwiseBinaryOp, max() */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseBinaryOp, const Derived, const OtherDerived> cwiseMin(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const { return CwiseBinaryOp, const Derived, const OtherDerived>(derived(), other.derived()); } /** \returns an expression of the coefficient-wise min of *this and scalar \a other * * \sa class CwiseBinaryOp, min() */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseBinaryOp, const Derived, const ConstantReturnType> cwiseMin(const Scalar &other) const { return cwiseMin(Derived::Constant(rows(), cols(), other)); } /** \returns an expression of the coefficient-wise max of *this and \a other * * Example: \include MatrixBase_cwiseMax.cpp * Output: \verbinclude MatrixBase_cwiseMax.out * * \sa class CwiseBinaryOp, min() */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseBinaryOp, const Derived, const OtherDerived> cwiseMax(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const { return CwiseBinaryOp, const Derived, const OtherDerived>(derived(), other.derived()); } /** \returns an expression of the coefficient-wise max of *this and scalar \a other * * \sa class CwiseBinaryOp, min() */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseBinaryOp, const Derived, const ConstantReturnType> cwiseMax(const Scalar &other) const { return cwiseMax(Derived::Constant(rows(), cols(), other)); } /** \returns an expression of the coefficient-wise quotient of *this and \a other * * Example: \include MatrixBase_cwiseQuotient.cpp * Output: \verbinclude MatrixBase_cwiseQuotient.out * * \sa class CwiseBinaryOp, cwiseProduct(), cwiseInverse() */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseBinaryOp, const Derived, const OtherDerived> cwiseQuotient(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const { return CwiseBinaryOp, const Derived, const OtherDerived>(derived(), other.derived()); } typedef CwiseBinaryOp, const Derived, const ConstantReturnType> CwiseScalarEqualReturnType; /** \returns an expression of the coefficient-wise == operator of \c *this and a scalar \a s * * \warning this performs an exact comparison, which is generally a bad idea 
with floating-point types. * In order to check for equality between two vectors or matrices with floating-point coefficients, it is * generally a far better idea to use a fuzzy comparison as provided by isApprox() and * isMuchSmallerThan(). * * \sa cwiseEqual(const MatrixBase &) const */ EIGEN_DEVICE_FUNC inline const CwiseScalarEqualReturnType cwiseEqual(const Scalar& s) const { return CwiseScalarEqualReturnType(derived(), Derived::Constant(rows(), cols(), s), internal::scalar_cmp_op()); } piqp-octave/src/eigen/Eigen/src/plugins/MatrixCwiseUnaryOps.h0000644000175100001770000000642614107270226024152 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2009 Gael Guennebaud // Copyright (C) 2006-2008 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. // This file is included into the body of the base classes supporting matrix specific coefficient-wise functions. // This include MatrixBase and SparseMatrixBase. typedef CwiseUnaryOp, const Derived> CwiseAbsReturnType; typedef CwiseUnaryOp, const Derived> CwiseAbs2ReturnType; typedef CwiseUnaryOp, const Derived> CwiseArgReturnType; typedef CwiseUnaryOp, const Derived> CwiseSqrtReturnType; typedef CwiseUnaryOp, const Derived> CwiseSignReturnType; typedef CwiseUnaryOp, const Derived> CwiseInverseReturnType; /// \returns an expression of the coefficient-wise absolute value of \c *this /// /// Example: \include MatrixBase_cwiseAbs.cpp /// Output: \verbinclude MatrixBase_cwiseAbs.out /// EIGEN_DOC_UNARY_ADDONS(cwiseAbs,absolute value) /// /// \sa cwiseAbs2() /// EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseAbsReturnType cwiseAbs() const { return CwiseAbsReturnType(derived()); } /// \returns an expression of the coefficient-wise squared absolute value of \c *this /// /// Example: \include MatrixBase_cwiseAbs2.cpp /// Output: \verbinclude MatrixBase_cwiseAbs2.out /// EIGEN_DOC_UNARY_ADDONS(cwiseAbs2,squared absolute value) /// /// \sa cwiseAbs() /// EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseAbs2ReturnType cwiseAbs2() const { return CwiseAbs2ReturnType(derived()); } /// \returns an expression of the coefficient-wise square root of *this. /// /// Example: \include MatrixBase_cwiseSqrt.cpp /// Output: \verbinclude MatrixBase_cwiseSqrt.out /// EIGEN_DOC_UNARY_ADDONS(cwiseSqrt,square-root) /// /// \sa cwisePow(), cwiseSquare() /// EIGEN_DEVICE_FUNC inline const CwiseSqrtReturnType cwiseSqrt() const { return CwiseSqrtReturnType(derived()); } /// \returns an expression of the coefficient-wise signum of *this. /// /// Example: \include MatrixBase_cwiseSign.cpp /// Output: \verbinclude MatrixBase_cwiseSign.out /// EIGEN_DOC_UNARY_ADDONS(cwiseSign,sign function) /// EIGEN_DEVICE_FUNC inline const CwiseSignReturnType cwiseSign() const { return CwiseSignReturnType(derived()); } /// \returns an expression of the coefficient-wise inverse of *this. 
/// /// Example: \include MatrixBase_cwiseInverse.cpp /// Output: \verbinclude MatrixBase_cwiseInverse.out /// EIGEN_DOC_UNARY_ADDONS(cwiseInverse,inverse) /// /// \sa cwiseProduct() /// EIGEN_DEVICE_FUNC inline const CwiseInverseReturnType cwiseInverse() const { return CwiseInverseReturnType(derived()); } /// \returns an expression of the coefficient-wise phase angle of \c *this /// /// Example: \include MatrixBase_cwiseArg.cpp /// Output: \verbinclude MatrixBase_cwiseArg.out /// EIGEN_DOC_UNARY_ADDONS(cwiseArg,arg) EIGEN_DEVICE_FUNC inline const CwiseArgReturnType cwiseArg() const { return CwiseArgReturnType(derived()); } piqp-octave/src/eigen/Eigen/src/plugins/ArrayCwiseBinaryOps.h0000644000175100001770000003335414107270226024112 0ustar runnerdocker /** \returns an expression of the coefficient wise product of \c *this and \a other * * \sa MatrixBase::cwiseProduct */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const EIGEN_CWISE_BINARY_RETURN_TYPE(Derived,OtherDerived,product) operator*(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const { return EIGEN_CWISE_BINARY_RETURN_TYPE(Derived,OtherDerived,product)(derived(), other.derived()); } /** \returns an expression of the coefficient wise quotient of \c *this and \a other * * \sa MatrixBase::cwiseQuotient */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseBinaryOp, const Derived, const OtherDerived> operator/(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const { return CwiseBinaryOp, const Derived, const OtherDerived>(derived(), other.derived()); } /** \returns an expression of the coefficient-wise min of \c *this and \a other * * Example: \include Cwise_min.cpp * Output: \verbinclude Cwise_min.out * * \sa max() */ EIGEN_MAKE_CWISE_BINARY_OP(min,min) /** \returns an expression of the coefficient-wise min of \c *this and scalar \a other * * \sa max() */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseBinaryOp, const Derived, const CwiseNullaryOp, PlainObject> > #ifdef EIGEN_PARSED_BY_DOXYGEN min #else (min) #endif (const Scalar &other) const { return (min)(Derived::PlainObject::Constant(rows(), cols(), other)); } /** \returns an expression of the coefficient-wise max of \c *this and \a other * * Example: \include Cwise_max.cpp * Output: \verbinclude Cwise_max.out * * \sa min() */ EIGEN_MAKE_CWISE_BINARY_OP(max,max) /** \returns an expression of the coefficient-wise max of \c *this and scalar \a other * * \sa min() */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseBinaryOp, const Derived, const CwiseNullaryOp, PlainObject> > #ifdef EIGEN_PARSED_BY_DOXYGEN max #else (max) #endif (const Scalar &other) const { return (max)(Derived::PlainObject::Constant(rows(), cols(), other)); } /** \returns an expression of the coefficient-wise absdiff of \c *this and \a other * * Example: \include Cwise_absolute_difference.cpp * Output: \verbinclude Cwise_absolute_difference.out * * \sa absolute_difference() */ EIGEN_MAKE_CWISE_BINARY_OP(absolute_difference,absolute_difference) /** \returns an expression of the coefficient-wise absolute_difference of \c *this and scalar \a other * * \sa absolute_difference() */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseBinaryOp, const Derived, const CwiseNullaryOp, PlainObject> > #ifdef EIGEN_PARSED_BY_DOXYGEN absolute_difference #else (absolute_difference) #endif (const Scalar &other) const { return (absolute_difference)(Derived::PlainObject::Constant(rows(), cols(), other)); } /** \returns an expression of the coefficient-wise power of \c *this to the given array of \a exponents. 
* * This function computes the coefficient-wise power. * * Example: \include Cwise_array_power_array.cpp * Output: \verbinclude Cwise_array_power_array.out */ EIGEN_MAKE_CWISE_BINARY_OP(pow,pow) #ifndef EIGEN_PARSED_BY_DOXYGEN EIGEN_MAKE_SCALAR_BINARY_OP_ONTHERIGHT(pow,pow) #else /** \returns an expression of the coefficients of \c *this rasied to the constant power \a exponent * * \tparam T is the scalar type of \a exponent. It must be compatible with the scalar type of the given expression. * * This function computes the coefficient-wise power. The function MatrixBase::pow() in the * unsupported module MatrixFunctions computes the matrix power. * * Example: \include Cwise_pow.cpp * Output: \verbinclude Cwise_pow.out * * \sa ArrayBase::pow(ArrayBase), square(), cube(), exp(), log() */ template const CwiseBinaryOp,Derived,Constant > pow(const T& exponent) const; #endif // TODO code generating macros could be moved to Macros.h and could include generation of documentation #define EIGEN_MAKE_CWISE_COMP_OP(OP, COMPARATOR) \ template \ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseBinaryOp, const Derived, const OtherDerived> \ OP(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const \ { \ return CwiseBinaryOp, const Derived, const OtherDerived>(derived(), other.derived()); \ }\ typedef CwiseBinaryOp, const Derived, const CwiseNullaryOp, PlainObject> > Cmp ## COMPARATOR ## ReturnType; \ typedef CwiseBinaryOp, const CwiseNullaryOp, PlainObject>, const Derived > RCmp ## COMPARATOR ## ReturnType; \ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Cmp ## COMPARATOR ## ReturnType \ OP(const Scalar& s) const { \ return this->OP(Derived::PlainObject::Constant(rows(), cols(), s)); \ } \ EIGEN_DEVICE_FUNC friend EIGEN_STRONG_INLINE const RCmp ## COMPARATOR ## ReturnType \ OP(const Scalar& s, const EIGEN_CURRENT_STORAGE_BASE_CLASS& d) { \ return Derived::PlainObject::Constant(d.rows(), d.cols(), s).OP(d); \ } #define EIGEN_MAKE_CWISE_COMP_R_OP(OP, R_OP, RCOMPARATOR) \ template \ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CwiseBinaryOp, const OtherDerived, const Derived> \ OP(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const \ { \ return CwiseBinaryOp, const OtherDerived, const Derived>(other.derived(), derived()); \ } \ EIGEN_DEVICE_FUNC \ inline const RCmp ## RCOMPARATOR ## ReturnType \ OP(const Scalar& s) const { \ return Derived::PlainObject::Constant(rows(), cols(), s).R_OP(*this); \ } \ friend inline const Cmp ## RCOMPARATOR ## ReturnType \ OP(const Scalar& s, const Derived& d) { \ return d.R_OP(Derived::PlainObject::Constant(d.rows(), d.cols(), s)); \ } /** \returns an expression of the coefficient-wise \< operator of *this and \a other * * Example: \include Cwise_less.cpp * Output: \verbinclude Cwise_less.out * * \sa all(), any(), operator>(), operator<=() */ EIGEN_MAKE_CWISE_COMP_OP(operator<, LT) /** \returns an expression of the coefficient-wise \<= operator of *this and \a other * * Example: \include Cwise_less_equal.cpp * Output: \verbinclude Cwise_less_equal.out * * \sa all(), any(), operator>=(), operator<() */ EIGEN_MAKE_CWISE_COMP_OP(operator<=, LE) /** \returns an expression of the coefficient-wise \> operator of *this and \a other * * Example: \include Cwise_greater.cpp * Output: \verbinclude Cwise_greater.out * * \sa all(), any(), operator>=(), operator<() */ EIGEN_MAKE_CWISE_COMP_R_OP(operator>, operator<, LT) /** \returns an expression of the coefficient-wise \>= operator of *this and \a other * * Example: \include Cwise_greater_equal.cpp * Output: \verbinclude 
Cwise_greater_equal.out * * \sa all(), any(), operator>(), operator<=() */ EIGEN_MAKE_CWISE_COMP_R_OP(operator>=, operator<=, LE) /** \returns an expression of the coefficient-wise == operator of *this and \a other * * \warning this performs an exact comparison, which is generally a bad idea with floating-point types. * In order to check for equality between two vectors or matrices with floating-point coefficients, it is * generally a far better idea to use a fuzzy comparison as provided by isApprox() and * isMuchSmallerThan(). * * Example: \include Cwise_equal_equal.cpp * Output: \verbinclude Cwise_equal_equal.out * * \sa all(), any(), isApprox(), isMuchSmallerThan() */ EIGEN_MAKE_CWISE_COMP_OP(operator==, EQ) /** \returns an expression of the coefficient-wise != operator of *this and \a other * * \warning this performs an exact comparison, which is generally a bad idea with floating-point types. * In order to check for equality between two vectors or matrices with floating-point coefficients, it is * generally a far better idea to use a fuzzy comparison as provided by isApprox() and * isMuchSmallerThan(). * * Example: \include Cwise_not_equal.cpp * Output: \verbinclude Cwise_not_equal.out * * \sa all(), any(), isApprox(), isMuchSmallerThan() */ EIGEN_MAKE_CWISE_COMP_OP(operator!=, NEQ) #undef EIGEN_MAKE_CWISE_COMP_OP #undef EIGEN_MAKE_CWISE_COMP_R_OP // scalar addition #ifndef EIGEN_PARSED_BY_DOXYGEN EIGEN_MAKE_SCALAR_BINARY_OP(operator+,sum) #else /** \returns an expression of \c *this with each coeff incremented by the constant \a scalar * * \tparam T is the scalar type of \a scalar. It must be compatible with the scalar type of the given expression. * * Example: \include Cwise_plus.cpp * Output: \verbinclude Cwise_plus.out * * \sa operator+=(), operator-() */ template const CwiseBinaryOp,Derived,Constant > operator+(const T& scalar) const; /** \returns an expression of \a expr with each coeff incremented by the constant \a scalar * * \tparam T is the scalar type of \a scalar. It must be compatible with the scalar type of the given expression. */ template friend const CwiseBinaryOp,Constant,Derived> operator+(const T& scalar, const StorageBaseType& expr); #endif #ifndef EIGEN_PARSED_BY_DOXYGEN EIGEN_MAKE_SCALAR_BINARY_OP(operator-,difference) #else /** \returns an expression of \c *this with each coeff decremented by the constant \a scalar * * \tparam T is the scalar type of \a scalar. It must be compatible with the scalar type of the given expression. * * Example: \include Cwise_minus.cpp * Output: \verbinclude Cwise_minus.out * * \sa operator+=(), operator-() */ template const CwiseBinaryOp,Derived,Constant > operator-(const T& scalar) const; /** \returns an expression of the constant matrix of value \a scalar decremented by the coefficients of \a expr * * \tparam T is the scalar type of \a scalar. It must be compatible with the scalar type of the given expression. */ template friend const CwiseBinaryOp,Constant,Derived> operator-(const T& scalar, const StorageBaseType& expr); #endif #ifndef EIGEN_PARSED_BY_DOXYGEN EIGEN_MAKE_SCALAR_BINARY_OP_ONTHELEFT(operator/,quotient) #else /** * \brief Component-wise division of the scalar \a s by array elements of \a a. * * \tparam Scalar is the scalar type of \a x. It must be compatible with the scalar type of the given array expression (\c Derived::Scalar). 
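 *
 * A minimal hedged sketch (array name and values are assumptions):
 * \code
 * ArrayXd a = ArrayXd::LinSpaced(4, 1.0, 4.0);
 * ArrayXd r = 1.0 / a;     // coefficient-wise reciprocals 1, 1/2, 1/3, 1/4
 * \endcode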
*/ template friend inline const CwiseBinaryOp,Constant,Derived> operator/(const T& s,const StorageBaseType& a); #endif /** \returns an expression of the coefficient-wise ^ operator of *this and \a other * * \warning this operator is for expression of bool only. * * Example: \include Cwise_boolean_xor.cpp * Output: \verbinclude Cwise_boolean_xor.out * * \sa operator&&(), select() */ template EIGEN_DEVICE_FUNC inline const CwiseBinaryOp operator^(const EIGEN_CURRENT_STORAGE_BASE_CLASS &other) const { EIGEN_STATIC_ASSERT((internal::is_same::value && internal::is_same::value), THIS_METHOD_IS_ONLY_FOR_EXPRESSIONS_OF_BOOL); return CwiseBinaryOp(derived(),other.derived()); } // NOTE disabled until we agree on argument order #if 0 /** \cpp11 \returns an expression of the coefficient-wise polygamma function. * * \specialfunctions_module * * It returns the \a n -th derivative of the digamma(psi) evaluated at \c *this. * * \warning Be careful with the order of the parameters: x.polygamma(n) is equivalent to polygamma(n,x) * * \sa Eigen::polygamma() */ template inline const CwiseBinaryOp, const DerivedN, const Derived> polygamma(const EIGEN_CURRENT_STORAGE_BASE_CLASS &n) const { return CwiseBinaryOp, const DerivedN, const Derived>(n.derived(), this->derived()); } #endif /** \returns an expression of the coefficient-wise zeta function. * * \specialfunctions_module * * It returns the Riemann zeta function of two arguments \c *this and \a q: * * \param q is the shift, it must be > 0 * * \note *this is the exponent, it must be > 1. * \note This function supports only float and double scalar types. To support other scalar types, the user has * to provide implementations of zeta(T,T) for any scalar type T to be supported. * * This method is an alias for zeta(*this,q); * * \sa Eigen::zeta() */ template inline const CwiseBinaryOp, const Derived, const DerivedQ> zeta(const EIGEN_CURRENT_STORAGE_BASE_CLASS &q) const { return CwiseBinaryOp, const Derived, const DerivedQ>(this->derived(), q.derived()); } piqp-octave/src/eigen/Eigen/src/plugins/ArrayCwiseUnaryOps.h0000644000175100001770000005166714107270226023773 0ustar runnerdocker typedef CwiseUnaryOp, const Derived> AbsReturnType; typedef CwiseUnaryOp, const Derived> ArgReturnType; typedef CwiseUnaryOp, const Derived> Abs2ReturnType; typedef CwiseUnaryOp, const Derived> SqrtReturnType; typedef CwiseUnaryOp, const Derived> RsqrtReturnType; typedef CwiseUnaryOp, const Derived> SignReturnType; typedef CwiseUnaryOp, const Derived> InverseReturnType; typedef CwiseUnaryOp, const Derived> BooleanNotReturnType; typedef CwiseUnaryOp, const Derived> ExpReturnType; typedef CwiseUnaryOp, const Derived> Expm1ReturnType; typedef CwiseUnaryOp, const Derived> LogReturnType; typedef CwiseUnaryOp, const Derived> Log1pReturnType; typedef CwiseUnaryOp, const Derived> Log10ReturnType; typedef CwiseUnaryOp, const Derived> Log2ReturnType; typedef CwiseUnaryOp, const Derived> CosReturnType; typedef CwiseUnaryOp, const Derived> SinReturnType; typedef CwiseUnaryOp, const Derived> TanReturnType; typedef CwiseUnaryOp, const Derived> AcosReturnType; typedef CwiseUnaryOp, const Derived> AsinReturnType; typedef CwiseUnaryOp, const Derived> AtanReturnType; typedef CwiseUnaryOp, const Derived> TanhReturnType; typedef CwiseUnaryOp, const Derived> LogisticReturnType; typedef CwiseUnaryOp, const Derived> SinhReturnType; #if EIGEN_HAS_CXX11_MATH typedef CwiseUnaryOp, const Derived> AtanhReturnType; typedef CwiseUnaryOp, const Derived> AsinhReturnType; typedef CwiseUnaryOp, const Derived> 
AcoshReturnType; #endif typedef CwiseUnaryOp, const Derived> CoshReturnType; typedef CwiseUnaryOp, const Derived> SquareReturnType; typedef CwiseUnaryOp, const Derived> CubeReturnType; typedef CwiseUnaryOp, const Derived> RoundReturnType; typedef CwiseUnaryOp, const Derived> RintReturnType; typedef CwiseUnaryOp, const Derived> FloorReturnType; typedef CwiseUnaryOp, const Derived> CeilReturnType; typedef CwiseUnaryOp, const Derived> IsNaNReturnType; typedef CwiseUnaryOp, const Derived> IsInfReturnType; typedef CwiseUnaryOp, const Derived> IsFiniteReturnType; /** \returns an expression of the coefficient-wise absolute value of \c *this * * Example: \include Cwise_abs.cpp * Output: \verbinclude Cwise_abs.out * * \sa Math functions, abs2() */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const AbsReturnType abs() const { return AbsReturnType(derived()); } /** \returns an expression of the coefficient-wise phase angle of \c *this * * Example: \include Cwise_arg.cpp * Output: \verbinclude Cwise_arg.out * * \sa abs() */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const ArgReturnType arg() const { return ArgReturnType(derived()); } /** \returns an expression of the coefficient-wise squared absolute value of \c *this * * Example: \include Cwise_abs2.cpp * Output: \verbinclude Cwise_abs2.out * * \sa Math functions, abs(), square() */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Abs2ReturnType abs2() const { return Abs2ReturnType(derived()); } /** \returns an expression of the coefficient-wise exponential of *this. * * This function computes the coefficient-wise exponential. The function MatrixBase::exp() in the * unsupported module MatrixFunctions computes the matrix exponential. * * Example: \include Cwise_exp.cpp * Output: \verbinclude Cwise_exp.out * * \sa Math functions, pow(), log(), sin(), cos() */ EIGEN_DEVICE_FUNC inline const ExpReturnType exp() const { return ExpReturnType(derived()); } /** \returns an expression of the coefficient-wise exponential of *this minus 1. * * In exact arithmetic, \c x.expm1() is equivalent to \c x.exp() - 1, * however, with finite precision, this function is much more accurate when \c x is close to zero. * * \sa Math functions, exp() */ EIGEN_DEVICE_FUNC inline const Expm1ReturnType expm1() const { return Expm1ReturnType(derived()); } /** \returns an expression of the coefficient-wise logarithm of *this. * * This function computes the coefficient-wise logarithm. The function MatrixBase::log() in the * unsupported module MatrixFunctions computes the matrix logarithm. * * Example: \include Cwise_log.cpp * Output: \verbinclude Cwise_log.out * * \sa Math functions, log() */ EIGEN_DEVICE_FUNC inline const LogReturnType log() const { return LogReturnType(derived()); } /** \returns an expression of the coefficient-wise logarithm of 1 plus \c *this. * * In exact arithmetic, \c x.log() is equivalent to \c (x+1).log(), * however, with finite precision, this function is much more accurate when \c x is close to zero. * * \sa Math functions, log() */ EIGEN_DEVICE_FUNC inline const Log1pReturnType log1p() const { return Log1pReturnType(derived()); } /** \returns an expression of the coefficient-wise base-10 logarithm of *this. * * This function computes the coefficient-wise base-10 logarithm. * * Example: \include Cwise_log10.cpp * Output: \verbinclude Cwise_log10.out * * \sa Math functions, log() */ EIGEN_DEVICE_FUNC inline const Log10ReturnType log10() const { return Log10ReturnType(derived()); } /** \returns an expression of the coefficient-wise base-2 logarithm of *this. 
* * This function computes the coefficient-wise base-2 logarithm. * */ EIGEN_DEVICE_FUNC inline const Log2ReturnType log2() const { return Log2ReturnType(derived()); } /** \returns an expression of the coefficient-wise square root of *this. * * This function computes the coefficient-wise square root. The function MatrixBase::sqrt() in the * unsupported module MatrixFunctions computes the matrix square root. * * Example: \include Cwise_sqrt.cpp * Output: \verbinclude Cwise_sqrt.out * * \sa Math functions, pow(), square() */ EIGEN_DEVICE_FUNC inline const SqrtReturnType sqrt() const { return SqrtReturnType(derived()); } /** \returns an expression of the coefficient-wise inverse square root of *this. * * This function computes the coefficient-wise inverse square root. * * Example: \include Cwise_sqrt.cpp * Output: \verbinclude Cwise_sqrt.out * * \sa pow(), square() */ EIGEN_DEVICE_FUNC inline const RsqrtReturnType rsqrt() const { return RsqrtReturnType(derived()); } /** \returns an expression of the coefficient-wise signum of *this. * * This function computes the coefficient-wise signum. * * Example: \include Cwise_sign.cpp * Output: \verbinclude Cwise_sign.out * * \sa pow(), square() */ EIGEN_DEVICE_FUNC inline const SignReturnType sign() const { return SignReturnType(derived()); } /** \returns an expression of the coefficient-wise cosine of *this. * * This function computes the coefficient-wise cosine. The function MatrixBase::cos() in the * unsupported module MatrixFunctions computes the matrix cosine. * * Example: \include Cwise_cos.cpp * Output: \verbinclude Cwise_cos.out * * \sa Math functions, sin(), acos() */ EIGEN_DEVICE_FUNC inline const CosReturnType cos() const { return CosReturnType(derived()); } /** \returns an expression of the coefficient-wise sine of *this. * * This function computes the coefficient-wise sine. The function MatrixBase::sin() in the * unsupported module MatrixFunctions computes the matrix sine. * * Example: \include Cwise_sin.cpp * Output: \verbinclude Cwise_sin.out * * \sa Math functions, cos(), asin() */ EIGEN_DEVICE_FUNC inline const SinReturnType sin() const { return SinReturnType(derived()); } /** \returns an expression of the coefficient-wise tan of *this. * * Example: \include Cwise_tan.cpp * Output: \verbinclude Cwise_tan.out * * \sa Math functions, cos(), sin() */ EIGEN_DEVICE_FUNC inline const TanReturnType tan() const { return TanReturnType(derived()); } /** \returns an expression of the coefficient-wise arc tan of *this. * * Example: \include Cwise_atan.cpp * Output: \verbinclude Cwise_atan.out * * \sa Math functions, tan(), asin(), acos() */ EIGEN_DEVICE_FUNC inline const AtanReturnType atan() const { return AtanReturnType(derived()); } /** \returns an expression of the coefficient-wise arc cosine of *this. * * Example: \include Cwise_acos.cpp * Output: \verbinclude Cwise_acos.out * * \sa Math functions, cos(), asin() */ EIGEN_DEVICE_FUNC inline const AcosReturnType acos() const { return AcosReturnType(derived()); } /** \returns an expression of the coefficient-wise arc sine of *this. * * Example: \include Cwise_asin.cpp * Output: \verbinclude Cwise_asin.out * * \sa Math functions, sin(), acos() */ EIGEN_DEVICE_FUNC inline const AsinReturnType asin() const { return AsinReturnType(derived()); } /** \returns an expression of the coefficient-wise hyperbolic tan of *this. 
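*
* // Editorial illustration (not part of the original Eigen sources): a hedged sketch of the
* // coefficient-wise trigonometric and sqrt members documented in this plugin, on an assumed ArrayXd.
* \code
* #include <Eigen/Dense>
*
* void trig_demo() {
*   Eigen::ArrayXd x = Eigen::ArrayXd::LinSpaced(4, 0.0, 1.0);
*   Eigen::ArrayXd s = x.sin();             // coefficient-wise sine
*   Eigen::ArrayXd c = x.cos();             // coefficient-wise cosine
*   Eigen::ArrayXd r = (s*s + c*c).sqrt();  // identity: each coefficient should be ~1
*   Eigen::ArrayXd t = x.atan();            // coefficient-wise arc tangent
* }
* \endcode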
* * Example: \include Cwise_tanh.cpp * Output: \verbinclude Cwise_tanh.out * * \sa Math functions, tan(), sinh(), cosh() */ EIGEN_DEVICE_FUNC inline const TanhReturnType tanh() const { return TanhReturnType(derived()); } /** \returns an expression of the coefficient-wise hyperbolic sin of *this. * * Example: \include Cwise_sinh.cpp * Output: \verbinclude Cwise_sinh.out * * \sa Math functions, sin(), tanh(), cosh() */ EIGEN_DEVICE_FUNC inline const SinhReturnType sinh() const { return SinhReturnType(derived()); } /** \returns an expression of the coefficient-wise hyperbolic cos of *this. * * Example: \include Cwise_cosh.cpp * Output: \verbinclude Cwise_cosh.out * * \sa Math functions, tanh(), sinh(), cosh() */ EIGEN_DEVICE_FUNC inline const CoshReturnType cosh() const { return CoshReturnType(derived()); } #if EIGEN_HAS_CXX11_MATH /** \returns an expression of the coefficient-wise inverse hyperbolic tan of *this. * * \sa Math functions, atanh(), asinh(), acosh() */ EIGEN_DEVICE_FUNC inline const AtanhReturnType atanh() const { return AtanhReturnType(derived()); } /** \returns an expression of the coefficient-wise inverse hyperbolic sin of *this. * * \sa Math functions, atanh(), asinh(), acosh() */ EIGEN_DEVICE_FUNC inline const AsinhReturnType asinh() const { return AsinhReturnType(derived()); } /** \returns an expression of the coefficient-wise inverse hyperbolic cos of *this. * * \sa Math functions, atanh(), asinh(), acosh() */ EIGEN_DEVICE_FUNC inline const AcoshReturnType acosh() const { return AcoshReturnType(derived()); } #endif /** \returns an expression of the coefficient-wise logistic of *this. */ EIGEN_DEVICE_FUNC inline const LogisticReturnType logistic() const { return LogisticReturnType(derived()); } /** \returns an expression of the coefficient-wise inverse of *this. * * Example: \include Cwise_inverse.cpp * Output: \verbinclude Cwise_inverse.out * * \sa operator/(), operator*() */ EIGEN_DEVICE_FUNC inline const InverseReturnType inverse() const { return InverseReturnType(derived()); } /** \returns an expression of the coefficient-wise square of *this. * * Example: \include Cwise_square.cpp * Output: \verbinclude Cwise_square.out * * \sa Math functions, abs2(), cube(), pow() */ EIGEN_DEVICE_FUNC inline const SquareReturnType square() const { return SquareReturnType(derived()); } /** \returns an expression of the coefficient-wise cube of *this. * * Example: \include Cwise_cube.cpp * Output: \verbinclude Cwise_cube.out * * \sa Math functions, square(), pow() */ EIGEN_DEVICE_FUNC inline const CubeReturnType cube() const { return CubeReturnType(derived()); } /** \returns an expression of the coefficient-wise rint of *this. * * Example: \include Cwise_rint.cpp * Output: \verbinclude Cwise_rint.out * * \sa Math functions, ceil(), floor() */ EIGEN_DEVICE_FUNC inline const RintReturnType rint() const { return RintReturnType(derived()); } /** \returns an expression of the coefficient-wise round of *this. * * Example: \include Cwise_round.cpp * Output: \verbinclude Cwise_round.out * * \sa Math functions, ceil(), floor() */ EIGEN_DEVICE_FUNC inline const RoundReturnType round() const { return RoundReturnType(derived()); } /** \returns an expression of the coefficient-wise floor of *this. * * Example: \include Cwise_floor.cpp * Output: \verbinclude Cwise_floor.out * * \sa Math functions, ceil(), round() */ EIGEN_DEVICE_FUNC inline const FloorReturnType floor() const { return FloorReturnType(derived()); } /** \returns an expression of the coefficient-wise ceil of *this. 
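*
* // Editorial illustration (not part of the original Eigen sources): a hedged sketch of the
* // rounding-style members (round/floor/ceil) documented in this plugin, on an assumed ArrayXd.
* \code
* #include <Eigen/Dense>
*
* void rounding_demo() {
*   Eigen::ArrayXd x(4);
*   x << -1.7, -0.2, 0.5, 2.3;
*   Eigen::ArrayXd r = x.round();  // nearest integer, halfway cases rounded away from zero
*   Eigen::ArrayXd f = x.floor();  // largest integer not greater than each coefficient
*   Eigen::ArrayXd c = x.ceil();   // smallest integer not less than each coefficient
* }
* \endcode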
* * Example: \include Cwise_ceil.cpp * Output: \verbinclude Cwise_ceil.out * * \sa Math functions, floor(), round() */ EIGEN_DEVICE_FUNC inline const CeilReturnType ceil() const { return CeilReturnType(derived()); } template struct ShiftRightXpr { typedef CwiseUnaryOp, const Derived> Type; }; /** \returns an expression of \c *this with the \a Scalar type arithmetically * shifted right by \a N bit positions. * * The template parameter \a N specifies the number of bit positions to shift. * * \sa shiftLeft() */ template EIGEN_DEVICE_FUNC typename ShiftRightXpr::Type shiftRight() const { return typename ShiftRightXpr::Type(derived()); } template struct ShiftLeftXpr { typedef CwiseUnaryOp, const Derived> Type; }; /** \returns an expression of \c *this with the \a Scalar type logically * shifted left by \a N bit positions. * * The template parameter \a N specifies the number of bit positions to shift. * * \sa shiftRight() */ template EIGEN_DEVICE_FUNC typename ShiftLeftXpr::Type shiftLeft() const { return typename ShiftLeftXpr::Type(derived()); } /** \returns an expression of the coefficient-wise isnan of *this. * * Example: \include Cwise_isNaN.cpp * Output: \verbinclude Cwise_isNaN.out * * \sa isfinite(), isinf() */ EIGEN_DEVICE_FUNC inline const IsNaNReturnType isNaN() const { return IsNaNReturnType(derived()); } /** \returns an expression of the coefficient-wise isinf of *this. * * Example: \include Cwise_isInf.cpp * Output: \verbinclude Cwise_isInf.out * * \sa isnan(), isfinite() */ EIGEN_DEVICE_FUNC inline const IsInfReturnType isInf() const { return IsInfReturnType(derived()); } /** \returns an expression of the coefficient-wise isfinite of *this. * * Example: \include Cwise_isFinite.cpp * Output: \verbinclude Cwise_isFinite.out * * \sa isnan(), isinf() */ EIGEN_DEVICE_FUNC inline const IsFiniteReturnType isFinite() const { return IsFiniteReturnType(derived()); } /** \returns an expression of the coefficient-wise ! operator of *this * * \warning this operator is for expression of bool only. * * Example: \include Cwise_boolean_not.cpp * Output: \verbinclude Cwise_boolean_not.out * * \sa operator!=() */ EIGEN_DEVICE_FUNC inline const BooleanNotReturnType operator!() const { EIGEN_STATIC_ASSERT((internal::is_same::value), THIS_METHOD_IS_ONLY_FOR_EXPRESSIONS_OF_BOOL); return BooleanNotReturnType(derived()); } // --- SpecialFunctions module --- typedef CwiseUnaryOp, const Derived> LgammaReturnType; typedef CwiseUnaryOp, const Derived> DigammaReturnType; typedef CwiseUnaryOp, const Derived> ErfReturnType; typedef CwiseUnaryOp, const Derived> ErfcReturnType; typedef CwiseUnaryOp, const Derived> NdtriReturnType; /** \cpp11 \returns an expression of the coefficient-wise ln(|gamma(*this)|). * * \specialfunctions_module * * \note This function supports only float and double scalar types in c++11 mode. To support other scalar types, * or float/double in non c++11 mode, the user has to provide implementations of lgamma(T) for any scalar * type T to be supported. * * \sa Math functions, digamma() */ EIGEN_DEVICE_FUNC inline const LgammaReturnType lgamma() const { return LgammaReturnType(derived()); } /** \returns an expression of the coefficient-wise digamma (psi, derivative of lgamma). * * \specialfunctions_module * * \note This function supports only float and double scalar types. To support other scalar types, * the user has to provide implementations of digamma(T) for any scalar * type T to be supported. 
* * \sa Math functions, Eigen::digamma(), Eigen::polygamma(), lgamma() */ EIGEN_DEVICE_FUNC inline const DigammaReturnType digamma() const { return DigammaReturnType(derived()); } /** \cpp11 \returns an expression of the coefficient-wise Gauss error * function of *this. * * \specialfunctions_module * * \note This function supports only float and double scalar types in c++11 mode. To support other scalar types, * or float/double in non c++11 mode, the user has to provide implementations of erf(T) for any scalar * type T to be supported. * * \sa Math functions, erfc() */ EIGEN_DEVICE_FUNC inline const ErfReturnType erf() const { return ErfReturnType(derived()); } /** \cpp11 \returns an expression of the coefficient-wise Complementary error * function of *this. * * \specialfunctions_module * * \note This function supports only float and double scalar types in c++11 mode. To support other scalar types, * or float/double in non c++11 mode, the user has to provide implementations of erfc(T) for any scalar * type T to be supported. * * \sa Math functions, erf() */ EIGEN_DEVICE_FUNC inline const ErfcReturnType erfc() const { return ErfcReturnType(derived()); } /** \returns an expression of the coefficient-wise inverse of the CDF of the Normal distribution function * function of *this. * * \specialfunctions_module * * In other words, considering `x = ndtri(y)`, it returns the argument, x, for which the area under the * Gaussian probability density function (integrated from minus infinity to x) is equal to y. * * \note This function supports only float and double scalar types. To support other scalar types, * the user has to provide implementations of ndtri(T) for any scalar type T to be supported. * * \sa Math functions */ EIGEN_DEVICE_FUNC inline const NdtriReturnType ndtri() const { return NdtriReturnType(derived()); } piqp-octave/src/eigen/Eigen/src/plugins/BlockMethods.h0000644000175100001770000016321414107270226022567 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2010 Gael Guennebaud // Copyright (C) 2006-2010 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
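// Editorial note (not part of the original Eigen sources): the block members declared in this
// plugin are documented individually below. As a hedged, self-contained sketch of typical usage,
// assuming Eigen 3.4 and an ordinary dense matrix (the matrix name is an illustrative assumption):
//
//   #include <Eigen/Dense>
//
//   void block_overview() {
//     Eigen::MatrixXd m = Eigen::MatrixXd::Random(6, 6);
//     auto b  = m.block(1, 2, 3, 4);                     // 3x4 block starting at row 1, column 2
//     auto fb = m.block<2, 2>(0, 0);                     // fixed-size 2x2 block
//     auto hb = m.block(0, 0, Eigen::fix<2>, m.cols());  // hybrid fixed/dynamic sizes (Eigen 3.4)
//     b.setZero();                                       // blocks are writable views into m
//   }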
#ifndef EIGEN_PARSED_BY_DOXYGEN /// \internal expression type of a column */ typedef Block::RowsAtCompileTime, 1, !IsRowMajor> ColXpr; typedef const Block::RowsAtCompileTime, 1, !IsRowMajor> ConstColXpr; /// \internal expression type of a row */ typedef Block::ColsAtCompileTime, IsRowMajor> RowXpr; typedef const Block::ColsAtCompileTime, IsRowMajor> ConstRowXpr; /// \internal expression type of a block of whole columns */ typedef Block::RowsAtCompileTime, Dynamic, !IsRowMajor> ColsBlockXpr; typedef const Block::RowsAtCompileTime, Dynamic, !IsRowMajor> ConstColsBlockXpr; /// \internal expression type of a block of whole rows */ typedef Block::ColsAtCompileTime, IsRowMajor> RowsBlockXpr; typedef const Block::ColsAtCompileTime, IsRowMajor> ConstRowsBlockXpr; /// \internal expression type of a block of whole columns */ template struct NColsBlockXpr { typedef Block::RowsAtCompileTime, N, !IsRowMajor> Type; }; template struct ConstNColsBlockXpr { typedef const Block::RowsAtCompileTime, N, !IsRowMajor> Type; }; /// \internal expression type of a block of whole rows */ template struct NRowsBlockXpr { typedef Block::ColsAtCompileTime, IsRowMajor> Type; }; template struct ConstNRowsBlockXpr { typedef const Block::ColsAtCompileTime, IsRowMajor> Type; }; /// \internal expression of a block */ typedef Block BlockXpr; typedef const Block ConstBlockXpr; /// \internal expression of a block of fixed sizes */ template struct FixedBlockXpr { typedef Block Type; }; template struct ConstFixedBlockXpr { typedef Block Type; }; typedef VectorBlock SegmentReturnType; typedef const VectorBlock ConstSegmentReturnType; template struct FixedSegmentReturnType { typedef VectorBlock Type; }; template struct ConstFixedSegmentReturnType { typedef const VectorBlock Type; }; /// \internal inner-vector typedef Block InnerVectorReturnType; typedef Block ConstInnerVectorReturnType; /// \internal set of inner-vectors typedef Block InnerVectorsReturnType; typedef Block ConstInnerVectorsReturnType; #endif // not EIGEN_PARSED_BY_DOXYGEN /// \returns an expression of a block in \c *this with either dynamic or fixed sizes. /// /// \param startRow the first row in the block /// \param startCol the first column in the block /// \param blockRows number of rows in the block, specified at either run-time or compile-time /// \param blockCols number of columns in the block, specified at either run-time or compile-time /// \tparam NRowsType the type of the value handling the number of rows in the block, typically Index. /// \tparam NColsType the type of the value handling the number of columns in the block, typically Index. /// /// Example using runtime (aka dynamic) sizes: \include MatrixBase_block_int_int_int_int.cpp /// Output: \verbinclude MatrixBase_block_int_int_int_int.out /// /// \newin{3.4}: /// /// The number of rows \a blockRows and columns \a blockCols can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. In the later case, \c n plays the role of a runtime fallback value in case \c N equals Eigen::Dynamic. /// Here is an example with a fixed number of rows \c NRows and dynamic number of columns \c cols: /// \code /// mat.block(i,j,fix,cols) /// \endcode /// /// This function thus fully covers the features offered by the following overloads block(Index, Index), /// and block(Index, Index, Index, Index) that are thus obsolete. Indeed, this generic version avoids /// redundancy, it preserves the argument order, and prevents the need to rely on the template keyword in templated code. 
/// /// but with less redundancy and more consistency as it does not modify the argument order /// and seamlessly enable hybrid fixed/dynamic sizes. /// /// \note Even in the case that the returned expression has dynamic size, in the case /// when it is applied to a fixed-size matrix, it inherits a fixed maximal size, /// which means that evaluating it does not cause a dynamic memory allocation. /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa class Block, fix, fix(int) /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename FixedBlockXpr::value,internal::get_fixed_value::value>::Type #else typename FixedBlockXpr<...,...>::Type #endif block(Index startRow, Index startCol, NRowsType blockRows, NColsType blockCols) { return typename FixedBlockXpr::value,internal::get_fixed_value::value>::Type( derived(), startRow, startCol, internal::get_runtime_value(blockRows), internal::get_runtime_value(blockCols)); } /// This is the const version of block(Index,Index,NRowsType,NColsType) template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN const typename ConstFixedBlockXpr::value,internal::get_fixed_value::value>::Type #else const typename ConstFixedBlockXpr<...,...>::Type #endif block(Index startRow, Index startCol, NRowsType blockRows, NColsType blockCols) const { return typename ConstFixedBlockXpr::value,internal::get_fixed_value::value>::Type( derived(), startRow, startCol, internal::get_runtime_value(blockRows), internal::get_runtime_value(blockCols)); } /// \returns a expression of a top-right corner of \c *this with either dynamic or fixed sizes. /// /// \param cRows the number of rows in the corner /// \param cCols the number of columns in the corner /// \tparam NRowsType the type of the value handling the number of rows in the block, typically Index. /// \tparam NColsType the type of the value handling the number of columns in the block, typically Index. /// /// Example with dynamic sizes: \include MatrixBase_topRightCorner_int_int.cpp /// Output: \verbinclude MatrixBase_topRightCorner_int_int.out /// /// The number of rows \a blockRows and columns \a blockCols can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. See \link block(Index,Index,NRowsType,NColsType) block() \endlink for the details. /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename FixedBlockXpr::value,internal::get_fixed_value::value>::Type #else typename FixedBlockXpr<...,...>::Type #endif topRightCorner(NRowsType cRows, NColsType cCols) { return typename FixedBlockXpr::value,internal::get_fixed_value::value>::Type (derived(), 0, cols() - internal::get_runtime_value(cCols), internal::get_runtime_value(cRows), internal::get_runtime_value(cCols)); } /// This is the const version of topRightCorner(NRowsType, NColsType). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN const typename ConstFixedBlockXpr::value,internal::get_fixed_value::value>::Type #else const typename ConstFixedBlockXpr<...,...>::Type #endif topRightCorner(NRowsType cRows, NColsType cCols) const { return typename ConstFixedBlockXpr::value,internal::get_fixed_value::value>::Type (derived(), 0, cols() - internal::get_runtime_value(cCols), internal::get_runtime_value(cRows), internal::get_runtime_value(cCols)); } /// \returns an expression of a fixed-size top-right corner of \c *this. 
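///
/// Editorial illustration (not part of the original Eigen sources): a hedged sketch of the
/// run-time and fixed-size corner accessors documented here, assuming a dynamic matrix \c m.
/// \code
/// Eigen::MatrixXd m = Eigen::MatrixXd::Random(5, 5);
/// auto tr  = m.topRightCorner(2, 3);    // 2x3 corner, sizes known at run time
/// auto trf = m.topRightCorner<2, 3>();  // same corner with compile-time sizes
/// tr.setConstant(0.0);                  // corners are writable views into m
/// \endcode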
/// /// \tparam CRows the number of rows in the corner /// \tparam CCols the number of columns in the corner /// /// Example: \include MatrixBase_template_int_int_topRightCorner.cpp /// Output: \verbinclude MatrixBase_template_int_int_topRightCorner.out /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa class Block, block(Index,Index) /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename FixedBlockXpr::Type topRightCorner() { return typename FixedBlockXpr::Type(derived(), 0, cols() - CCols); } /// This is the const version of topRightCorner(). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const typename ConstFixedBlockXpr::Type topRightCorner() const { return typename ConstFixedBlockXpr::Type(derived(), 0, cols() - CCols); } /// \returns an expression of a top-right corner of \c *this. /// /// \tparam CRows number of rows in corner as specified at compile-time /// \tparam CCols number of columns in corner as specified at compile-time /// \param cRows number of rows in corner as specified at run-time /// \param cCols number of columns in corner as specified at run-time /// /// This function is mainly useful for corners where the number of rows is specified at compile-time /// and the number of columns is specified at run-time, or vice versa. The compile-time and run-time /// information should not contradict. In other words, \a cRows should equal \a CRows unless /// \a CRows is \a Dynamic, and the same for the number of columns. /// /// Example: \include MatrixBase_template_int_int_topRightCorner_int_int.cpp /// Output: \verbinclude MatrixBase_template_int_int_topRightCorner_int_int.out /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename FixedBlockXpr::Type topRightCorner(Index cRows, Index cCols) { return typename FixedBlockXpr::Type(derived(), 0, cols() - cCols, cRows, cCols); } /// This is the const version of topRightCorner(Index, Index). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const typename ConstFixedBlockXpr::Type topRightCorner(Index cRows, Index cCols) const { return typename ConstFixedBlockXpr::Type(derived(), 0, cols() - cCols, cRows, cCols); } /// \returns an expression of a top-left corner of \c *this with either dynamic or fixed sizes. /// /// \param cRows the number of rows in the corner /// \param cCols the number of columns in the corner /// \tparam NRowsType the type of the value handling the number of rows in the block, typically Index. /// \tparam NColsType the type of the value handling the number of columns in the block, typically Index. /// /// Example: \include MatrixBase_topLeftCorner_int_int.cpp /// Output: \verbinclude MatrixBase_topLeftCorner_int_int.out /// /// The number of rows \a blockRows and columns \a blockCols can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. See \link block(Index,Index,NRowsType,NColsType) block() \endlink for the details. 
/// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename FixedBlockXpr::value,internal::get_fixed_value::value>::Type #else typename FixedBlockXpr<...,...>::Type #endif topLeftCorner(NRowsType cRows, NColsType cCols) { return typename FixedBlockXpr::value,internal::get_fixed_value::value>::Type (derived(), 0, 0, internal::get_runtime_value(cRows), internal::get_runtime_value(cCols)); } /// This is the const version of topLeftCorner(Index, Index). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN const typename ConstFixedBlockXpr::value,internal::get_fixed_value::value>::Type #else const typename ConstFixedBlockXpr<...,...>::Type #endif topLeftCorner(NRowsType cRows, NColsType cCols) const { return typename ConstFixedBlockXpr::value,internal::get_fixed_value::value>::Type (derived(), 0, 0, internal::get_runtime_value(cRows), internal::get_runtime_value(cCols)); } /// \returns an expression of a fixed-size top-left corner of \c *this. /// /// The template parameters CRows and CCols are the number of rows and columns in the corner. /// /// Example: \include MatrixBase_template_int_int_topLeftCorner.cpp /// Output: \verbinclude MatrixBase_template_int_int_topLeftCorner.out /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename FixedBlockXpr::Type topLeftCorner() { return typename FixedBlockXpr::Type(derived(), 0, 0); } /// This is the const version of topLeftCorner(). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const typename ConstFixedBlockXpr::Type topLeftCorner() const { return typename ConstFixedBlockXpr::Type(derived(), 0, 0); } /// \returns an expression of a top-left corner of \c *this. /// /// \tparam CRows number of rows in corner as specified at compile-time /// \tparam CCols number of columns in corner as specified at compile-time /// \param cRows number of rows in corner as specified at run-time /// \param cCols number of columns in corner as specified at run-time /// /// This function is mainly useful for corners where the number of rows is specified at compile-time /// and the number of columns is specified at run-time, or vice versa. The compile-time and run-time /// information should not contradict. In other words, \a cRows should equal \a CRows unless /// \a CRows is \a Dynamic, and the same for the number of columns. /// /// Example: \include MatrixBase_template_int_int_topLeftCorner_int_int.cpp /// Output: \verbinclude MatrixBase_template_int_int_topLeftCorner_int_int.out /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename FixedBlockXpr::Type topLeftCorner(Index cRows, Index cCols) { return typename FixedBlockXpr::Type(derived(), 0, 0, cRows, cCols); } /// This is the const version of topLeftCorner(Index, Index). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const typename ConstFixedBlockXpr::Type topLeftCorner(Index cRows, Index cCols) const { return typename ConstFixedBlockXpr::Type(derived(), 0, 0, cRows, cCols); } /// \returns an expression of a bottom-right corner of \c *this with either dynamic or fixed sizes. 
/// /// \param cRows the number of rows in the corner /// \param cCols the number of columns in the corner /// \tparam NRowsType the type of the value handling the number of rows in the block, typically Index. /// \tparam NColsType the type of the value handling the number of columns in the block, typically Index. /// /// Example: \include MatrixBase_bottomRightCorner_int_int.cpp /// Output: \verbinclude MatrixBase_bottomRightCorner_int_int.out /// /// The number of rows \a blockRows and columns \a blockCols can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. See \link block(Index,Index,NRowsType,NColsType) block() \endlink for the details. /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename FixedBlockXpr::value,internal::get_fixed_value::value>::Type #else typename FixedBlockXpr<...,...>::Type #endif bottomRightCorner(NRowsType cRows, NColsType cCols) { return typename FixedBlockXpr::value,internal::get_fixed_value::value>::Type (derived(), rows() - internal::get_runtime_value(cRows), cols() - internal::get_runtime_value(cCols), internal::get_runtime_value(cRows), internal::get_runtime_value(cCols)); } /// This is the const version of bottomRightCorner(NRowsType, NColsType). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN const typename ConstFixedBlockXpr::value,internal::get_fixed_value::value>::Type #else const typename ConstFixedBlockXpr<...,...>::Type #endif bottomRightCorner(NRowsType cRows, NColsType cCols) const { return typename ConstFixedBlockXpr::value,internal::get_fixed_value::value>::Type (derived(), rows() - internal::get_runtime_value(cRows), cols() - internal::get_runtime_value(cCols), internal::get_runtime_value(cRows), internal::get_runtime_value(cCols)); } /// \returns an expression of a fixed-size bottom-right corner of \c *this. /// /// The template parameters CRows and CCols are the number of rows and columns in the corner. /// /// Example: \include MatrixBase_template_int_int_bottomRightCorner.cpp /// Output: \verbinclude MatrixBase_template_int_int_bottomRightCorner.out /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename FixedBlockXpr::Type bottomRightCorner() { return typename FixedBlockXpr::Type(derived(), rows() - CRows, cols() - CCols); } /// This is the const version of bottomRightCorner(). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const typename ConstFixedBlockXpr::Type bottomRightCorner() const { return typename ConstFixedBlockXpr::Type(derived(), rows() - CRows, cols() - CCols); } /// \returns an expression of a bottom-right corner of \c *this. /// /// \tparam CRows number of rows in corner as specified at compile-time /// \tparam CCols number of columns in corner as specified at compile-time /// \param cRows number of rows in corner as specified at run-time /// \param cCols number of columns in corner as specified at run-time /// /// This function is mainly useful for corners where the number of rows is specified at compile-time /// and the number of columns is specified at run-time, or vice versa. The compile-time and run-time /// information should not contradict. In other words, \a cRows should equal \a CRows unless /// \a CRows is \a Dynamic, and the same for the number of columns. 
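///
/// Editorial illustration (not part of the original Eigen sources): a hedged sketch of the
/// mixed compile-time/run-time corner form, assuming a dynamic matrix \c m.
/// \code
/// Eigen::MatrixXd m = Eigen::MatrixXd::Random(6, 4);
/// // 3 rows fixed at compile time, number of columns chosen at run time:
/// auto c = m.bottomRightCorner<3, Eigen::Dynamic>(3, 2);
/// \endcode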
/// /// Example: \include MatrixBase_template_int_int_bottomRightCorner_int_int.cpp /// Output: \verbinclude MatrixBase_template_int_int_bottomRightCorner_int_int.out /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename FixedBlockXpr::Type bottomRightCorner(Index cRows, Index cCols) { return typename FixedBlockXpr::Type(derived(), rows() - cRows, cols() - cCols, cRows, cCols); } /// This is the const version of bottomRightCorner(Index, Index). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const typename ConstFixedBlockXpr::Type bottomRightCorner(Index cRows, Index cCols) const { return typename ConstFixedBlockXpr::Type(derived(), rows() - cRows, cols() - cCols, cRows, cCols); } /// \returns an expression of a bottom-left corner of \c *this with either dynamic or fixed sizes. /// /// \param cRows the number of rows in the corner /// \param cCols the number of columns in the corner /// \tparam NRowsType the type of the value handling the number of rows in the block, typically Index. /// \tparam NColsType the type of the value handling the number of columns in the block, typically Index. /// /// Example: \include MatrixBase_bottomLeftCorner_int_int.cpp /// Output: \verbinclude MatrixBase_bottomLeftCorner_int_int.out /// /// The number of rows \a blockRows and columns \a blockCols can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. See \link block(Index,Index,NRowsType,NColsType) block() \endlink for the details. /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename FixedBlockXpr::value,internal::get_fixed_value::value>::Type #else typename FixedBlockXpr<...,...>::Type #endif bottomLeftCorner(NRowsType cRows, NColsType cCols) { return typename FixedBlockXpr::value,internal::get_fixed_value::value>::Type (derived(), rows() - internal::get_runtime_value(cRows), 0, internal::get_runtime_value(cRows), internal::get_runtime_value(cCols)); } /// This is the const version of bottomLeftCorner(NRowsType, NColsType). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename ConstFixedBlockXpr::value,internal::get_fixed_value::value>::Type #else typename ConstFixedBlockXpr<...,...>::Type #endif bottomLeftCorner(NRowsType cRows, NColsType cCols) const { return typename ConstFixedBlockXpr::value,internal::get_fixed_value::value>::Type (derived(), rows() - internal::get_runtime_value(cRows), 0, internal::get_runtime_value(cRows), internal::get_runtime_value(cCols)); } /// \returns an expression of a fixed-size bottom-left corner of \c *this. /// /// The template parameters CRows and CCols are the number of rows and columns in the corner. /// /// Example: \include MatrixBase_template_int_int_bottomLeftCorner.cpp /// Output: \verbinclude MatrixBase_template_int_int_bottomLeftCorner.out /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename FixedBlockXpr::Type bottomLeftCorner() { return typename FixedBlockXpr::Type(derived(), rows() - CRows, 0); } /// This is the const version of bottomLeftCorner(). 
template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const typename ConstFixedBlockXpr::Type bottomLeftCorner() const { return typename ConstFixedBlockXpr::Type(derived(), rows() - CRows, 0); } /// \returns an expression of a bottom-left corner of \c *this. /// /// \tparam CRows number of rows in corner as specified at compile-time /// \tparam CCols number of columns in corner as specified at compile-time /// \param cRows number of rows in corner as specified at run-time /// \param cCols number of columns in corner as specified at run-time /// /// This function is mainly useful for corners where the number of rows is specified at compile-time /// and the number of columns is specified at run-time, or vice versa. The compile-time and run-time /// information should not contradict. In other words, \a cRows should equal \a CRows unless /// \a CRows is \a Dynamic, and the same for the number of columns. /// /// Example: \include MatrixBase_template_int_int_bottomLeftCorner_int_int.cpp /// Output: \verbinclude MatrixBase_template_int_int_bottomLeftCorner_int_int.out /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa class Block /// template EIGEN_STRONG_INLINE typename FixedBlockXpr::Type bottomLeftCorner(Index cRows, Index cCols) { return typename FixedBlockXpr::Type(derived(), rows() - cRows, 0, cRows, cCols); } /// This is the const version of bottomLeftCorner(Index, Index). template EIGEN_STRONG_INLINE const typename ConstFixedBlockXpr::Type bottomLeftCorner(Index cRows, Index cCols) const { return typename ConstFixedBlockXpr::Type(derived(), rows() - cRows, 0, cRows, cCols); } /// \returns a block consisting of the top rows of \c *this. /// /// \param n the number of rows in the block /// \tparam NRowsType the type of the value handling the number of rows in the block, typically Index. /// /// Example: \include MatrixBase_topRows_int.cpp /// Output: \verbinclude MatrixBase_topRows_int.out /// /// The number of rows \a n can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. /// See \link block(Index,Index,NRowsType,NColsType) block() \endlink for the details. /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(row-major) /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename NRowsBlockXpr::value>::Type #else typename NRowsBlockXpr<...>::Type #endif topRows(NRowsType n) { return typename NRowsBlockXpr::value>::Type (derived(), 0, 0, internal::get_runtime_value(n), cols()); } /// This is the const version of topRows(NRowsType). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN const typename ConstNRowsBlockXpr::value>::Type #else const typename ConstNRowsBlockXpr<...>::Type #endif topRows(NRowsType n) const { return typename ConstNRowsBlockXpr::value>::Type (derived(), 0, 0, internal::get_runtime_value(n), cols()); } /// \returns a block consisting of the top rows of \c *this. /// /// \tparam N the number of rows in the block as specified at compile-time /// \param n the number of rows in the block as specified at run-time /// /// The compile-time and run-time information should not contradict. In other words, /// \a n should equal \a N unless \a N is \a Dynamic. 
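///
/// Editorial illustration (not part of the original Eigen sources): a hedged sketch of
/// topRows()/bottomRows(), assuming a dynamic matrix \c m and Eigen 3.4.
/// \code
/// Eigen::MatrixXd m = Eigen::MatrixXd::Random(5, 3);
/// auto t = m.topRows(2);               // first two rows, run-time size
/// auto b = m.bottomRows<2>();          // last two rows, compile-time size
/// m.topRows(Eigen::fix<2>).setZero();  // hybrid form; the returned block is a writable view
/// \endcode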
/// /// Example: \include MatrixBase_template_int_topRows.cpp /// Output: \verbinclude MatrixBase_template_int_topRows.out /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(row-major) /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename NRowsBlockXpr::Type topRows(Index n = N) { return typename NRowsBlockXpr::Type(derived(), 0, 0, n, cols()); } /// This is the const version of topRows(). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename ConstNRowsBlockXpr::Type topRows(Index n = N) const { return typename ConstNRowsBlockXpr::Type(derived(), 0, 0, n, cols()); } /// \returns a block consisting of the bottom rows of \c *this. /// /// \param n the number of rows in the block /// \tparam NRowsType the type of the value handling the number of rows in the block, typically Index. /// /// Example: \include MatrixBase_bottomRows_int.cpp /// Output: \verbinclude MatrixBase_bottomRows_int.out /// /// The number of rows \a n can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. /// See \link block(Index,Index,NRowsType,NColsType) block() \endlink for the details. /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(row-major) /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename NRowsBlockXpr::value>::Type #else typename NRowsBlockXpr<...>::Type #endif bottomRows(NRowsType n) { return typename NRowsBlockXpr::value>::Type (derived(), rows() - internal::get_runtime_value(n), 0, internal::get_runtime_value(n), cols()); } /// This is the const version of bottomRows(NRowsType). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN const typename ConstNRowsBlockXpr::value>::Type #else const typename ConstNRowsBlockXpr<...>::Type #endif bottomRows(NRowsType n) const { return typename ConstNRowsBlockXpr::value>::Type (derived(), rows() - internal::get_runtime_value(n), 0, internal::get_runtime_value(n), cols()); } /// \returns a block consisting of the bottom rows of \c *this. /// /// \tparam N the number of rows in the block as specified at compile-time /// \param n the number of rows in the block as specified at run-time /// /// The compile-time and run-time information should not contradict. In other words, /// \a n should equal \a N unless \a N is \a Dynamic. /// /// Example: \include MatrixBase_template_int_bottomRows.cpp /// Output: \verbinclude MatrixBase_template_int_bottomRows.out /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(row-major) /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename NRowsBlockXpr::Type bottomRows(Index n = N) { return typename NRowsBlockXpr::Type(derived(), rows() - n, 0, n, cols()); } /// This is the const version of bottomRows(). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename ConstNRowsBlockXpr::Type bottomRows(Index n = N) const { return typename ConstNRowsBlockXpr::Type(derived(), rows() - n, 0, n, cols()); } /// \returns a block consisting of a range of rows of \c *this. /// /// \param startRow the index of the first row in the block /// \param n the number of rows in the block /// \tparam NRowsType the type of the value handling the number of rows in the block, typically Index. 
/// /// Example: \include DenseBase_middleRows_int.cpp /// Output: \verbinclude DenseBase_middleRows_int.out /// /// The number of rows \a n can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. /// See \link block(Index,Index,NRowsType,NColsType) block() \endlink for the details. /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(row-major) /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename NRowsBlockXpr::value>::Type #else typename NRowsBlockXpr<...>::Type #endif middleRows(Index startRow, NRowsType n) { return typename NRowsBlockXpr::value>::Type (derived(), startRow, 0, internal::get_runtime_value(n), cols()); } /// This is the const version of middleRows(Index,NRowsType). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN const typename ConstNRowsBlockXpr::value>::Type #else const typename ConstNRowsBlockXpr<...>::Type #endif middleRows(Index startRow, NRowsType n) const { return typename ConstNRowsBlockXpr::value>::Type (derived(), startRow, 0, internal::get_runtime_value(n), cols()); } /// \returns a block consisting of a range of rows of \c *this. /// /// \tparam N the number of rows in the block as specified at compile-time /// \param startRow the index of the first row in the block /// \param n the number of rows in the block as specified at run-time /// /// The compile-time and run-time information should not contradict. In other words, /// \a n should equal \a N unless \a N is \a Dynamic. /// /// Example: \include DenseBase_template_int_middleRows.cpp /// Output: \verbinclude DenseBase_template_int_middleRows.out /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(row-major) /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename NRowsBlockXpr::Type middleRows(Index startRow, Index n = N) { return typename NRowsBlockXpr::Type(derived(), startRow, 0, n, cols()); } /// This is the const version of middleRows(). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename ConstNRowsBlockXpr::Type middleRows(Index startRow, Index n = N) const { return typename ConstNRowsBlockXpr::Type(derived(), startRow, 0, n, cols()); } /// \returns a block consisting of the left columns of \c *this. /// /// \param n the number of columns in the block /// \tparam NColsType the type of the value handling the number of columns in the block, typically Index. /// /// Example: \include MatrixBase_leftCols_int.cpp /// Output: \verbinclude MatrixBase_leftCols_int.out /// /// The number of columns \a n can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. /// See \link block(Index,Index,NRowsType,NColsType) block() \endlink for the details. /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(column-major) /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename NColsBlockXpr::value>::Type #else typename NColsBlockXpr<...>::Type #endif leftCols(NColsType n) { return typename NColsBlockXpr::value>::Type (derived(), 0, 0, rows(), internal::get_runtime_value(n)); } /// This is the const version of leftCols(NColsType). 
template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN const typename ConstNColsBlockXpr::value>::Type #else const typename ConstNColsBlockXpr<...>::Type #endif leftCols(NColsType n) const { return typename ConstNColsBlockXpr::value>::Type (derived(), 0, 0, rows(), internal::get_runtime_value(n)); } /// \returns a block consisting of the left columns of \c *this. /// /// \tparam N the number of columns in the block as specified at compile-time /// \param n the number of columns in the block as specified at run-time /// /// The compile-time and run-time information should not contradict. In other words, /// \a n should equal \a N unless \a N is \a Dynamic. /// /// Example: \include MatrixBase_template_int_leftCols.cpp /// Output: \verbinclude MatrixBase_template_int_leftCols.out /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(column-major) /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename NColsBlockXpr::Type leftCols(Index n = N) { return typename NColsBlockXpr::Type(derived(), 0, 0, rows(), n); } /// This is the const version of leftCols(). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename ConstNColsBlockXpr::Type leftCols(Index n = N) const { return typename ConstNColsBlockXpr::Type(derived(), 0, 0, rows(), n); } /// \returns a block consisting of the right columns of \c *this. /// /// \param n the number of columns in the block /// \tparam NColsType the type of the value handling the number of columns in the block, typically Index. /// /// Example: \include MatrixBase_rightCols_int.cpp /// Output: \verbinclude MatrixBase_rightCols_int.out /// /// The number of columns \a n can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. /// See \link block(Index,Index,NRowsType,NColsType) block() \endlink for the details. /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(column-major) /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename NColsBlockXpr::value>::Type #else typename NColsBlockXpr<...>::Type #endif rightCols(NColsType n) { return typename NColsBlockXpr::value>::Type (derived(), 0, cols() - internal::get_runtime_value(n), rows(), internal::get_runtime_value(n)); } /// This is the const version of rightCols(NColsType). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN const typename ConstNColsBlockXpr::value>::Type #else const typename ConstNColsBlockXpr<...>::Type #endif rightCols(NColsType n) const { return typename ConstNColsBlockXpr::value>::Type (derived(), 0, cols() - internal::get_runtime_value(n), rows(), internal::get_runtime_value(n)); } /// \returns a block consisting of the right columns of \c *this. /// /// \tparam N the number of columns in the block as specified at compile-time /// \param n the number of columns in the block as specified at run-time /// /// The compile-time and run-time information should not contradict. In other words, /// \a n should equal \a N unless \a N is \a Dynamic. 
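///
/// Editorial illustration (not part of the original Eigen sources): a hedged sketch of the
/// column-range accessors, assuming a dynamic matrix \c m.
/// \code
/// Eigen::MatrixXd m = Eigen::MatrixXd::Random(4, 6);
/// auto l = m.leftCols(2);       // columns 0..1, run-time size
/// auto r = m.rightCols<2>();    // last two columns, compile-time size
/// auto c = m.middleCols(1, 3);  // columns 1..3
/// \endcode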
/// /// Example: \include MatrixBase_template_int_rightCols.cpp /// Output: \verbinclude MatrixBase_template_int_rightCols.out /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(column-major) /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename NColsBlockXpr::Type rightCols(Index n = N) { return typename NColsBlockXpr::Type(derived(), 0, cols() - n, rows(), n); } /// This is the const version of rightCols(). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename ConstNColsBlockXpr::Type rightCols(Index n = N) const { return typename ConstNColsBlockXpr::Type(derived(), 0, cols() - n, rows(), n); } /// \returns a block consisting of a range of columns of \c *this. /// /// \param startCol the index of the first column in the block /// \param numCols the number of columns in the block /// \tparam NColsType the type of the value handling the number of columns in the block, typically Index. /// /// Example: \include DenseBase_middleCols_int.cpp /// Output: \verbinclude DenseBase_middleCols_int.out /// /// The number of columns \a n can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. /// See \link block(Index,Index,NRowsType,NColsType) block() \endlink for the details. /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(column-major) /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename NColsBlockXpr::value>::Type #else typename NColsBlockXpr<...>::Type #endif middleCols(Index startCol, NColsType numCols) { return typename NColsBlockXpr::value>::Type (derived(), 0, startCol, rows(), internal::get_runtime_value(numCols)); } /// This is the const version of middleCols(Index,NColsType). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN const typename ConstNColsBlockXpr::value>::Type #else const typename ConstNColsBlockXpr<...>::Type #endif middleCols(Index startCol, NColsType numCols) const { return typename ConstNColsBlockXpr::value>::Type (derived(), 0, startCol, rows(), internal::get_runtime_value(numCols)); } /// \returns a block consisting of a range of columns of \c *this. /// /// \tparam N the number of columns in the block as specified at compile-time /// \param startCol the index of the first column in the block /// \param n the number of columns in the block as specified at run-time /// /// The compile-time and run-time information should not contradict. In other words, /// \a n should equal \a N unless \a N is \a Dynamic. /// /// Example: \include DenseBase_template_int_middleCols.cpp /// Output: \verbinclude DenseBase_template_int_middleCols.out /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(column-major) /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename NColsBlockXpr::Type middleCols(Index startCol, Index n = N) { return typename NColsBlockXpr::Type(derived(), 0, startCol, rows(), n); } /// This is the const version of middleCols(). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename ConstNColsBlockXpr::Type middleCols(Index startCol, Index n = N) const { return typename ConstNColsBlockXpr::Type(derived(), 0, startCol, rows(), n); } /// \returns a fixed-size expression of a block of \c *this. /// /// The template parameters \a NRows and \a NCols are the number of /// rows and columns in the block. 
/// /// \param startRow the first row in the block /// \param startCol the first column in the block /// /// Example: \include MatrixBase_block_int_int.cpp /// Output: \verbinclude MatrixBase_block_int_int.out /// /// \note The usage of of this overload is discouraged from %Eigen 3.4, better used the generic /// block(Index,Index,NRowsType,NColsType), here is the one-to-one equivalence: /// \code /// mat.template block(i,j) <--> mat.block(i,j,fix,fix) /// \endcode /// /// \note since block is a templated member, the keyword template has to be used /// if the matrix type is also a template parameter: \code m.template block<3,3>(1,1); \endcode /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename FixedBlockXpr::Type block(Index startRow, Index startCol) { return typename FixedBlockXpr::Type(derived(), startRow, startCol); } /// This is the const version of block<>(Index, Index). */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const typename ConstFixedBlockXpr::Type block(Index startRow, Index startCol) const { return typename ConstFixedBlockXpr::Type(derived(), startRow, startCol); } /// \returns an expression of a block of \c *this. /// /// \tparam NRows number of rows in block as specified at compile-time /// \tparam NCols number of columns in block as specified at compile-time /// \param startRow the first row in the block /// \param startCol the first column in the block /// \param blockRows number of rows in block as specified at run-time /// \param blockCols number of columns in block as specified at run-time /// /// This function is mainly useful for blocks where the number of rows is specified at compile-time /// and the number of columns is specified at run-time, or vice versa. The compile-time and run-time /// information should not contradict. In other words, \a blockRows should equal \a NRows unless /// \a NRows is \a Dynamic, and the same for the number of columns. /// /// Example: \include MatrixBase_template_int_int_block_int_int_int_int.cpp /// Output: \verbinclude MatrixBase_template_int_int_block_int_int_int_int.out /// /// \note The usage of of this overload is discouraged from %Eigen 3.4, better used the generic /// block(Index,Index,NRowsType,NColsType), here is the one-to-one complete equivalence: /// \code /// mat.template block(i,j,rows,cols) <--> mat.block(i,j,fix(rows),fix(cols)) /// \endcode /// If we known that, e.g., NRows==Dynamic and NCols!=Dynamic, then the equivalence becomes: /// \code /// mat.template block(i,j,rows,NCols) <--> mat.block(i,j,rows,fix) /// \endcode /// EIGEN_DOC_BLOCK_ADDONS_NOT_INNER_PANEL /// /// \sa block(Index,Index,NRowsType,NColsType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename FixedBlockXpr::Type block(Index startRow, Index startCol, Index blockRows, Index blockCols) { return typename FixedBlockXpr::Type(derived(), startRow, startCol, blockRows, blockCols); } /// This is the const version of block<>(Index, Index, Index, Index). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const typename ConstFixedBlockXpr::Type block(Index startRow, Index startCol, Index blockRows, Index blockCols) const { return typename ConstFixedBlockXpr::Type(derived(), startRow, startCol, blockRows, blockCols); } /// \returns an expression of the \a i-th column of \c *this. Note that the numbering starts at 0. 
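///
/// Editorial illustration (not part of the original Eigen sources): a hedged sketch of
/// col() and row(), assuming a dynamic matrix \c m.
/// \code
/// Eigen::MatrixXd m = Eigen::MatrixXd::Identity(3, 3);
/// m.col(1) *= 2.0;                  // scale the second column in place
/// Eigen::RowVectorXd r = m.row(0);  // copy the first row into a row vector
/// \endcode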
/// /// Example: \include MatrixBase_col.cpp /// Output: \verbinclude MatrixBase_col.out /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(column-major) /** * \sa row(), class Block */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE ColXpr col(Index i) { return ColXpr(derived(), i); } /// This is the const version of col(). EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE ConstColXpr col(Index i) const { return ConstColXpr(derived(), i); } /// \returns an expression of the \a i-th row of \c *this. Note that the numbering starts at 0. /// /// Example: \include MatrixBase_row.cpp /// Output: \verbinclude MatrixBase_row.out /// EIGEN_DOC_BLOCK_ADDONS_INNER_PANEL_IF(row-major) /** * \sa col(), class Block */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE RowXpr row(Index i) { return RowXpr(derived(), i); } /// This is the const version of row(). */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE ConstRowXpr row(Index i) const { return ConstRowXpr(derived(), i); } /// \returns an expression of a segment (i.e. a vector block) in \c *this with either dynamic or fixed sizes. /// /// \only_for_vectors /// /// \param start the first coefficient in the segment /// \param n the number of coefficients in the segment /// \tparam NType the type of the value handling the number of coefficients in the segment, typically Index. /// /// Example: \include MatrixBase_segment_int_int.cpp /// Output: \verbinclude MatrixBase_segment_int_int.out /// /// The number of coefficients \a n can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. /// See \link block(Index,Index,NRowsType,NColsType) block() \endlink for the details. /// /// \note Even in the case that the returned expression has dynamic size, in the case /// when it is applied to a fixed-size vector, it inherits a fixed maximal size, /// which means that evaluating it does not cause a dynamic memory allocation. /// /// \sa block(Index,Index,NRowsType,NColsType), fix, fix(int), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename FixedSegmentReturnType::value>::Type #else typename FixedSegmentReturnType<...>::Type #endif segment(Index start, NType n) { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return typename FixedSegmentReturnType::value>::Type (derived(), start, internal::get_runtime_value(n)); } /// This is the const version of segment(Index,NType). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN const typename ConstFixedSegmentReturnType::value>::Type #else const typename ConstFixedSegmentReturnType<...>::Type #endif segment(Index start, NType n) const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return typename ConstFixedSegmentReturnType::value>::Type (derived(), start, internal::get_runtime_value(n)); } /// \returns an expression of the first coefficients of \c *this with either dynamic or fixed sizes. /// /// \only_for_vectors /// /// \param n the number of coefficients in the segment /// \tparam NType the type of the value handling the number of coefficients in the segment, typically Index. /// /// Example: \include MatrixBase_start_int.cpp /// Output: \verbinclude MatrixBase_start_int.out /// /// The number of coefficients \a n can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. /// See \link block(Index,Index,NRowsType,NColsType) block() \endlink for the details. 
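///
/// Editorial illustration (not part of the original Eigen sources): a hedged sketch of the
/// vector-block accessors segment()/head()/tail(), assuming a dynamic vector \c v.
/// \code
/// Eigen::VectorXd v = Eigen::VectorXd::LinSpaced(6, 0.0, 5.0);
/// auto h = v.head(2);         // first two coefficients
/// auto t = v.tail<2>();       // last two coefficients, compile-time size
/// v.segment(1, 3).setZero();  // coefficients 1..3, writable view
/// \endcode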
/// /// \note Even in the case that the returned expression has dynamic size, in the case /// when it is applied to a fixed-size vector, it inherits a fixed maximal size, /// which means that evaluating it does not cause a dynamic memory allocation. /// /// \sa class Block, block(Index,Index) /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename FixedSegmentReturnType::value>::Type #else typename FixedSegmentReturnType<...>::Type #endif head(NType n) { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return typename FixedSegmentReturnType::value>::Type (derived(), 0, internal::get_runtime_value(n)); } /// This is the const version of head(NType). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN const typename ConstFixedSegmentReturnType::value>::Type #else const typename ConstFixedSegmentReturnType<...>::Type #endif head(NType n) const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return typename ConstFixedSegmentReturnType::value>::Type (derived(), 0, internal::get_runtime_value(n)); } /// \returns an expression of a last coefficients of \c *this with either dynamic or fixed sizes. /// /// \only_for_vectors /// /// \param n the number of coefficients in the segment /// \tparam NType the type of the value handling the number of coefficients in the segment, typically Index. /// /// Example: \include MatrixBase_end_int.cpp /// Output: \verbinclude MatrixBase_end_int.out /// /// The number of coefficients \a n can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. /// See \link block(Index,Index,NRowsType,NColsType) block() \endlink for the details. /// /// \note Even in the case that the returned expression has dynamic size, in the case /// when it is applied to a fixed-size vector, it inherits a fixed maximal size, /// which means that evaluating it does not cause a dynamic memory allocation. /// /// \sa class Block, block(Index,Index) /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN typename FixedSegmentReturnType::value>::Type #else typename FixedSegmentReturnType<...>::Type #endif tail(NType n) { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return typename FixedSegmentReturnType::value>::Type (derived(), this->size() - internal::get_runtime_value(n), internal::get_runtime_value(n)); } /// This is the const version of tail(Index). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE #ifndef EIGEN_PARSED_BY_DOXYGEN const typename ConstFixedSegmentReturnType::value>::Type #else const typename ConstFixedSegmentReturnType<...>::Type #endif tail(NType n) const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return typename ConstFixedSegmentReturnType::value>::Type (derived(), this->size() - internal::get_runtime_value(n), internal::get_runtime_value(n)); } /// \returns a fixed-size expression of a segment (i.e. a vector block) in \c *this /// /// \only_for_vectors /// /// \tparam N the number of coefficients in the segment as specified at compile-time /// \param start the index of the first element in the segment /// \param n the number of coefficients in the segment as specified at compile-time /// /// The compile-time and run-time information should not contradict. In other words, /// \a n should equal \a N unless \a N is \a Dynamic. 
/// /// Example: \include MatrixBase_template_int_segment.cpp /// Output: \verbinclude MatrixBase_template_int_segment.out /// /// \sa segment(Index,NType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename FixedSegmentReturnType::Type segment(Index start, Index n = N) { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return typename FixedSegmentReturnType::Type(derived(), start, n); } /// This is the const version of segment(Index). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename ConstFixedSegmentReturnType::Type segment(Index start, Index n = N) const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return typename ConstFixedSegmentReturnType::Type(derived(), start, n); } /// \returns a fixed-size expression of the first coefficients of \c *this. /// /// \only_for_vectors /// /// \tparam N the number of coefficients in the segment as specified at compile-time /// \param n the number of coefficients in the segment as specified at run-time /// /// The compile-time and run-time information should not contradict. In other words, /// \a n should equal \a N unless \a N is \a Dynamic. /// /// Example: \include MatrixBase_template_int_start.cpp /// Output: \verbinclude MatrixBase_template_int_start.out /// /// \sa head(NType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename FixedSegmentReturnType::Type head(Index n = N) { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return typename FixedSegmentReturnType::Type(derived(), 0, n); } /// This is the const version of head(). template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename ConstFixedSegmentReturnType::Type head(Index n = N) const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return typename ConstFixedSegmentReturnType::Type(derived(), 0, n); } /// \returns a fixed-size expression of the last coefficients of \c *this. /// /// \only_for_vectors /// /// \tparam N the number of coefficients in the segment as specified at compile-time /// \param n the number of coefficients in the segment as specified at run-time /// /// The compile-time and run-time information should not contradict. In other words, /// \a n should equal \a N unless \a N is \a Dynamic. /// /// Example: \include MatrixBase_template_int_end.cpp /// Output: \verbinclude MatrixBase_template_int_end.out /// /// \sa tail(NType), class Block /// template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename FixedSegmentReturnType::Type tail(Index n = N) { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return typename FixedSegmentReturnType::Type(derived(), size() - n); } /// This is the const version of tail. template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename ConstFixedSegmentReturnType::Type tail(Index n = N) const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived) return typename ConstFixedSegmentReturnType::Type(derived(), size() - n); } /// \returns the \a outer -th column (resp. row) of the matrix \c *this if \c *this /// is col-major (resp. row-major). /// EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE InnerVectorReturnType innerVector(Index outer) { return InnerVectorReturnType(derived(), outer); } /// \returns the \a outer -th column (resp. row) of the matrix \c *this if \c *this /// is col-major (resp. row-major). Read-only. /// EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const ConstInnerVectorReturnType innerVector(Index outer) const { return ConstInnerVectorReturnType(derived(), outer); } /// \returns the \a outer -th column (resp. row) of the matrix \c *this if \c *this /// is col-major (resp. row-major). 
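///
/// An illustrative usage sketch for the fixed-size segment<N>/head<N>/tail<N> overloads
/// documented above, assuming a dynamic vector \c v with at least 5 coefficients:
/// \code
/// Eigen::VectorXd v = Eigen::VectorXd::Random(5);
/// Eigen::Vector3d a = v.segment<3>(1);  // coefficients 1,2,3; size fixed at compile time
/// Eigen::Vector2d b = v.head<2>();      // first two coefficients
/// Eigen::Vector2d c = v.tail<2>();      // last two coefficients
/// \endcode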
/// EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE InnerVectorsReturnType innerVectors(Index outerStart, Index outerSize) { return Block(derived(), IsRowMajor ? outerStart : 0, IsRowMajor ? 0 : outerStart, IsRowMajor ? outerSize : rows(), IsRowMajor ? cols() : outerSize); } /// \returns the \a outer -th column (resp. row) of the matrix \c *this if \c *this /// is col-major (resp. row-major). Read-only. /// EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const ConstInnerVectorsReturnType innerVectors(Index outerStart, Index outerSize) const { return Block(derived(), IsRowMajor ? outerStart : 0, IsRowMajor ? 0 : outerStart, IsRowMajor ? outerSize : rows(), IsRowMajor ? cols() : outerSize); } /** \returns the i-th subvector (column or vector) according to the \c Direction * \sa subVectors() */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename internal::conditional::type subVector(Index i) { return typename internal::conditional::type(derived(),i); } /** This is the const version of subVector(Index) */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename internal::conditional::type subVector(Index i) const { return typename internal::conditional::type(derived(),i); } /** \returns the number of subvectors (rows or columns) in the direction \c Direction * \sa subVector(Index) */ template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE EIGEN_CONSTEXPR Index subVectors() const { return (Direction==Vertical)?cols():rows(); } piqp-octave/src/eigen/Eigen/src/plugins/ReshapedMethods.h0000644000175100001770000001540314107270226023264 0ustar runnerdocker #ifdef EIGEN_PARSED_BY_DOXYGEN /// \returns an expression of \c *this with reshaped sizes. /// /// \param nRows the number of rows in the reshaped expression, specified at either run-time or compile-time, or AutoSize /// \param nCols the number of columns in the reshaped expression, specified at either run-time or compile-time, or AutoSize /// \tparam Order specifies whether the coefficients should be processed in column-major-order (ColMajor), in row-major-order (RowMajor), /// or follows the \em natural order of the nested expression (AutoOrder). The default is ColMajor. /// \tparam NRowsType the type of the value handling the number of rows, typically Index. /// \tparam NColsType the type of the value handling the number of columns, typically Index. /// /// Dynamic size example: \include MatrixBase_reshaped_int_int.cpp /// Output: \verbinclude MatrixBase_reshaped_int_int.out /// /// The number of rows \a nRows and columns \a nCols can also be specified at compile-time by passing Eigen::fix, /// or Eigen::fix(n) as arguments. In the later case, \c n plays the role of a runtime fallback value in case \c N equals Eigen::Dynamic. /// Here is an example with a fixed number of rows and columns: /// \include MatrixBase_reshaped_fixed.cpp /// Output: \verbinclude MatrixBase_reshaped_fixed.out /// /// Finally, one of the sizes parameter can be automatically deduced from the other one by passing AutoSize as in the following example: /// \include MatrixBase_reshaped_auto.cpp /// Output: \verbinclude MatrixBase_reshaped_auto.out /// AutoSize does preserve compile-time sizes when possible, i.e., when the sizes of the input are known at compile time \b and /// that the other size is passed at compile-time using Eigen::fix as above. /// /// \sa class Reshaped, fix, fix(int) /// template EIGEN_DEVICE_FUNC inline Reshaped reshaped(NRowsType nRows, NColsType nCols); /// This is the const version of reshaped(NRowsType,NColsType). 
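///
/// An illustrative usage sketch for reshaped(NRowsType,NColsType) documented above,
/// assuming a dynamic 4x3 matrix \c A:
/// \code
/// Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 3);
/// auto R  = A.reshaped(6, 2);                          // 6x2 view, sizes given at run time
/// auto Rf = A.reshaped(Eigen::fix<2>, Eigen::fix<6>);  // 2x6 view, sizes fixed at compile time
/// \endcode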
template EIGEN_DEVICE_FUNC inline const Reshaped reshaped(NRowsType nRows, NColsType nCols) const; /// \returns an expression of \c *this with columns (or rows) stacked to a linear column vector /// /// \tparam Order specifies whether the coefficients should be processed in column-major-order (ColMajor), in row-major-order (RowMajor), /// or follows the \em natural order of the nested expression (AutoOrder). The default is ColMajor. /// /// This overloads is essentially a shortcut for `A.reshaped(AutoSize,fix<1>)`. /// /// - If `Order==ColMajor` (the default), then it returns a column-vector from the stacked columns of \c *this. /// - If `Order==RowMajor`, then it returns a column-vector from the stacked rows of \c *this. /// - If `Order==AutoOrder`, then it returns a column-vector with elements stacked following the storage order of \c *this. /// This mode is the recommended one when the particular ordering of the element is not relevant. /// /// Example: /// \include MatrixBase_reshaped_to_vector.cpp /// Output: \verbinclude MatrixBase_reshaped_to_vector.out /// /// If you want more control, you can still fall back to reshaped(NRowsType,NColsType). /// /// \sa reshaped(NRowsType,NColsType), class Reshaped /// template EIGEN_DEVICE_FUNC inline Reshaped reshaped(); /// This is the const version of reshaped(). template EIGEN_DEVICE_FUNC inline const Reshaped reshaped() const; #else // This file is automatically included twice to generate const and non-const versions #ifndef EIGEN_RESHAPED_METHOD_2ND_PASS #define EIGEN_RESHAPED_METHOD_CONST const #else #define EIGEN_RESHAPED_METHOD_CONST #endif #ifndef EIGEN_RESHAPED_METHOD_2ND_PASS // This part is included once #endif template EIGEN_DEVICE_FUNC inline Reshaped::value, internal::get_compiletime_reshape_size::value> reshaped(NRowsType nRows, NColsType nCols) EIGEN_RESHAPED_METHOD_CONST { return Reshaped::value, internal::get_compiletime_reshape_size::value> (derived(), internal::get_runtime_reshape_size(nRows,internal::get_runtime_value(nCols),size()), internal::get_runtime_reshape_size(nCols,internal::get_runtime_value(nRows),size())); } template EIGEN_DEVICE_FUNC inline Reshaped::value, internal::get_compiletime_reshape_size::value, internal::get_compiletime_reshape_order::value> reshaped(NRowsType nRows, NColsType nCols) EIGEN_RESHAPED_METHOD_CONST { return Reshaped::value, internal::get_compiletime_reshape_size::value, internal::get_compiletime_reshape_order::value> (derived(), internal::get_runtime_reshape_size(nRows,internal::get_runtime_value(nCols),size()), internal::get_runtime_reshape_size(nCols,internal::get_runtime_value(nRows),size())); } // Views as linear vectors EIGEN_DEVICE_FUNC inline Reshaped reshaped() EIGEN_RESHAPED_METHOD_CONST { return Reshaped(derived(),size(),1); } template EIGEN_DEVICE_FUNC inline Reshaped::value> reshaped() EIGEN_RESHAPED_METHOD_CONST { EIGEN_STATIC_ASSERT(Order==RowMajor || Order==ColMajor || Order==AutoOrder, INVALID_TEMPLATE_PARAMETER); return Reshaped::value> (derived(), size(), 1); } #undef EIGEN_RESHAPED_METHOD_CONST #ifndef EIGEN_RESHAPED_METHOD_2ND_PASS #define EIGEN_RESHAPED_METHOD_2ND_PASS #include "ReshapedMethods.h" #undef EIGEN_RESHAPED_METHOD_2ND_PASS #endif #endif // EIGEN_PARSED_BY_DOXYGEN piqp-octave/src/eigen/Eigen/src/OrderingMethods/0000755000175100001770000000000014107270226021445 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/OrderingMethods/Eigen_Colamd.h0000644000175100001770000017036114107270226024134 0ustar runnerdocker// // This file is part of Eigen, a lightweight C++ 
template library // for linear algebra. // // Copyright (C) 2012 Desire Nuentsa Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. // This file is modified from the colamd/symamd library. The copyright is below // The authors of the code itself are Stefan I. Larimore and Timothy A. // Davis (davis@cise.ufl.edu), University of Florida. The algorithm was // developed in collaboration with John Gilbert, Xerox PARC, and Esmond // Ng, Oak Ridge National Laboratory. // // Date: // // September 8, 2003. Version 2.3. // // Acknowledgements: // // This work was supported by the National Science Foundation, under // grants DMS-9504974 and DMS-9803599. // // Notice: // // Copyright (c) 1998-2003 by the University of Florida. // All Rights Reserved. // // THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY // EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. // // Permission is hereby granted to use, copy, modify, and/or distribute // this program, provided that the Copyright, this License, and the // Availability of the original version is retained on all copies and made // accessible to the end-user of any code or package that includes COLAMD // or any modified version of COLAMD. // // Availability: // // The colamd/symamd library is available at // // http://www.suitesparse.com #ifndef EIGEN_COLAMD_H #define EIGEN_COLAMD_H namespace internal { namespace Colamd { /* Ensure that debugging is turned off: */ #ifndef COLAMD_NDEBUG #define COLAMD_NDEBUG #endif /* NDEBUG */ /* ========================================================================== */ /* === Knob and statistics definitions ====================================== */ /* ========================================================================== */ /* size of the knobs [ ] array. Only knobs [0..1] are currently used. */ const int NKnobs = 20; /* number of output statistics. Only stats [0..6] are currently used. */ const int NStats = 20; /* Indices into knobs and stats array. */ enum KnobsStatsIndex { /* knobs [0] and stats [0]: dense row knob and output statistic. */ DenseRow = 0, /* knobs [1] and stats [1]: dense column knob and output statistic. 
*/ DenseCol = 1, /* stats [2]: memory defragmentation count output statistic */ DefragCount = 2, /* stats [3]: colamd status: zero OK, > 0 warning or notice, < 0 error */ Status = 3, /* stats [4..6]: error info, or info on jumbled columns */ Info1 = 4, Info2 = 5, Info3 = 6 }; /* error codes returned in stats [3]: */ enum Status { Ok = 0, OkButJumbled = 1, ErrorANotPresent = -1, ErrorPNotPresent = -2, ErrorNrowNegative = -3, ErrorNcolNegative = -4, ErrorNnzNegative = -5, ErrorP0Nonzero = -6, ErrorATooSmall = -7, ErrorColLengthNegative = -8, ErrorRowIndexOutOfBounds = -9, ErrorOutOfMemory = -10, ErrorInternalError = -999 }; /* ========================================================================== */ /* === Definitions ========================================================== */ /* ========================================================================== */ template IndexType ones_complement(const IndexType r) { return (-(r)-1); } /* -------------------------------------------------------------------------- */ const int Empty = -1; /* Row and column status */ enum RowColumnStatus { Alive = 0, Dead = -1 }; /* Column status */ enum ColumnStatus { DeadPrincipal = -1, DeadNonPrincipal = -2 }; /* ========================================================================== */ /* === Colamd reporting mechanism =========================================== */ /* ========================================================================== */ // == Row and Column structures == template struct ColStructure { IndexType start ; /* index for A of first row in this column, or Dead */ /* if column is dead */ IndexType length ; /* number of rows in this column */ union { IndexType thickness ; /* number of original columns represented by this */ /* col, if the column is alive */ IndexType parent ; /* parent in parent tree super-column structure, if */ /* the column is dead */ } shared1 ; union { IndexType score ; /* the score used to maintain heap, if col is alive */ IndexType order ; /* pivot ordering of this column, if col is dead */ } shared2 ; union { IndexType headhash ; /* head of a hash bucket, if col is at the head of */ /* a degree list */ IndexType hash ; /* hash value, if col is not in a degree list */ IndexType prev ; /* previous column in degree list, if col is in a */ /* degree list (but not at the head of a degree list) */ } shared3 ; union { IndexType degree_next ; /* next column, if col is in a degree list */ IndexType hash_next ; /* next column, if col is in a hash list */ } shared4 ; inline bool is_dead() const { return start < Alive; } inline bool is_alive() const { return start >= Alive; } inline bool is_dead_principal() const { return start == DeadPrincipal; } inline void kill_principal() { start = DeadPrincipal; } inline void kill_non_principal() { start = DeadNonPrincipal; } }; template struct RowStructure { IndexType start ; /* index for A of first col in this row */ IndexType length ; /* number of principal columns in this row */ union { IndexType degree ; /* number of principal & non-principal columns in row */ IndexType p ; /* used as a row pointer in init_rows_cols () */ } shared1 ; union { IndexType mark ; /* for computing set differences and marking dead rows*/ IndexType first_column ;/* first column in row (used in garbage collection) */ } shared2 ; inline bool is_dead() const { return shared2.mark < Alive; } inline bool is_alive() const { return shared2.mark >= Alive; } inline void kill() { shared2.mark = Dead; } }; /* 
========================================================================== */ /* === Colamd recommended memory size ======================================= */ /* ========================================================================== */ /* The recommended length Alen of the array A passed to colamd is given by the COLAMD_RECOMMENDED (nnz, n_row, n_col) macro. It returns -1 if any argument is negative. 2*nnz space is required for the row and column indices of the matrix. colamd_c (n_col) + colamd_r (n_row) space is required for the Col and Row arrays, respectively, which are internal to colamd. An additional n_col space is the minimal amount of "elbow room", and nnz/5 more space is recommended for run time efficiency. This macro is not needed when using symamd. Explicit typecast to IndexType added Sept. 23, 2002, COLAMD version 2.2, to avoid gcc -pedantic warning messages. */ template inline IndexType colamd_c(IndexType n_col) { return IndexType( ((n_col) + 1) * sizeof (ColStructure) / sizeof (IndexType) ) ; } template inline IndexType colamd_r(IndexType n_row) { return IndexType(((n_row) + 1) * sizeof (RowStructure) / sizeof (IndexType)); } // Prototypes of non-user callable routines template static IndexType init_rows_cols (IndexType n_row, IndexType n_col, RowStructure Row [], ColStructure col [], IndexType A [], IndexType p [], IndexType stats[NStats] ); template static void init_scoring (IndexType n_row, IndexType n_col, RowStructure Row [], ColStructure Col [], IndexType A [], IndexType head [], double knobs[NKnobs], IndexType *p_n_row2, IndexType *p_n_col2, IndexType *p_max_deg); template static IndexType find_ordering (IndexType n_row, IndexType n_col, IndexType Alen, RowStructure Row [], ColStructure Col [], IndexType A [], IndexType head [], IndexType n_col2, IndexType max_deg, IndexType pfree); template static void order_children (IndexType n_col, ColStructure Col [], IndexType p []); template static void detect_super_cols (ColStructure Col [], IndexType A [], IndexType head [], IndexType row_start, IndexType row_length ) ; template static IndexType garbage_collection (IndexType n_row, IndexType n_col, RowStructure Row [], ColStructure Col [], IndexType A [], IndexType *pfree) ; template static inline IndexType clear_mark (IndexType n_row, RowStructure Row [] ) ; /* === No debugging ========================================================= */ #define COLAMD_DEBUG0(params) ; #define COLAMD_DEBUG1(params) ; #define COLAMD_DEBUG2(params) ; #define COLAMD_DEBUG3(params) ; #define COLAMD_DEBUG4(params) ; #define COLAMD_ASSERT(expression) ((void) 0) /** * \brief Returns the recommended value of Alen * * Returns recommended value of Alen for use by colamd. * Returns -1 if any input argument is negative. * The use of this routine or macro is optional. * Note that the macro uses its arguments more than once, * so be careful for side effects, if you pass expressions as arguments to COLAMD_RECOMMENDED. * * \param nnz nonzeros in A * \param n_row number of rows in A * \param n_col number of columns in A * \return recommended value of Alen for use by colamd */ template inline IndexType recommended ( IndexType nnz, IndexType n_row, IndexType n_col) { if ((nnz) < 0 || (n_row) < 0 || (n_col) < 0) return (-1); else return (2 * (nnz) + colamd_c (n_col) + colamd_r (n_row) + (n_col) + ((nnz) / 5)); } /** * \brief set default parameters The use of this routine is optional. * * Colamd: rows with more than (knobs [DenseRow] * n_col) * entries are removed prior to ordering. 
Columns with more than * (knobs [DenseCol] * n_row) entries are removed prior to * ordering, and placed last in the output column ordering. * * DenseRow and DenseCol are defined as 0 and 1, * respectively, in colamd.h. Default values of these two knobs * are both 0.5. Currently, only knobs [0] and knobs [1] are * used, but future versions may use more knobs. If so, they will * be properly set to their defaults by the future version of * colamd_set_defaults, so that the code that calls colamd will * not need to change, assuming that you either use * colamd_set_defaults, or pass a (double *) NULL pointer as the * knobs array to colamd or symamd. * * \param knobs parameter settings for colamd */ static inline void set_defaults(double knobs[NKnobs]) { /* === Local variables ================================================== */ int i ; if (!knobs) { return ; /* no knobs to initialize */ } for (i = 0 ; i < NKnobs ; i++) { knobs [i] = 0 ; } knobs [Colamd::DenseRow] = 0.5 ; /* ignore rows over 50% dense */ knobs [Colamd::DenseCol] = 0.5 ; /* ignore columns over 50% dense */ } /** * \brief Computes a column ordering using the column approximate minimum degree ordering * * Computes a column ordering (Q) of A such that P(AQ)=LU or * (AQ)'AQ=LL' have less fill-in and require fewer floating point * operations than factorizing the unpermuted matrix A or A'A, * respectively. * * * \param n_row number of rows in A * \param n_col number of columns in A * \param Alen, size of the array A * \param A row indices of the matrix, of size ALen * \param p column pointers of A, of size n_col+1 * \param knobs parameter settings for colamd * \param stats colamd output statistics and error codes */ template static bool compute_ordering(IndexType n_row, IndexType n_col, IndexType Alen, IndexType *A, IndexType *p, double knobs[NKnobs], IndexType stats[NStats]) { /* === Local variables ================================================== */ IndexType i ; /* loop index */ IndexType nnz ; /* nonzeros in A */ IndexType Row_size ; /* size of Row [], in integers */ IndexType Col_size ; /* size of Col [], in integers */ IndexType need ; /* minimum required length of A */ Colamd::RowStructure *Row ; /* pointer into A of Row [0..n_row] array */ Colamd::ColStructure *Col ; /* pointer into A of Col [0..n_col] array */ IndexType n_col2 ; /* number of non-dense, non-empty columns */ IndexType n_row2 ; /* number of non-dense, non-empty rows */ IndexType ngarbage ; /* number of garbage collections performed */ IndexType max_deg ; /* maximum row degree */ double default_knobs [NKnobs] ; /* default knobs array */ /* === Check the input arguments ======================================== */ if (!stats) { COLAMD_DEBUG0 (("colamd: stats not present\n")) ; return (false) ; } for (i = 0 ; i < NStats ; i++) { stats [i] = 0 ; } stats [Colamd::Status] = Colamd::Ok ; stats [Colamd::Info1] = -1 ; stats [Colamd::Info2] = -1 ; if (!A) /* A is not present */ { stats [Colamd::Status] = Colamd::ErrorANotPresent ; COLAMD_DEBUG0 (("colamd: A not present\n")) ; return (false) ; } if (!p) /* p is not present */ { stats [Colamd::Status] = Colamd::ErrorPNotPresent ; COLAMD_DEBUG0 (("colamd: p not present\n")) ; return (false) ; } if (n_row < 0) /* n_row must be >= 0 */ { stats [Colamd::Status] = Colamd::ErrorNrowNegative ; stats [Colamd::Info1] = n_row ; COLAMD_DEBUG0 (("colamd: nrow negative %d\n", n_row)) ; return (false) ; } if (n_col < 0) /* n_col must be >= 0 */ { stats [Colamd::Status] = Colamd::ErrorNcolNegative ; stats [Colamd::Info1] = n_col ; 
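/* An illustrative caller sketch (hypothetical, not part of this routine): assuming an
   integer index type IndexType (e.g. int) and a matrix with nnz nonzeros, n_row rows and
   n_col columns already stored in compressed-column form, the intended calling sequence
   of the routines documented above is

     double    knobs [Colamd::NKnobs] ;
     IndexType stats [Colamd::NStats] ;
     Colamd::set_defaults (knobs) ;
     IndexType alen = Colamd::recommended (nnz, n_row, n_col) ;
     IndexType *A = new IndexType [alen] ;       // row indices go in A [0 .. nnz-1]
     IndexType *p = new IndexType [n_col + 1] ;  // column pointers, p [0] == 0, p [n_col] == nnz
     // ... fill A and p from the compressed-column matrix ...
     bool ok = Colamd::compute_ordering (n_row, n_col, alen, A, p, knobs, stats) ;
     // on success p is overwritten with the column permutation (p [k] is the index of the
     // column placed k-th) and stats [Colamd::Status] reports warnings or errors
     delete [] A ; delete [] p ;

   The remainder of this routine continues validating its inputs and then dispatches to
   the non-user-callable helpers further below. */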
COLAMD_DEBUG0 (("colamd: ncol negative %d\n", n_col)) ; return (false) ; } nnz = p [n_col] ; if (nnz < 0) /* nnz must be >= 0 */ { stats [Colamd::Status] = Colamd::ErrorNnzNegative ; stats [Colamd::Info1] = nnz ; COLAMD_DEBUG0 (("colamd: number of entries negative %d\n", nnz)) ; return (false) ; } if (p [0] != 0) { stats [Colamd::Status] = Colamd::ErrorP0Nonzero ; stats [Colamd::Info1] = p [0] ; COLAMD_DEBUG0 (("colamd: p[0] not zero %d\n", p [0])) ; return (false) ; } /* === If no knobs, set default knobs =================================== */ if (!knobs) { set_defaults (default_knobs) ; knobs = default_knobs ; } /* === Allocate the Row and Col arrays from array A ===================== */ Col_size = colamd_c (n_col) ; Row_size = colamd_r (n_row) ; need = 2*nnz + n_col + Col_size + Row_size ; if (need > Alen) { /* not enough space in array A to perform the ordering */ stats [Colamd::Status] = Colamd::ErrorATooSmall ; stats [Colamd::Info1] = need ; stats [Colamd::Info2] = Alen ; COLAMD_DEBUG0 (("colamd: Need Alen >= %d, given only Alen = %d\n", need,Alen)); return (false) ; } Alen -= Col_size + Row_size ; Col = (ColStructure *) &A [Alen] ; Row = (RowStructure *) &A [Alen + Col_size] ; /* === Construct the row and column data structures ===================== */ if (!Colamd::init_rows_cols (n_row, n_col, Row, Col, A, p, stats)) { /* input matrix is invalid */ COLAMD_DEBUG0 (("colamd: Matrix invalid\n")) ; return (false) ; } /* === Initialize scores, kill dense rows/columns ======================= */ Colamd::init_scoring (n_row, n_col, Row, Col, A, p, knobs, &n_row2, &n_col2, &max_deg) ; /* === Order the supercolumns =========================================== */ ngarbage = Colamd::find_ordering (n_row, n_col, Alen, Row, Col, A, p, n_col2, max_deg, 2*nnz) ; /* === Order the non-principal columns ================================== */ Colamd::order_children (n_col, Col, p) ; /* === Return statistics in stats ======================================= */ stats [Colamd::DenseRow] = n_row - n_row2 ; stats [Colamd::DenseCol] = n_col - n_col2 ; stats [Colamd::DefragCount] = ngarbage ; COLAMD_DEBUG0 (("colamd: done.\n")) ; return (true) ; } /* ========================================================================== */ /* === NON-USER-CALLABLE ROUTINES: ========================================== */ /* ========================================================================== */ /* There are no user-callable routines beyond this point in the file */ /* ========================================================================== */ /* === init_rows_cols ======================================================= */ /* ========================================================================== */ /* Takes the column form of the matrix in A and creates the row form of the matrix. Also, row and column attributes are stored in the Col and Row structs. If the columns are un-sorted or contain duplicate row indices, this routine will also sort and remove duplicate row indices from the column form of the matrix. Returns false if the matrix is invalid, true otherwise. Not user-callable. 
*/ template static IndexType init_rows_cols /* returns true if OK, or false otherwise */ ( /* === Parameters ======================================================= */ IndexType n_row, /* number of rows of A */ IndexType n_col, /* number of columns of A */ RowStructure Row [], /* of size n_row+1 */ ColStructure Col [], /* of size n_col+1 */ IndexType A [], /* row indices of A, of size Alen */ IndexType p [], /* pointers to columns in A, of size n_col+1 */ IndexType stats [NStats] /* colamd statistics */ ) { /* === Local variables ================================================== */ IndexType col ; /* a column index */ IndexType row ; /* a row index */ IndexType *cp ; /* a column pointer */ IndexType *cp_end ; /* a pointer to the end of a column */ IndexType *rp ; /* a row pointer */ IndexType *rp_end ; /* a pointer to the end of a row */ IndexType last_row ; /* previous row */ /* === Initialize columns, and check column pointers ==================== */ for (col = 0 ; col < n_col ; col++) { Col [col].start = p [col] ; Col [col].length = p [col+1] - p [col] ; if ((Col [col].length) < 0) // extra parentheses to work-around gcc bug 10200 { /* column pointers must be non-decreasing */ stats [Colamd::Status] = Colamd::ErrorColLengthNegative ; stats [Colamd::Info1] = col ; stats [Colamd::Info2] = Col [col].length ; COLAMD_DEBUG0 (("colamd: col %d length %d < 0\n", col, Col [col].length)) ; return (false) ; } Col [col].shared1.thickness = 1 ; Col [col].shared2.score = 0 ; Col [col].shared3.prev = Empty ; Col [col].shared4.degree_next = Empty ; } /* p [0..n_col] no longer needed, used as "head" in subsequent routines */ /* === Scan columns, compute row degrees, and check row indices ========= */ stats [Info3] = 0 ; /* number of duplicate or unsorted row indices*/ for (row = 0 ; row < n_row ; row++) { Row [row].length = 0 ; Row [row].shared2.mark = -1 ; } for (col = 0 ; col < n_col ; col++) { last_row = -1 ; cp = &A [p [col]] ; cp_end = &A [p [col+1]] ; while (cp < cp_end) { row = *cp++ ; /* make sure row indices within range */ if (row < 0 || row >= n_row) { stats [Colamd::Status] = Colamd::ErrorRowIndexOutOfBounds ; stats [Colamd::Info1] = col ; stats [Colamd::Info2] = row ; stats [Colamd::Info3] = n_row ; COLAMD_DEBUG0 (("colamd: row %d col %d out of bounds\n", row, col)) ; return (false) ; } if (row <= last_row || Row [row].shared2.mark == col) { /* row index are unsorted or repeated (or both), thus col */ /* is jumbled. This is a notice, not an error condition. 
*/ stats [Colamd::Status] = Colamd::OkButJumbled ; stats [Colamd::Info1] = col ; stats [Colamd::Info2] = row ; (stats [Colamd::Info3]) ++ ; COLAMD_DEBUG1 (("colamd: row %d col %d unsorted/duplicate\n",row,col)); } if (Row [row].shared2.mark != col) { Row [row].length++ ; } else { /* this is a repeated entry in the column, */ /* it will be removed */ Col [col].length-- ; } /* mark the row as having been seen in this column */ Row [row].shared2.mark = col ; last_row = row ; } } /* === Compute row pointers ============================================= */ /* row form of the matrix starts directly after the column */ /* form of matrix in A */ Row [0].start = p [n_col] ; Row [0].shared1.p = Row [0].start ; Row [0].shared2.mark = -1 ; for (row = 1 ; row < n_row ; row++) { Row [row].start = Row [row-1].start + Row [row-1].length ; Row [row].shared1.p = Row [row].start ; Row [row].shared2.mark = -1 ; } /* === Create row form ================================================== */ if (stats [Status] == OkButJumbled) { /* if cols jumbled, watch for repeated row indices */ for (col = 0 ; col < n_col ; col++) { cp = &A [p [col]] ; cp_end = &A [p [col+1]] ; while (cp < cp_end) { row = *cp++ ; if (Row [row].shared2.mark != col) { A [(Row [row].shared1.p)++] = col ; Row [row].shared2.mark = col ; } } } } else { /* if cols not jumbled, we don't need the mark (this is faster) */ for (col = 0 ; col < n_col ; col++) { cp = &A [p [col]] ; cp_end = &A [p [col+1]] ; while (cp < cp_end) { A [(Row [*cp++].shared1.p)++] = col ; } } } /* === Clear the row marks and set row degrees ========================== */ for (row = 0 ; row < n_row ; row++) { Row [row].shared2.mark = 0 ; Row [row].shared1.degree = Row [row].length ; } /* === See if we need to re-create columns ============================== */ if (stats [Status] == OkButJumbled) { COLAMD_DEBUG0 (("colamd: reconstructing column form, matrix jumbled\n")) ; /* === Compute col pointers ========================================= */ /* col form of the matrix starts at A [0]. */ /* Note, we may have a gap between the col form and the row */ /* form if there were duplicate entries, if so, it will be */ /* removed upon the first garbage collection */ Col [0].start = 0 ; p [0] = Col [0].start ; for (col = 1 ; col < n_col ; col++) { /* note that the lengths here are for pruned columns, i.e. */ /* no duplicate row indices will exist for these columns */ Col [col].start = Col [col-1].start + Col [col-1].length ; p [col] = Col [col].start ; } /* === Re-create col form =========================================== */ for (row = 0 ; row < n_row ; row++) { rp = &A [Row [row].start] ; rp_end = rp + Row [row].length ; while (rp < rp_end) { A [(p [*rp++])++] = row ; } } } /* === Done. Matrix is not (or no longer) jumbled ====================== */ return (true) ; } /* ========================================================================== */ /* === init_scoring ========================================================= */ /* ========================================================================== */ /* Kills dense or empty columns and rows, calculates an initial score for each column, and places all columns in the degree lists. Not user-callable. 
*/ template static void init_scoring ( /* === Parameters ======================================================= */ IndexType n_row, /* number of rows of A */ IndexType n_col, /* number of columns of A */ RowStructure Row [], /* of size n_row+1 */ ColStructure Col [], /* of size n_col+1 */ IndexType A [], /* column form and row form of A */ IndexType head [], /* of size n_col+1 */ double knobs [NKnobs],/* parameters */ IndexType *p_n_row2, /* number of non-dense, non-empty rows */ IndexType *p_n_col2, /* number of non-dense, non-empty columns */ IndexType *p_max_deg /* maximum row degree */ ) { /* === Local variables ================================================== */ IndexType c ; /* a column index */ IndexType r, row ; /* a row index */ IndexType *cp ; /* a column pointer */ IndexType deg ; /* degree of a row or column */ IndexType *cp_end ; /* a pointer to the end of a column */ IndexType *new_cp ; /* new column pointer */ IndexType col_length ; /* length of pruned column */ IndexType score ; /* current column score */ IndexType n_col2 ; /* number of non-dense, non-empty columns */ IndexType n_row2 ; /* number of non-dense, non-empty rows */ IndexType dense_row_count ; /* remove rows with more entries than this */ IndexType dense_col_count ; /* remove cols with more entries than this */ IndexType min_score ; /* smallest column score */ IndexType max_deg ; /* maximum row degree */ IndexType next_col ; /* Used to add to degree list.*/ /* === Extract knobs ==================================================== */ dense_row_count = numext::maxi(IndexType(0), numext::mini(IndexType(knobs [Colamd::DenseRow] * n_col), n_col)) ; dense_col_count = numext::maxi(IndexType(0), numext::mini(IndexType(knobs [Colamd::DenseCol] * n_row), n_row)) ; COLAMD_DEBUG1 (("colamd: densecount: %d %d\n", dense_row_count, dense_col_count)) ; max_deg = 0 ; n_col2 = n_col ; n_row2 = n_row ; /* === Kill empty columns =============================================== */ /* Put the empty columns at the end in their natural order, so that LU */ /* factorization can proceed as far as possible. 
*/ for (c = n_col-1 ; c >= 0 ; c--) { deg = Col [c].length ; if (deg == 0) { /* this is a empty column, kill and order it last */ Col [c].shared2.order = --n_col2 ; Col[c].kill_principal() ; } } COLAMD_DEBUG1 (("colamd: null columns killed: %d\n", n_col - n_col2)) ; /* === Kill dense columns =============================================== */ /* Put the dense columns at the end, in their natural order */ for (c = n_col-1 ; c >= 0 ; c--) { /* skip any dead columns */ if (Col[c].is_dead()) { continue ; } deg = Col [c].length ; if (deg > dense_col_count) { /* this is a dense column, kill and order it last */ Col [c].shared2.order = --n_col2 ; /* decrement the row degrees */ cp = &A [Col [c].start] ; cp_end = cp + Col [c].length ; while (cp < cp_end) { Row [*cp++].shared1.degree-- ; } Col[c].kill_principal() ; } } COLAMD_DEBUG1 (("colamd: Dense and null columns killed: %d\n", n_col - n_col2)) ; /* === Kill dense and empty rows ======================================== */ for (r = 0 ; r < n_row ; r++) { deg = Row [r].shared1.degree ; COLAMD_ASSERT (deg >= 0 && deg <= n_col) ; if (deg > dense_row_count || deg == 0) { /* kill a dense or empty row */ Row[r].kill() ; --n_row2 ; } else { /* keep track of max degree of remaining rows */ max_deg = numext::maxi(max_deg, deg) ; } } COLAMD_DEBUG1 (("colamd: Dense and null rows killed: %d\n", n_row - n_row2)) ; /* === Compute initial column scores ==================================== */ /* At this point the row degrees are accurate. They reflect the number */ /* of "live" (non-dense) columns in each row. No empty rows exist. */ /* Some "live" columns may contain only dead rows, however. These are */ /* pruned in the code below. */ /* now find the initial matlab score for each column */ for (c = n_col-1 ; c >= 0 ; c--) { /* skip dead column */ if (Col[c].is_dead()) { continue ; } score = 0 ; cp = &A [Col [c].start] ; new_cp = cp ; cp_end = cp + Col [c].length ; while (cp < cp_end) { /* get a row */ row = *cp++ ; /* skip if dead */ if (Row[row].is_dead()) { continue ; } /* compact the column */ *new_cp++ = row ; /* add row's external degree */ score += Row [row].shared1.degree - 1 ; /* guard against integer overflow */ score = numext::mini(score, n_col) ; } /* determine pruned column length */ col_length = (IndexType) (new_cp - &A [Col [c].start]) ; if (col_length == 0) { /* a newly-made null column (all rows in this col are "dense" */ /* and have already been killed) */ COLAMD_DEBUG2 (("Newly null killed: %d\n", c)) ; Col [c].shared2.order = --n_col2 ; Col[c].kill_principal() ; } else { /* set column length and set score */ COLAMD_ASSERT (score >= 0) ; COLAMD_ASSERT (score <= n_col) ; Col [c].length = col_length ; Col [c].shared2.score = score ; } } COLAMD_DEBUG1 (("colamd: Dense, null, and newly-null columns killed: %d\n", n_col-n_col2)) ; /* At this point, all empty rows and columns are dead. All live columns */ /* are "clean" (containing no dead rows) and simplicial (no supercolumns */ /* yet). Rows may contain dead columns, but all live rows contain at */ /* least one live column. */ /* === Initialize degree lists ========================================== */ /* clear the hash buckets */ for (c = 0 ; c <= n_col ; c++) { head [c] = Empty ; } min_score = n_col ; /* place in reverse order, so low column indices are at the front */ /* of the lists. 
This is to encourage natural tie-breaking */ for (c = n_col-1 ; c >= 0 ; c--) { /* only add principal columns to degree lists */ if (Col[c].is_alive()) { COLAMD_DEBUG4 (("place %d score %d minscore %d ncol %d\n", c, Col [c].shared2.score, min_score, n_col)) ; /* === Add columns score to DList =============================== */ score = Col [c].shared2.score ; COLAMD_ASSERT (min_score >= 0) ; COLAMD_ASSERT (min_score <= n_col) ; COLAMD_ASSERT (score >= 0) ; COLAMD_ASSERT (score <= n_col) ; COLAMD_ASSERT (head [score] >= Empty) ; /* now add this column to dList at proper score location */ next_col = head [score] ; Col [c].shared3.prev = Empty ; Col [c].shared4.degree_next = next_col ; /* if there already was a column with the same score, set its */ /* previous pointer to this new column */ if (next_col != Empty) { Col [next_col].shared3.prev = c ; } head [score] = c ; /* see if this score is less than current min */ min_score = numext::mini(min_score, score) ; } } /* === Return number of remaining columns, and max row degree =========== */ *p_n_col2 = n_col2 ; *p_n_row2 = n_row2 ; *p_max_deg = max_deg ; } /* ========================================================================== */ /* === find_ordering ======================================================== */ /* ========================================================================== */ /* Order the principal columns of the supercolumn form of the matrix (no supercolumns on input). Uses a minimum approximate column minimum degree ordering method. Not user-callable. */ template static IndexType find_ordering /* return the number of garbage collections */ ( /* === Parameters ======================================================= */ IndexType n_row, /* number of rows of A */ IndexType n_col, /* number of columns of A */ IndexType Alen, /* size of A, 2*nnz + n_col or larger */ RowStructure Row [], /* of size n_row+1 */ ColStructure Col [], /* of size n_col+1 */ IndexType A [], /* column form and row form of A */ IndexType head [], /* of size n_col+1 */ IndexType n_col2, /* Remaining columns to order */ IndexType max_deg, /* Maximum row degree */ IndexType pfree /* index of first free slot (2*nnz on entry) */ ) { /* === Local variables ================================================== */ IndexType k ; /* current pivot ordering step */ IndexType pivot_col ; /* current pivot column */ IndexType *cp ; /* a column pointer */ IndexType *rp ; /* a row pointer */ IndexType pivot_row ; /* current pivot row */ IndexType *new_cp ; /* modified column pointer */ IndexType *new_rp ; /* modified row pointer */ IndexType pivot_row_start ; /* pointer to start of pivot row */ IndexType pivot_row_degree ; /* number of columns in pivot row */ IndexType pivot_row_length ; /* number of supercolumns in pivot row */ IndexType pivot_col_score ; /* score of pivot column */ IndexType needed_memory ; /* free space needed for pivot row */ IndexType *cp_end ; /* pointer to the end of a column */ IndexType *rp_end ; /* pointer to the end of a row */ IndexType row ; /* a row index */ IndexType col ; /* a column index */ IndexType max_score ; /* maximum possible score */ IndexType cur_score ; /* score of current column */ unsigned int hash ; /* hash value for supernode detection */ IndexType head_column ; /* head of hash bucket */ IndexType first_col ; /* first column in hash bucket */ IndexType tag_mark ; /* marker value for mark array */ IndexType row_mark ; /* Row [row].shared2.mark */ IndexType set_difference ; /* set difference size of row with pivot row */ 
IndexType min_score ; /* smallest column score */ IndexType col_thickness ; /* "thickness" (no. of columns in a supercol) */ IndexType max_mark ; /* maximum value of tag_mark */ IndexType pivot_col_thickness ; /* number of columns represented by pivot col */ IndexType prev_col ; /* Used by Dlist operations. */ IndexType next_col ; /* Used by Dlist operations. */ IndexType ngarbage ; /* number of garbage collections performed */ /* === Initialization and clear mark ==================================== */ max_mark = INT_MAX - n_col ; /* INT_MAX defined in */ tag_mark = Colamd::clear_mark (n_row, Row) ; min_score = 0 ; ngarbage = 0 ; COLAMD_DEBUG1 (("colamd: Ordering, n_col2=%d\n", n_col2)) ; /* === Order the columns ================================================ */ for (k = 0 ; k < n_col2 ; /* 'k' is incremented below */) { /* === Select pivot column, and order it ============================ */ /* make sure degree list isn't empty */ COLAMD_ASSERT (min_score >= 0) ; COLAMD_ASSERT (min_score <= n_col) ; COLAMD_ASSERT (head [min_score] >= Empty) ; /* get pivot column from head of minimum degree list */ while (min_score < n_col && head [min_score] == Empty) { min_score++ ; } pivot_col = head [min_score] ; COLAMD_ASSERT (pivot_col >= 0 && pivot_col <= n_col) ; next_col = Col [pivot_col].shared4.degree_next ; head [min_score] = next_col ; if (next_col != Empty) { Col [next_col].shared3.prev = Empty ; } COLAMD_ASSERT (Col[pivot_col].is_alive()) ; COLAMD_DEBUG3 (("Pivot col: %d\n", pivot_col)) ; /* remember score for defrag check */ pivot_col_score = Col [pivot_col].shared2.score ; /* the pivot column is the kth column in the pivot order */ Col [pivot_col].shared2.order = k ; /* increment order count by column thickness */ pivot_col_thickness = Col [pivot_col].shared1.thickness ; k += pivot_col_thickness ; COLAMD_ASSERT (pivot_col_thickness > 0) ; /* === Garbage_collection, if necessary ============================= */ needed_memory = numext::mini(pivot_col_score, n_col - k) ; if (pfree + needed_memory >= Alen) { pfree = Colamd::garbage_collection (n_row, n_col, Row, Col, A, &A [pfree]) ; ngarbage++ ; /* after garbage collection we will have enough */ COLAMD_ASSERT (pfree + needed_memory < Alen) ; /* garbage collection has wiped out the Row[].shared2.mark array */ tag_mark = Colamd::clear_mark (n_row, Row) ; } /* === Compute pivot row pattern ==================================== */ /* get starting location for this new merged row */ pivot_row_start = pfree ; /* initialize new row counts to zero */ pivot_row_degree = 0 ; /* tag pivot column as having been visited so it isn't included */ /* in merged pivot row */ Col [pivot_col].shared1.thickness = -pivot_col_thickness ; /* pivot row is the union of all rows in the pivot column pattern */ cp = &A [Col [pivot_col].start] ; cp_end = cp + Col [pivot_col].length ; while (cp < cp_end) { /* get a row */ row = *cp++ ; COLAMD_DEBUG4 (("Pivot col pattern %d %d\n", Row[row].is_alive(), row)) ; /* skip if row is dead */ if (Row[row].is_dead()) { continue ; } rp = &A [Row [row].start] ; rp_end = rp + Row [row].length ; while (rp < rp_end) { /* get a column */ col = *rp++ ; /* add the column, if alive and untagged */ col_thickness = Col [col].shared1.thickness ; if (col_thickness > 0 && Col[col].is_alive()) { /* tag column in pivot row */ Col [col].shared1.thickness = -col_thickness ; COLAMD_ASSERT (pfree < Alen) ; /* place column in pivot row */ A [pfree++] = col ; pivot_row_degree += col_thickness ; } } } /* clear tag on pivot column */ Col 
[pivot_col].shared1.thickness = pivot_col_thickness ; max_deg = numext::maxi(max_deg, pivot_row_degree) ; /* === Kill all rows used to construct pivot row ==================== */ /* also kill pivot row, temporarily */ cp = &A [Col [pivot_col].start] ; cp_end = cp + Col [pivot_col].length ; while (cp < cp_end) { /* may be killing an already dead row */ row = *cp++ ; COLAMD_DEBUG3 (("Kill row in pivot col: %d\n", row)) ; Row[row].kill() ; } /* === Select a row index to use as the new pivot row =============== */ pivot_row_length = pfree - pivot_row_start ; if (pivot_row_length > 0) { /* pick the "pivot" row arbitrarily (first row in col) */ pivot_row = A [Col [pivot_col].start] ; COLAMD_DEBUG3 (("Pivotal row is %d\n", pivot_row)) ; } else { /* there is no pivot row, since it is of zero length */ pivot_row = Empty ; COLAMD_ASSERT (pivot_row_length == 0) ; } COLAMD_ASSERT (Col [pivot_col].length > 0 || pivot_row_length == 0) ; /* === Approximate degree computation =============================== */ /* Here begins the computation of the approximate degree. The column */ /* score is the sum of the pivot row "length", plus the size of the */ /* set differences of each row in the column minus the pattern of the */ /* pivot row itself. The column ("thickness") itself is also */ /* excluded from the column score (we thus use an approximate */ /* external degree). */ /* The time taken by the following code (compute set differences, and */ /* add them up) is proportional to the size of the data structure */ /* being scanned - that is, the sum of the sizes of each column in */ /* the pivot row. Thus, the amortized time to compute a column score */ /* is proportional to the size of that column (where size, in this */ /* context, is the column "length", or the number of row indices */ /* in that column). The number of row indices in a column is */ /* monotonically non-decreasing, from the length of the original */ /* column on input to colamd. */ /* === Compute set differences ====================================== */ COLAMD_DEBUG3 (("** Computing set differences phase. **\n")) ; /* pivot row is currently dead - it will be revived later. 
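
   Restating the score computed by the code below in symbols: for a surviving column c
   of the pivot row, once the set differences and the pivot row degree have been added,
   the (approximate external) degree used as its score is

       score(c) = min( pivot_row_degree
                       + sum over alive rows r in c of |r minus the pivot row pattern|
                       - thickness(c),
                       n_col - k - thickness(c) )

   The exact external degree would use the size of the union of those rows rather than
   the sum of the set differences; the sum is an upper bound that is cheaper to maintain.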
*/ COLAMD_DEBUG3 (("Pivot row: ")) ; /* for each column in pivot row */ rp = &A [pivot_row_start] ; rp_end = rp + pivot_row_length ; while (rp < rp_end) { col = *rp++ ; COLAMD_ASSERT (Col[col].is_alive() && col != pivot_col) ; COLAMD_DEBUG3 (("Col: %d\n", col)) ; /* clear tags used to construct pivot row pattern */ col_thickness = -Col [col].shared1.thickness ; COLAMD_ASSERT (col_thickness > 0) ; Col [col].shared1.thickness = col_thickness ; /* === Remove column from degree list =========================== */ cur_score = Col [col].shared2.score ; prev_col = Col [col].shared3.prev ; next_col = Col [col].shared4.degree_next ; COLAMD_ASSERT (cur_score >= 0) ; COLAMD_ASSERT (cur_score <= n_col) ; COLAMD_ASSERT (cur_score >= Empty) ; if (prev_col == Empty) { head [cur_score] = next_col ; } else { Col [prev_col].shared4.degree_next = next_col ; } if (next_col != Empty) { Col [next_col].shared3.prev = prev_col ; } /* === Scan the column ========================================== */ cp = &A [Col [col].start] ; cp_end = cp + Col [col].length ; while (cp < cp_end) { /* get a row */ row = *cp++ ; /* skip if dead */ if (Row[row].is_dead()) { continue ; } row_mark = Row [row].shared2.mark ; COLAMD_ASSERT (row != pivot_row) ; set_difference = row_mark - tag_mark ; /* check if the row has been seen yet */ if (set_difference < 0) { COLAMD_ASSERT (Row [row].shared1.degree <= max_deg) ; set_difference = Row [row].shared1.degree ; } /* subtract column thickness from this row's set difference */ set_difference -= col_thickness ; COLAMD_ASSERT (set_difference >= 0) ; /* absorb this row if the set difference becomes zero */ if (set_difference == 0) { COLAMD_DEBUG3 (("aggressive absorption. Row: %d\n", row)) ; Row[row].kill() ; } else { /* save the new mark */ Row [row].shared2.mark = set_difference + tag_mark ; } } } /* === Add up set differences for each column ======================= */ COLAMD_DEBUG3 (("** Adding set differences phase. **\n")) ; /* for each column in pivot row */ rp = &A [pivot_row_start] ; rp_end = rp + pivot_row_length ; while (rp < rp_end) { /* get a column */ col = *rp++ ; COLAMD_ASSERT (Col[col].is_alive() && col != pivot_col) ; hash = 0 ; cur_score = 0 ; cp = &A [Col [col].start] ; /* compact the column */ new_cp = cp ; cp_end = cp + Col [col].length ; COLAMD_DEBUG4 (("Adding set diffs for Col: %d.\n", col)) ; while (cp < cp_end) { /* get a row */ row = *cp++ ; COLAMD_ASSERT(row >= 0 && row < n_row) ; /* skip if dead */ if (Row [row].is_dead()) { continue ; } row_mark = Row [row].shared2.mark ; COLAMD_ASSERT (row_mark > tag_mark) ; /* compact the column */ *new_cp++ = row ; /* compute hash function */ hash += row ; /* add set difference */ cur_score += row_mark - tag_mark ; /* integer overflow... */ cur_score = numext::mini(cur_score, n_col) ; } /* recompute the column's length */ Col [col].length = (IndexType) (new_cp - &A [Col [col].start]) ; /* === Further mass elimination ================================= */ if (Col [col].length == 0) { COLAMD_DEBUG4 (("further mass elimination. 
Col: %d\n", col)) ; /* nothing left but the pivot row in this column */ Col[col].kill_principal() ; pivot_row_degree -= Col [col].shared1.thickness ; COLAMD_ASSERT (pivot_row_degree >= 0) ; /* order it */ Col [col].shared2.order = k ; /* increment order count by column thickness */ k += Col [col].shared1.thickness ; } else { /* === Prepare for supercolumn detection ==================== */ COLAMD_DEBUG4 (("Preparing supercol detection for Col: %d.\n", col)) ; /* save score so far */ Col [col].shared2.score = cur_score ; /* add column to hash table, for supercolumn detection */ hash %= n_col + 1 ; COLAMD_DEBUG4 ((" Hash = %d, n_col = %d.\n", hash, n_col)) ; COLAMD_ASSERT (hash <= n_col) ; head_column = head [hash] ; if (head_column > Empty) { /* degree list "hash" is non-empty, use prev (shared3) of */ /* first column in degree list as head of hash bucket */ first_col = Col [head_column].shared3.headhash ; Col [head_column].shared3.headhash = col ; } else { /* degree list "hash" is empty, use head as hash bucket */ first_col = - (head_column + 2) ; head [hash] = - (col + 2) ; } Col [col].shared4.hash_next = first_col ; /* save hash function in Col [col].shared3.hash */ Col [col].shared3.hash = (IndexType) hash ; COLAMD_ASSERT (Col[col].is_alive()) ; } } /* The approximate external column degree is now computed. */ /* === Supercolumn detection ======================================== */ COLAMD_DEBUG3 (("** Supercolumn detection phase. **\n")) ; Colamd::detect_super_cols (Col, A, head, pivot_row_start, pivot_row_length) ; /* === Kill the pivotal column ====================================== */ Col[pivot_col].kill_principal() ; /* === Clear mark =================================================== */ tag_mark += (max_deg + 1) ; if (tag_mark >= max_mark) { COLAMD_DEBUG2 (("clearing tag_mark\n")) ; tag_mark = Colamd::clear_mark (n_row, Row) ; } /* === Finalize the new pivot row, and column scores ================ */ COLAMD_DEBUG3 (("** Finalize scores phase. **\n")) ; /* for each column in pivot row */ rp = &A [pivot_row_start] ; /* compact the pivot row */ new_rp = rp ; rp_end = rp + pivot_row_length ; while (rp < rp_end) { col = *rp++ ; /* skip dead columns */ if (Col[col].is_dead()) { continue ; } *new_rp++ = col ; /* add new pivot row to column */ A [Col [col].start + (Col [col].length++)] = pivot_row ; /* retrieve score so far and add on pivot row's degree. */ /* (we wait until here for this in case the pivot */ /* row's degree was reduced due to mass elimination). 
*/ cur_score = Col [col].shared2.score + pivot_row_degree ; /* calculate the max possible score as the number of */ /* external columns minus the 'k' value minus the */ /* columns thickness */ max_score = n_col - k - Col [col].shared1.thickness ; /* make the score the external degree of the union-of-rows */ cur_score -= Col [col].shared1.thickness ; /* make sure score is less or equal than the max score */ cur_score = numext::mini(cur_score, max_score) ; COLAMD_ASSERT (cur_score >= 0) ; /* store updated score */ Col [col].shared2.score = cur_score ; /* === Place column back in degree list ========================= */ COLAMD_ASSERT (min_score >= 0) ; COLAMD_ASSERT (min_score <= n_col) ; COLAMD_ASSERT (cur_score >= 0) ; COLAMD_ASSERT (cur_score <= n_col) ; COLAMD_ASSERT (head [cur_score] >= Empty) ; next_col = head [cur_score] ; Col [col].shared4.degree_next = next_col ; Col [col].shared3.prev = Empty ; if (next_col != Empty) { Col [next_col].shared3.prev = col ; } head [cur_score] = col ; /* see if this score is less than current min */ min_score = numext::mini(min_score, cur_score) ; } /* === Resurrect the new pivot row ================================== */ if (pivot_row_degree > 0) { /* update pivot row length to reflect any cols that were killed */ /* during super-col detection and mass elimination */ Row [pivot_row].start = pivot_row_start ; Row [pivot_row].length = (IndexType) (new_rp - &A[pivot_row_start]) ; Row [pivot_row].shared1.degree = pivot_row_degree ; Row [pivot_row].shared2.mark = 0 ; /* pivot row is no longer dead */ } } /* === All principal columns have now been ordered ====================== */ return (ngarbage) ; } /* ========================================================================== */ /* === order_children ======================================================= */ /* ========================================================================== */ /* The find_ordering routine has ordered all of the principal columns (the representatives of the supercolumns). The non-principal columns have not yet been ordered. This routine orders those columns by walking up the parent tree (a column is a child of the column which absorbed it). The final permutation vector is then placed in p [0 ... n_col-1], with p [0] being the first column, and p [n_col-1] being the last. It doesn't look like it at first glance, but be assured that this routine takes time linear in the number of columns. Although not immediately obvious, the time taken by this routine is O (n_col), that is, linear in the number of columns. Not user-callable. */ template static inline void order_children ( /* === Parameters ======================================================= */ IndexType n_col, /* number of columns of A */ ColStructure Col [], /* of size n_col+1 */ IndexType p [] /* p [0 ... 
n_col-1] is the column permutation*/ ) { /* === Local variables ================================================== */ IndexType i ; /* loop counter for all columns */ IndexType c ; /* column index */ IndexType parent ; /* index of column's parent */ IndexType order ; /* column's order */ /* === Order each non-principal column ================================== */ for (i = 0 ; i < n_col ; i++) { /* find an un-ordered non-principal column */ COLAMD_ASSERT (col_is_dead(Col, i)) ; if (!Col[i].is_dead_principal() && Col [i].shared2.order == Empty) { parent = i ; /* once found, find its principal parent */ do { parent = Col [parent].shared1.parent ; } while (!Col[parent].is_dead_principal()) ; /* now, order all un-ordered non-principal columns along path */ /* to this parent. collapse tree at the same time */ c = i ; /* get order of parent */ order = Col [parent].shared2.order ; do { COLAMD_ASSERT (Col [c].shared2.order == Empty) ; /* order this column */ Col [c].shared2.order = order++ ; /* collaps tree */ Col [c].shared1.parent = parent ; /* get immediate parent of this column */ c = Col [c].shared1.parent ; /* continue until we hit an ordered column. There are */ /* guaranteed not to be anymore unordered columns */ /* above an ordered column */ } while (Col [c].shared2.order == Empty) ; /* re-order the super_col parent to largest order for this group */ Col [parent].shared2.order = order ; } } /* === Generate the permutation ========================================= */ for (c = 0 ; c < n_col ; c++) { p [Col [c].shared2.order] = c ; } } /* ========================================================================== */ /* === detect_super_cols ==================================================== */ /* ========================================================================== */ /* Detects supercolumns by finding matches between columns in the hash buckets. Check amongst columns in the set A [row_start ... row_start + row_length-1]. The columns under consideration are currently *not* in the degree lists, and have already been placed in the hash buckets. The hash bucket for columns whose hash function is equal to h is stored as follows: if head [h] is >= 0, then head [h] contains a degree list, so: head [h] is the first column in degree bucket h. Col [head [h]].headhash gives the first column in hash bucket h. otherwise, the degree list is empty, and: -(head [h] + 2) is the first column in hash bucket h. For a column c in a hash bucket, Col [c].shared3.prev is NOT a "previous column" pointer. Col [c].shared3.hash is used instead as the hash number for that column. The value of Col [c].shared4.hash_next is the next column in the same hash bucket. Assuming no, or "few" hash collisions, the time taken by this routine is linear in the sum of the sizes (lengths) of each column whose score has just been computed in the approximate degree computation. Not user-callable. 
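
   A small worked example of the head [] encoding described above: with Empty == -1, an
   empty hash bucket h has head [h] == -1, and decoding gives -(head [h] + 2) == -1 == Empty,
   i.e. no column.  Storing column 3 at the head of such a bucket sets head [h] = -(3 + 2) = -5,
   and it is recovered as first_col = -(head [h] + 2) = 3.  A value head [h] >= 0 instead means
   the slot currently holds a degree list, and the hash bucket head is then kept in
   Col [head [h]].shared3.headhash.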
*/ template static void detect_super_cols ( /* === Parameters ======================================================= */ ColStructure Col [], /* of size n_col+1 */ IndexType A [], /* row indices of A */ IndexType head [], /* head of degree lists and hash buckets */ IndexType row_start, /* pointer to set of columns to check */ IndexType row_length /* number of columns to check */ ) { /* === Local variables ================================================== */ IndexType hash ; /* hash value for a column */ IndexType *rp ; /* pointer to a row */ IndexType c ; /* a column index */ IndexType super_c ; /* column index of the column to absorb into */ IndexType *cp1 ; /* column pointer for column super_c */ IndexType *cp2 ; /* column pointer for column c */ IndexType length ; /* length of column super_c */ IndexType prev_c ; /* column preceding c in hash bucket */ IndexType i ; /* loop counter */ IndexType *rp_end ; /* pointer to the end of the row */ IndexType col ; /* a column index in the row to check */ IndexType head_column ; /* first column in hash bucket or degree list */ IndexType first_col ; /* first column in hash bucket */ /* === Consider each column in the row ================================== */ rp = &A [row_start] ; rp_end = rp + row_length ; while (rp < rp_end) { col = *rp++ ; if (Col[col].is_dead()) { continue ; } /* get hash number for this column */ hash = Col [col].shared3.hash ; COLAMD_ASSERT (hash <= n_col) ; /* === Get the first column in this hash bucket ===================== */ head_column = head [hash] ; if (head_column > Empty) { first_col = Col [head_column].shared3.headhash ; } else { first_col = - (head_column + 2) ; } /* === Consider each column in the hash bucket ====================== */ for (super_c = first_col ; super_c != Empty ; super_c = Col [super_c].shared4.hash_next) { COLAMD_ASSERT (Col [super_c].is_alive()) ; COLAMD_ASSERT (Col [super_c].shared3.hash == hash) ; length = Col [super_c].length ; /* prev_c is the column preceding column c in the hash bucket */ prev_c = super_c ; /* === Compare super_c with all columns after it ================ */ for (c = Col [super_c].shared4.hash_next ; c != Empty ; c = Col [c].shared4.hash_next) { COLAMD_ASSERT (c != super_c) ; COLAMD_ASSERT (Col[c].is_alive()) ; COLAMD_ASSERT (Col [c].shared3.hash == hash) ; /* not identical if lengths or scores are different */ if (Col [c].length != length || Col [c].shared2.score != Col [super_c].shared2.score) { prev_c = c ; continue ; } /* compare the two columns */ cp1 = &A [Col [super_c].start] ; cp2 = &A [Col [c].start] ; for (i = 0 ; i < length ; i++) { /* the columns are "clean" (no dead rows) */ COLAMD_ASSERT ( cp1->is_alive() ); COLAMD_ASSERT ( cp2->is_alive() ); /* row indices will same order for both supercols, */ /* no gather scatter necessary */ if (*cp1++ != *cp2++) { break ; } } /* the two columns are different if the for-loop "broke" */ if (i != length) { prev_c = c ; continue ; } /* === Got it! 
two columns are identical =================== */ COLAMD_ASSERT (Col [c].shared2.score == Col [super_c].shared2.score) ; Col [super_c].shared1.thickness += Col [c].shared1.thickness ; Col [c].shared1.parent = super_c ; Col[c].kill_non_principal() ; /* order c later, in order_children() */ Col [c].shared2.order = Empty ; /* remove c from hash bucket */ Col [prev_c].shared4.hash_next = Col [c].shared4.hash_next ; } } /* === Empty this hash bucket ======================================= */ if (head_column > Empty) { /* corresponding degree list "hash" is not empty */ Col [head_column].shared3.headhash = Empty ; } else { /* corresponding degree list "hash" is empty */ head [hash] = Empty ; } } } /* ========================================================================== */ /* === garbage_collection =================================================== */ /* ========================================================================== */ /* Defragments and compacts columns and rows in the workspace A. Used when all available memory has been used while performing row merging. Returns the index of the first free position in A, after garbage collection. The time taken by this routine is linear is the size of the array A, which is itself linear in the number of nonzeros in the input matrix. Not user-callable. */ template static IndexType garbage_collection /* returns the new value of pfree */ ( /* === Parameters ======================================================= */ IndexType n_row, /* number of rows */ IndexType n_col, /* number of columns */ RowStructure Row [], /* row info */ ColStructure Col [], /* column info */ IndexType A [], /* A [0 ... Alen-1] holds the matrix */ IndexType *pfree /* &A [0] ... pfree is in use */ ) { /* === Local variables ================================================== */ IndexType *psrc ; /* source pointer */ IndexType *pdest ; /* destination pointer */ IndexType j ; /* counter */ IndexType r ; /* a row index */ IndexType c ; /* a column index */ IndexType length ; /* length of a row or column */ /* === Defragment the columns =========================================== */ pdest = &A[0] ; for (c = 0 ; c < n_col ; c++) { if (Col[c].is_alive()) { psrc = &A [Col [c].start] ; /* move and compact the column */ COLAMD_ASSERT (pdest <= psrc) ; Col [c].start = (IndexType) (pdest - &A [0]) ; length = Col [c].length ; for (j = 0 ; j < length ; j++) { r = *psrc++ ; if (Row[r].is_alive()) { *pdest++ = r ; } } Col [c].length = (IndexType) (pdest - &A [Col [c].start]) ; } } /* === Prepare to defragment the rows =================================== */ for (r = 0 ; r < n_row ; r++) { if (Row[r].is_alive()) { if (Row [r].length == 0) { /* this row is of zero length. cannot compact it, so kill it */ COLAMD_DEBUG3 (("Defrag row kill\n")) ; Row[r].kill() ; } else { /* save first column index in Row [r].shared2.first_column */ psrc = &A [Row [r].start] ; Row [r].shared2.first_column = *psrc ; COLAMD_ASSERT (Row[r].is_alive()) ; /* flag the start of the row with the one's complement of row */ *psrc = ones_complement(r) ; } } } /* === Defragment the rows ============================================== */ psrc = pdest ; while (psrc < pfree) { /* find a negative number ... 
the start of a row */ if (*psrc++ < 0) { psrc-- ; /* get the row index */ r = ones_complement(*psrc) ; COLAMD_ASSERT (r >= 0 && r < n_row) ; /* restore first column index */ *psrc = Row [r].shared2.first_column ; COLAMD_ASSERT (Row[r].is_alive()) ; /* move and compact the row */ COLAMD_ASSERT (pdest <= psrc) ; Row [r].start = (IndexType) (pdest - &A [0]) ; length = Row [r].length ; for (j = 0 ; j < length ; j++) { c = *psrc++ ; if (Col[c].is_alive()) { *pdest++ = c ; } } Row [r].length = (IndexType) (pdest - &A [Row [r].start]) ; } } /* ensure we found all the rows */ COLAMD_ASSERT (debug_rows == 0) ; /* === Return the new value of pfree ==================================== */ return ((IndexType) (pdest - &A [0])) ; } /* ========================================================================== */ /* === clear_mark =========================================================== */ /* ========================================================================== */ /* Clears the Row [].shared2.mark array, and returns the new tag_mark. Return value is the new tag_mark. Not user-callable. */ template static inline IndexType clear_mark /* return the new value for tag_mark */ ( /* === Parameters ======================================================= */ IndexType n_row, /* number of rows in A */ RowStructure Row [] /* Row [0 ... n_row-1].shared2.mark is set to zero */ ) { /* === Local variables ================================================== */ IndexType r ; for (r = 0 ; r < n_row ; r++) { if (Row[r].is_alive()) { Row [r].shared2.mark = 0 ; } } return (1) ; } } // namespace Colamd } // namespace internal #endif piqp-octave/src/eigen/Eigen/src/OrderingMethods/Ordering.h0000644000175100001770000001220014107270226023362 0ustar runnerdocker // This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_ORDERING_H #define EIGEN_ORDERING_H namespace Eigen { #include "Eigen_Colamd.h" namespace internal { /** \internal * \ingroup OrderingMethods_Module * \param[in] A the input non-symmetric matrix * \param[out] symmat the symmetric pattern A^T+A from the input matrix \a A. 
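  * For illustration (a hypothetical 2x2 pattern): if \a A has a structural nonzero only at (0,1),
  * then \a symmat gets structural entries at both (0,1) and (1,0); only the pattern of the result
  * is meaningful here.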
* FIXME: The values should not be considered here */ template void ordering_helper_at_plus_a(const MatrixType& A, MatrixType& symmat) { MatrixType C; C = A.transpose(); // NOTE: Could be costly for (int i = 0; i < C.rows(); i++) { for (typename MatrixType::InnerIterator it(C, i); it; ++it) it.valueRef() = typename MatrixType::Scalar(0); } symmat = C + A; } } /** \ingroup OrderingMethods_Module * \class AMDOrdering * * Functor computing the \em approximate \em minimum \em degree ordering * If the matrix is not structurally symmetric, an ordering of A^T+A is computed * \tparam StorageIndex The type of indices of the matrix * \sa COLAMDOrdering */ template class AMDOrdering { public: typedef PermutationMatrix PermutationType; /** Compute the permutation vector from a sparse matrix * This routine is much faster if the input matrix is column-major */ template void operator()(const MatrixType& mat, PermutationType& perm) { // Compute the symmetric pattern SparseMatrix symm; internal::ordering_helper_at_plus_a(mat,symm); // Call the AMD routine //m_mat.prune(keep_diag()); internal::minimum_degree_ordering(symm, perm); } /** Compute the permutation with a selfadjoint matrix */ template void operator()(const SparseSelfAdjointView& mat, PermutationType& perm) { SparseMatrix C; C = mat; // Call the AMD routine // m_mat.prune(keep_diag()); //Remove the diagonal elements internal::minimum_degree_ordering(C, perm); } }; /** \ingroup OrderingMethods_Module * \class NaturalOrdering * * Functor computing the natural ordering (identity) * * \note Returns an empty permutation matrix * \tparam StorageIndex The type of indices of the matrix */ template class NaturalOrdering { public: typedef PermutationMatrix PermutationType; /** Compute the permutation vector from a column-major sparse matrix */ template void operator()(const MatrixType& /*mat*/, PermutationType& perm) { perm.resize(0); } }; /** \ingroup OrderingMethods_Module * \class COLAMDOrdering * * \tparam StorageIndex The type of indices of the matrix * * Functor computing the \em column \em approximate \em minimum \em degree ordering * The matrix should be in column-major and \b compressed format (see SparseMatrix::makeCompressed()). */ template class COLAMDOrdering { public: typedef PermutationMatrix PermutationType; typedef Matrix IndexVector; /** Compute the permutation vector \a perm form the sparse matrix \a mat * \warning The input sparse matrix \a mat must be in compressed mode (see SparseMatrix::makeCompressed()). */ template void operator() (const MatrixType& mat, PermutationType& perm) { eigen_assert(mat.isCompressed() && "COLAMDOrdering requires a sparse matrix in compressed mode. 
Call .makeCompressed() before passing it to COLAMDOrdering"); StorageIndex m = StorageIndex(mat.rows()); StorageIndex n = StorageIndex(mat.cols()); StorageIndex nnz = StorageIndex(mat.nonZeros()); // Get the recommended value of Alen to be used by colamd StorageIndex Alen = internal::Colamd::recommended(nnz, m, n); // Set the default parameters double knobs [internal::Colamd::NKnobs]; StorageIndex stats [internal::Colamd::NStats]; internal::Colamd::set_defaults(knobs); IndexVector p(n+1), A(Alen); for(StorageIndex i=0; i <= n; i++) p(i) = mat.outerIndexPtr()[i]; for(StorageIndex i=0; i < nnz; i++) A(i) = mat.innerIndexPtr()[i]; // Call Colamd routine to compute the ordering StorageIndex info = internal::Colamd::compute_ordering(m, n, Alen, A.data(), p.data(), knobs, stats); EIGEN_UNUSED_VARIABLE(info); eigen_assert( info && "COLAMD failed " ); perm.resize(n); for (StorageIndex i = 0; i < n; i++) perm.indices()(p(i)) = i; } }; } // end namespace Eigen #endif piqp-octave/src/eigen/Eigen/src/OrderingMethods/Amd.h0000644000175100001770000003735114107270226022330 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2010 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. /* NOTE: this routine has been adapted from the CSparse library: Copyright (c) 2006, Timothy A. Davis. http://www.suitesparse.com The author of CSparse, Timothy A. Davis., has executed a license with Google LLC to permit distribution of this code and derivative works as part of Eigen under the Mozilla Public License v. 2.0, as stated at the top of this file. */ #ifndef EIGEN_SPARSE_AMD_H #define EIGEN_SPARSE_AMD_H namespace Eigen { namespace internal { template inline T amd_flip(const T& i) { return -i-2; } template inline T amd_unflip(const T& i) { return i<0 ? amd_flip(i) : i; } template inline bool amd_marked(const T0* w, const T1& j) { return w[j]<0; } template inline void amd_mark(const T0* w, const T1& j) { return w[j] = amd_flip(w[j]); } /* clear w */ template static StorageIndex cs_wclear (StorageIndex mark, StorageIndex lemax, StorageIndex *w, StorageIndex n) { StorageIndex k; if(mark < 2 || (mark + lemax < 0)) { for(k = 0; k < n; k++) if(w[k] != 0) w[k] = 1; mark = 2; } return (mark); /* at this point, w[0..n-1] < mark holds */ } /* depth-first search and postorder of a tree rooted at node j */ template StorageIndex cs_tdfs(StorageIndex j, StorageIndex k, StorageIndex *head, const StorageIndex *next, StorageIndex *post, StorageIndex *stack) { StorageIndex i, p, top = 0; if(!head || !next || !post || !stack) return (-1); /* check inputs */ stack[0] = j; /* place j on the stack */ while (top >= 0) /* while (stack is not empty) */ { p = stack[top]; /* p = top of stack */ i = head[p]; /* i = youngest child of p */ if(i == -1) { top--; /* p has no unordered children left */ post[k++] = p; /* node p is the kth postordered node */ } else { head[p] = next[i]; /* remove i from children of p */ stack[++top] = i; /* start dfs on child node i */ } } return k; } /** \internal * \ingroup OrderingMethods_Module * Approximate minimum degree ordering algorithm. * * \param[in] C the input selfadjoint matrix stored in compressed column major format. 
* \param[out] perm the permutation P reducing the fill-in of the input matrix \a C * * Note that the input matrix \a C must be complete, that is both the upper and lower parts have to be stored, as well as the diagonal entries. * On exit the values of C are destroyed */ template void minimum_degree_ordering(SparseMatrix& C, PermutationMatrix& perm) { using std::sqrt; StorageIndex d, dk, dext, lemax = 0, e, elenk, eln, i, j, k, k1, k2, k3, jlast, ln, dense, nzmax, mindeg = 0, nvi, nvj, nvk, mark, wnvi, ok, nel = 0, p, p1, p2, p3, p4, pj, pk, pk1, pk2, pn, q, t, h; StorageIndex n = StorageIndex(C.cols()); dense = std::max (16, StorageIndex(10 * sqrt(double(n)))); /* find dense threshold */ dense = (std::min)(n-2, dense); StorageIndex cnz = StorageIndex(C.nonZeros()); perm.resize(n+1); t = cnz + cnz/5 + 2*n; /* add elbow room to C */ C.resizeNonZeros(t); // get workspace ei_declare_aligned_stack_constructed_variable(StorageIndex,W,8*(n+1),0); StorageIndex* len = W; StorageIndex* nv = W + (n+1); StorageIndex* next = W + 2*(n+1); StorageIndex* head = W + 3*(n+1); StorageIndex* elen = W + 4*(n+1); StorageIndex* degree = W + 5*(n+1); StorageIndex* w = W + 6*(n+1); StorageIndex* hhead = W + 7*(n+1); StorageIndex* last = perm.indices().data(); /* use P as workspace for last */ /* --- Initialize quotient graph ---------------------------------------- */ StorageIndex* Cp = C.outerIndexPtr(); StorageIndex* Ci = C.innerIndexPtr(); for(k = 0; k < n; k++) len[k] = Cp[k+1] - Cp[k]; len[n] = 0; nzmax = t; for(i = 0; i <= n; i++) { head[i] = -1; // degree list i is empty last[i] = -1; next[i] = -1; hhead[i] = -1; // hash list i is empty nv[i] = 1; // node i is just one node w[i] = 1; // node i is alive elen[i] = 0; // Ek of node i is empty degree[i] = len[i]; // degree of node i } mark = internal::cs_wclear(0, 0, w, n); /* clear w */ /* --- Initialize degree lists ------------------------------------------ */ for(i = 0; i < n; i++) { bool has_diag = false; for(p = Cp[i]; p dense || !has_diag) /* node i is dense or has no structural diagonal element */ { nv[i] = 0; /* absorb i into element n */ elen[i] = -1; /* node i is dead */ nel++; Cp[i] = amd_flip (n); nv[n]++; } else { if(head[d] != -1) last[head[d]] = i; next[i] = head[d]; /* put node i in degree list d */ head[d] = i; } } elen[n] = -2; /* n is a dead element */ Cp[n] = -1; /* n is a root of assembly tree */ w[n] = 0; /* n is a dead element */ while (nel < n) /* while (selecting pivots) do */ { /* --- Select node of minimum approximate degree -------------------- */ for(k = -1; mindeg < n && (k = head[mindeg]) == -1; mindeg++) {} if(next[k] != -1) last[next[k]] = -1; head[mindeg] = next[k]; /* remove k from degree list */ elenk = elen[k]; /* elenk = |Ek| */ nvk = nv[k]; /* # of nodes k represents */ nel += nvk; /* nv[k] nodes of A eliminated */ /* --- Garbage collection ------------------------------------------- */ if(elenk > 0 && cnz + mindeg >= nzmax) { for(j = 0; j < n; j++) { if((p = Cp[j]) >= 0) /* j is a live node or element */ { Cp[j] = Ci[p]; /* save first entry of object */ Ci[p] = amd_flip (j); /* first entry is now amd_flip(j) */ } } for(q = 0, p = 0; p < cnz; ) /* scan all of memory */ { if((j = amd_flip (Ci[p++])) >= 0) /* found object j */ { Ci[q] = Cp[j]; /* restore first entry of object */ Cp[j] = q++; /* new pointer to object j */ for(k3 = 0; k3 < len[j]-1; k3++) Ci[q++] = Ci[p++]; } } cnz = q; /* Ci[cnz...nzmax-1] now free */ } /* --- Construct new element ---------------------------------------- */ dk = 0; nv[k] = -nvk; /* flag k 
as in Lk */ p = Cp[k]; pk1 = (elenk == 0) ? p : cnz; /* do in place if elen[k] == 0 */ pk2 = pk1; for(k1 = 1; k1 <= elenk + 1; k1++) { if(k1 > elenk) { e = k; /* search the nodes in k */ pj = p; /* list of nodes starts at Ci[pj]*/ ln = len[k] - elenk; /* length of list of nodes in k */ } else { e = Ci[p++]; /* search the nodes in e */ pj = Cp[e]; ln = len[e]; /* length of list of nodes in e */ } for(k2 = 1; k2 <= ln; k2++) { i = Ci[pj++]; if((nvi = nv[i]) <= 0) continue; /* node i dead, or seen */ dk += nvi; /* degree[Lk] += size of node i */ nv[i] = -nvi; /* negate nv[i] to denote i in Lk*/ Ci[pk2++] = i; /* place i in Lk */ if(next[i] != -1) last[next[i]] = last[i]; if(last[i] != -1) /* remove i from degree list */ { next[last[i]] = next[i]; } else { head[degree[i]] = next[i]; } } if(e != k) { Cp[e] = amd_flip (k); /* absorb e into k */ w[e] = 0; /* e is now a dead element */ } } if(elenk != 0) cnz = pk2; /* Ci[cnz...nzmax] is free */ degree[k] = dk; /* external degree of k - |Lk\i| */ Cp[k] = pk1; /* element k is in Ci[pk1..pk2-1] */ len[k] = pk2 - pk1; elen[k] = -2; /* k is now an element */ /* --- Find set differences ----------------------------------------- */ mark = internal::cs_wclear(mark, lemax, w, n); /* clear w if necessary */ for(pk = pk1; pk < pk2; pk++) /* scan 1: find |Le\Lk| */ { i = Ci[pk]; if((eln = elen[i]) <= 0) continue;/* skip if elen[i] empty */ nvi = -nv[i]; /* nv[i] was negated */ wnvi = mark - nvi; for(p = Cp[i]; p <= Cp[i] + eln - 1; p++) /* scan Ei */ { e = Ci[p]; if(w[e] >= mark) { w[e] -= nvi; /* decrement |Le\Lk| */ } else if(w[e] != 0) /* ensure e is a live element */ { w[e] = degree[e] + wnvi; /* 1st time e seen in scan 1 */ } } } /* --- Degree update ------------------------------------------------ */ for(pk = pk1; pk < pk2; pk++) /* scan2: degree update */ { i = Ci[pk]; /* consider node i in Lk */ p1 = Cp[i]; p2 = p1 + elen[i] - 1; pn = p1; for(h = 0, d = 0, p = p1; p <= p2; p++) /* scan Ei */ { e = Ci[p]; if(w[e] != 0) /* e is an unabsorbed element */ { dext = w[e] - mark; /* dext = |Le\Lk| */ if(dext > 0) { d += dext; /* sum up the set differences */ Ci[pn++] = e; /* keep e in Ei */ h += e; /* compute the hash of node i */ } else { Cp[e] = amd_flip (k); /* aggressive absorb. e->k */ w[e] = 0; /* e is a dead element */ } } } elen[i] = pn - p1 + 1; /* elen[i] = |Ei| */ p3 = pn; p4 = p1 + len[i]; for(p = p2 + 1; p < p4; p++) /* prune edges in Ai */ { j = Ci[p]; if((nvj = nv[j]) <= 0) continue; /* node j dead or in Lk */ d += nvj; /* degree(i) += |j| */ Ci[pn++] = j; /* place j in node list of i */ h += j; /* compute hash for node i */ } if(d == 0) /* check for mass elimination */ { Cp[i] = amd_flip (k); /* absorb i into k */ nvi = -nv[i]; dk -= nvi; /* |Lk| -= |i| */ nvk += nvi; /* |k| += nv[i] */ nel += nvi; nv[i] = 0; elen[i] = -1; /* node i is dead */ } else { degree[i] = std::min (degree[i], d); /* update degree(i) */ Ci[pn] = Ci[p3]; /* move first node to end */ Ci[p3] = Ci[p1]; /* move 1st el. to end of Ei */ Ci[p1] = k; /* add k as 1st element in of Ei */ len[i] = pn - p1 + 1; /* new len of adj. 
list of node i */ h %= n; /* finalize hash of i */ next[i] = hhead[h]; /* place i in hash bucket */ hhead[h] = i; last[i] = h; /* save hash of i in last[i] */ } } /* scan2 is done */ degree[k] = dk; /* finalize |Lk| */ lemax = std::max(lemax, dk); mark = internal::cs_wclear(mark+lemax, lemax, w, n); /* clear w */ /* --- Supernode detection ------------------------------------------ */ for(pk = pk1; pk < pk2; pk++) { i = Ci[pk]; if(nv[i] >= 0) continue; /* skip if i is dead */ h = last[i]; /* scan hash bucket of node i */ i = hhead[h]; hhead[h] = -1; /* hash bucket will be empty */ for(; i != -1 && next[i] != -1; i = next[i], mark++) { ln = len[i]; eln = elen[i]; for(p = Cp[i]+1; p <= Cp[i] + ln-1; p++) w[Ci[p]] = mark; jlast = i; for(j = next[i]; j != -1; ) /* compare i with all j */ { ok = (len[j] == ln) && (elen[j] == eln); for(p = Cp[j] + 1; ok && p <= Cp[j] + ln - 1; p++) { if(w[Ci[p]] != mark) ok = 0; /* compare i and j*/ } if(ok) /* i and j are identical */ { Cp[j] = amd_flip (i); /* absorb j into i */ nv[i] += nv[j]; nv[j] = 0; elen[j] = -1; /* node j is dead */ j = next[j]; /* delete j from hash bucket */ next[jlast] = j; } else { jlast = j; /* j and i are different */ j = next[j]; } } } } /* --- Finalize new element------------------------------------------ */ for(p = pk1, pk = pk1; pk < pk2; pk++) /* finalize Lk */ { i = Ci[pk]; if((nvi = -nv[i]) <= 0) continue;/* skip if i is dead */ nv[i] = nvi; /* restore nv[i] */ d = degree[i] + dk - nvi; /* compute external degree(i) */ d = std::min (d, n - nel - nvi); if(head[d] != -1) last[head[d]] = i; next[i] = head[d]; /* put i back in degree list */ last[i] = -1; head[d] = i; mindeg = std::min (mindeg, d); /* find new minimum degree */ degree[i] = d; Ci[p++] = i; /* place i in Lk */ } nv[k] = nvk; /* # nodes absorbed into k */ if((len[k] = p-pk1) == 0) /* length of adj list of element k*/ { Cp[k] = -1; /* k is a root of the tree */ w[k] = 0; /* k is now a dead element */ } if(elenk != 0) cnz = p; /* free unused space in Lk */ } /* --- Postordering ----------------------------------------------------- */ for(i = 0; i < n; i++) Cp[i] = amd_flip (Cp[i]);/* fix assembly tree */ for(j = 0; j <= n; j++) head[j] = -1; for(j = n; j >= 0; j--) /* place unordered nodes in lists */ { if(nv[j] > 0) continue; /* skip if j is an element */ next[j] = head[Cp[j]]; /* place j in list of its parent */ head[Cp[j]] = j; } for(e = n; e >= 0; e--) /* place elements in lists */ { if(nv[e] <= 0) continue; /* skip unless e is an element */ if(Cp[e] != -1) { next[e] = head[Cp[e]]; /* place e in list of its parent */ head[Cp[e]] = e; } } for(k = 0, i = 0; i <= n; i++) /* postorder the assembly tree */ { if(Cp[i] == -1) k = internal::cs_tdfs(i, k, head, next, perm.indices().data(), w); } perm.indices().conservativeResize(n); } } // namespace internal } // end namespace Eigen #endif // EIGEN_SPARSE_AMD_H piqp-octave/src/eigen/Eigen/src/SVD/0000755000175100001770000000000014107270226017004 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/SVD/BDCSVD.h0000644000175100001770000015170614107270226020134 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. 
// // We used the "A Divide-And-Conquer Algorithm for the Bidiagonal SVD" // research report written by Ming Gu and Stanley C.Eisenstat // The code variable names correspond to the names they used in their // report // // Copyright (C) 2013 Gauthier Brun // Copyright (C) 2013 Nicolas Carre // Copyright (C) 2013 Jean Ceccato // Copyright (C) 2013 Pierre Zoppitelli // Copyright (C) 2013 Jitse Niesen // Copyright (C) 2014-2017 Gael Guennebaud // // Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_BDCSVD_H #define EIGEN_BDCSVD_H // #define EIGEN_BDCSVD_DEBUG_VERBOSE // #define EIGEN_BDCSVD_SANITY_CHECKS #ifdef EIGEN_BDCSVD_SANITY_CHECKS #undef eigen_internal_assert #define eigen_internal_assert(X) assert(X); #endif namespace Eigen { #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE IOFormat bdcsvdfmt(8, 0, ", ", "\n", " [", "]"); #endif template class BDCSVD; namespace internal { template struct traits > : traits<_MatrixType> { typedef _MatrixType MatrixType; }; } // end namespace internal /** \ingroup SVD_Module * * * \class BDCSVD * * \brief class Bidiagonal Divide and Conquer SVD * * \tparam _MatrixType the type of the matrix of which we are computing the SVD decomposition * * This class first reduces the input matrix to bi-diagonal form using class UpperBidiagonalization, * and then performs a divide-and-conquer diagonalization. Small blocks are diagonalized using class JacobiSVD. * You can control the switching size with the setSwitchSize() method, default is 16. * For small matrice (<16), it is thus preferable to directly use JacobiSVD. For larger ones, BDCSVD is highly * recommended and can several order of magnitude faster. * * \warning this algorithm is unlikely to provide accurate result when compiled with unsafe math optimizations. * For instance, this concerns Intel's compiler (ICC), which performs such optimization by default unless * you compile with the \c -fp-model \c precise option. Likewise, the \c -ffast-math option of GCC or clang will * significantly degrade the accuracy. * * \sa class JacobiSVD */ template class BDCSVD : public SVDBase > { typedef SVDBase Base; public: using Base::rows; using Base::cols; using Base::computeU; using Base::computeV; typedef _MatrixType MatrixType; typedef typename MatrixType::Scalar Scalar; typedef typename NumTraits::Real RealScalar; typedef typename NumTraits::Literal Literal; enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, DiagSizeAtCompileTime = EIGEN_SIZE_MIN_PREFER_DYNAMIC(RowsAtCompileTime, ColsAtCompileTime), MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime, MaxDiagSizeAtCompileTime = EIGEN_SIZE_MIN_PREFER_FIXED(MaxRowsAtCompileTime, MaxColsAtCompileTime), MatrixOptions = MatrixType::Options }; typedef typename Base::MatrixUType MatrixUType; typedef typename Base::MatrixVType MatrixVType; typedef typename Base::SingularValuesType SingularValuesType; typedef Matrix MatrixX; typedef Matrix MatrixXr; typedef Matrix VectorType; typedef Array ArrayXr; typedef Array ArrayXi; typedef Ref ArrayRef; typedef Ref IndicesRef; /** \brief Default Constructor. * * The default constructor is useful in cases in which the user intends to * perform decompositions via BDCSVD::compute(const MatrixType&). 
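    *
    * A minimal usage sketch (illustrative; it assumes a dynamic-size double matrix and thin unitaries):
    * \code
    * Eigen::MatrixXd A = Eigen::MatrixXd::Random(100, 50);
    * Eigen::BDCSVD<Eigen::MatrixXd> svd;
    * svd.compute(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
    * Eigen::VectorXd sv = svd.singularValues();                 // singular values, largest first
    * Eigen::MatrixXd approx = svd.matrixU() * sv.asDiagonal()
    *                        * svd.matrixV().adjoint();          // approx reconstructs A
    * \endcode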
*/ BDCSVD() : m_algoswap(16), m_isTranspose(false), m_compU(false), m_compV(false), m_numIters(0) {} /** \brief Default Constructor with memory preallocation * * Like the default constructor but with preallocation of the internal data * according to the specified problem size. * \sa BDCSVD() */ BDCSVD(Index rows, Index cols, unsigned int computationOptions = 0) : m_algoswap(16), m_numIters(0) { allocate(rows, cols, computationOptions); } /** \brief Constructor performing the decomposition of given matrix. * * \param matrix the matrix to decompose * \param computationOptions optional parameter allowing to specify if you want full or thin U or V unitaries to be computed. * By default, none is computed. This is a bit - field, the possible bits are #ComputeFullU, #ComputeThinU, * #ComputeFullV, #ComputeThinV. * * Thin unitaries are only available if your matrix type has a Dynamic number of columns (for example MatrixXf). They also are not * available with the (non - default) FullPivHouseholderQR preconditioner. */ BDCSVD(const MatrixType& matrix, unsigned int computationOptions = 0) : m_algoswap(16), m_numIters(0) { compute(matrix, computationOptions); } ~BDCSVD() { } /** \brief Method performing the decomposition of given matrix using custom options. * * \param matrix the matrix to decompose * \param computationOptions optional parameter allowing to specify if you want full or thin U or V unitaries to be computed. * By default, none is computed. This is a bit - field, the possible bits are #ComputeFullU, #ComputeThinU, * #ComputeFullV, #ComputeThinV. * * Thin unitaries are only available if your matrix type has a Dynamic number of columns (for example MatrixXf). They also are not * available with the (non - default) FullPivHouseholderQR preconditioner. */ BDCSVD& compute(const MatrixType& matrix, unsigned int computationOptions); /** \brief Method performing the decomposition of given matrix using current options. * * \param matrix the matrix to decompose * * This method uses the current \a computationOptions, as already passed to the constructor or to compute(const MatrixType&, unsigned int). 
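    *
    * A minimal sketch of option reuse (illustrative; \c A and \c B are assumed to be \c Eigen::MatrixXd):
    * \code
    * Eigen::BDCSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
    * svd.compute(B); // B is decomposed with the same ComputeThinU | ComputeThinV options
    * \endcode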
*/ BDCSVD& compute(const MatrixType& matrix) { return compute(matrix, this->m_computationOptions); } void setSwitchSize(int s) { eigen_assert(s>3 && "BDCSVD the size of the algo switch has to be greater than 3"); m_algoswap = s; } private: void allocate(Index rows, Index cols, unsigned int computationOptions); void divide(Index firstCol, Index lastCol, Index firstRowW, Index firstColW, Index shift); void computeSVDofM(Index firstCol, Index n, MatrixXr& U, VectorType& singVals, MatrixXr& V); void computeSingVals(const ArrayRef& col0, const ArrayRef& diag, const IndicesRef& perm, VectorType& singVals, ArrayRef shifts, ArrayRef mus); void perturbCol0(const ArrayRef& col0, const ArrayRef& diag, const IndicesRef& perm, const VectorType& singVals, const ArrayRef& shifts, const ArrayRef& mus, ArrayRef zhat); void computeSingVecs(const ArrayRef& zhat, const ArrayRef& diag, const IndicesRef& perm, const VectorType& singVals, const ArrayRef& shifts, const ArrayRef& mus, MatrixXr& U, MatrixXr& V); void deflation43(Index firstCol, Index shift, Index i, Index size); void deflation44(Index firstColu , Index firstColm, Index firstRowW, Index firstColW, Index i, Index j, Index size); void deflation(Index firstCol, Index lastCol, Index k, Index firstRowW, Index firstColW, Index shift); template void copyUV(const HouseholderU &householderU, const HouseholderV &householderV, const NaiveU &naiveU, const NaiveV &naivev); void structured_update(Block A, const MatrixXr &B, Index n1); static RealScalar secularEq(RealScalar x, const ArrayRef& col0, const ArrayRef& diag, const IndicesRef &perm, const ArrayRef& diagShifted, RealScalar shift); protected: MatrixXr m_naiveU, m_naiveV; MatrixXr m_computed; Index m_nRec; ArrayXr m_workspace; ArrayXi m_workspaceI; int m_algoswap; bool m_isTranspose, m_compU, m_compV; using Base::m_singularValues; using Base::m_diagSize; using Base::m_computeFullU; using Base::m_computeFullV; using Base::m_computeThinU; using Base::m_computeThinV; using Base::m_matrixU; using Base::m_matrixV; using Base::m_info; using Base::m_isInitialized; using Base::m_nonzeroSingularValues; public: int m_numIters; }; //end class BDCSVD // Method to allocate and initialize matrix and attributes template void BDCSVD::allocate(Eigen::Index rows, Eigen::Index cols, unsigned int computationOptions) { m_isTranspose = (cols > rows); if (Base::allocate(rows, cols, computationOptions)) return; m_computed = MatrixXr::Zero(m_diagSize + 1, m_diagSize ); m_compU = computeV(); m_compV = computeU(); if (m_isTranspose) std::swap(m_compU, m_compV); if (m_compU) m_naiveU = MatrixXr::Zero(m_diagSize + 1, m_diagSize + 1 ); else m_naiveU = MatrixXr::Zero(2, m_diagSize + 1 ); if (m_compV) m_naiveV = MatrixXr::Zero(m_diagSize, m_diagSize); m_workspace.resize((m_diagSize+1)*(m_diagSize+1)*3); m_workspaceI.resize(3*m_diagSize); }// end allocate template BDCSVD& BDCSVD::compute(const MatrixType& matrix, unsigned int computationOptions) { #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE std::cout << "\n\n\n======================================================================================================================\n\n\n"; #endif allocate(matrix.rows(), matrix.cols(), computationOptions); using std::abs; const RealScalar considerZero = (std::numeric_limits::min)(); //**** step -1 - If the problem is too small, directly falls back to JacobiSVD and return if(matrix.cols() < m_algoswap) { // FIXME this line involves temporaries JacobiSVD jsvd(matrix,computationOptions); m_isInitialized = true; m_info = jsvd.info(); if (m_info == Success || 
m_info == NoConvergence) { if(computeU()) m_matrixU = jsvd.matrixU(); if(computeV()) m_matrixV = jsvd.matrixV(); m_singularValues = jsvd.singularValues(); m_nonzeroSingularValues = jsvd.nonzeroSingularValues(); } return *this; } //**** step 0 - Copy the input matrix and apply scaling to reduce over/under-flows RealScalar scale = matrix.cwiseAbs().template maxCoeff(); if (!(numext::isfinite)(scale)) { m_isInitialized = true; m_info = InvalidInput; return *this; } if(scale==Literal(0)) scale = Literal(1); MatrixX copy; if (m_isTranspose) copy = matrix.adjoint()/scale; else copy = matrix/scale; //**** step 1 - Bidiagonalization // FIXME this line involves temporaries internal::UpperBidiagonalization bid(copy); //**** step 2 - Divide & Conquer m_naiveU.setZero(); m_naiveV.setZero(); // FIXME this line involves a temporary matrix m_computed.topRows(m_diagSize) = bid.bidiagonal().toDenseMatrix().transpose(); m_computed.template bottomRows<1>().setZero(); divide(0, m_diagSize - 1, 0, 0, 0); if (m_info != Success && m_info != NoConvergence) { m_isInitialized = true; return *this; } //**** step 3 - Copy singular values and vectors for (int i=0; i template void BDCSVD::copyUV(const HouseholderU &householderU, const HouseholderV &householderV, const NaiveU &naiveU, const NaiveV &naiveV) { // Note exchange of U and V: m_matrixU is set from m_naiveV and vice versa if (computeU()) { Index Ucols = m_computeThinU ? m_diagSize : householderU.cols(); m_matrixU = MatrixX::Identity(householderU.cols(), Ucols); m_matrixU.topLeftCorner(m_diagSize, m_diagSize) = naiveV.template cast().topLeftCorner(m_diagSize, m_diagSize); householderU.applyThisOnTheLeft(m_matrixU); // FIXME this line involves a temporary buffer } if (computeV()) { Index Vcols = m_computeThinV ? m_diagSize : householderV.cols(); m_matrixV = MatrixX::Identity(householderV.cols(), Vcols); m_matrixV.topLeftCorner(m_diagSize, m_diagSize) = naiveU.template cast().topLeftCorner(m_diagSize, m_diagSize); householderV.applyThisOnTheLeft(m_matrixV); // FIXME this line involves a temporary buffer } } /** \internal * Performs A = A * B exploiting the special structure of the matrix A. Splitting A as: * A = [A1] * [A2] * such that A1.rows()==n1, then we assume that at least half of the columns of A1 and A2 are zeros. * We can thus pack them prior to the the matrix product. However, this is only worth the effort if the matrix is large * enough. */ template void BDCSVD::structured_update(Block A, const MatrixXr &B, Index n1) { Index n = A.rows(); if(n>100) { // If the matrices are large enough, let's exploit the sparse structure of A by // splitting it in half (wrt n1), and packing the non-zero columns. Index n2 = n - n1; Map A1(m_workspace.data() , n1, n); Map A2(m_workspace.data()+ n1*n, n2, n); Map B1(m_workspace.data()+ n*n, n, n); Map B2(m_workspace.data()+2*n*n, n, n); Index k1=0, k2=0; for(Index j=0; j tmp(m_workspace.data(),n,n); tmp.noalias() = A*B; A = tmp; } } // The divide algorithm is done "in place", we are always working on subsets of the same matrix. The divide methods takes as argument the // place of the submatrix we are currently working on. //@param firstCol : The Index of the first column of the submatrix of m_computed and for m_naiveU; //@param lastCol : The Index of the last column of the submatrix of m_computed and for m_naiveU; // lastCol + 1 - firstCol is the size of the submatrix. //@param firstRowW : The Index of the first row of the matrix W that we are to change. 
(see the reference paper section 1 for more information on W) //@param firstRowW : Same as firstRowW with the column. //@param shift : Each time one takes the left submatrix, one must add 1 to the shift. Why? Because! We actually want the last column of the U submatrix // to become the first column (*coeff) and to shift all the other columns to the right. There are more details on the reference paper. template void BDCSVD::divide(Eigen::Index firstCol, Eigen::Index lastCol, Eigen::Index firstRowW, Eigen::Index firstColW, Eigen::Index shift) { // requires rows = cols + 1; using std::pow; using std::sqrt; using std::abs; const Index n = lastCol - firstCol + 1; const Index k = n/2; const RealScalar considerZero = (std::numeric_limits::min)(); RealScalar alphaK; RealScalar betaK; RealScalar r0; RealScalar lambda, phi, c0, s0; VectorType l, f; // We use the other algorithm which is more efficient for small // matrices. if (n < m_algoswap) { // FIXME this line involves temporaries JacobiSVD b(m_computed.block(firstCol, firstCol, n + 1, n), ComputeFullU | (m_compV ? ComputeFullV : 0)); m_info = b.info(); if (m_info != Success && m_info != NoConvergence) return; if (m_compU) m_naiveU.block(firstCol, firstCol, n + 1, n + 1).real() = b.matrixU(); else { m_naiveU.row(0).segment(firstCol, n + 1).real() = b.matrixU().row(0); m_naiveU.row(1).segment(firstCol, n + 1).real() = b.matrixU().row(n); } if (m_compV) m_naiveV.block(firstRowW, firstColW, n, n).real() = b.matrixV(); m_computed.block(firstCol + shift, firstCol + shift, n + 1, n).setZero(); m_computed.diagonal().segment(firstCol + shift, n) = b.singularValues().head(n); return; } // We use the divide and conquer algorithm alphaK = m_computed(firstCol + k, firstCol + k); betaK = m_computed(firstCol + k + 1, firstCol + k); // The divide must be done in that order in order to have good results. Divide change the data inside the submatrices // and the divide of the right submatrice reads one column of the left submatrice. That's why we need to treat the // right submatrix before the left one. 
divide(k + 1 + firstCol, lastCol, k + 1 + firstRowW, k + 1 + firstColW, shift); if (m_info != Success && m_info != NoConvergence) return; divide(firstCol, k - 1 + firstCol, firstRowW, firstColW + 1, shift + 1); if (m_info != Success && m_info != NoConvergence) return; if (m_compU) { lambda = m_naiveU(firstCol + k, firstCol + k); phi = m_naiveU(firstCol + k + 1, lastCol + 1); } else { lambda = m_naiveU(1, firstCol + k); phi = m_naiveU(0, lastCol + 1); } r0 = sqrt((abs(alphaK * lambda) * abs(alphaK * lambda)) + abs(betaK * phi) * abs(betaK * phi)); if (m_compU) { l = m_naiveU.row(firstCol + k).segment(firstCol, k); f = m_naiveU.row(firstCol + k + 1).segment(firstCol + k + 1, n - k - 1); } else { l = m_naiveU.row(1).segment(firstCol, k); f = m_naiveU.row(0).segment(firstCol + k + 1, n - k - 1); } if (m_compV) m_naiveV(firstRowW+k, firstColW) = Literal(1); if (r0= firstCol; i--) m_naiveU.col(i + 1).segment(firstCol, k + 1) = m_naiveU.col(i).segment(firstCol, k + 1); // we shift q1 at the left with a factor c0 m_naiveU.col(firstCol).segment( firstCol, k + 1) = (q1 * c0); // last column = q1 * - s0 m_naiveU.col(lastCol + 1).segment(firstCol, k + 1) = (q1 * ( - s0)); // first column = q2 * s0 m_naiveU.col(firstCol).segment(firstCol + k + 1, n - k) = m_naiveU.col(lastCol + 1).segment(firstCol + k + 1, n - k) * s0; // q2 *= c0 m_naiveU.col(lastCol + 1).segment(firstCol + k + 1, n - k) *= c0; } else { RealScalar q1 = m_naiveU(0, firstCol + k); // we shift Q1 to the right for (Index i = firstCol + k - 1; i >= firstCol; i--) m_naiveU(0, i + 1) = m_naiveU(0, i); // we shift q1 at the left with a factor c0 m_naiveU(0, firstCol) = (q1 * c0); // last column = q1 * - s0 m_naiveU(0, lastCol + 1) = (q1 * ( - s0)); // first column = q2 * s0 m_naiveU(1, firstCol) = m_naiveU(1, lastCol + 1) *s0; // q2 *= c0 m_naiveU(1, lastCol + 1) *= c0; m_naiveU.row(1).segment(firstCol + 1, k).setZero(); m_naiveU.row(0).segment(firstCol + k + 1, n - k - 1).setZero(); } #ifdef EIGEN_BDCSVD_SANITY_CHECKS assert(m_naiveU.allFinite()); assert(m_naiveV.allFinite()); assert(m_computed.allFinite()); #endif m_computed(firstCol + shift, firstCol + shift) = r0; m_computed.col(firstCol + shift).segment(firstCol + shift + 1, k) = alphaK * l.transpose().real(); m_computed.col(firstCol + shift).segment(firstCol + shift + k + 1, n - k - 1) = betaK * f.transpose().real(); #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE ArrayXr tmp1 = (m_computed.block(firstCol+shift, firstCol+shift, n, n)).jacobiSvd().singularValues(); #endif // Second part: try to deflate singular values in combined matrix deflation(firstCol, lastCol, k, firstRowW, firstColW, shift); #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE ArrayXr tmp2 = (m_computed.block(firstCol+shift, firstCol+shift, n, n)).jacobiSvd().singularValues(); std::cout << "\n\nj1 = " << tmp1.transpose().format(bdcsvdfmt) << "\n"; std::cout << "j2 = " << tmp2.transpose().format(bdcsvdfmt) << "\n\n"; std::cout << "err: " << ((tmp1-tmp2).abs()>1e-12*tmp2.abs()).transpose() << "\n"; static int count = 0; std::cout << "# " << ++count << "\n\n"; assert((tmp1-tmp2).matrix().norm() < 1e-14*tmp2.matrix().norm()); // assert(count<681); // assert(((tmp1-tmp2).abs()<1e-13*tmp2.abs()).all()); #endif // Third part: compute SVD of combined matrix MatrixXr UofSVD, VofSVD; VectorType singVals; computeSVDofM(firstCol + shift, n, UofSVD, singVals, VofSVD); #ifdef EIGEN_BDCSVD_SANITY_CHECKS assert(UofSVD.allFinite()); assert(VofSVD.allFinite()); #endif if (m_compU) structured_update(m_naiveU.block(firstCol, firstCol, n + 1, n + 1), UofSVD, (n+2)/2); 
else { Map,Aligned> tmp(m_workspace.data(),2,n+1); tmp.noalias() = m_naiveU.middleCols(firstCol, n+1) * UofSVD; m_naiveU.middleCols(firstCol, n + 1) = tmp; } if (m_compV) structured_update(m_naiveV.block(firstRowW, firstColW, n, n), VofSVD, (n+1)/2); #ifdef EIGEN_BDCSVD_SANITY_CHECKS assert(m_naiveU.allFinite()); assert(m_naiveV.allFinite()); assert(m_computed.allFinite()); #endif m_computed.block(firstCol + shift, firstCol + shift, n, n).setZero(); m_computed.block(firstCol + shift, firstCol + shift, n, n).diagonal() = singVals; }// end divide // Compute SVD of m_computed.block(firstCol, firstCol, n + 1, n); this block only has non-zeros in // the first column and on the diagonal and has undergone deflation, so diagonal is in increasing // order except for possibly the (0,0) entry. The computed SVD is stored U, singVals and V, except // that if m_compV is false, then V is not computed. Singular values are sorted in decreasing order. // // TODO Opportunities for optimization: better root finding algo, better stopping criterion, better // handling of round-off errors, be consistent in ordering // For instance, to solve the secular equation using FMM, see http://www.stat.uchicago.edu/~lekheng/courses/302/classics/greengard-rokhlin.pdf template void BDCSVD::computeSVDofM(Eigen::Index firstCol, Eigen::Index n, MatrixXr& U, VectorType& singVals, MatrixXr& V) { const RealScalar considerZero = (std::numeric_limits::min)(); using std::abs; ArrayRef col0 = m_computed.col(firstCol).segment(firstCol, n); m_workspace.head(n) = m_computed.block(firstCol, firstCol, n, n).diagonal(); ArrayRef diag = m_workspace.head(n); diag(0) = Literal(0); // Allocate space for singular values and vectors singVals.resize(n); U.resize(n+1, n+1); if (m_compV) V.resize(n, n); #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE if (col0.hasNaN() || diag.hasNaN()) std::cout << "\n\nHAS NAN\n\n"; #endif // Many singular values might have been deflated, the zero ones have been moved to the end, // but others are interleaved and we must ignore them at this stage. 
// To this end, let's compute a permutation skipping them: Index actual_n = n; while(actual_n>1 && diag(actual_n-1)==Literal(0)) {--actual_n; eigen_internal_assert(col0(actual_n)==Literal(0)); } Index m = 0; // size of the deflated problem for(Index k=0;kconsiderZero) m_workspaceI(m++) = k; Map perm(m_workspaceI.data(),m); Map shifts(m_workspace.data()+1*n, n); Map mus(m_workspace.data()+2*n, n); Map zhat(m_workspace.data()+3*n, n); #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE std::cout << "computeSVDofM using:\n"; std::cout << " z: " << col0.transpose() << "\n"; std::cout << " d: " << diag.transpose() << "\n"; #endif // Compute singVals, shifts, and mus computeSingVals(col0, diag, perm, singVals, shifts, mus); #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE std::cout << " j: " << (m_computed.block(firstCol, firstCol, n, n)).jacobiSvd().singularValues().transpose().reverse() << "\n\n"; std::cout << " sing-val: " << singVals.transpose() << "\n"; std::cout << " mu: " << mus.transpose() << "\n"; std::cout << " shift: " << shifts.transpose() << "\n"; { std::cout << "\n\n mus: " << mus.head(actual_n).transpose() << "\n\n"; std::cout << " check1 (expect0) : " << ((singVals.array()-(shifts+mus)) / singVals.array()).head(actual_n).transpose() << "\n\n"; assert((((singVals.array()-(shifts+mus)) / singVals.array()).head(actual_n) >= 0).all()); std::cout << " check2 (>0) : " << ((singVals.array()-diag) / singVals.array()).head(actual_n).transpose() << "\n\n"; assert((((singVals.array()-diag) / singVals.array()).head(actual_n) >= 0).all()); } #endif #ifdef EIGEN_BDCSVD_SANITY_CHECKS assert(singVals.allFinite()); assert(mus.allFinite()); assert(shifts.allFinite()); #endif // Compute zhat perturbCol0(col0, diag, perm, singVals, shifts, mus, zhat); #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE std::cout << " zhat: " << zhat.transpose() << "\n"; #endif #ifdef EIGEN_BDCSVD_SANITY_CHECKS assert(zhat.allFinite()); #endif computeSingVecs(zhat, diag, perm, singVals, shifts, mus, U, V); #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE std::cout << "U^T U: " << (U.transpose() * U - MatrixXr(MatrixXr::Identity(U.cols(),U.cols()))).norm() << "\n"; std::cout << "V^T V: " << (V.transpose() * V - MatrixXr(MatrixXr::Identity(V.cols(),V.cols()))).norm() << "\n"; #endif #ifdef EIGEN_BDCSVD_SANITY_CHECKS assert(m_naiveU.allFinite()); assert(m_naiveV.allFinite()); assert(m_computed.allFinite()); assert(U.allFinite()); assert(V.allFinite()); // assert((U.transpose() * U - MatrixXr(MatrixXr::Identity(U.cols(),U.cols()))).norm() < 100*NumTraits::epsilon() * n); // assert((V.transpose() * V - MatrixXr(MatrixXr::Identity(V.cols(),V.cols()))).norm() < 100*NumTraits::epsilon() * n); #endif // Because of deflation, the singular values might not be completely sorted. 
// Fortunately, reordering them is a O(n) problem for(Index i=0; isingVals(i+1)) { using std::swap; swap(singVals(i),singVals(i+1)); U.col(i).swap(U.col(i+1)); if(m_compV) V.col(i).swap(V.col(i+1)); } } #ifdef EIGEN_BDCSVD_SANITY_CHECKS { bool singular_values_sorted = (((singVals.segment(1,actual_n-1)-singVals.head(actual_n-1))).array() >= 0).all(); if(!singular_values_sorted) std::cout << "Singular values are not sorted: " << singVals.segment(1,actual_n).transpose() << "\n"; assert(singular_values_sorted); } #endif // Reverse order so that singular values in increased order // Because of deflation, the zeros singular-values are already at the end singVals.head(actual_n).reverseInPlace(); U.leftCols(actual_n).rowwise().reverseInPlace(); if (m_compV) V.leftCols(actual_n).rowwise().reverseInPlace(); #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE JacobiSVD jsvd(m_computed.block(firstCol, firstCol, n, n) ); std::cout << " * j: " << jsvd.singularValues().transpose() << "\n\n"; std::cout << " * sing-val: " << singVals.transpose() << "\n"; // std::cout << " * err: " << ((jsvd.singularValues()-singVals)>1e-13*singVals.norm()).transpose() << "\n"; #endif } template typename BDCSVD::RealScalar BDCSVD::secularEq(RealScalar mu, const ArrayRef& col0, const ArrayRef& diag, const IndicesRef &perm, const ArrayRef& diagShifted, RealScalar shift) { Index m = perm.size(); RealScalar res = Literal(1); for(Index i=0; i void BDCSVD::computeSingVals(const ArrayRef& col0, const ArrayRef& diag, const IndicesRef &perm, VectorType& singVals, ArrayRef shifts, ArrayRef mus) { using std::abs; using std::swap; using std::sqrt; Index n = col0.size(); Index actual_n = n; // Note that here actual_n is computed based on col0(i)==0 instead of diag(i)==0 as above // because 1) we have diag(i)==0 => col0(i)==0 and 2) if col0(i)==0, then diag(i) is already a singular value. while(actual_n>1 && col0(actual_n-1)==Literal(0)) --actual_n; for (Index k = 0; k < n; ++k) { if (col0(k) == Literal(0) || actual_n==1) { // if col0(k) == 0, then entry is deflated, so singular value is on diagonal // if actual_n==1, then the deflated problem is already diagonalized singVals(k) = k==0 ? col0(0) : diag(k); mus(k) = Literal(0); shifts(k) = k==0 ? col0(0) : diag(k); continue; } // otherwise, use secular equation to find singular value RealScalar left = diag(k); RealScalar right; // was: = (k != actual_n-1) ? diag(k+1) : (diag(actual_n-1) + col0.matrix().norm()); if(k==actual_n-1) right = (diag(actual_n-1) + col0.matrix().norm()); else { // Skip deflated singular values, // recall that at this stage we assume that z[j]!=0 and all entries for which z[j]==0 have been put aside. // This should be equivalent to using perm[] Index l = k+1; while(col0(l)==Literal(0)) { ++l; eigen_internal_assert(l Literal(0)) ? left : right; // measure everything relative to shift Map diagShifted(m_workspace.data()+4*n, n); diagShifted = diag - shift; if(k!=actual_n-1) { // check that after the shift, f(mid) is still negative: RealScalar midShifted = (right - left) / RealScalar(2); if(shift==right) midShifted = -midShifted; RealScalar fMidShifted = secularEq(midShifted, col0, diag, perm, diagShifted, shift); if(fMidShifted>0) { // fMid was erroneous, fix it: shift = fMidShifted > Literal(0) ? 
left : right; diagShifted = diag - shift; } } // initial guess RealScalar muPrev, muCur; if (shift == left) { muPrev = (right - left) * RealScalar(0.1); if (k == actual_n-1) muCur = right - left; else muCur = (right - left) * RealScalar(0.5); } else { muPrev = -(right - left) * RealScalar(0.1); muCur = -(right - left) * RealScalar(0.5); } RealScalar fPrev = secularEq(muPrev, col0, diag, perm, diagShifted, shift); RealScalar fCur = secularEq(muCur, col0, diag, perm, diagShifted, shift); if (abs(fPrev) < abs(fCur)) { swap(fPrev, fCur); swap(muPrev, muCur); } // rational interpolation: fit a function of the form a / mu + b through the two previous // iterates and use its zero to compute the next iterate bool useBisection = fPrev*fCur>Literal(0); while (fCur!=Literal(0) && abs(muCur - muPrev) > Literal(8) * NumTraits::epsilon() * numext::maxi(abs(muCur), abs(muPrev)) && abs(fCur - fPrev)>NumTraits::epsilon() && !useBisection) { ++m_numIters; // Find a and b such that the function f(mu) = a / mu + b matches the current and previous samples. RealScalar a = (fCur - fPrev) / (Literal(1)/muCur - Literal(1)/muPrev); RealScalar b = fCur - a / muCur; // And find mu such that f(mu)==0: RealScalar muZero = -a/b; RealScalar fZero = secularEq(muZero, col0, diag, perm, diagShifted, shift); #ifdef EIGEN_BDCSVD_SANITY_CHECKS assert((numext::isfinite)(fZero)); #endif muPrev = muCur; fPrev = fCur; muCur = muZero; fCur = fZero; if (shift == left && (muCur < Literal(0) || muCur > right - left)) useBisection = true; if (shift == right && (muCur < -(right - left) || muCur > Literal(0))) useBisection = true; if (abs(fCur)>abs(fPrev)) useBisection = true; } // fall back on bisection method if rational interpolation did not work if (useBisection) { #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE std::cout << "useBisection for k = " << k << ", actual_n = " << actual_n << "\n"; #endif RealScalar leftShifted, rightShifted; if (shift == left) { // to avoid overflow, we must have mu > max(real_min, |z(k)|/sqrt(real_max)), // the factor 2 is to be more conservative leftShifted = numext::maxi( (std::numeric_limits::min)(), Literal(2) * abs(col0(k)) / sqrt((std::numeric_limits::max)()) ); // check that we did it right: eigen_internal_assert( (numext::isfinite)( (col0(k)/leftShifted)*(col0(k)/(diag(k)+shift+leftShifted)) ) ); // I don't understand why the case k==0 would be special there: // if (k == 0) rightShifted = right - left; else rightShifted = (k==actual_n-1) ? 
right : ((right - left) * RealScalar(0.51)); // theoretically we can take 0.5, but let's be safe } else { leftShifted = -(right - left) * RealScalar(0.51); if(k+1( (std::numeric_limits::min)(), abs(col0(k+1)) / sqrt((std::numeric_limits::max)()) ); else rightShifted = -(std::numeric_limits::min)(); } RealScalar fLeft = secularEq(leftShifted, col0, diag, perm, diagShifted, shift); eigen_internal_assert(fLeft [" << leftShifted << " " << rightShifted << "], shift=" << shift << " , f(right)=" << secularEq(0, col0, diag, perm, diagShifted, shift) << " == " << secularEq(right, col0, diag, perm, diag, 0) << " == " << fRight << "\n"; } #endif eigen_internal_assert(fLeft * fRight < Literal(0)); if(fLeft Literal(2) * NumTraits::epsilon() * numext::maxi(abs(leftShifted), abs(rightShifted))) { RealScalar midShifted = (leftShifted + rightShifted) / Literal(2); fMid = secularEq(midShifted, col0, diag, perm, diagShifted, shift); eigen_internal_assert((numext::isfinite)(fMid)); if (fLeft * fMid < Literal(0)) { rightShifted = midShifted; } else { leftShifted = midShifted; fLeft = fMid; } } muCur = (leftShifted + rightShifted) / Literal(2); } else { // We have a problem as shifting on the left or right give either a positive or negative value // at the middle of [left,right]... // Instead fo abbording or entering an infinite loop, // let's just use the middle as the estimated zero-crossing: muCur = (right - left) * RealScalar(0.5); if(shift == right) muCur = -muCur; } } singVals[k] = shift + muCur; shifts[k] = shift; mus[k] = muCur; #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE if(k+1=singVals[k-1]); assert(singVals[k]>=diag(k)); #endif // perturb singular value slightly if it equals diagonal entry to avoid division by zero later // (deflation is supposed to avoid this from happening) // - this does no seem to be necessary anymore - // if (singVals[k] == left) singVals[k] *= 1 + NumTraits::epsilon(); // if (singVals[k] == right) singVals[k] *= 1 - NumTraits::epsilon(); } } // zhat is perturbation of col0 for which singular vectors can be computed stably (see Section 3.1) template void BDCSVD::perturbCol0 (const ArrayRef& col0, const ArrayRef& diag, const IndicesRef &perm, const VectorType& singVals, const ArrayRef& shifts, const ArrayRef& mus, ArrayRef zhat) { using std::sqrt; Index n = col0.size(); Index m = perm.size(); if(m==0) { zhat.setZero(); return; } Index lastIdx = perm(m-1); // The offset permits to skip deflated entries while computing zhat for (Index k = 0; k < n; ++k) { if (col0(k) == Literal(0)) // deflated zhat(k) = Literal(0); else { // see equation (3.6) RealScalar dk = diag(k); RealScalar prod = (singVals(lastIdx) + dk) * (mus(lastIdx) + (shifts(lastIdx) - dk)); #ifdef EIGEN_BDCSVD_SANITY_CHECKS if(prod<0) { std::cout << "k = " << k << " ; z(k)=" << col0(k) << ", diag(k)=" << dk << "\n"; std::cout << "prod = " << "(" << singVals(lastIdx) << " + " << dk << ") * (" << mus(lastIdx) << " + (" << shifts(lastIdx) << " - " << dk << "))" << "\n"; std::cout << " = " << singVals(lastIdx) + dk << " * " << mus(lastIdx) + (shifts(lastIdx) - dk) << "\n"; } assert(prod>=0); #endif for(Index l = 0; l=k && (l==0 || l-1>=m)) { std::cout << "Error in perturbCol0\n"; std::cout << " " << k << "/" << n << " " << l << "/" << m << " " << i << "/" << n << " ; " << col0(k) << " " << diag(k) << " " << "\n"; std::cout << " " <=0); #endif #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE if(i!=k && numext::abs(((singVals(j)+dk)*(mus(j)+(shifts(j)-dk)))/((diag(i)+dk)*(diag(i)-dk)) - 1) > 0.9 ) std::cout << " " << 
((singVals(j)+dk)*(mus(j)+(shifts(j)-dk)))/((diag(i)+dk)*(diag(i)-dk)) << " == (" << (singVals(j)+dk) << " * " << (mus(j)+(shifts(j)-dk)) << ") / (" << (diag(i)+dk) << " * " << (diag(i)-dk) << ")\n"; #endif } } #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE std::cout << "zhat(" << k << ") = sqrt( " << prod << ") ; " << (singVals(lastIdx) + dk) << " * " << mus(lastIdx) + shifts(lastIdx) << " - " << dk << "\n"; #endif RealScalar tmp = sqrt(prod); #ifdef EIGEN_BDCSVD_SANITY_CHECKS assert((numext::isfinite)(tmp)); #endif zhat(k) = col0(k) > Literal(0) ? RealScalar(tmp) : RealScalar(-tmp); } } } // compute singular vectors template void BDCSVD::computeSingVecs (const ArrayRef& zhat, const ArrayRef& diag, const IndicesRef &perm, const VectorType& singVals, const ArrayRef& shifts, const ArrayRef& mus, MatrixXr& U, MatrixXr& V) { Index n = zhat.size(); Index m = perm.size(); for (Index k = 0; k < n; ++k) { if (zhat(k) == Literal(0)) { U.col(k) = VectorType::Unit(n+1, k); if (m_compV) V.col(k) = VectorType::Unit(n, k); } else { U.col(k).setZero(); for(Index l=0;l= 1, di almost null and zi non null. // We use a rotation to zero out zi applied to the left of M template void BDCSVD::deflation43(Eigen::Index firstCol, Eigen::Index shift, Eigen::Index i, Eigen::Index size) { using std::abs; using std::sqrt; using std::pow; Index start = firstCol + shift; RealScalar c = m_computed(start, start); RealScalar s = m_computed(start+i, start); RealScalar r = numext::hypot(c,s); if (r == Literal(0)) { m_computed(start+i, start+i) = Literal(0); return; } m_computed(start,start) = r; m_computed(start+i, start) = Literal(0); m_computed(start+i, start+i) = Literal(0); JacobiRotation J(c/r,-s/r); if (m_compU) m_naiveU.middleRows(firstCol, size+1).applyOnTheRight(firstCol, firstCol+i, J); else m_naiveU.applyOnTheRight(firstCol, firstCol+i, J); }// end deflation 43 // page 13 // i,j >= 1, i!=j and |di - dj| < epsilon * norm2(M) // We apply two rotations to have zj = 0; // TODO deflation44 is still broken and not properly tested template void BDCSVD::deflation44(Eigen::Index firstColu , Eigen::Index firstColm, Eigen::Index firstRowW, Eigen::Index firstColW, Eigen::Index i, Eigen::Index j, Eigen::Index size) { using std::abs; using std::sqrt; using std::conj; using std::pow; RealScalar c = m_computed(firstColm+i, firstColm); RealScalar s = m_computed(firstColm+j, firstColm); RealScalar r = sqrt(numext::abs2(c) + numext::abs2(s)); #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE std::cout << "deflation 4.4: " << i << "," << j << " -> " << c << " " << s << " " << r << " ; " << m_computed(firstColm + i-1, firstColm) << " " << m_computed(firstColm + i, firstColm) << " " << m_computed(firstColm + i+1, firstColm) << " " << m_computed(firstColm + i+2, firstColm) << "\n"; std::cout << m_computed(firstColm + i-1, firstColm + i-1) << " " << m_computed(firstColm + i, firstColm+i) << " " << m_computed(firstColm + i+1, firstColm+i+1) << " " << m_computed(firstColm + i+2, firstColm+i+2) << "\n"; #endif if (r==Literal(0)) { m_computed(firstColm + i, firstColm + i) = m_computed(firstColm + j, firstColm + j); return; } c/=r; s/=r; m_computed(firstColm + i, firstColm) = r; m_computed(firstColm + j, firstColm + j) = m_computed(firstColm + i, firstColm + i); m_computed(firstColm + j, firstColm) = Literal(0); JacobiRotation J(c,-s); if (m_compU) m_naiveU.middleRows(firstColu, size+1).applyOnTheRight(firstColu + i, firstColu + j, J); else m_naiveU.applyOnTheRight(firstColu+i, firstColu+j, J); if (m_compV) m_naiveV.middleRows(firstRowW, size).applyOnTheRight(firstColW + 
i, firstColW + j, J); }// end deflation 44 // acts on block from (firstCol+shift, firstCol+shift) to (lastCol+shift, lastCol+shift) [inclusive] template void BDCSVD::deflation(Eigen::Index firstCol, Eigen::Index lastCol, Eigen::Index k, Eigen::Index firstRowW, Eigen::Index firstColW, Eigen::Index shift) { using std::sqrt; using std::abs; const Index length = lastCol + 1 - firstCol; Block col0(m_computed, firstCol+shift, firstCol+shift, length, 1); Diagonal fulldiag(m_computed); VectorBlock,Dynamic> diag(fulldiag, firstCol+shift, length); const RealScalar considerZero = (std::numeric_limits::min)(); RealScalar maxDiag = diag.tail((std::max)(Index(1),length-1)).cwiseAbs().maxCoeff(); RealScalar epsilon_strict = numext::maxi(considerZero,NumTraits::epsilon() * maxDiag); RealScalar epsilon_coarse = Literal(8) * NumTraits::epsilon() * numext::maxi(col0.cwiseAbs().maxCoeff(), maxDiag); #ifdef EIGEN_BDCSVD_SANITY_CHECKS assert(m_naiveU.allFinite()); assert(m_naiveV.allFinite()); assert(m_computed.allFinite()); #endif #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE std::cout << "\ndeflate:" << diag.head(k+1).transpose() << " | " << diag.segment(k+1,length-k-1).transpose() << "\n"; #endif //condition 4.1 if (diag(0) < epsilon_coarse) { #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE std::cout << "deflation 4.1, because " << diag(0) << " < " << epsilon_coarse << "\n"; #endif diag(0) = epsilon_coarse; } //condition 4.2 for (Index i=1;i k) permutation[p] = j++; else if (j >= length) permutation[p] = i++; else if (diag(i) < diag(j)) permutation[p] = j++; else permutation[p] = i++; } } // If we have a total deflation, then we have to insert diag(0) at the right place if(total_deflation) { for(Index i=1; i0 && (abs(diag(i))1;--i) if( (diag(i) - diag(i-1)) < NumTraits::epsilon()*maxDiag ) { #ifdef EIGEN_BDCSVD_DEBUG_VERBOSE std::cout << "deflation 4.4 with i = " << i << " because " << diag(i) << " - " << diag(i-1) << " == " << (diag(i) - diag(i-1)) << " < " << NumTraits::epsilon()*/*diag(i)*/maxDiag << "\n"; #endif eigen_internal_assert(abs(diag(i) - diag(i-1)) BDCSVD::PlainObject> MatrixBase::bdcSvd(unsigned int computationOptions) const { return BDCSVD(*this, computationOptions); } } // end namespace Eigen #endif piqp-octave/src/eigen/Eigen/src/SVD/JacobiSVD.h0000644000175100001770000010033414107270226020722 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009-2010 Benoit Jacob // Copyright (C) 2013-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_JACOBISVD_H #define EIGEN_JACOBISVD_H namespace Eigen { namespace internal { // forward declaration (needed by ICC) // the empty body is required by MSVC template::IsComplex> struct svd_precondition_2x2_block_to_be_real {}; /*** QR preconditioners (R-SVD) *** *** Their role is to reduce the problem of computing the SVD to the case of a square matrix. *** This approach, known as R-SVD, is an optimization for rectangular-enough matrices, and is a requirement for *** JacobiSVD which by itself is only able to work on square matrices. 
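// ---------------------------------------------------------------------------
// [Editorial sketch, not part of the original Eigen sources] The BDCSVD
// deflation steps 4.3/4.4 above zero out an entry z_i with a Givens rotation
// built from c = d_0 and s = z_i, r = hypot(c, s). The tiny standalone
// program below (hypothetical values, to be compiled separately against the
// Eigen headers) only illustrates that this rotation maps the pair (c, s)
// to (r, 0).
#if 0
#include <Eigen/Dense>
#include <cmath>
#include <iostream>
int main()
{
  double c = 3.0, s = 4.0;
  double r = std::hypot(c, s);               // r = 5
  Eigen::Matrix2d G;
  G <<  c / r, s / r,
       -s / r, c / r;                        // orthogonal Givens rotation
  Eigen::Vector2d v(c, s);
  std::cout << (G * v).transpose() << "\n";  // prints approximately: 5 0
  return 0;
}
#endif
// ---------------------------------------------------------------------------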
***/ enum { PreconditionIfMoreColsThanRows, PreconditionIfMoreRowsThanCols }; template struct qr_preconditioner_should_do_anything { enum { a = MatrixType::RowsAtCompileTime != Dynamic && MatrixType::ColsAtCompileTime != Dynamic && MatrixType::ColsAtCompileTime <= MatrixType::RowsAtCompileTime, b = MatrixType::RowsAtCompileTime != Dynamic && MatrixType::ColsAtCompileTime != Dynamic && MatrixType::RowsAtCompileTime <= MatrixType::ColsAtCompileTime, ret = !( (QRPreconditioner == NoQRPreconditioner) || (Case == PreconditionIfMoreColsThanRows && bool(a)) || (Case == PreconditionIfMoreRowsThanCols && bool(b)) ) }; }; template::ret > struct qr_preconditioner_impl {}; template class qr_preconditioner_impl { public: void allocate(const JacobiSVD&) {} bool run(JacobiSVD&, const MatrixType&) { return false; } }; /*** preconditioner using FullPivHouseholderQR ***/ template class qr_preconditioner_impl { public: typedef typename MatrixType::Scalar Scalar; enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime }; typedef Matrix WorkspaceType; void allocate(const JacobiSVD& svd) { if (svd.rows() != m_qr.rows() || svd.cols() != m_qr.cols()) { m_qr.~QRType(); ::new (&m_qr) QRType(svd.rows(), svd.cols()); } if (svd.m_computeFullU) m_workspace.resize(svd.rows()); } bool run(JacobiSVD& svd, const MatrixType& matrix) { if(matrix.rows() > matrix.cols()) { m_qr.compute(matrix); svd.m_workMatrix = m_qr.matrixQR().block(0,0,matrix.cols(),matrix.cols()).template triangularView(); if(svd.m_computeFullU) m_qr.matrixQ().evalTo(svd.m_matrixU, m_workspace); if(svd.computeV()) svd.m_matrixV = m_qr.colsPermutation(); return true; } return false; } private: typedef FullPivHouseholderQR QRType; QRType m_qr; WorkspaceType m_workspace; }; template class qr_preconditioner_impl { public: typedef typename MatrixType::Scalar Scalar; enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime, Options = MatrixType::Options }; typedef typename internal::make_proper_matrix_type< Scalar, ColsAtCompileTime, RowsAtCompileTime, Options, MaxColsAtCompileTime, MaxRowsAtCompileTime >::type TransposeTypeWithSameStorageOrder; void allocate(const JacobiSVD& svd) { if (svd.cols() != m_qr.rows() || svd.rows() != m_qr.cols()) { m_qr.~QRType(); ::new (&m_qr) QRType(svd.cols(), svd.rows()); } m_adjoint.resize(svd.cols(), svd.rows()); if (svd.m_computeFullV) m_workspace.resize(svd.cols()); } bool run(JacobiSVD& svd, const MatrixType& matrix) { if(matrix.cols() > matrix.rows()) { m_adjoint = matrix.adjoint(); m_qr.compute(m_adjoint); svd.m_workMatrix = m_qr.matrixQR().block(0,0,matrix.rows(),matrix.rows()).template triangularView().adjoint(); if(svd.m_computeFullV) m_qr.matrixQ().evalTo(svd.m_matrixV, m_workspace); if(svd.computeU()) svd.m_matrixU = m_qr.colsPermutation(); return true; } else return false; } private: typedef FullPivHouseholderQR QRType; QRType m_qr; TransposeTypeWithSameStorageOrder m_adjoint; typename internal::plain_row_type::type m_workspace; }; /*** preconditioner using ColPivHouseholderQR ***/ template class qr_preconditioner_impl { public: void allocate(const JacobiSVD& svd) { if (svd.rows() != m_qr.rows() || svd.cols() != m_qr.cols()) { m_qr.~QRType(); ::new (&m_qr) QRType(svd.rows(), svd.cols()); } if (svd.m_computeFullU) m_workspace.resize(svd.rows()); else if (svd.m_computeThinU) 
m_workspace.resize(svd.cols()); } bool run(JacobiSVD& svd, const MatrixType& matrix) { if(matrix.rows() > matrix.cols()) { m_qr.compute(matrix); svd.m_workMatrix = m_qr.matrixQR().block(0,0,matrix.cols(),matrix.cols()).template triangularView(); if(svd.m_computeFullU) m_qr.householderQ().evalTo(svd.m_matrixU, m_workspace); else if(svd.m_computeThinU) { svd.m_matrixU.setIdentity(matrix.rows(), matrix.cols()); m_qr.householderQ().applyThisOnTheLeft(svd.m_matrixU, m_workspace); } if(svd.computeV()) svd.m_matrixV = m_qr.colsPermutation(); return true; } return false; } private: typedef ColPivHouseholderQR QRType; QRType m_qr; typename internal::plain_col_type::type m_workspace; }; template class qr_preconditioner_impl { public: typedef typename MatrixType::Scalar Scalar; enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime, Options = MatrixType::Options }; typedef typename internal::make_proper_matrix_type< Scalar, ColsAtCompileTime, RowsAtCompileTime, Options, MaxColsAtCompileTime, MaxRowsAtCompileTime >::type TransposeTypeWithSameStorageOrder; void allocate(const JacobiSVD& svd) { if (svd.cols() != m_qr.rows() || svd.rows() != m_qr.cols()) { m_qr.~QRType(); ::new (&m_qr) QRType(svd.cols(), svd.rows()); } if (svd.m_computeFullV) m_workspace.resize(svd.cols()); else if (svd.m_computeThinV) m_workspace.resize(svd.rows()); m_adjoint.resize(svd.cols(), svd.rows()); } bool run(JacobiSVD& svd, const MatrixType& matrix) { if(matrix.cols() > matrix.rows()) { m_adjoint = matrix.adjoint(); m_qr.compute(m_adjoint); svd.m_workMatrix = m_qr.matrixQR().block(0,0,matrix.rows(),matrix.rows()).template triangularView().adjoint(); if(svd.m_computeFullV) m_qr.householderQ().evalTo(svd.m_matrixV, m_workspace); else if(svd.m_computeThinV) { svd.m_matrixV.setIdentity(matrix.cols(), matrix.rows()); m_qr.householderQ().applyThisOnTheLeft(svd.m_matrixV, m_workspace); } if(svd.computeU()) svd.m_matrixU = m_qr.colsPermutation(); return true; } else return false; } private: typedef ColPivHouseholderQR QRType; QRType m_qr; TransposeTypeWithSameStorageOrder m_adjoint; typename internal::plain_row_type::type m_workspace; }; /*** preconditioner using HouseholderQR ***/ template class qr_preconditioner_impl { public: void allocate(const JacobiSVD& svd) { if (svd.rows() != m_qr.rows() || svd.cols() != m_qr.cols()) { m_qr.~QRType(); ::new (&m_qr) QRType(svd.rows(), svd.cols()); } if (svd.m_computeFullU) m_workspace.resize(svd.rows()); else if (svd.m_computeThinU) m_workspace.resize(svd.cols()); } bool run(JacobiSVD& svd, const MatrixType& matrix) { if(matrix.rows() > matrix.cols()) { m_qr.compute(matrix); svd.m_workMatrix = m_qr.matrixQR().block(0,0,matrix.cols(),matrix.cols()).template triangularView(); if(svd.m_computeFullU) m_qr.householderQ().evalTo(svd.m_matrixU, m_workspace); else if(svd.m_computeThinU) { svd.m_matrixU.setIdentity(matrix.rows(), matrix.cols()); m_qr.householderQ().applyThisOnTheLeft(svd.m_matrixU, m_workspace); } if(svd.computeV()) svd.m_matrixV.setIdentity(matrix.cols(), matrix.cols()); return true; } return false; } private: typedef HouseholderQR QRType; QRType m_qr; typename internal::plain_col_type::type m_workspace; }; template class qr_preconditioner_impl { public: typedef typename MatrixType::Scalar Scalar; enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, 
MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime, Options = MatrixType::Options }; typedef typename internal::make_proper_matrix_type< Scalar, ColsAtCompileTime, RowsAtCompileTime, Options, MaxColsAtCompileTime, MaxRowsAtCompileTime >::type TransposeTypeWithSameStorageOrder; void allocate(const JacobiSVD& svd) { if (svd.cols() != m_qr.rows() || svd.rows() != m_qr.cols()) { m_qr.~QRType(); ::new (&m_qr) QRType(svd.cols(), svd.rows()); } if (svd.m_computeFullV) m_workspace.resize(svd.cols()); else if (svd.m_computeThinV) m_workspace.resize(svd.rows()); m_adjoint.resize(svd.cols(), svd.rows()); } bool run(JacobiSVD& svd, const MatrixType& matrix) { if(matrix.cols() > matrix.rows()) { m_adjoint = matrix.adjoint(); m_qr.compute(m_adjoint); svd.m_workMatrix = m_qr.matrixQR().block(0,0,matrix.rows(),matrix.rows()).template triangularView().adjoint(); if(svd.m_computeFullV) m_qr.householderQ().evalTo(svd.m_matrixV, m_workspace); else if(svd.m_computeThinV) { svd.m_matrixV.setIdentity(matrix.cols(), matrix.rows()); m_qr.householderQ().applyThisOnTheLeft(svd.m_matrixV, m_workspace); } if(svd.computeU()) svd.m_matrixU.setIdentity(matrix.rows(), matrix.rows()); return true; } else return false; } private: typedef HouseholderQR QRType; QRType m_qr; TransposeTypeWithSameStorageOrder m_adjoint; typename internal::plain_row_type::type m_workspace; }; /*** 2x2 SVD implementation *** *** JacobiSVD consists in performing a series of 2x2 SVD subproblems ***/ template struct svd_precondition_2x2_block_to_be_real { typedef JacobiSVD SVD; typedef typename MatrixType::RealScalar RealScalar; static bool run(typename SVD::WorkMatrixType&, SVD&, Index, Index, RealScalar&) { return true; } }; template struct svd_precondition_2x2_block_to_be_real { typedef JacobiSVD SVD; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; static bool run(typename SVD::WorkMatrixType& work_matrix, SVD& svd, Index p, Index q, RealScalar& maxDiagEntry) { using std::sqrt; using std::abs; Scalar z; JacobiRotation rot; RealScalar n = sqrt(numext::abs2(work_matrix.coeff(p,p)) + numext::abs2(work_matrix.coeff(q,p))); const RealScalar considerAsZero = (std::numeric_limits::min)(); const RealScalar precision = NumTraits::epsilon(); if(n==0) { // make sure first column is zero work_matrix.coeffRef(p,p) = work_matrix.coeffRef(q,p) = Scalar(0); if(abs(numext::imag(work_matrix.coeff(p,q)))>considerAsZero) { // work_matrix.coeff(p,q) can be zero if work_matrix.coeff(q,p) is not zero but small enough to underflow when computing n z = abs(work_matrix.coeff(p,q)) / work_matrix.coeff(p,q); work_matrix.row(p) *= z; if(svd.computeU()) svd.m_matrixU.col(p) *= conj(z); } if(abs(numext::imag(work_matrix.coeff(q,q)))>considerAsZero) { z = abs(work_matrix.coeff(q,q)) / work_matrix.coeff(q,q); work_matrix.row(q) *= z; if(svd.computeU()) svd.m_matrixU.col(q) *= conj(z); } // otherwise the second row is already zero, so we have nothing to do. 
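// ---------------------------------------------------------------------------
// [Editorial sketch, not part of the original Eigen sources] The complex
// branch above multiplies a row (or column) by the unit-modulus factor
// z = |a| / a so that the corresponding entry becomes the real value |a|.
// A minimal standalone illustration with a hypothetical value (compile
// separately):
#if 0
#include <complex>
#include <iostream>
int main()
{
  std::complex<double> a(3.0, 4.0);
  std::complex<double> z = std::abs(a) / a;  // |z| == 1
  std::cout << a * z << "\n";                // prints approximately (5,0)
  return 0;
}
#endif
// ---------------------------------------------------------------------------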
} else { rot.c() = conj(work_matrix.coeff(p,p)) / n; rot.s() = work_matrix.coeff(q,p) / n; work_matrix.applyOnTheLeft(p,q,rot); if(svd.computeU()) svd.m_matrixU.applyOnTheRight(p,q,rot.adjoint()); if(abs(numext::imag(work_matrix.coeff(p,q)))>considerAsZero) { z = abs(work_matrix.coeff(p,q)) / work_matrix.coeff(p,q); work_matrix.col(q) *= z; if(svd.computeV()) svd.m_matrixV.col(q) *= z; } if(abs(numext::imag(work_matrix.coeff(q,q)))>considerAsZero) { z = abs(work_matrix.coeff(q,q)) / work_matrix.coeff(q,q); work_matrix.row(q) *= z; if(svd.computeU()) svd.m_matrixU.col(q) *= conj(z); } } // update largest diagonal entry maxDiagEntry = numext::maxi(maxDiagEntry,numext::maxi(abs(work_matrix.coeff(p,p)), abs(work_matrix.coeff(q,q)))); // and check whether the 2x2 block is already diagonal RealScalar threshold = numext::maxi(considerAsZero, precision * maxDiagEntry); return abs(work_matrix.coeff(p,q))>threshold || abs(work_matrix.coeff(q,p)) > threshold; } }; template struct traits > : traits<_MatrixType> { typedef _MatrixType MatrixType; }; } // end namespace internal /** \ingroup SVD_Module * * * \class JacobiSVD * * \brief Two-sided Jacobi SVD decomposition of a rectangular matrix * * \tparam _MatrixType the type of the matrix of which we are computing the SVD decomposition * \tparam QRPreconditioner this optional parameter allows to specify the type of QR decomposition that will be used internally * for the R-SVD step for non-square matrices. See discussion of possible values below. * * SVD decomposition consists in decomposing any n-by-p matrix \a A as a product * \f[ A = U S V^* \f] * where \a U is a n-by-n unitary, \a V is a p-by-p unitary, and \a S is a n-by-p real positive matrix which is zero outside of its main diagonal; * the diagonal entries of S are known as the \em singular \em values of \a A and the columns of \a U and \a V are known as the left * and right \em singular \em vectors of \a A respectively. * * Singular values are always sorted in decreasing order. * * This JacobiSVD decomposition computes only the singular values by default. If you want \a U or \a V, you need to ask for them explicitly. * * You can ask for only \em thin \a U or \a V to be computed, meaning the following. In case of a rectangular n-by-p matrix, letting \a m be the * smaller value among \a n and \a p, there are only \a m singular vectors; the remaining columns of \a U and \a V do not correspond to actual * singular vectors. Asking for \em thin \a U or \a V means asking for only their \a m first columns to be formed. So \a U is then a n-by-m matrix, * and \a V is then a p-by-m matrix. Notice that thin \a U and \a V are all you need for (least squares) solving. * * Here's an example demonstrating basic usage: * \include JacobiSVD_basic.cpp * Output: \verbinclude JacobiSVD_basic.out * * This JacobiSVD class is a two-sided Jacobi R-SVD decomposition, ensuring optimal reliability and accuracy. The downside is that it's slower than * bidiagonalizing SVD algorithms for large square matrices; however its complexity is still \f$ O(n^2p) \f$ where \a n is the smaller dimension and * \a p is the greater dimension, meaning that it is still of the same order of complexity as the faster bidiagonalizing R-SVD algorithms. * In particular, like any R-SVD, it takes advantage of non-squareness in that its complexity is only linear in the greater dimension. 
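// ---------------------------------------------------------------------------
// [Editorial sketch, not part of the original Eigen sources] The referenced
// JacobiSVD_basic.cpp example is not reproduced in this archive, so here is a
// minimal usage sketch of the decomposition described above (matrix sizes and
// variable names are illustrative; compile separately against the Eigen
// headers).
#if 0
#include <Eigen/Dense>
#include <iostream>
int main()
{
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(5, 3);
  // Thin U (5x3) and thin V (3x3) are enough for least-squares work.
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
  std::cout << "singular values:\n" << svd.singularValues() << "\n";
  // Reconstruct A from its thin factors: A ~= U * diag(sigma) * V^T.
  Eigen::MatrixXd A2 = svd.matrixU() * svd.singularValues().asDiagonal()
                     * svd.matrixV().transpose();
  std::cout << "reconstruction error: " << (A - A2).norm() << "\n";
  return 0;
}
#endif
// ---------------------------------------------------------------------------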
* * If the input matrix has inf or nan coefficients, the result of the computation is undefined, but the computation is guaranteed to * terminate in finite (and reasonable) time. * * The possible values for QRPreconditioner are: * \li ColPivHouseholderQRPreconditioner is the default. In practice it's very safe. It uses column-pivoting QR. * \li FullPivHouseholderQRPreconditioner, is the safest and slowest. It uses full-pivoting QR. * Contrary to other QRs, it doesn't allow computing thin unitaries. * \li HouseholderQRPreconditioner is the fastest, and less safe and accurate than the pivoting variants. It uses non-pivoting QR. * This is very similar in safety and accuracy to the bidiagonalization process used by bidiagonalizing SVD algorithms (since bidiagonalization * is inherently non-pivoting). However the resulting SVD is still more reliable than bidiagonalizing SVDs because the Jacobi-based iterarive * process is more reliable than the optimized bidiagonal SVD iterations. * \li NoQRPreconditioner allows not to use a QR preconditioner at all. This is useful if you know that you will only be computing * JacobiSVD decompositions of square matrices. Non-square matrices require a QR preconditioner. Using this option will result in * faster compilation and smaller executable code. It won't significantly speed up computation, since JacobiSVD is always checking * if QR preconditioning is needed before applying it anyway. * * \sa MatrixBase::jacobiSvd() */ template class JacobiSVD : public SVDBase > { typedef SVDBase Base; public: typedef _MatrixType MatrixType; typedef typename MatrixType::Scalar Scalar; typedef typename NumTraits::Real RealScalar; enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, DiagSizeAtCompileTime = EIGEN_SIZE_MIN_PREFER_DYNAMIC(RowsAtCompileTime,ColsAtCompileTime), MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime, MaxDiagSizeAtCompileTime = EIGEN_SIZE_MIN_PREFER_FIXED(MaxRowsAtCompileTime,MaxColsAtCompileTime), MatrixOptions = MatrixType::Options }; typedef typename Base::MatrixUType MatrixUType; typedef typename Base::MatrixVType MatrixVType; typedef typename Base::SingularValuesType SingularValuesType; typedef typename internal::plain_row_type::type RowType; typedef typename internal::plain_col_type::type ColType; typedef Matrix WorkMatrixType; /** \brief Default Constructor. * * The default constructor is useful in cases in which the user intends to * perform decompositions via JacobiSVD::compute(const MatrixType&). */ JacobiSVD() {} /** \brief Default Constructor with memory preallocation * * Like the default constructor but with preallocation of the internal data * according to the specified problem size. * \sa JacobiSVD() */ JacobiSVD(Index rows, Index cols, unsigned int computationOptions = 0) { allocate(rows, cols, computationOptions); } /** \brief Constructor performing the decomposition of given matrix. * * \param matrix the matrix to decompose * \param computationOptions optional parameter allowing to specify if you want full or thin U or V unitaries to be computed. * By default, none is computed. This is a bit-field, the possible bits are #ComputeFullU, #ComputeThinU, * #ComputeFullV, #ComputeThinV. * * Thin unitaries are only available if your matrix type has a Dynamic number of columns (for example MatrixXf). They also are not * available with the (non-default) FullPivHouseholderQR preconditioner. 
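// ---------------------------------------------------------------------------
// [Editorial sketch, not part of the original Eigen sources] The QR
// preconditioner discussed above is selected through the second template
// parameter of JacobiSVD. A small illustration (names and sizes are
// hypothetical; compile separately against the Eigen headers):
#if 0
#include <Eigen/Dense>
int main()
{
  Eigen::MatrixXf A = Eigen::MatrixXf::Random(7, 4);
  // Default: ColPivHouseholderQRPreconditioner, thin factors allowed.
  Eigen::JacobiSVD<Eigen::MatrixXf> svd_default(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
  // Full pivoting: safest, but thin unitaries are not available, so only
  // full U/V may be requested.
  Eigen::JacobiSVD<Eigen::MatrixXf, Eigen::FullPivHouseholderQRPreconditioner>
      svd_fullpiv(A, Eigen::ComputeFullU | Eigen::ComputeFullV);
  // Non-pivoting Householder QR: fastest of the QR-based variants.
  Eigen::JacobiSVD<Eigen::MatrixXf, Eigen::HouseholderQRPreconditioner>
      svd_hqr(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
  return 0;
}
#endif
// ---------------------------------------------------------------------------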
*/ explicit JacobiSVD(const MatrixType& matrix, unsigned int computationOptions = 0) { compute(matrix, computationOptions); } /** \brief Method performing the decomposition of given matrix using custom options. * * \param matrix the matrix to decompose * \param computationOptions optional parameter allowing to specify if you want full or thin U or V unitaries to be computed. * By default, none is computed. This is a bit-field, the possible bits are #ComputeFullU, #ComputeThinU, * #ComputeFullV, #ComputeThinV. * * Thin unitaries are only available if your matrix type has a Dynamic number of columns (for example MatrixXf). They also are not * available with the (non-default) FullPivHouseholderQR preconditioner. */ JacobiSVD& compute(const MatrixType& matrix, unsigned int computationOptions); /** \brief Method performing the decomposition of given matrix using current options. * * \param matrix the matrix to decompose * * This method uses the current \a computationOptions, as already passed to the constructor or to compute(const MatrixType&, unsigned int). */ JacobiSVD& compute(const MatrixType& matrix) { return compute(matrix, m_computationOptions); } using Base::computeU; using Base::computeV; using Base::rows; using Base::cols; using Base::rank; private: void allocate(Index rows, Index cols, unsigned int computationOptions); protected: using Base::m_matrixU; using Base::m_matrixV; using Base::m_singularValues; using Base::m_info; using Base::m_isInitialized; using Base::m_isAllocated; using Base::m_usePrescribedThreshold; using Base::m_computeFullU; using Base::m_computeThinU; using Base::m_computeFullV; using Base::m_computeThinV; using Base::m_computationOptions; using Base::m_nonzeroSingularValues; using Base::m_rows; using Base::m_cols; using Base::m_diagSize; using Base::m_prescribedThreshold; WorkMatrixType m_workMatrix; template friend struct internal::svd_precondition_2x2_block_to_be_real; template friend struct internal::qr_preconditioner_impl; internal::qr_preconditioner_impl m_qr_precond_morecols; internal::qr_preconditioner_impl m_qr_precond_morerows; MatrixType m_scaledMatrix; }; template void JacobiSVD::allocate(Eigen::Index rows, Eigen::Index cols, unsigned int computationOptions) { eigen_assert(rows >= 0 && cols >= 0); if (m_isAllocated && rows == m_rows && cols == m_cols && computationOptions == m_computationOptions) { return; } m_rows = rows; m_cols = cols; m_info = Success; m_isInitialized = false; m_isAllocated = true; m_computationOptions = computationOptions; m_computeFullU = (computationOptions & ComputeFullU) != 0; m_computeThinU = (computationOptions & ComputeThinU) != 0; m_computeFullV = (computationOptions & ComputeFullV) != 0; m_computeThinV = (computationOptions & ComputeThinV) != 0; eigen_assert(!(m_computeFullU && m_computeThinU) && "JacobiSVD: you can't ask for both full and thin U"); eigen_assert(!(m_computeFullV && m_computeThinV) && "JacobiSVD: you can't ask for both full and thin V"); eigen_assert(EIGEN_IMPLIES(m_computeThinU || m_computeThinV, MatrixType::ColsAtCompileTime==Dynamic) && "JacobiSVD: thin U and V are only available when your matrix has a dynamic number of columns."); if (QRPreconditioner == FullPivHouseholderQRPreconditioner) { eigen_assert(!(m_computeThinU || m_computeThinV) && "JacobiSVD: can't compute thin U or thin V with the FullPivHouseholderQR preconditioner. 
" "Use the ColPivHouseholderQR preconditioner instead."); } m_diagSize = (std::min)(m_rows, m_cols); m_singularValues.resize(m_diagSize); if(RowsAtCompileTime==Dynamic) m_matrixU.resize(m_rows, m_computeFullU ? m_rows : m_computeThinU ? m_diagSize : 0); if(ColsAtCompileTime==Dynamic) m_matrixV.resize(m_cols, m_computeFullV ? m_cols : m_computeThinV ? m_diagSize : 0); m_workMatrix.resize(m_diagSize, m_diagSize); if(m_cols>m_rows) m_qr_precond_morecols.allocate(*this); if(m_rows>m_cols) m_qr_precond_morerows.allocate(*this); if(m_rows!=m_cols) m_scaledMatrix.resize(rows,cols); } template JacobiSVD& JacobiSVD::compute(const MatrixType& matrix, unsigned int computationOptions) { using std::abs; allocate(matrix.rows(), matrix.cols(), computationOptions); // currently we stop when we reach precision 2*epsilon as the last bit of precision can require an unreasonable number of iterations, // only worsening the precision of U and V as we accumulate more rotations const RealScalar precision = RealScalar(2) * NumTraits::epsilon(); // limit for denormal numbers to be considered zero in order to avoid infinite loops (see bug 286) const RealScalar considerAsZero = (std::numeric_limits::min)(); // Scaling factor to reduce over/under-flows RealScalar scale = matrix.cwiseAbs().template maxCoeff(); if (!(numext::isfinite)(scale)) { m_isInitialized = true; m_info = InvalidInput; return *this; } if(scale==RealScalar(0)) scale = RealScalar(1); /*** step 1. The R-SVD step: we use a QR decomposition to reduce to the case of a square matrix */ if(m_rows!=m_cols) { m_scaledMatrix = matrix / scale; m_qr_precond_morecols.run(*this, m_scaledMatrix); m_qr_precond_morerows.run(*this, m_scaledMatrix); } else { m_workMatrix = matrix.block(0,0,m_diagSize,m_diagSize) / scale; if(m_computeFullU) m_matrixU.setIdentity(m_rows,m_rows); if(m_computeThinU) m_matrixU.setIdentity(m_rows,m_diagSize); if(m_computeFullV) m_matrixV.setIdentity(m_cols,m_cols); if(m_computeThinV) m_matrixV.setIdentity(m_cols, m_diagSize); } /*** step 2. The main Jacobi SVD iteration. ***/ RealScalar maxDiagEntry = m_workMatrix.cwiseAbs().diagonal().maxCoeff(); bool finished = false; while(!finished) { finished = true; // do a sweep: for all index pairs (p,q), perform SVD of the corresponding 2x2 sub-matrix for(Index p = 1; p < m_diagSize; ++p) { for(Index q = 0; q < p; ++q) { // if this 2x2 sub-matrix is not diagonal already... // notice that this comparison will evaluate to false if any NaN is involved, ensuring that NaN's don't // keep us iterating forever. Similarly, small denormal numbers are considered zero. 
RealScalar threshold = numext::maxi(considerAsZero, precision * maxDiagEntry); if(abs(m_workMatrix.coeff(p,q))>threshold || abs(m_workMatrix.coeff(q,p)) > threshold) { finished = false; // perform SVD decomposition of 2x2 sub-matrix corresponding to indices p,q to make it diagonal // the complex to real operation returns true if the updated 2x2 block is not already diagonal if(internal::svd_precondition_2x2_block_to_be_real::run(m_workMatrix, *this, p, q, maxDiagEntry)) { JacobiRotation j_left, j_right; internal::real_2x2_jacobi_svd(m_workMatrix, p, q, &j_left, &j_right); // accumulate resulting Jacobi rotations m_workMatrix.applyOnTheLeft(p,q,j_left); if(computeU()) m_matrixU.applyOnTheRight(p,q,j_left.transpose()); m_workMatrix.applyOnTheRight(p,q,j_right); if(computeV()) m_matrixV.applyOnTheRight(p,q,j_right); // keep track of the largest diagonal coefficient maxDiagEntry = numext::maxi(maxDiagEntry,numext::maxi(abs(m_workMatrix.coeff(p,p)), abs(m_workMatrix.coeff(q,q)))); } } } } } /*** step 3. The work matrix is now diagonal, so ensure it's positive so its diagonal entries are the singular values ***/ for(Index i = 0; i < m_diagSize; ++i) { // For a complex matrix, some diagonal coefficients might note have been // treated by svd_precondition_2x2_block_to_be_real, and the imaginary part // of some diagonal entry might not be null. if(NumTraits::IsComplex && abs(numext::imag(m_workMatrix.coeff(i,i)))>considerAsZero) { RealScalar a = abs(m_workMatrix.coeff(i,i)); m_singularValues.coeffRef(i) = abs(a); if(computeU()) m_matrixU.col(i) *= m_workMatrix.coeff(i,i)/a; } else { // m_workMatrix.coeff(i,i) is already real, no difficulty: RealScalar a = numext::real(m_workMatrix.coeff(i,i)); m_singularValues.coeffRef(i) = abs(a); if(computeU() && (a JacobiSVD::PlainObject> MatrixBase::jacobiSvd(unsigned int computationOptions) const { return JacobiSVD(*this, computationOptions); } } // end namespace Eigen #endif // EIGEN_JACOBISVD_H piqp-octave/src/eigen/Eigen/src/SVD/UpperBidiagonalization.h0000644000175100001770000003712514107270226023630 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2010 Benoit Jacob // Copyright (C) 2013-2014 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_BIDIAGONALIZATION_H #define EIGEN_BIDIAGONALIZATION_H namespace Eigen { namespace internal { // UpperBidiagonalization will probably be replaced by a Bidiagonalization class, don't want to make it stable API. // At the same time, it's useful to keep for now as it's about the only thing that is testing the BandMatrix class. 
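// ---------------------------------------------------------------------------
// [Editorial sketch, not part of the original Eigen sources] The
// MatrixBase::jacobiSvd() convenience method defined at the end of
// JacobiSVD.h above, combined with SVDBase::solve(), gives a one-line
// least-squares solver. Illustrative values only; compile separately against
// the Eigen headers.
#if 0
#include <Eigen/Dense>
#include <iostream>
int main()
{
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(6, 3);
  Eigen::VectorXd b = Eigen::VectorXd::Random(6);
  // Thin U and V suffice for (least-squares) solving.
  Eigen::VectorXd x =
      A.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(b);
  std::cout << "least-squares residual: " << (A * x - b).norm() << "\n";
  return 0;
}
#endif
// ---------------------------------------------------------------------------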
template class UpperBidiagonalization { public: typedef _MatrixType MatrixType; enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, ColsAtCompileTimeMinusOne = internal::decrement_size::ret }; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 typedef Matrix RowVectorType; typedef Matrix ColVectorType; typedef BandMatrix BidiagonalType; typedef Matrix DiagVectorType; typedef Matrix SuperDiagVectorType; typedef HouseholderSequence< const MatrixType, const typename internal::remove_all::ConjugateReturnType>::type > HouseholderUSequenceType; typedef HouseholderSequence< const typename internal::remove_all::type, Diagonal, OnTheRight > HouseholderVSequenceType; /** * \brief Default Constructor. * * The default constructor is useful in cases in which the user intends to * perform decompositions via Bidiagonalization::compute(const MatrixType&). */ UpperBidiagonalization() : m_householder(), m_bidiagonal(), m_isInitialized(false) {} explicit UpperBidiagonalization(const MatrixType& matrix) : m_householder(matrix.rows(), matrix.cols()), m_bidiagonal(matrix.cols(), matrix.cols()), m_isInitialized(false) { compute(matrix); } UpperBidiagonalization& compute(const MatrixType& matrix); UpperBidiagonalization& computeUnblocked(const MatrixType& matrix); const MatrixType& householder() const { return m_householder; } const BidiagonalType& bidiagonal() const { return m_bidiagonal; } const HouseholderUSequenceType householderU() const { eigen_assert(m_isInitialized && "UpperBidiagonalization is not initialized."); return HouseholderUSequenceType(m_householder, m_householder.diagonal().conjugate()); } const HouseholderVSequenceType householderV() // const here gives nasty errors and i'm lazy { eigen_assert(m_isInitialized && "UpperBidiagonalization is not initialized."); return HouseholderVSequenceType(m_householder.conjugate(), m_householder.const_derived().template diagonal<1>()) .setLength(m_householder.cols()-1) .setShift(1); } protected: MatrixType m_householder; BidiagonalType m_bidiagonal; bool m_isInitialized; }; // Standard upper bidiagonalization without fancy optimizations // This version should be faster for small matrix size template void upperbidiagonalization_inplace_unblocked(MatrixType& mat, typename MatrixType::RealScalar *diagonal, typename MatrixType::RealScalar *upper_diagonal, typename MatrixType::Scalar* tempData = 0) { typedef typename MatrixType::Scalar Scalar; Index rows = mat.rows(); Index cols = mat.cols(); typedef Matrix TempType; TempType tempVector; if(tempData==0) { tempVector.resize(rows); tempData = tempVector.data(); } for (Index k = 0; /* breaks at k==cols-1 below */ ; ++k) { Index remainingRows = rows - k; Index remainingCols = cols - k - 1; // construct left householder transform in-place in A mat.col(k).tail(remainingRows) .makeHouseholderInPlace(mat.coeffRef(k,k), diagonal[k]); // apply householder transform to remaining part of A on the left mat.bottomRightCorner(remainingRows, remainingCols) .applyHouseholderOnTheLeft(mat.col(k).tail(remainingRows-1), mat.coeff(k,k), tempData); if(k == cols-1) break; // construct right householder transform in-place in mat mat.row(k).tail(remainingCols) .makeHouseholderInPlace(mat.coeffRef(k,k+1), upper_diagonal[k]); // apply householder transform to remaining part of mat on the left mat.bottomRightCorner(remainingRows-1, remainingCols) 
.applyHouseholderOnTheRight(mat.row(k).tail(remainingCols-1).adjoint(), mat.coeff(k,k+1), tempData); } } /** \internal * Helper routine for the block reduction to upper bidiagonal form. * * Let's partition the matrix A: * * | A00 A01 | * A = | | * | A10 A11 | * * This function reduces to bidiagonal form the left \c rows x \a blockSize vertical panel [A00/A10] * and the \a blockSize x \c cols horizontal panel [A00 A01] of the matrix \a A. The bottom-right block A11 * is updated using matrix-matrix products: * A22 -= V * Y^T - X * U^T * where V and U contains the left and right Householder vectors. U and V are stored in A10, and A01 * respectively, and the update matrices X and Y are computed during the reduction. * */ template void upperbidiagonalization_blocked_helper(MatrixType& A, typename MatrixType::RealScalar *diagonal, typename MatrixType::RealScalar *upper_diagonal, Index bs, Ref::Flags & RowMajorBit> > X, Ref::Flags & RowMajorBit> > Y) { typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef typename NumTraits::Literal Literal; enum { StorageOrder = traits::Flags & RowMajorBit }; typedef InnerStride ColInnerStride; typedef InnerStride RowInnerStride; typedef Ref, 0, ColInnerStride> SubColumnType; typedef Ref, 0, RowInnerStride> SubRowType; typedef Ref > SubMatType; Index brows = A.rows(); Index bcols = A.cols(); Scalar tau_u, tau_u_prev(0), tau_v; for(Index k = 0; k < bs; ++k) { Index remainingRows = brows - k; Index remainingCols = bcols - k - 1; SubMatType X_k1( X.block(k,0, remainingRows,k) ); SubMatType V_k1( A.block(k,0, remainingRows,k) ); // 1 - update the k-th column of A SubColumnType v_k = A.col(k).tail(remainingRows); v_k -= V_k1 * Y.row(k).head(k).adjoint(); if(k) v_k -= X_k1 * A.col(k).head(k); // 2 - construct left Householder transform in-place v_k.makeHouseholderInPlace(tau_v, diagonal[k]); if(k+10) A.coeffRef(k-1,k) = tau_u_prev; tau_u_prev = tau_u; } else A.coeffRef(k-1,k) = tau_u_prev; A.coeffRef(k,k) = tau_v; } if(bsbs && brows>bs) { SubMatType A11( A.bottomRightCorner(brows-bs,bcols-bs) ); SubMatType A10( A.block(bs,0, brows-bs,bs) ); SubMatType A01( A.block(0,bs, bs,bcols-bs) ); Scalar tmp = A01(bs-1,0); A01(bs-1,0) = Literal(1); A11.noalias() -= A10 * Y.topLeftCorner(bcols,bs).bottomRows(bcols-bs).adjoint(); A11.noalias() -= X.topLeftCorner(brows,bs).bottomRows(brows-bs) * A01; A01(bs-1,0) = tmp; } } /** \internal * * Implementation of a block-bidiagonal reduction. * It is based on the following paper: * The Design of a Parallel Dense Linear Algebra Software Library: Reduction to Hessenberg, Tridiagonal, and Bidiagonal Form. * by Jaeyoung Choi, Jack J. Dongarra, David W. Walker. 
(1995) * section 3.3 */ template void upperbidiagonalization_inplace_blocked(MatrixType& A, BidiagType& bidiagonal, Index maxBlockSize=32, typename MatrixType::Scalar* /*tempData*/ = 0) { typedef typename MatrixType::Scalar Scalar; typedef Block BlockType; Index rows = A.rows(); Index cols = A.cols(); Index size = (std::min)(rows, cols); // X and Y are work space enum { StorageOrder = traits::Flags & RowMajorBit }; Matrix X(rows,maxBlockSize); Matrix Y(cols,maxBlockSize); Index blockSize = (std::min)(maxBlockSize,size); Index k = 0; for(k = 0; k < size; k += blockSize) { Index bs = (std::min)(size-k,blockSize); // actual size of the block Index brows = rows - k; // rows of the block Index bcols = cols - k; // columns of the block // partition the matrix A: // // | A00 A01 A02 | // | | // A = | A10 A11 A12 | // | | // | A20 A21 A22 | // // where A11 is a bs x bs diagonal block, // and let: // | A11 A12 | // B = | | // | A21 A22 | BlockType B = A.block(k,k,brows,bcols); // This stage performs the bidiagonalization of A11, A21, A12, and updating of A22. // Finally, the algorithm continue on the updated A22. // // However, if B is too small, or A22 empty, then let's use an unblocked strategy if(k+bs==cols || bcols<48) // somewhat arbitrary threshold { upperbidiagonalization_inplace_unblocked(B, &(bidiagonal.template diagonal<0>().coeffRef(k)), &(bidiagonal.template diagonal<1>().coeffRef(k)), X.data() ); break; // We're done } else { upperbidiagonalization_blocked_helper( B, &(bidiagonal.template diagonal<0>().coeffRef(k)), &(bidiagonal.template diagonal<1>().coeffRef(k)), bs, X.topLeftCorner(brows,bs), Y.topLeftCorner(bcols,bs) ); } } } template UpperBidiagonalization<_MatrixType>& UpperBidiagonalization<_MatrixType>::computeUnblocked(const _MatrixType& matrix) { Index rows = matrix.rows(); Index cols = matrix.cols(); EIGEN_ONLY_USED_FOR_DEBUG(cols); eigen_assert(rows >= cols && "UpperBidiagonalization is only for Arices satisfying rows>=cols."); m_householder = matrix; ColVectorType temp(rows); upperbidiagonalization_inplace_unblocked(m_householder, &(m_bidiagonal.template diagonal<0>().coeffRef(0)), &(m_bidiagonal.template diagonal<1>().coeffRef(0)), temp.data()); m_isInitialized = true; return *this; } template UpperBidiagonalization<_MatrixType>& UpperBidiagonalization<_MatrixType>::compute(const _MatrixType& matrix) { Index rows = matrix.rows(); Index cols = matrix.cols(); EIGEN_ONLY_USED_FOR_DEBUG(rows); EIGEN_ONLY_USED_FOR_DEBUG(cols); eigen_assert(rows >= cols && "UpperBidiagonalization is only for Arices satisfying rows>=cols."); m_householder = matrix; upperbidiagonalization_inplace_blocked(m_householder, m_bidiagonal); m_isInitialized = true; return *this; } #if 0 /** \return the Householder QR decomposition of \c *this. * * \sa class Bidiagonalization */ template const UpperBidiagonalization::PlainObject> MatrixBase::bidiagonalization() const { return UpperBidiagonalization(eval()); } #endif } // end namespace internal } // end namespace Eigen #endif // EIGEN_BIDIAGONALIZATION_H piqp-octave/src/eigen/Eigen/src/SVD/SVDBase.h0000644000175100001770000003462714107270226020420 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. 
// // Copyright (C) 2009-2010 Benoit Jacob // Copyright (C) 2014 Gael Guennebaud // // Copyright (C) 2013 Gauthier Brun // Copyright (C) 2013 Nicolas Carre // Copyright (C) 2013 Jean Ceccato // Copyright (C) 2013 Pierre Zoppitelli // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SVDBASE_H #define EIGEN_SVDBASE_H namespace Eigen { namespace internal { template struct traits > : traits { typedef MatrixXpr XprKind; typedef SolverStorage StorageKind; typedef int StorageIndex; enum { Flags = 0 }; }; } /** \ingroup SVD_Module * * * \class SVDBase * * \brief Base class of SVD algorithms * * \tparam Derived the type of the actual SVD decomposition * * SVD decomposition consists in decomposing any n-by-p matrix \a A as a product * \f[ A = U S V^* \f] * where \a U is a n-by-n unitary, \a V is a p-by-p unitary, and \a S is a n-by-p real positive matrix which is zero outside of its main diagonal; * the diagonal entries of S are known as the \em singular \em values of \a A and the columns of \a U and \a V are known as the left * and right \em singular \em vectors of \a A respectively. * * Singular values are always sorted in decreasing order. * * * You can ask for only \em thin \a U or \a V to be computed, meaning the following. In case of a rectangular n-by-p matrix, letting \a m be the * smaller value among \a n and \a p, there are only \a m singular vectors; the remaining columns of \a U and \a V do not correspond to actual * singular vectors. Asking for \em thin \a U or \a V means asking for only their \a m first columns to be formed. So \a U is then a n-by-m matrix, * and \a V is then a p-by-m matrix. Notice that thin \a U and \a V are all you need for (least squares) solving. * * The status of the computation can be retrived using the \a info() method. Unless \a info() returns \a Success, the results should be not * considered well defined. * * If the input matrix has inf or nan coefficients, the result of the computation is undefined, and \a info() will return \a InvalidInput, but the computation is guaranteed to * terminate in finite (and reasonable) time. * \sa class BDCSVD, class JacobiSVD */ template class SVDBase : public SolverBase > { public: template friend struct internal::solve_assertion; typedef typename internal::traits::MatrixType MatrixType; typedef typename MatrixType::Scalar Scalar; typedef typename NumTraits::Real RealScalar; typedef typename Eigen::internal::traits::StorageIndex StorageIndex; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, DiagSizeAtCompileTime = EIGEN_SIZE_MIN_PREFER_DYNAMIC(RowsAtCompileTime,ColsAtCompileTime), MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime, MaxDiagSizeAtCompileTime = EIGEN_SIZE_MIN_PREFER_FIXED(MaxRowsAtCompileTime,MaxColsAtCompileTime), MatrixOptions = MatrixType::Options }; typedef Matrix MatrixUType; typedef Matrix MatrixVType; typedef typename internal::plain_diag_type::type SingularValuesType; Derived& derived() { return *static_cast(this); } const Derived& derived() const { return *static_cast(this); } /** \returns the \a U matrix. 
* * For the SVD decomposition of a n-by-p matrix, letting \a m be the minimum of \a n and \a p, * the U matrix is n-by-n if you asked for \link Eigen::ComputeFullU ComputeFullU \endlink, and is n-by-m if you asked for \link Eigen::ComputeThinU ComputeThinU \endlink. * * The \a m first columns of \a U are the left singular vectors of the matrix being decomposed. * * This method asserts that you asked for \a U to be computed. */ const MatrixUType& matrixU() const { _check_compute_assertions(); eigen_assert(computeU() && "This SVD decomposition didn't compute U. Did you ask for it?"); return m_matrixU; } /** \returns the \a V matrix. * * For the SVD decomposition of a n-by-p matrix, letting \a m be the minimum of \a n and \a p, * the V matrix is p-by-p if you asked for \link Eigen::ComputeFullV ComputeFullV \endlink, and is p-by-m if you asked for \link Eigen::ComputeThinV ComputeThinV \endlink. * * The \a m first columns of \a V are the right singular vectors of the matrix being decomposed. * * This method asserts that you asked for \a V to be computed. */ const MatrixVType& matrixV() const { _check_compute_assertions(); eigen_assert(computeV() && "This SVD decomposition didn't compute V. Did you ask for it?"); return m_matrixV; } /** \returns the vector of singular values. * * For the SVD decomposition of a n-by-p matrix, letting \a m be the minimum of \a n and \a p, the * returned vector has size \a m. Singular values are always sorted in decreasing order. */ const SingularValuesType& singularValues() const { _check_compute_assertions(); return m_singularValues; } /** \returns the number of singular values that are not exactly 0 */ Index nonzeroSingularValues() const { _check_compute_assertions(); return m_nonzeroSingularValues; } /** \returns the rank of the matrix of which \c *this is the SVD. * * \note This method has to determine which singular values should be considered nonzero. * For that, it uses the threshold value that you can control by calling * setThreshold(const RealScalar&). */ inline Index rank() const { using std::abs; _check_compute_assertions(); if(m_singularValues.size()==0) return 0; RealScalar premultiplied_threshold = numext::maxi(m_singularValues.coeff(0) * threshold(), (std::numeric_limits::min)()); Index i = m_nonzeroSingularValues-1; while(i>=0 && m_singularValues.coeff(i) < premultiplied_threshold) --i; return i+1; } /** Allows to prescribe a threshold to be used by certain methods, such as rank() and solve(), * which need to determine when singular values are to be considered nonzero. * This is not used for the SVD decomposition itself. * * When it needs to get the threshold value, Eigen calls threshold(). * The default is \c NumTraits::epsilon() * * \param threshold The new value to use as the threshold. * * A singular value will be considered nonzero if its value is strictly greater than * \f$ \vert singular value \vert \leqslant threshold \times \vert max singular value \vert \f$. * * If you want to come back to the default behavior, call setThreshold(Default_t) */ Derived& setThreshold(const RealScalar& threshold) { m_usePrescribedThreshold = true; m_prescribedThreshold = threshold; return derived(); } /** Allows to come back to the default behavior, letting Eigen use its default formula for * determining the threshold. * * You should pass the special object Eigen::Default as parameter here. * \code svd.setThreshold(Eigen::Default); \endcode * * See the documentation of setThreshold(const RealScalar&). 
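// ---------------------------------------------------------------------------
// [Editorial sketch, not part of the original Eigen sources] rank() and
// setThreshold(), documented above, can be combined as follows (the matrix
// below is a hypothetical rank-2 example; compile separately against the
// Eigen headers).
#if 0
#include <Eigen/Dense>
#include <iostream>
int main()
{
  // A 4x4 matrix built from two outer products, hence of rank 2.
  Eigen::MatrixXd A =
      Eigen::Vector4d(1, 2, 3, 4) * Eigen::RowVector4d(1, 1, 1, 1)
    + Eigen::Vector4d(1, 0, 0, 0) * Eigen::RowVector4d(0, 1, 0, 0);
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(A);     // singular values only
  std::cout << "default rank: " << svd.rank() << "\n";            // expected: 2
  // With a coarse relative threshold the small second singular value is
  // treated as zero.
  svd.setThreshold(0.5);
  std::cout << "rank with threshold 0.5: " << svd.rank() << "\n"; // expected: 1
  // Back to the default rank-determination formula.
  svd.setThreshold(Eigen::Default);
  return 0;
}
#endif
// ---------------------------------------------------------------------------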
*/ Derived& setThreshold(Default_t) { m_usePrescribedThreshold = false; return derived(); } /** Returns the threshold that will be used by certain methods such as rank(). * * See the documentation of setThreshold(const RealScalar&). */ RealScalar threshold() const { eigen_assert(m_isInitialized || m_usePrescribedThreshold); // this temporary is needed to workaround a MSVC issue Index diagSize = (std::max)(1,m_diagSize); return m_usePrescribedThreshold ? m_prescribedThreshold : RealScalar(diagSize)*NumTraits::epsilon(); } /** \returns true if \a U (full or thin) is asked for in this SVD decomposition */ inline bool computeU() const { return m_computeFullU || m_computeThinU; } /** \returns true if \a V (full or thin) is asked for in this SVD decomposition */ inline bool computeV() const { return m_computeFullV || m_computeThinV; } inline Index rows() const { return m_rows; } inline Index cols() const { return m_cols; } #ifdef EIGEN_PARSED_BY_DOXYGEN /** \returns a (least squares) solution of \f$ A x = b \f$ using the current SVD decomposition of A. * * \param b the right-hand-side of the equation to solve. * * \note Solving requires both U and V to be computed. Thin U and V are enough, there is no need for full U or V. * * \note SVD solving is implicitly least-squares. Thus, this method serves both purposes of exact solving and least-squares solving. * In other words, the returned solution is guaranteed to minimize the Euclidean norm \f$ \Vert A x - b \Vert \f$. */ template inline const Solve solve(const MatrixBase& b) const; #endif /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful. */ EIGEN_DEVICE_FUNC ComputationInfo info() const { eigen_assert(m_isInitialized && "SVD is not initialized."); return m_info; } #ifndef EIGEN_PARSED_BY_DOXYGEN template void _solve_impl(const RhsType &rhs, DstType &dst) const; template void _solve_impl_transposed(const RhsType &rhs, DstType &dst) const; #endif protected: static void check_template_parameters() { EIGEN_STATIC_ASSERT_NON_INTEGER(Scalar); } void _check_compute_assertions() const { eigen_assert(m_isInitialized && "SVD is not initialized."); } template void _check_solve_assertion(const Rhs& b) const { EIGEN_ONLY_USED_FOR_DEBUG(b); _check_compute_assertions(); eigen_assert(computeU() && computeV() && "SVDBase::solve(): Both unitaries U and V are required to be computed (thin unitaries suffice)."); eigen_assert((Transpose_?cols():rows())==b.rows() && "SVDBase::solve(): invalid number of rows of the right hand side matrix b"); } // return true if already allocated bool allocate(Index rows, Index cols, unsigned int computationOptions) ; MatrixUType m_matrixU; MatrixVType m_matrixV; SingularValuesType m_singularValues; ComputationInfo m_info; bool m_isInitialized, m_isAllocated, m_usePrescribedThreshold; bool m_computeFullU, m_computeThinU; bool m_computeFullV, m_computeThinV; unsigned int m_computationOptions; Index m_nonzeroSingularValues, m_rows, m_cols, m_diagSize; RealScalar m_prescribedThreshold; /** \brief Default Constructor. 
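// ---------------------------------------------------------------------------
// [Editorial sketch, not part of the original Eigen sources] For a matrix of
// full column rank, SVDBase::solve() is equivalent to the explicit formula
// x = V * S^{-1} * U^* * b used by _solve_impl() further below. Illustrative
// values only; compile separately against the Eigen headers.
#if 0
#include <Eigen/Dense>
#include <iostream>
int main()
{
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(6, 3);   // full column rank (with probability one)
  Eigen::VectorXd b = Eigen::VectorXd::Random(6);
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
  Eigen::VectorXd x1 = svd.solve(b);                    // least-squares solution
  // The same computation written out step by step:
  Eigen::VectorXd tmp = svd.matrixU().adjoint() * b;
  tmp = svd.singularValues().asDiagonal().inverse() * tmp;
  Eigen::VectorXd x2 = svd.matrixV() * tmp;
  std::cout << (x1 - x2).norm() << "\n";                // expected: ~0
  return 0;
}
#endif
// ---------------------------------------------------------------------------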
* * Default constructor of SVDBase */ SVDBase() : m_info(Success), m_isInitialized(false), m_isAllocated(false), m_usePrescribedThreshold(false), m_computeFullU(false), m_computeThinU(false), m_computeFullV(false), m_computeThinV(false), m_computationOptions(0), m_rows(-1), m_cols(-1), m_diagSize(0) { check_template_parameters(); } }; #ifndef EIGEN_PARSED_BY_DOXYGEN template template void SVDBase::_solve_impl(const RhsType &rhs, DstType &dst) const { // A = U S V^* // So A^{-1} = V S^{-1} U^* Matrix tmp; Index l_rank = rank(); tmp.noalias() = m_matrixU.leftCols(l_rank).adjoint() * rhs; tmp = m_singularValues.head(l_rank).asDiagonal().inverse() * tmp; dst = m_matrixV.leftCols(l_rank) * tmp; } template template void SVDBase::_solve_impl_transposed(const RhsType &rhs, DstType &dst) const { // A = U S V^* // So A^{-*} = U S^{-1} V^* // And A^{-T} = U_conj S^{-1} V^T Matrix tmp; Index l_rank = rank(); tmp.noalias() = m_matrixV.leftCols(l_rank).transpose().template conjugateIf() * rhs; tmp = m_singularValues.head(l_rank).asDiagonal().inverse() * tmp; dst = m_matrixU.template conjugateIf().leftCols(l_rank) * tmp; } #endif template bool SVDBase::allocate(Index rows, Index cols, unsigned int computationOptions) { eigen_assert(rows >= 0 && cols >= 0); if (m_isAllocated && rows == m_rows && cols == m_cols && computationOptions == m_computationOptions) { return true; } m_rows = rows; m_cols = cols; m_info = Success; m_isInitialized = false; m_isAllocated = true; m_computationOptions = computationOptions; m_computeFullU = (computationOptions & ComputeFullU) != 0; m_computeThinU = (computationOptions & ComputeThinU) != 0; m_computeFullV = (computationOptions & ComputeFullV) != 0; m_computeThinV = (computationOptions & ComputeThinV) != 0; eigen_assert(!(m_computeFullU && m_computeThinU) && "SVDBase: you can't ask for both full and thin U"); eigen_assert(!(m_computeFullV && m_computeThinV) && "SVDBase: you can't ask for both full and thin V"); eigen_assert(EIGEN_IMPLIES(m_computeThinU || m_computeThinV, MatrixType::ColsAtCompileTime==Dynamic) && "SVDBase: thin U and V are only available when your matrix has a dynamic number of columns."); m_diagSize = (std::min)(m_rows, m_cols); m_singularValues.resize(m_diagSize); if(RowsAtCompileTime==Dynamic) m_matrixU.resize(m_rows, m_computeFullU ? m_rows : m_computeThinU ? m_diagSize : 0); if(ColsAtCompileTime==Dynamic) m_matrixV.resize(m_cols, m_computeFullV ? m_cols : m_computeThinV ? m_diagSize : 0); return false; } }// end namespace #endif // EIGEN_SVDBASE_H piqp-octave/src/eigen/Eigen/src/SVD/JacobiSVD_LAPACKE.h0000644000175100001770000001175314107270226022050 0ustar runnerdocker/* Copyright (c) 2011, Intel Corporation. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Intel Corporation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ******************************************************************************** * Content : Eigen bindings to LAPACKe * Singular Value Decomposition - SVD. ******************************************************************************** */ #ifndef EIGEN_JACOBISVD_LAPACKE_H #define EIGEN_JACOBISVD_LAPACKE_H namespace Eigen { /** \internal Specialization for the data types supported by LAPACKe */ #define EIGEN_LAPACKE_SVD(EIGTYPE, LAPACKE_TYPE, LAPACKE_RTYPE, LAPACKE_PREFIX, EIGCOLROW, LAPACKE_COLROW) \ template<> inline \ JacobiSVD, ColPivHouseholderQRPreconditioner>& \ JacobiSVD, ColPivHouseholderQRPreconditioner>::compute(const Matrix& matrix, unsigned int computationOptions) \ { \ typedef Matrix MatrixType; \ /*typedef MatrixType::Scalar Scalar;*/ \ /*typedef MatrixType::RealScalar RealScalar;*/ \ allocate(matrix.rows(), matrix.cols(), computationOptions); \ \ /*const RealScalar precision = RealScalar(2) * NumTraits::epsilon();*/ \ m_nonzeroSingularValues = m_diagSize; \ \ lapack_int lda = internal::convert_index(matrix.outerStride()), ldu, ldvt; \ lapack_int matrix_order = LAPACKE_COLROW; \ char jobu, jobvt; \ LAPACKE_TYPE *u, *vt, dummy; \ jobu = (m_computeFullU) ? 'A' : (m_computeThinU) ? 'S' : 'N'; \ jobvt = (m_computeFullV) ? 'A' : (m_computeThinV) ? 'S' : 'N'; \ if (computeU()) { \ ldu = internal::convert_index(m_matrixU.outerStride()); \ u = (LAPACKE_TYPE*)m_matrixU.data(); \ } else { ldu=1; u=&dummy; }\ MatrixType localV; \ lapack_int vt_rows = (m_computeFullV) ? internal::convert_index(m_cols) : (m_computeThinV) ? internal::convert_index(m_diagSize) : 1; \ if (computeV()) { \ localV.resize(vt_rows, m_cols); \ ldvt = internal::convert_index(localV.outerStride()); \ vt = (LAPACKE_TYPE*)localV.data(); \ } else { ldvt=1; vt=&dummy; }\ Matrix superb; superb.resize(m_diagSize, 1); \ MatrixType m_temp; m_temp = matrix; \ LAPACKE_##LAPACKE_PREFIX##gesvd( matrix_order, jobu, jobvt, internal::convert_index(m_rows), internal::convert_index(m_cols), (LAPACKE_TYPE*)m_temp.data(), lda, (LAPACKE_RTYPE*)m_singularValues.data(), u, ldu, vt, ldvt, superb.data()); \ if (computeV()) m_matrixV = localV.adjoint(); \ /* for(int i=0;i struct lapacke_llt; #define EIGEN_LAPACKE_LLT(EIGTYPE, BLASTYPE, LAPACKE_PREFIX) \ template<> struct lapacke_llt \ { \ template \ static inline Index potrf(MatrixType& m, char uplo) \ { \ lapack_int matrix_order; \ lapack_int size, lda, info, StorageOrder; \ EIGTYPE* a; \ eigen_assert(m.rows()==m.cols()); \ /* Set up parameters for ?potrf */ \ size = convert_index(m.rows()); \ StorageOrder = MatrixType::Flags&RowMajorBit?RowMajor:ColMajor; \ matrix_order = StorageOrder==RowMajor ? 
LAPACK_ROW_MAJOR : LAPACK_COL_MAJOR; \ a = &(m.coeffRef(0,0)); \ lda = convert_index(m.outerStride()); \ \ info = LAPACKE_##LAPACKE_PREFIX##potrf( matrix_order, uplo, size, (BLASTYPE*)a, lda ); \ info = (info==0) ? -1 : info>0 ? info-1 : size; \ return info; \ } \ }; \ template<> struct llt_inplace \ { \ template \ static Index blocked(MatrixType& m) \ { \ return lapacke_llt::potrf(m, 'L'); \ } \ template \ static Index rankUpdate(MatrixType& mat, const VectorType& vec, const typename MatrixType::RealScalar& sigma) \ { return Eigen::internal::llt_rank_update_lower(mat, vec, sigma); } \ }; \ template<> struct llt_inplace \ { \ template \ static Index blocked(MatrixType& m) \ { \ return lapacke_llt::potrf(m, 'U'); \ } \ template \ static Index rankUpdate(MatrixType& mat, const VectorType& vec, const typename MatrixType::RealScalar& sigma) \ { \ Transpose matt(mat); \ return llt_inplace::rankUpdate(matt, vec.conjugate(), sigma); \ } \ }; EIGEN_LAPACKE_LLT(double, double, d) EIGEN_LAPACKE_LLT(float, float, s) EIGEN_LAPACKE_LLT(dcomplex, lapack_complex_double, z) EIGEN_LAPACKE_LLT(scomplex, lapack_complex_float, c) } // end namespace internal } // end namespace Eigen #endif // EIGEN_LLT_LAPACKE_H piqp-octave/src/eigen/Eigen/src/Cholesky/LLT.h0000644000175100001770000004451014107270226020741 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_LLT_H #define EIGEN_LLT_H namespace Eigen { namespace internal{ template struct traits > : traits<_MatrixType> { typedef MatrixXpr XprKind; typedef SolverStorage StorageKind; typedef int StorageIndex; enum { Flags = 0 }; }; template struct LLT_Traits; } /** \ingroup Cholesky_Module * * \class LLT * * \brief Standard Cholesky decomposition (LL^T) of a matrix and associated features * * \tparam _MatrixType the type of the matrix of which we are computing the LL^T Cholesky decomposition * \tparam _UpLo the triangular part that will be used for the decompositon: Lower (default) or Upper. * The other triangular part won't be read. * * This class performs a LL^T Cholesky decomposition of a symmetric, positive definite * matrix A such that A = LL^* = U^*U, where L is lower triangular. * * While the Cholesky decomposition is particularly useful to solve selfadjoint problems like D^*D x = b, * for that purpose, we recommend the Cholesky decomposition without square root which is more stable * and even faster. Nevertheless, this standard Cholesky decomposition remains useful in many other * situations like generalised eigen problems with hermitian matrices. * * Remember that Cholesky decompositions are not rank-revealing. This LLT decomposition is only stable on positive definite matrices, * use LDLT instead for the semidefinite case. Also, do not use a Cholesky decomposition to determine whether a system of equations * has a solution. * * Example: \include LLT_example.cpp * Output: \verbinclude LLT_example.out * * \b Performance: for best performance, it is recommended to use a column-major storage format * with the Lower triangular part (the default), or, equivalently, a row-major storage format * with the Upper triangular part. Otherwise, you might get a 20% slowdown for the full factorization * step, and rank-updates can be up to 3 times slower. 
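// ---------------------------------------------------------------------------
// [Editorial sketch, not part of the original Eigen sources] The referenced
// LLT_example.cpp is not reproduced in this archive, so here is a minimal
// usage sketch of the factorization described above (matrix values are
// hypothetical; compile separately against the Eigen headers).
#if 0
#include <Eigen/Dense>
#include <iostream>
int main()
{
  // Build a symmetric positive definite matrix: A = M^T M + I.
  Eigen::MatrixXd M = Eigen::MatrixXd::Random(4, 4);
  Eigen::MatrixXd A = M.transpose() * M + Eigen::MatrixXd::Identity(4, 4);
  Eigen::VectorXd b = Eigen::VectorXd::Random(4);
  Eigen::LLT<Eigen::MatrixXd> llt(A);       // factorize A = L * L^T
  Eigen::VectorXd x = llt.solve(b);         // solve A x = b with the factors
  Eigen::MatrixXd L = llt.matrixL();        // dense copy of the L factor
  std::cout << "residual: " << (A * x - b).norm() << "\n";
  std::cout << "refactorization error: " << (L * L.transpose() - A).norm() << "\n";
  return 0;
}
#endif
// ---------------------------------------------------------------------------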
* * This class supports the \link InplaceDecomposition inplace decomposition \endlink mechanism. * * Note that during the decomposition, only the lower (or upper, as defined by _UpLo) triangular part of A is considered. * Therefore, the strict lower part does not have to store correct values. * * \sa MatrixBase::llt(), SelfAdjointView::llt(), class LDLT */ template class LLT : public SolverBase > { public: typedef _MatrixType MatrixType; typedef SolverBase Base; friend class SolverBase; EIGEN_GENERIC_PUBLIC_INTERFACE(LLT) enum { MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; enum { PacketSize = internal::packet_traits::size, AlignmentMask = int(PacketSize)-1, UpLo = _UpLo }; typedef internal::LLT_Traits Traits; /** * \brief Default Constructor. * * The default constructor is useful in cases in which the user intends to * perform decompositions via LLT::compute(const MatrixType&). */ LLT() : m_matrix(), m_isInitialized(false) {} /** \brief Default Constructor with memory preallocation * * Like the default constructor but with preallocation of the internal data * according to the specified problem \a size. * \sa LLT() */ explicit LLT(Index size) : m_matrix(size, size), m_isInitialized(false) {} template explicit LLT(const EigenBase& matrix) : m_matrix(matrix.rows(), matrix.cols()), m_isInitialized(false) { compute(matrix.derived()); } /** \brief Constructs a LLT factorization from a given matrix * * This overloaded constructor is provided for \link InplaceDecomposition inplace decomposition \endlink when * \c MatrixType is a Eigen::Ref. * * \sa LLT(const EigenBase&) */ template explicit LLT(EigenBase& matrix) : m_matrix(matrix.derived()), m_isInitialized(false) { compute(matrix.derived()); } /** \returns a view of the upper triangular matrix U */ inline typename Traits::MatrixU matrixU() const { eigen_assert(m_isInitialized && "LLT is not initialized."); return Traits::getU(m_matrix); } /** \returns a view of the lower triangular matrix L */ inline typename Traits::MatrixL matrixL() const { eigen_assert(m_isInitialized && "LLT is not initialized."); return Traits::getL(m_matrix); } #ifdef EIGEN_PARSED_BY_DOXYGEN /** \returns the solution x of \f$ A x = b \f$ using the current decomposition of A. * * Since this LLT class assumes anyway that the matrix A is invertible, the solution * theoretically exists and is unique regardless of b. * * Example: \include LLT_solve.cpp * Output: \verbinclude LLT_solve.out * * \sa solveInPlace(), MatrixBase::llt(), SelfAdjointView::llt() */ template inline const Solve solve(const MatrixBase& b) const; #endif template void solveInPlace(const MatrixBase &bAndX) const; template LLT& compute(const EigenBase& matrix); /** \returns an estimate of the reciprocal condition number of the matrix of * which \c *this is the Cholesky decomposition. */ RealScalar rcond() const { eigen_assert(m_isInitialized && "LLT is not initialized."); eigen_assert(m_info == Success && "LLT failed because matrix appears to be negative"); return internal::rcond_estimate_helper(m_l1_norm, *this); } /** \returns the LLT decomposition matrix * * TODO: document the storage layout */ inline const MatrixType& matrixLLT() const { eigen_assert(m_isInitialized && "LLT is not initialized."); return m_matrix; } MatrixType reconstructedMatrix() const; /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful, * \c NumericalIssue if the matrix.appears not to be positive definite. 
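A short sketch of solve() and solveInPlace() as documented above, assuming a symmetric positive definite matrix (illustrative values only):

#include <Eigen/Dense>

int main() {
  Eigen::Matrix3d A = Eigen::Matrix3d::Random();
  A = A * A.transpose() + 3.0 * Eigen::Matrix3d::Identity();  // make it SPD
  Eigen::Vector3d b(1.0, 2.0, 3.0);

  Eigen::LLT<Eigen::Matrix3d> llt(A);
  Eigen::Vector3d x = llt.solve(b);    // solves A x = b

  Eigen::Vector3d bx = b;              // in-place variant: bx is overwritten with the solution
  llt.solveInPlace(bx);

  return (x - bx).norm() < 1e-12 ? 0 : 1;
}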
*/ ComputationInfo info() const { eigen_assert(m_isInitialized && "LLT is not initialized."); return m_info; } /** \returns the adjoint of \c *this, that is, a const reference to the decomposition itself as the underlying matrix is self-adjoint. * * This method is provided for compatibility with other matrix decompositions, thus enabling generic code such as: * \code x = decomposition.adjoint().solve(b) \endcode */ const LLT& adjoint() const EIGEN_NOEXCEPT { return *this; }; inline EIGEN_CONSTEXPR Index rows() const EIGEN_NOEXCEPT { return m_matrix.rows(); } inline EIGEN_CONSTEXPR Index cols() const EIGEN_NOEXCEPT { return m_matrix.cols(); } template LLT & rankUpdate(const VectorType& vec, const RealScalar& sigma = 1); #ifndef EIGEN_PARSED_BY_DOXYGEN template void _solve_impl(const RhsType &rhs, DstType &dst) const; template void _solve_impl_transposed(const RhsType &rhs, DstType &dst) const; #endif protected: static void check_template_parameters() { EIGEN_STATIC_ASSERT_NON_INTEGER(Scalar); } /** \internal * Used to compute and store L * The strict upper part is not used and even not initialized. */ MatrixType m_matrix; RealScalar m_l1_norm; bool m_isInitialized; ComputationInfo m_info; }; namespace internal { template struct llt_inplace; template static Index llt_rank_update_lower(MatrixType& mat, const VectorType& vec, const typename MatrixType::RealScalar& sigma) { using std::sqrt; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef typename MatrixType::ColXpr ColXpr; typedef typename internal::remove_all::type ColXprCleaned; typedef typename ColXprCleaned::SegmentReturnType ColXprSegment; typedef Matrix TempVectorType; typedef typename TempVectorType::SegmentReturnType TempVecSegment; Index n = mat.cols(); eigen_assert(mat.rows()==n && vec.size()==n); TempVectorType temp; if(sigma>0) { // This version is based on Givens rotations. 
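A hedged sketch of the rank-one update declared above: after rankUpdate(v, sigma) the object factors A + sigma*v*v^T without refactorizing from scratch (values illustrative):

#include <Eigen/Dense>

int main() {
  Eigen::Matrix3d A = Eigen::Matrix3d::Random();
  A = A * A.transpose() + 3.0 * Eigen::Matrix3d::Identity();  // SPD

  Eigen::LLT<Eigen::Matrix3d> llt(A);

  Eigen::Vector3d v(0.5, -1.0, 2.0);
  double sigma = 1.0;                   // +1 = update, -1 = downdate
  llt.rankUpdate(v, sigma);

  Eigen::Matrix3d L = llt.matrixL();
  Eigen::Matrix3d updated = A + sigma * v * v.transpose();
  return (L * L.transpose() - updated).norm() < 1e-9 ? 0 : 1;
}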
// It is faster than the other one below, but only works for updates, // i.e., for sigma > 0 temp = sqrt(sigma) * vec; for(Index i=0; i g; g.makeGivens(mat(i,i), -temp(i), &mat(i,i)); Index rs = n-i-1; if(rs>0) { ColXprSegment x(mat.col(i).tail(rs)); TempVecSegment y(temp.tail(rs)); apply_rotation_in_the_plane(x, y, g); } } } else { temp = vec; RealScalar beta = 1; for(Index j=0; j struct llt_inplace { typedef typename NumTraits::Real RealScalar; template static Index unblocked(MatrixType& mat) { using std::sqrt; eigen_assert(mat.rows()==mat.cols()); const Index size = mat.rows(); for(Index k = 0; k < size; ++k) { Index rs = size-k-1; // remaining size Block A21(mat,k+1,k,rs,1); Block A10(mat,k,0,1,k); Block A20(mat,k+1,0,rs,k); RealScalar x = numext::real(mat.coeff(k,k)); if (k>0) x -= A10.squaredNorm(); if (x<=RealScalar(0)) return k; mat.coeffRef(k,k) = x = sqrt(x); if (k>0 && rs>0) A21.noalias() -= A20 * A10.adjoint(); if (rs>0) A21 /= x; } return -1; } template static Index blocked(MatrixType& m) { eigen_assert(m.rows()==m.cols()); Index size = m.rows(); if(size<32) return unblocked(m); Index blockSize = size/8; blockSize = (blockSize/16)*16; blockSize = (std::min)((std::max)(blockSize,Index(8)), Index(128)); for (Index k=0; k A11(m,k, k, bs,bs); Block A21(m,k+bs,k, rs,bs); Block A22(m,k+bs,k+bs,rs,rs); Index ret; if((ret=unblocked(A11))>=0) return k+ret; if(rs>0) A11.adjoint().template triangularView().template solveInPlace(A21); if(rs>0) A22.template selfadjointView().rankUpdate(A21,typename NumTraits::Literal(-1)); // bottleneck } return -1; } template static Index rankUpdate(MatrixType& mat, const VectorType& vec, const RealScalar& sigma) { return Eigen::internal::llt_rank_update_lower(mat, vec, sigma); } }; template struct llt_inplace { typedef typename NumTraits::Real RealScalar; template static EIGEN_STRONG_INLINE Index unblocked(MatrixType& mat) { Transpose matt(mat); return llt_inplace::unblocked(matt); } template static EIGEN_STRONG_INLINE Index blocked(MatrixType& mat) { Transpose matt(mat); return llt_inplace::blocked(matt); } template static Index rankUpdate(MatrixType& mat, const VectorType& vec, const RealScalar& sigma) { Transpose matt(mat); return llt_inplace::rankUpdate(matt, vec.conjugate(), sigma); } }; template struct LLT_Traits { typedef const TriangularView MatrixL; typedef const TriangularView MatrixU; static inline MatrixL getL(const MatrixType& m) { return MatrixL(m); } static inline MatrixU getU(const MatrixType& m) { return MatrixU(m.adjoint()); } static bool inplace_decomposition(MatrixType& m) { return llt_inplace::blocked(m)==-1; } }; template struct LLT_Traits { typedef const TriangularView MatrixL; typedef const TriangularView MatrixU; static inline MatrixL getL(const MatrixType& m) { return MatrixL(m.adjoint()); } static inline MatrixU getU(const MatrixType& m) { return MatrixU(m); } static bool inplace_decomposition(MatrixType& m) { return llt_inplace::blocked(m)==-1; } }; } // end namespace internal /** Computes / recomputes the Cholesky decomposition A = LL^* = U^*U of \a matrix * * \returns a reference to *this * * Example: \include TutorialLinAlgComputeTwice.cpp * Output: \verbinclude TutorialLinAlgComputeTwice.out */ template template LLT& LLT::compute(const EigenBase& a) { check_template_parameters(); eigen_assert(a.rows()==a.cols()); const Index size = a.rows(); m_matrix.resize(size, size); if (!internal::is_same_dense(m_matrix, a.derived())) m_matrix = a.derived(); // Compute matrix L1 norm = max abs column sum. 
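For reference, a plain standalone sketch of the column-by-column recurrence that llt_inplace<Scalar,Lower>::unblocked implements above (real scalars, no blocking; illustrative, not the library code):

#include <cmath>
#include <vector>

// Lower Cholesky of a dense n x n SPD matrix stored row-major in 'a' (overwritten in place).
// Returns -1 on success, or the index of the first non-positive pivot.
int cholesky_unblocked(std::vector<double>& a, int n) {
  for (int k = 0; k < n; ++k) {
    double x = a[k * n + k];
    for (int j = 0; j < k; ++j) x -= a[k * n + j] * a[k * n + j];    // x -= A10.squaredNorm()
    if (x <= 0.0) return k;                                          // not positive definite
    double lkk = std::sqrt(x);
    a[k * n + k] = lkk;
    for (int i = k + 1; i < n; ++i) {
      double s = a[i * n + k];
      for (int j = 0; j < k; ++j) s -= a[i * n + j] * a[k * n + j];  // A21 -= A20 * A10^T
      a[i * n + k] = s / lkk;                                        // A21 /= sqrt(x)
    }
  }
  return -1;
}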
m_l1_norm = RealScalar(0); // TODO move this code to SelfAdjointView for (Index col = 0; col < size; ++col) { RealScalar abs_col_sum; if (_UpLo == Lower) abs_col_sum = m_matrix.col(col).tail(size - col).template lpNorm<1>() + m_matrix.row(col).head(col).template lpNorm<1>(); else abs_col_sum = m_matrix.col(col).head(col).template lpNorm<1>() + m_matrix.row(col).tail(size - col).template lpNorm<1>(); if (abs_col_sum > m_l1_norm) m_l1_norm = abs_col_sum; } m_isInitialized = true; bool ok = Traits::inplace_decomposition(m_matrix); m_info = ok ? Success : NumericalIssue; return *this; } /** Performs a rank one update (or dowdate) of the current decomposition. * If A = LL^* before the rank one update, * then after it we have LL^* = A + sigma * v v^* where \a v must be a vector * of same dimension. */ template template LLT<_MatrixType,_UpLo> & LLT<_MatrixType,_UpLo>::rankUpdate(const VectorType& v, const RealScalar& sigma) { EIGEN_STATIC_ASSERT_VECTOR_ONLY(VectorType); eigen_assert(v.size()==m_matrix.cols()); eigen_assert(m_isInitialized); if(internal::llt_inplace::rankUpdate(m_matrix,v,sigma)>=0) m_info = NumericalIssue; else m_info = Success; return *this; } #ifndef EIGEN_PARSED_BY_DOXYGEN template template void LLT<_MatrixType,_UpLo>::_solve_impl(const RhsType &rhs, DstType &dst) const { _solve_impl_transposed(rhs, dst); } template template void LLT<_MatrixType,_UpLo>::_solve_impl_transposed(const RhsType &rhs, DstType &dst) const { dst = rhs; matrixL().template conjugateIf().solveInPlace(dst); matrixU().template conjugateIf().solveInPlace(dst); } #endif /** \internal use x = llt_object.solve(x); * * This is the \em in-place version of solve(). * * \param bAndX represents both the right-hand side matrix b and result x. * * This version avoids a copy when the right hand side matrix b is not needed anymore. * * \warning The parameter is only marked 'const' to make the C++ compiler accept a temporary expression here. * This function will const_cast it, so constness isn't honored here. * * \sa LLT::solve(), MatrixBase::llt() */ template template void LLT::solveInPlace(const MatrixBase &bAndX) const { eigen_assert(m_isInitialized && "LLT is not initialized."); eigen_assert(m_matrix.rows()==bAndX.rows()); matrixL().solveInPlace(bAndX); matrixU().solveInPlace(bAndX); } /** \returns the matrix represented by the decomposition, * i.e., it returns the product: L L^*. * This function is provided for debug purpose. */ template MatrixType LLT::reconstructedMatrix() const { eigen_assert(m_isInitialized && "LLT is not initialized."); return matrixL() * matrixL().adjoint().toDenseMatrix(); } /** \cholesky_module * \returns the LLT decomposition of \c *this * \sa SelfAdjointView::llt() */ template inline const LLT::PlainObject> MatrixBase::llt() const { return LLT(derived()); } /** \cholesky_module * \returns the LLT decomposition of \c *this * \sa SelfAdjointView::llt() */ template inline const LLT::PlainObject, UpLo> SelfAdjointView::llt() const { return LLT(m_matrix); } } // end namespace Eigen #endif // EIGEN_LLT_H piqp-octave/src/eigen/Eigen/src/Cholesky/LDLT.h0000644000175100001770000006054614107270226021054 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2011 Gael Guennebaud // Copyright (C) 2009 Keir Mierle // Copyright (C) 2009 Benoit Jacob // Copyright (C) 2011 Timothy E. Holy // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. 
If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_LDLT_H #define EIGEN_LDLT_H namespace Eigen { namespace internal { template struct traits > : traits<_MatrixType> { typedef MatrixXpr XprKind; typedef SolverStorage StorageKind; typedef int StorageIndex; enum { Flags = 0 }; }; template struct LDLT_Traits; // PositiveSemiDef means positive semi-definite and non-zero; same for NegativeSemiDef enum SignMatrix { PositiveSemiDef, NegativeSemiDef, ZeroSign, Indefinite }; } /** \ingroup Cholesky_Module * * \class LDLT * * \brief Robust Cholesky decomposition of a matrix with pivoting * * \tparam _MatrixType the type of the matrix of which to compute the LDL^T Cholesky decomposition * \tparam _UpLo the triangular part that will be used for the decompositon: Lower (default) or Upper. * The other triangular part won't be read. * * Perform a robust Cholesky decomposition of a positive semidefinite or negative semidefinite * matrix \f$ A \f$ such that \f$ A = P^TLDL^*P \f$, where P is a permutation matrix, L * is lower triangular with a unit diagonal and D is a diagonal matrix. * * The decomposition uses pivoting to ensure stability, so that D will have * zeros in the bottom right rank(A) - n submatrix. Avoiding the square root * on D also stabilizes the computation. * * Remember that Cholesky decompositions are not rank-revealing. Also, do not use a Cholesky * decomposition to determine whether a system of equations has a solution. * * This class supports the \link InplaceDecomposition inplace decomposition \endlink mechanism. * * \sa MatrixBase::ldlt(), SelfAdjointView::ldlt(), class LLT */ template class LDLT : public SolverBase > { public: typedef _MatrixType MatrixType; typedef SolverBase Base; friend class SolverBase; EIGEN_GENERIC_PUBLIC_INTERFACE(LDLT) enum { MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime, UpLo = _UpLo }; typedef Matrix TmpMatrixType; typedef Transpositions TranspositionType; typedef PermutationMatrix PermutationType; typedef internal::LDLT_Traits Traits; /** \brief Default Constructor. * * The default constructor is useful in cases in which the user intends to * perform decompositions via LDLT::compute(const MatrixType&). */ LDLT() : m_matrix(), m_transpositions(), m_sign(internal::ZeroSign), m_isInitialized(false) {} /** \brief Default Constructor with memory preallocation * * Like the default constructor but with preallocation of the internal data * according to the specified problem \a size. * \sa LDLT() */ explicit LDLT(Index size) : m_matrix(size, size), m_transpositions(size), m_temporary(size), m_sign(internal::ZeroSign), m_isInitialized(false) {} /** \brief Constructor with decomposition * * This calculates the decomposition for the input \a matrix. * * \sa LDLT(Index size) */ template explicit LDLT(const EigenBase& matrix) : m_matrix(matrix.rows(), matrix.cols()), m_transpositions(matrix.rows()), m_temporary(matrix.rows()), m_sign(internal::ZeroSign), m_isInitialized(false) { compute(matrix.derived()); } /** \brief Constructs a LDLT factorization from a given matrix * * This overloaded constructor is provided for \link InplaceDecomposition inplace decomposition \endlink when \c MatrixType is a Eigen::Ref. 
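A small sketch of the pivoted LDLT workflow described above, on a symmetric indefinite matrix for which plain LLT would fail (values illustrative):

#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Matrix2d A;
  A << 2.0, 1.0,
       1.0, -3.0;                       // symmetric but indefinite

  Eigen::LDLT<Eigen::Matrix2d> ldlt(A);
  std::cout << "D = " << ldlt.vectorD().transpose() << "\n";
  std::cout << "positive semidefinite? " << ldlt.isPositive() << "\n";
  std::cout << "negative semidefinite? " << ldlt.isNegative() << "\n";

  Eigen::Vector2d b(1.0, 2.0);
  Eigen::Vector2d x = ldlt.solve(b);    // solves A x = b via A = P^T L D L^* P
  std::cout << "x = " << x.transpose() << "\n";
  return 0;
}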
* * \sa LDLT(const EigenBase&) */ template explicit LDLT(EigenBase& matrix) : m_matrix(matrix.derived()), m_transpositions(matrix.rows()), m_temporary(matrix.rows()), m_sign(internal::ZeroSign), m_isInitialized(false) { compute(matrix.derived()); } /** Clear any existing decomposition * \sa rankUpdate(w,sigma) */ void setZero() { m_isInitialized = false; } /** \returns a view of the upper triangular matrix U */ inline typename Traits::MatrixU matrixU() const { eigen_assert(m_isInitialized && "LDLT is not initialized."); return Traits::getU(m_matrix); } /** \returns a view of the lower triangular matrix L */ inline typename Traits::MatrixL matrixL() const { eigen_assert(m_isInitialized && "LDLT is not initialized."); return Traits::getL(m_matrix); } /** \returns the permutation matrix P as a transposition sequence. */ inline const TranspositionType& transpositionsP() const { eigen_assert(m_isInitialized && "LDLT is not initialized."); return m_transpositions; } /** \returns the coefficients of the diagonal matrix D */ inline Diagonal vectorD() const { eigen_assert(m_isInitialized && "LDLT is not initialized."); return m_matrix.diagonal(); } /** \returns true if the matrix is positive (semidefinite) */ inline bool isPositive() const { eigen_assert(m_isInitialized && "LDLT is not initialized."); return m_sign == internal::PositiveSemiDef || m_sign == internal::ZeroSign; } /** \returns true if the matrix is negative (semidefinite) */ inline bool isNegative(void) const { eigen_assert(m_isInitialized && "LDLT is not initialized."); return m_sign == internal::NegativeSemiDef || m_sign == internal::ZeroSign; } #ifdef EIGEN_PARSED_BY_DOXYGEN /** \returns a solution x of \f$ A x = b \f$ using the current decomposition of A. * * This function also supports in-place solves using the syntax x = decompositionObject.solve(x) . * * \note_about_checking_solutions * * More precisely, this method solves \f$ A x = b \f$ using the decomposition \f$ A = P^T L D L^* P \f$ * by solving the systems \f$ P^T y_1 = b \f$, \f$ L y_2 = y_1 \f$, \f$ D y_3 = y_2 \f$, * \f$ L^* y_4 = y_3 \f$ and \f$ P x = y_4 \f$ in succession. If the matrix \f$ A \f$ is singular, then * \f$ D \f$ will also be singular (all the other matrices are invertible). In that case, the * least-square solution of \f$ D y_3 = y_2 \f$ is computed. This does not mean that this function * computes the least-square solution of \f$ A x = b \f$ if \f$ A \f$ is singular. * * \sa MatrixBase::ldlt(), SelfAdjointView::ldlt() */ template inline const Solve solve(const MatrixBase& b) const; #endif template bool solveInPlace(MatrixBase &bAndX) const; template LDLT& compute(const EigenBase& matrix); /** \returns an estimate of the reciprocal condition number of the matrix of * which \c *this is the LDLT decomposition. */ RealScalar rcond() const { eigen_assert(m_isInitialized && "LDLT is not initialized."); return internal::rcond_estimate_helper(m_l1_norm, *this); } template LDLT& rankUpdate(const MatrixBase& w, const RealScalar& alpha=1); /** \returns the internal LDLT decomposition matrix * * TODO: document the storage layout */ inline const MatrixType& matrixLDLT() const { eigen_assert(m_isInitialized && "LDLT is not initialized."); return m_matrix; } MatrixType reconstructedMatrix() const; /** \returns the adjoint of \c *this, that is, a const reference to the decomposition itself as the underlying matrix is self-adjoint. 
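To illustrate the pieces exposed above (transpositionsP(), matrixL(), vectorD()), a sketch that recombines them into A, mirroring what reconstructedMatrix() does (values illustrative):

#include <Eigen/Dense>

int main() {
  Eigen::Matrix3d A;
  A << 4.0,  1.0, -2.0,
       1.0, -3.0,  0.5,
      -2.0,  0.5,  2.0;                 // symmetric, possibly indefinite

  Eigen::LDLT<Eigen::Matrix3d> ldlt(A);

  Eigen::Matrix3d L = ldlt.matrixL();
  Eigen::Vector3d D = ldlt.vectorD();
  Eigen::Matrix3d P = ldlt.transpositionsP() * Eigen::Matrix3d::Identity();

  // A = P^T * L * D * L^* * P, as documented for this decomposition.
  Eigen::Matrix3d recon = P.transpose() * L * D.asDiagonal() * L.transpose() * P;
  return (recon - A).norm() < 1e-9 ? 0 : 1;
}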
* * This method is provided for compatibility with other matrix decompositions, thus enabling generic code such as: * \code x = decomposition.adjoint().solve(b) \endcode */ const LDLT& adjoint() const { return *this; }; EIGEN_DEVICE_FUNC inline EIGEN_CONSTEXPR Index rows() const EIGEN_NOEXCEPT { return m_matrix.rows(); } EIGEN_DEVICE_FUNC inline EIGEN_CONSTEXPR Index cols() const EIGEN_NOEXCEPT { return m_matrix.cols(); } /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful, * \c NumericalIssue if the factorization failed because of a zero pivot. */ ComputationInfo info() const { eigen_assert(m_isInitialized && "LDLT is not initialized."); return m_info; } #ifndef EIGEN_PARSED_BY_DOXYGEN template void _solve_impl(const RhsType &rhs, DstType &dst) const; template void _solve_impl_transposed(const RhsType &rhs, DstType &dst) const; #endif protected: static void check_template_parameters() { EIGEN_STATIC_ASSERT_NON_INTEGER(Scalar); } /** \internal * Used to compute and store the Cholesky decomposition A = L D L^* = U^* D U. * The strict upper part is used during the decomposition, the strict lower * part correspond to the coefficients of L (its diagonal is equal to 1 and * is not stored), and the diagonal entries correspond to D. */ MatrixType m_matrix; RealScalar m_l1_norm; TranspositionType m_transpositions; TmpMatrixType m_temporary; internal::SignMatrix m_sign; bool m_isInitialized; ComputationInfo m_info; }; namespace internal { template struct ldlt_inplace; template<> struct ldlt_inplace { template static bool unblocked(MatrixType& mat, TranspositionType& transpositions, Workspace& temp, SignMatrix& sign) { using std::abs; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef typename TranspositionType::StorageIndex IndexType; eigen_assert(mat.rows()==mat.cols()); const Index size = mat.rows(); bool found_zero_pivot = false; bool ret = true; if (size <= 1) { transpositions.setIdentity(); if(size==0) sign = ZeroSign; else if (numext::real(mat.coeff(0,0)) > static_cast(0) ) sign = PositiveSemiDef; else if (numext::real(mat.coeff(0,0)) < static_cast(0)) sign = NegativeSemiDef; else sign = ZeroSign; return true; } for (Index k = 0; k < size; ++k) { // Find largest diagonal element Index index_of_biggest_in_corner; mat.diagonal().tail(size-k).cwiseAbs().maxCoeff(&index_of_biggest_in_corner); index_of_biggest_in_corner += k; transpositions.coeffRef(k) = IndexType(index_of_biggest_in_corner); if(k != index_of_biggest_in_corner) { // apply the transposition while taking care to consider only // the lower triangular part Index s = size-index_of_biggest_in_corner-1; // trailing size after the biggest element mat.row(k).head(k).swap(mat.row(index_of_biggest_in_corner).head(k)); mat.col(k).tail(s).swap(mat.col(index_of_biggest_in_corner).tail(s)); std::swap(mat.coeffRef(k,k),mat.coeffRef(index_of_biggest_in_corner,index_of_biggest_in_corner)); for(Index i=k+1;i::IsComplex) mat.coeffRef(index_of_biggest_in_corner,k) = numext::conj(mat.coeff(index_of_biggest_in_corner,k)); } // partition the matrix: // A00 | - | - // lu = A10 | A11 | - // A20 | A21 | A22 Index rs = size - k - 1; Block A21(mat,k+1,k,rs,1); Block A10(mat,k,0,1,k); Block A20(mat,k+1,0,rs,k); if(k>0) { temp.head(k) = mat.diagonal().real().head(k).asDiagonal() * A10.adjoint(); mat.coeffRef(k,k) -= (A10 * temp.head(k)).value(); if(rs>0) A21.noalias() -= A20 * temp.head(k); } // In some previous versions of Eigen (e.g., 3.2.1), 
the scaling was omitted if the pivot // was smaller than the cutoff value. However, since LDLT is not rank-revealing // we should only make sure that we do not introduce INF or NaN values. // Remark that LAPACK also uses 0 as the cutoff value. RealScalar realAkk = numext::real(mat.coeffRef(k,k)); bool pivot_is_valid = (abs(realAkk) > RealScalar(0)); if(k==0 && !pivot_is_valid) { // The entire diagonal is zero, there is nothing more to do // except filling the transpositions, and checking whether the matrix is zero. sign = ZeroSign; for(Index j = 0; j0) && pivot_is_valid) A21 /= realAkk; else if(rs>0) ret = ret && (A21.array()==Scalar(0)).all(); if(found_zero_pivot && pivot_is_valid) ret = false; // factorization failed else if(!pivot_is_valid) found_zero_pivot = true; if (sign == PositiveSemiDef) { if (realAkk < static_cast(0)) sign = Indefinite; } else if (sign == NegativeSemiDef) { if (realAkk > static_cast(0)) sign = Indefinite; } else if (sign == ZeroSign) { if (realAkk > static_cast(0)) sign = PositiveSemiDef; else if (realAkk < static_cast(0)) sign = NegativeSemiDef; } } return ret; } // Reference for the algorithm: Davis and Hager, "Multiple Rank // Modifications of a Sparse Cholesky Factorization" (Algorithm 1) // Trivial rearrangements of their computations (Timothy E. Holy) // allow their algorithm to work for rank-1 updates even if the // original matrix is not of full rank. // Here only rank-1 updates are implemented, to reduce the // requirement for intermediate storage and improve accuracy template static bool updateInPlace(MatrixType& mat, MatrixBase& w, const typename MatrixType::RealScalar& sigma=1) { using numext::isfinite; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; const Index size = mat.rows(); eigen_assert(mat.cols() == size && w.size()==size); RealScalar alpha = 1; // Apply the update for (Index j = 0; j < size; j++) { // Check for termination due to an original decomposition of low-rank if (!(isfinite)(alpha)) break; // Update the diagonal terms RealScalar dj = numext::real(mat.coeff(j,j)); Scalar wj = w.coeff(j); RealScalar swj2 = sigma*numext::abs2(wj); RealScalar gamma = dj*alpha + swj2; mat.coeffRef(j,j) += swj2/alpha; alpha += swj2/dj; // Update the terms of L Index rs = size-j-1; w.tail(rs) -= wj * mat.col(j).tail(rs); if(gamma != 0) mat.col(j).tail(rs) += (sigma*numext::conj(wj)/gamma)*w.tail(rs); } return true; } template static bool update(MatrixType& mat, const TranspositionType& transpositions, Workspace& tmp, const WType& w, const typename MatrixType::RealScalar& sigma=1) { // Apply the permutation to the input w tmp = transpositions * w; return ldlt_inplace::updateInPlace(mat,tmp,sigma); } }; template<> struct ldlt_inplace { template static EIGEN_STRONG_INLINE bool unblocked(MatrixType& mat, TranspositionType& transpositions, Workspace& temp, SignMatrix& sign) { Transpose matt(mat); return ldlt_inplace::unblocked(matt, transpositions, temp, sign); } template static EIGEN_STRONG_INLINE bool update(MatrixType& mat, TranspositionType& transpositions, Workspace& tmp, WType& w, const typename MatrixType::RealScalar& sigma=1) { Transpose matt(mat); return ldlt_inplace::update(matt, transpositions, tmp, w.conjugate(), sigma); } }; template struct LDLT_Traits { typedef const TriangularView MatrixL; typedef const TriangularView MatrixU; static inline MatrixL getL(const MatrixType& m) { return MatrixL(m); } static inline MatrixU getU(const MatrixType& m) { return MatrixU(m.adjoint()); } }; template struct 
LDLT_Traits { typedef const TriangularView MatrixL; typedef const TriangularView MatrixU; static inline MatrixL getL(const MatrixType& m) { return MatrixL(m.adjoint()); } static inline MatrixU getU(const MatrixType& m) { return MatrixU(m); } }; } // end namespace internal /** Compute / recompute the LDLT decomposition A = L D L^* = U^* D U of \a matrix */ template template LDLT& LDLT::compute(const EigenBase& a) { check_template_parameters(); eigen_assert(a.rows()==a.cols()); const Index size = a.rows(); m_matrix = a.derived(); // Compute matrix L1 norm = max abs column sum. m_l1_norm = RealScalar(0); // TODO move this code to SelfAdjointView for (Index col = 0; col < size; ++col) { RealScalar abs_col_sum; if (_UpLo == Lower) abs_col_sum = m_matrix.col(col).tail(size - col).template lpNorm<1>() + m_matrix.row(col).head(col).template lpNorm<1>(); else abs_col_sum = m_matrix.col(col).head(col).template lpNorm<1>() + m_matrix.row(col).tail(size - col).template lpNorm<1>(); if (abs_col_sum > m_l1_norm) m_l1_norm = abs_col_sum; } m_transpositions.resize(size); m_isInitialized = false; m_temporary.resize(size); m_sign = internal::ZeroSign; m_info = internal::ldlt_inplace::unblocked(m_matrix, m_transpositions, m_temporary, m_sign) ? Success : NumericalIssue; m_isInitialized = true; return *this; } /** Update the LDLT decomposition: given A = L D L^T, efficiently compute the decomposition of A + sigma w w^T. * \param w a vector to be incorporated into the decomposition. * \param sigma a scalar, +1 for updates and -1 for "downdates," which correspond to removing previously-added column vectors. Optional; default value is +1. * \sa setZero() */ template template LDLT& LDLT::rankUpdate(const MatrixBase& w, const typename LDLT::RealScalar& sigma) { typedef typename TranspositionType::StorageIndex IndexType; const Index size = w.rows(); if (m_isInitialized) { eigen_assert(m_matrix.rows()==size); } else { m_matrix.resize(size,size); m_matrix.setZero(); m_transpositions.resize(size); for (Index i = 0; i < size; i++) m_transpositions.coeffRef(i) = IndexType(i); m_temporary.resize(size); m_sign = sigma>=0 ? internal::PositiveSemiDef : internal::NegativeSemiDef; m_isInitialized = true; } internal::ldlt_inplace::update(m_matrix, m_transpositions, m_temporary, w, sigma); return *this; } #ifndef EIGEN_PARSED_BY_DOXYGEN template template void LDLT<_MatrixType,_UpLo>::_solve_impl(const RhsType &rhs, DstType &dst) const { _solve_impl_transposed(rhs, dst); } template template void LDLT<_MatrixType,_UpLo>::_solve_impl_transposed(const RhsType &rhs, DstType &dst) const { // dst = P b dst = m_transpositions * rhs; // dst = L^-1 (P b) // dst = L^-*T (P b) matrixL().template conjugateIf().solveInPlace(dst); // dst = D^-* (L^-1 P b) // dst = D^-1 (L^-*T P b) // more precisely, use pseudo-inverse of D (see bug 241) using std::abs; const typename Diagonal::RealReturnType vecD(vectorD()); // In some previous versions, tolerance was set to the max of 1/highest (or rather numeric_limits::min()) // and the maximal diagonal entry * epsilon as motivated by LAPACK's xGELSS: // RealScalar tolerance = numext::maxi(vecD.array().abs().maxCoeff() * NumTraits::epsilon(),RealScalar(1) / NumTraits::highest()); // However, LDLT is not rank revealing, and so adjusting the tolerance wrt to the highest // diagonal element is not well justified and leads to numerical issues in some cases. // Moreover, Lapack's xSYTRS routines use 0 for the tolerance. // Using numeric_limits::min() gives us more robustness to denormals. 
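A sketch of what the zero-pivot handling above means in practice: for a positive semidefinite (singular) matrix, solve() applies a pseudo-inverse of D, so right-hand sides lying in the range of A are still solved (values illustrative):

#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Matrix3d A;
  A << 1.0, 1.0, 0.0,
       1.0, 1.0, 0.0,
       0.0, 0.0, 2.0;                               // rank 2, positive semidefinite

  Eigen::LDLT<Eigen::Matrix3d> ldlt(A);

  Eigen::Vector3d b = A * Eigen::Vector3d::Ones();  // b is in the range of A
  Eigen::Vector3d x = ldlt.solve(b);

  std::cout << "residual = " << (A * x - b).norm() << "\n";  // ~0: zero pivots are skipped
  return 0;
}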
RealScalar tolerance = (std::numeric_limits::min)(); for (Index i = 0; i < vecD.size(); ++i) { if(abs(vecD(i)) > tolerance) dst.row(i) /= vecD(i); else dst.row(i).setZero(); } // dst = L^-* (D^-* L^-1 P b) // dst = L^-T (D^-1 L^-*T P b) matrixL().transpose().template conjugateIf().solveInPlace(dst); // dst = P^T (L^-* D^-* L^-1 P b) = A^-1 b // dst = P^-T (L^-T D^-1 L^-*T P b) = A^-1 b dst = m_transpositions.transpose() * dst; } #endif /** \internal use x = ldlt_object.solve(x); * * This is the \em in-place version of solve(). * * \param bAndX represents both the right-hand side matrix b and result x. * * \returns true always! If you need to check for existence of solutions, use another decomposition like LU, QR, or SVD. * * This version avoids a copy when the right hand side matrix b is not * needed anymore. * * \sa LDLT::solve(), MatrixBase::ldlt() */ template template bool LDLT::solveInPlace(MatrixBase &bAndX) const { eigen_assert(m_isInitialized && "LDLT is not initialized."); eigen_assert(m_matrix.rows() == bAndX.rows()); bAndX = this->solve(bAndX); return true; } /** \returns the matrix represented by the decomposition, * i.e., it returns the product: P^T L D L^* P. * This function is provided for debug purpose. */ template MatrixType LDLT::reconstructedMatrix() const { eigen_assert(m_isInitialized && "LDLT is not initialized."); const Index size = m_matrix.rows(); MatrixType res(size,size); // P res.setIdentity(); res = transpositionsP() * res; // L^* P res = matrixU() * res; // D(L^*P) res = vectorD().real().asDiagonal() * res; // L(DL^*P) res = matrixL() * res; // P^T (LDL^*P) res = transpositionsP().transpose() * res; return res; } /** \cholesky_module * \returns the Cholesky decomposition with full pivoting without square root of \c *this * \sa MatrixBase::ldlt() */ template inline const LDLT::PlainObject, UpLo> SelfAdjointView::ldlt() const { return LDLT(m_matrix); } /** \cholesky_module * \returns the Cholesky decomposition with full pivoting without square root of \c *this * \sa SelfAdjointView::ldlt() */ template inline const LDLT::PlainObject> MatrixBase::ldlt() const { return LDLT(derived()); } } // end namespace Eigen #endif // EIGEN_LDLT_H piqp-octave/src/eigen/Eigen/src/Jacobi/0000755000175100001770000000000014107270226017537 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/Jacobi/Jacobi.h0000644000175100001770000003777714107270226021124 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009 Benoit Jacob // Copyright (C) 2009 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_JACOBI_H #define EIGEN_JACOBI_H namespace Eigen { /** \ingroup Jacobi_Module * \jacobi_module * \class JacobiRotation * \brief Rotation given by a cosine-sine pair. * * This class represents a Jacobi or Givens rotation. 
* This is a 2D rotation in the plane \c J of angle \f$ \theta \f$ defined by * its cosine \c c and sine \c s as follow: * \f$ J = \left ( \begin{array}{cc} c & \overline s \\ -s & \overline c \end{array} \right ) \f$ * * You can apply the respective counter-clockwise rotation to a column vector \c v by * applying its adjoint on the left: \f$ v = J^* v \f$ that translates to the following Eigen code: * \code * v.applyOnTheLeft(J.adjoint()); * \endcode * * \sa MatrixBase::applyOnTheLeft(), MatrixBase::applyOnTheRight() */ template class JacobiRotation { public: typedef typename NumTraits::Real RealScalar; /** Default constructor without any initialization. */ EIGEN_DEVICE_FUNC JacobiRotation() {} /** Construct a planar rotation from a cosine-sine pair (\a c, \c s). */ EIGEN_DEVICE_FUNC JacobiRotation(const Scalar& c, const Scalar& s) : m_c(c), m_s(s) {} EIGEN_DEVICE_FUNC Scalar& c() { return m_c; } EIGEN_DEVICE_FUNC Scalar c() const { return m_c; } EIGEN_DEVICE_FUNC Scalar& s() { return m_s; } EIGEN_DEVICE_FUNC Scalar s() const { return m_s; } /** Concatenates two planar rotation */ EIGEN_DEVICE_FUNC JacobiRotation operator*(const JacobiRotation& other) { using numext::conj; return JacobiRotation(m_c * other.m_c - conj(m_s) * other.m_s, conj(m_c * conj(other.m_s) + conj(m_s) * conj(other.m_c))); } /** Returns the transposed transformation */ EIGEN_DEVICE_FUNC JacobiRotation transpose() const { using numext::conj; return JacobiRotation(m_c, -conj(m_s)); } /** Returns the adjoint transformation */ EIGEN_DEVICE_FUNC JacobiRotation adjoint() const { using numext::conj; return JacobiRotation(conj(m_c), -m_s); } template EIGEN_DEVICE_FUNC bool makeJacobi(const MatrixBase&, Index p, Index q); EIGEN_DEVICE_FUNC bool makeJacobi(const RealScalar& x, const Scalar& y, const RealScalar& z); EIGEN_DEVICE_FUNC void makeGivens(const Scalar& p, const Scalar& q, Scalar* r=0); protected: EIGEN_DEVICE_FUNC void makeGivens(const Scalar& p, const Scalar& q, Scalar* r, internal::true_type); EIGEN_DEVICE_FUNC void makeGivens(const Scalar& p, const Scalar& q, Scalar* r, internal::false_type); Scalar m_c, m_s; }; /** Makes \c *this as a Jacobi rotation \a J such that applying \a J on both the right and left sides of the selfadjoint 2x2 matrix * \f$ B = \left ( \begin{array}{cc} x & y \\ \overline y & z \end{array} \right )\f$ yields a diagonal matrix \f$ A = J^* B J \f$ * * \sa MatrixBase::makeJacobi(const MatrixBase&, Index, Index), MatrixBase::applyOnTheLeft(), MatrixBase::applyOnTheRight() */ template EIGEN_DEVICE_FUNC bool JacobiRotation::makeJacobi(const RealScalar& x, const Scalar& y, const RealScalar& z) { using std::sqrt; using std::abs; RealScalar deno = RealScalar(2)*abs(y); if(deno < (std::numeric_limits::min)()) { m_c = Scalar(1); m_s = Scalar(0); return false; } else { RealScalar tau = (x-z)/deno; RealScalar w = sqrt(numext::abs2(tau) + RealScalar(1)); RealScalar t; if(tau>RealScalar(0)) { t = RealScalar(1) / (tau + w); } else { t = RealScalar(1) / (tau - w); } RealScalar sign_t = t > RealScalar(0) ? 
RealScalar(1) : RealScalar(-1); RealScalar n = RealScalar(1) / sqrt(numext::abs2(t)+RealScalar(1)); m_s = - sign_t * (numext::conj(y) / abs(y)) * abs(t) * n; m_c = n; return true; } } /** Makes \c *this as a Jacobi rotation \c J such that applying \a J on both the right and left sides of the 2x2 selfadjoint matrix * \f$ B = \left ( \begin{array}{cc} \text{this}_{pp} & \text{this}_{pq} \\ (\text{this}_{pq})^* & \text{this}_{qq} \end{array} \right )\f$ yields * a diagonal matrix \f$ A = J^* B J \f$ * * Example: \include Jacobi_makeJacobi.cpp * Output: \verbinclude Jacobi_makeJacobi.out * * \sa JacobiRotation::makeJacobi(RealScalar, Scalar, RealScalar), MatrixBase::applyOnTheLeft(), MatrixBase::applyOnTheRight() */ template template EIGEN_DEVICE_FUNC inline bool JacobiRotation::makeJacobi(const MatrixBase& m, Index p, Index q) { return makeJacobi(numext::real(m.coeff(p,p)), m.coeff(p,q), numext::real(m.coeff(q,q))); } /** Makes \c *this as a Givens rotation \c G such that applying \f$ G^* \f$ to the left of the vector * \f$ V = \left ( \begin{array}{c} p \\ q \end{array} \right )\f$ yields: * \f$ G^* V = \left ( \begin{array}{c} r \\ 0 \end{array} \right )\f$. * * The value of \a r is returned if \a r is not null (the default is null). * Also note that G is built such that the cosine is always real. * * Example: \include Jacobi_makeGivens.cpp * Output: \verbinclude Jacobi_makeGivens.out * * This function implements the continuous Givens rotation generation algorithm * found in Anderson (2000), Discontinuous Plane Rotations and the Symmetric Eigenvalue Problem. * LAPACK Working Note 150, University of Tennessee, UT-CS-00-454, December 4, 2000. * * \sa MatrixBase::applyOnTheLeft(), MatrixBase::applyOnTheRight() */ template EIGEN_DEVICE_FUNC void JacobiRotation::makeGivens(const Scalar& p, const Scalar& q, Scalar* r) { makeGivens(p, q, r, typename internal::conditional::IsComplex, internal::true_type, internal::false_type>::type()); } // specialization for complexes template EIGEN_DEVICE_FUNC void JacobiRotation::makeGivens(const Scalar& p, const Scalar& q, Scalar* r, internal::true_type) { using std::sqrt; using std::abs; using numext::conj; if(q==Scalar(0)) { m_c = numext::real(p)<0 ? Scalar(-1) : Scalar(1); m_s = 0; if(r) *r = m_c * p; } else if(p==Scalar(0)) { m_c = 0; m_s = -q/abs(q); if(r) *r = abs(q); } else { RealScalar p1 = numext::norm1(p); RealScalar q1 = numext::norm1(q); if(p1>=q1) { Scalar ps = p / p1; RealScalar p2 = numext::abs2(ps); Scalar qs = q / p1; RealScalar q2 = numext::abs2(qs); RealScalar u = sqrt(RealScalar(1) + q2/p2); if(numext::real(p) EIGEN_DEVICE_FUNC void JacobiRotation::makeGivens(const Scalar& p, const Scalar& q, Scalar* r, internal::false_type) { using std::sqrt; using std::abs; if(q==Scalar(0)) { m_c = p abs(q)) { Scalar t = q/p; Scalar u = sqrt(Scalar(1) + numext::abs2(t)); if(p EIGEN_DEVICE_FUNC void apply_rotation_in_the_plane(DenseBase& xpr_x, DenseBase& xpr_y, const JacobiRotation& j); } /** \jacobi_module * Applies the rotation in the plane \a j to the rows \a p and \a q of \c *this, i.e., it computes B = J * B, * with \f$ B = \left ( \begin{array}{cc} \text{*this.row}(p) \\ \text{*this.row}(q) \end{array} \right ) \f$. 
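A brief sketch of the makeGivens()/applyOnTheLeft() pair documented above, zeroing the second entry of a 2-vector (values illustrative):

#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Vector2d v(3.0, 4.0);

  Eigen::JacobiRotation<double> G;
  double r;
  G.makeGivens(v.x(), v.y(), &r);        // G^* * (3, 4)^T = (r, 0)^T with r = 5

  v.applyOnTheLeft(0, 1, G.adjoint());   // apply G^* to rows 0 and 1
  std::cout << "v = " << v.transpose() << ", r = " << r << "\n";  // v ~ (5, 0)
  return 0;
}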
* * \sa class JacobiRotation, MatrixBase::applyOnTheRight(), internal::apply_rotation_in_the_plane() */ template template EIGEN_DEVICE_FUNC inline void MatrixBase::applyOnTheLeft(Index p, Index q, const JacobiRotation& j) { RowXpr x(this->row(p)); RowXpr y(this->row(q)); internal::apply_rotation_in_the_plane(x, y, j); } /** \ingroup Jacobi_Module * Applies the rotation in the plane \a j to the columns \a p and \a q of \c *this, i.e., it computes B = B * J * with \f$ B = \left ( \begin{array}{cc} \text{*this.col}(p) & \text{*this.col}(q) \end{array} \right ) \f$. * * \sa class JacobiRotation, MatrixBase::applyOnTheLeft(), internal::apply_rotation_in_the_plane() */ template template EIGEN_DEVICE_FUNC inline void MatrixBase::applyOnTheRight(Index p, Index q, const JacobiRotation& j) { ColXpr x(this->col(p)); ColXpr y(this->col(q)); internal::apply_rotation_in_the_plane(x, y, j.transpose()); } namespace internal { template struct apply_rotation_in_the_plane_selector { static EIGEN_DEVICE_FUNC inline void run(Scalar *x, Index incrx, Scalar *y, Index incry, Index size, OtherScalar c, OtherScalar s) { for(Index i=0; i struct apply_rotation_in_the_plane_selector { static inline void run(Scalar *x, Index incrx, Scalar *y, Index incry, Index size, OtherScalar c, OtherScalar s) { enum { PacketSize = packet_traits::size, OtherPacketSize = packet_traits::size }; typedef typename packet_traits::type Packet; typedef typename packet_traits::type OtherPacket; /*** dynamic-size vectorized paths ***/ if(SizeAtCompileTime == Dynamic && ((incrx==1 && incry==1) || PacketSize == 1)) { // both vectors are sequentially stored in memory => vectorization enum { Peeling = 2 }; Index alignedStart = internal::first_default_aligned(y, size); Index alignedEnd = alignedStart + ((size-alignedStart)/PacketSize)*PacketSize; const OtherPacket pc = pset1(c); const OtherPacket ps = pset1(s); conj_helper::IsComplex,false> pcj; conj_helper pm; for(Index i=0; i(px); Packet yi = pload(py); pstore(px, padd(pm.pmul(pc,xi),pcj.pmul(ps,yi))); pstore(py, psub(pcj.pmul(pc,yi),pm.pmul(ps,xi))); px += PacketSize; py += PacketSize; } } else { Index peelingEnd = alignedStart + ((size-alignedStart)/(Peeling*PacketSize))*(Peeling*PacketSize); for(Index i=alignedStart; i(px); Packet xi1 = ploadu(px+PacketSize); Packet yi = pload (py); Packet yi1 = pload (py+PacketSize); pstoreu(px, padd(pm.pmul(pc,xi),pcj.pmul(ps,yi))); pstoreu(px+PacketSize, padd(pm.pmul(pc,xi1),pcj.pmul(ps,yi1))); pstore (py, psub(pcj.pmul(pc,yi),pm.pmul(ps,xi))); pstore (py+PacketSize, psub(pcj.pmul(pc,yi1),pm.pmul(ps,xi1))); px += Peeling*PacketSize; py += Peeling*PacketSize; } if(alignedEnd!=peelingEnd) { Packet xi = ploadu(x+peelingEnd); Packet yi = pload (y+peelingEnd); pstoreu(x+peelingEnd, padd(pm.pmul(pc,xi),pcj.pmul(ps,yi))); pstore (y+peelingEnd, psub(pcj.pmul(pc,yi),pm.pmul(ps,xi))); } } for(Index i=alignedEnd; i0) // FIXME should be compared to the required alignment { const OtherPacket pc = pset1(c); const OtherPacket ps = pset1(s); conj_helper::IsComplex,false> pcj; conj_helper pm; Scalar* EIGEN_RESTRICT px = x; Scalar* EIGEN_RESTRICT py = y; for(Index i=0; i(px); Packet yi = pload(py); pstore(px, padd(pm.pmul(pc,xi),pcj.pmul(ps,yi))); pstore(py, psub(pcj.pmul(pc,yi),pm.pmul(ps,xi))); px += PacketSize; py += PacketSize; } } /*** non-vectorized path ***/ else { apply_rotation_in_the_plane_selector::run(x,incrx,y,incry,size,c,s); } } }; template EIGEN_DEVICE_FUNC void /*EIGEN_DONT_INLINE*/ apply_rotation_in_the_plane(DenseBase& xpr_x, DenseBase& xpr_y, const 
JacobiRotation& j) { typedef typename VectorX::Scalar Scalar; const bool Vectorizable = (int(VectorX::Flags) & int(VectorY::Flags) & PacketAccessBit) && (int(packet_traits::size) == int(packet_traits::size)); eigen_assert(xpr_x.size() == xpr_y.size()); Index size = xpr_x.size(); Index incrx = xpr_x.derived().innerStride(); Index incry = xpr_y.derived().innerStride(); Scalar* EIGEN_RESTRICT x = &xpr_x.derived().coeffRef(0); Scalar* EIGEN_RESTRICT y = &xpr_y.derived().coeffRef(0); OtherScalar c = j.c(); OtherScalar s = j.s(); if (c==OtherScalar(1) && s==OtherScalar(0)) return; apply_rotation_in_the_plane_selector< Scalar,OtherScalar, VectorX::SizeAtCompileTime, EIGEN_PLAIN_ENUM_MIN(evaluator::Alignment, evaluator::Alignment), Vectorizable>::run(x,incrx,y,incry,size,c,s); } } // end namespace internal } // end namespace Eigen #endif // EIGEN_JACOBI_H piqp-octave/src/eigen/Eigen/src/PaStiXSupport/0000755000175100001770000000000014107270226021115 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/PaStiXSupport/PaStiXSupport.h0000644000175100001770000005335114107270226024042 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_PASTIXSUPPORT_H #define EIGEN_PASTIXSUPPORT_H namespace Eigen { #if defined(DCOMPLEX) #define PASTIX_COMPLEX COMPLEX #define PASTIX_DCOMPLEX DCOMPLEX #else #define PASTIX_COMPLEX std::complex #define PASTIX_DCOMPLEX std::complex #endif /** \ingroup PaStiXSupport_Module * \brief Interface to the PaStix solver * * This class is used to solve the linear systems A.X = B via the PaStix library. * The matrix can be either real or complex, symmetric or not. 
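A hedged usage sketch for the solver interface described above; it assumes Eigen was configured with PaStiX support and the program links against the PaStiX library (matrix values illustrative):

#include <Eigen/Sparse>
#include <Eigen/PaStiXSupport>

int main() {
  typedef Eigen::SparseMatrix<double> SpMat;

  SpMat A(3, 3);                         // assemble a small sparse system
  A.insert(0, 0) = 4.0; A.insert(0, 1) = 1.0;
  A.insert(1, 0) = 1.0; A.insert(1, 1) = 3.0; A.insert(1, 2) = 1.0;
  A.insert(2, 1) = 1.0; A.insert(2, 2) = 2.0;
  A.makeCompressed();

  Eigen::VectorXd b(3);
  b << 1.0, 2.0, 3.0;

  Eigen::PastixLU<SpMat> solver;
  solver.compute(A);                     // analyzePattern() + factorize()
  if (solver.info() != Eigen::Success) return 1;

  Eigen::VectorXd x = solver.solve(b);   // solves A x = b
  return solver.info() == Eigen::Success ? 0 : 1;
}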
* * \sa TutorialSparseDirectSolvers */ template class PastixLU; template class PastixLLT; template class PastixLDLT; namespace internal { template struct pastix_traits; template struct pastix_traits< PastixLU<_MatrixType> > { typedef _MatrixType MatrixType; typedef typename _MatrixType::Scalar Scalar; typedef typename _MatrixType::RealScalar RealScalar; typedef typename _MatrixType::StorageIndex StorageIndex; }; template struct pastix_traits< PastixLLT<_MatrixType,Options> > { typedef _MatrixType MatrixType; typedef typename _MatrixType::Scalar Scalar; typedef typename _MatrixType::RealScalar RealScalar; typedef typename _MatrixType::StorageIndex StorageIndex; }; template struct pastix_traits< PastixLDLT<_MatrixType,Options> > { typedef _MatrixType MatrixType; typedef typename _MatrixType::Scalar Scalar; typedef typename _MatrixType::RealScalar RealScalar; typedef typename _MatrixType::StorageIndex StorageIndex; }; inline void eigen_pastix(pastix_data_t **pastix_data, int pastix_comm, int n, int *ptr, int *idx, float *vals, int *perm, int * invp, float *x, int nbrhs, int *iparm, double *dparm) { if (n == 0) { ptr = NULL; idx = NULL; vals = NULL; } if (nbrhs == 0) {x = NULL; nbrhs=1;} s_pastix(pastix_data, pastix_comm, n, ptr, idx, vals, perm, invp, x, nbrhs, iparm, dparm); } inline void eigen_pastix(pastix_data_t **pastix_data, int pastix_comm, int n, int *ptr, int *idx, double *vals, int *perm, int * invp, double *x, int nbrhs, int *iparm, double *dparm) { if (n == 0) { ptr = NULL; idx = NULL; vals = NULL; } if (nbrhs == 0) {x = NULL; nbrhs=1;} d_pastix(pastix_data, pastix_comm, n, ptr, idx, vals, perm, invp, x, nbrhs, iparm, dparm); } inline void eigen_pastix(pastix_data_t **pastix_data, int pastix_comm, int n, int *ptr, int *idx, std::complex *vals, int *perm, int * invp, std::complex *x, int nbrhs, int *iparm, double *dparm) { if (n == 0) { ptr = NULL; idx = NULL; vals = NULL; } if (nbrhs == 0) {x = NULL; nbrhs=1;} c_pastix(pastix_data, pastix_comm, n, ptr, idx, reinterpret_cast(vals), perm, invp, reinterpret_cast(x), nbrhs, iparm, dparm); } inline void eigen_pastix(pastix_data_t **pastix_data, int pastix_comm, int n, int *ptr, int *idx, std::complex *vals, int *perm, int * invp, std::complex *x, int nbrhs, int *iparm, double *dparm) { if (n == 0) { ptr = NULL; idx = NULL; vals = NULL; } if (nbrhs == 0) {x = NULL; nbrhs=1;} z_pastix(pastix_data, pastix_comm, n, ptr, idx, reinterpret_cast(vals), perm, invp, reinterpret_cast(x), nbrhs, iparm, dparm); } // Convert the matrix to Fortran-style Numbering template void c_to_fortran_numbering (MatrixType& mat) { if ( !(mat.outerIndexPtr()[0]) ) { int i; for(i = 0; i <= mat.rows(); ++i) ++mat.outerIndexPtr()[i]; for(i = 0; i < mat.nonZeros(); ++i) ++mat.innerIndexPtr()[i]; } } // Convert to C-style Numbering template void fortran_to_c_numbering (MatrixType& mat) { // Check the Numbering if ( mat.outerIndexPtr()[0] == 1 ) { // Convert to C-style numbering int i; for(i = 0; i <= mat.rows(); ++i) --mat.outerIndexPtr()[i]; for(i = 0; i < mat.nonZeros(); ++i) --mat.innerIndexPtr()[i]; } } } // This is the base class to interface with PaStiX functions. // Users should not used this class directly. 
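A tiny standalone sketch of the index shift performed by c_to_fortran_numbering()/fortran_to_c_numbering() above on compressed-column index arrays (the function name and arrays here are illustrative):

#include <vector>

// Shift CSC index arrays from 0-based (C) to 1-based (Fortran) numbering, as PaStiX expects.
// The guard on outer[0] mirrors the library's check so the shift is never applied twice.
void c_to_fortran(std::vector<int>& outer, std::vector<int>& inner) {
  if (!outer.empty() && outer[0] == 0) {   // still 0-based?
    for (int& p : outer) ++p;
    for (int& i : inner) ++i;
  }
}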
template class PastixBase : public SparseSolverBase { protected: typedef SparseSolverBase Base; using Base::derived; using Base::m_isInitialized; public: using Base::_solve_impl; typedef typename internal::pastix_traits::MatrixType _MatrixType; typedef _MatrixType MatrixType; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; typedef typename MatrixType::StorageIndex StorageIndex; typedef Matrix Vector; typedef SparseMatrix ColSpMatrix; enum { ColsAtCompileTime = MatrixType::ColsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; public: PastixBase() : m_initisOk(false), m_analysisIsOk(false), m_factorizationIsOk(false), m_pastixdata(0), m_size(0) { init(); } ~PastixBase() { clean(); } template bool _solve_impl(const MatrixBase &b, MatrixBase &x) const; /** Returns a reference to the integer vector IPARM of PaStiX parameters * to modify the default parameters. * The statistics related to the different phases of factorization and solve are saved here as well * \sa analyzePattern() factorize() */ Array& iparm() { return m_iparm; } /** Return a reference to a particular index parameter of the IPARM vector * \sa iparm() */ int& iparm(int idxparam) { return m_iparm(idxparam); } /** Returns a reference to the double vector DPARM of PaStiX parameters * The statistics related to the different phases of factorization and solve are saved here as well * \sa analyzePattern() factorize() */ Array& dparm() { return m_dparm; } /** Return a reference to a particular index parameter of the DPARM vector * \sa dparm() */ double& dparm(int idxparam) { return m_dparm(idxparam); } inline Index cols() const { return m_size; } inline Index rows() const { return m_size; } /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful, * \c NumericalIssue if the PaStiX reports a problem * \c InvalidInput if the input matrix is invalid * * \sa iparm() */ ComputationInfo info() const { eigen_assert(m_isInitialized && "Decomposition is not initialized."); return m_info; } protected: // Initialize the Pastix data structure, check the matrix void init(); // Compute the ordering and the symbolic factorization void analyzePattern(ColSpMatrix& mat); // Compute the numerical factorization void factorize(ColSpMatrix& mat); // Free all the data allocated by Pastix void clean() { eigen_assert(m_initisOk && "The Pastix structure should be allocated first"); m_iparm(IPARM_START_TASK) = API_TASK_CLEAN; m_iparm(IPARM_END_TASK) = API_TASK_CLEAN; internal::eigen_pastix(&m_pastixdata, MPI_COMM_WORLD, 0, 0, 0, (Scalar*)0, m_perm.data(), m_invp.data(), 0, 0, m_iparm.data(), m_dparm.data()); } void compute(ColSpMatrix& mat); int m_initisOk; int m_analysisIsOk; int m_factorizationIsOk; mutable ComputationInfo m_info; mutable pastix_data_t *m_pastixdata; // Data structure for pastix mutable int m_comm; // The MPI communicator identifier mutable Array m_iparm; // integer vector for the input parameters mutable Array m_dparm; // Scalar vector for the input parameters mutable Matrix m_perm; // Permutation vector mutable Matrix m_invp; // Inverse permutation vector mutable int m_size; // Size of the matrix }; /** Initialize the PaStiX data structure. 
*A first call to this function fills iparm and dparm with the default PaStiX parameters * \sa iparm() dparm() */ template void PastixBase::init() { m_size = 0; m_iparm.setZero(IPARM_SIZE); m_dparm.setZero(DPARM_SIZE); m_iparm(IPARM_MODIFY_PARAMETER) = API_NO; pastix(&m_pastixdata, MPI_COMM_WORLD, 0, 0, 0, 0, 0, 0, 0, 1, m_iparm.data(), m_dparm.data()); m_iparm[IPARM_MATRIX_VERIFICATION] = API_NO; m_iparm[IPARM_VERBOSE] = API_VERBOSE_NOT; m_iparm[IPARM_ORDERING] = API_ORDER_SCOTCH; m_iparm[IPARM_INCOMPLETE] = API_NO; m_iparm[IPARM_OOC_LIMIT] = 2000; m_iparm[IPARM_RHS_MAKING] = API_RHS_B; m_iparm(IPARM_MATRIX_VERIFICATION) = API_NO; m_iparm(IPARM_START_TASK) = API_TASK_INIT; m_iparm(IPARM_END_TASK) = API_TASK_INIT; internal::eigen_pastix(&m_pastixdata, MPI_COMM_WORLD, 0, 0, 0, (Scalar*)0, 0, 0, 0, 0, m_iparm.data(), m_dparm.data()); // Check the returned error if(m_iparm(IPARM_ERROR_NUMBER)) { m_info = InvalidInput; m_initisOk = false; } else { m_info = Success; m_initisOk = true; } } template void PastixBase::compute(ColSpMatrix& mat) { eigen_assert(mat.rows() == mat.cols() && "The input matrix should be squared"); analyzePattern(mat); factorize(mat); m_iparm(IPARM_MATRIX_VERIFICATION) = API_NO; } template void PastixBase::analyzePattern(ColSpMatrix& mat) { eigen_assert(m_initisOk && "The initialization of PaSTiX failed"); // clean previous calls if(m_size>0) clean(); m_size = internal::convert_index(mat.rows()); m_perm.resize(m_size); m_invp.resize(m_size); m_iparm(IPARM_START_TASK) = API_TASK_ORDERING; m_iparm(IPARM_END_TASK) = API_TASK_ANALYSE; internal::eigen_pastix(&m_pastixdata, MPI_COMM_WORLD, m_size, mat.outerIndexPtr(), mat.innerIndexPtr(), mat.valuePtr(), m_perm.data(), m_invp.data(), 0, 0, m_iparm.data(), m_dparm.data()); // Check the returned error if(m_iparm(IPARM_ERROR_NUMBER)) { m_info = NumericalIssue; m_analysisIsOk = false; } else { m_info = Success; m_analysisIsOk = true; } } template void PastixBase::factorize(ColSpMatrix& mat) { // if(&m_cpyMat != &mat) m_cpyMat = mat; eigen_assert(m_analysisIsOk && "The analysis phase should be called before the factorization phase"); m_iparm(IPARM_START_TASK) = API_TASK_NUMFACT; m_iparm(IPARM_END_TASK) = API_TASK_NUMFACT; m_size = internal::convert_index(mat.rows()); internal::eigen_pastix(&m_pastixdata, MPI_COMM_WORLD, m_size, mat.outerIndexPtr(), mat.innerIndexPtr(), mat.valuePtr(), m_perm.data(), m_invp.data(), 0, 0, m_iparm.data(), m_dparm.data()); // Check the returned error if(m_iparm(IPARM_ERROR_NUMBER)) { m_info = NumericalIssue; m_factorizationIsOk = false; m_isInitialized = false; } else { m_info = Success; m_factorizationIsOk = true; m_isInitialized = true; } } /* Solve the system */ template template bool PastixBase::_solve_impl(const MatrixBase &b, MatrixBase &x) const { eigen_assert(m_isInitialized && "The matrix should be factorized first"); EIGEN_STATIC_ASSERT((Dest::Flags&RowMajorBit)==0, THIS_METHOD_IS_ONLY_FOR_COLUMN_MAJOR_MATRICES); int rhs = 1; x = b; /* on return, x is overwritten by the computed solution */ for (int i = 0; i < b.cols(); i++){ m_iparm[IPARM_START_TASK] = API_TASK_SOLVE; m_iparm[IPARM_END_TASK] = API_TASK_REFINE; internal::eigen_pastix(&m_pastixdata, MPI_COMM_WORLD, internal::convert_index(x.rows()), 0, 0, 0, m_perm.data(), m_invp.data(), &x(0, i), rhs, m_iparm.data(), m_dparm.data()); } // Check the returned error m_info = m_iparm(IPARM_ERROR_NUMBER)==0 ? 
Success : NumericalIssue; return m_iparm(IPARM_ERROR_NUMBER)==0; } /** \ingroup PaStiXSupport_Module * \class PastixLU * \brief Sparse direct LU solver based on PaStiX library * * This class is used to solve the linear systems A.X = B with a supernodal LU * factorization in the PaStiX library. The matrix A should be squared and nonsingular * PaStiX requires that the matrix A has a symmetric structural pattern. * This interface can symmetrize the input matrix otherwise. * The vectors or matrices X and B can be either dense or sparse. * * \tparam _MatrixType the type of the sparse matrix A, it must be a SparseMatrix<> * \tparam IsStrSym Indicates if the input matrix has a symmetric pattern, default is false * NOTE : Note that if the analysis and factorization phase are called separately, * the input matrix will be symmetrized at each call, hence it is advised to * symmetrize the matrix in a end-user program and set \p IsStrSym to true * * \implsparsesolverconcept * * \sa \ref TutorialSparseSolverConcept, class SparseLU * */ template class PastixLU : public PastixBase< PastixLU<_MatrixType> > { public: typedef _MatrixType MatrixType; typedef PastixBase > Base; typedef typename Base::ColSpMatrix ColSpMatrix; typedef typename MatrixType::StorageIndex StorageIndex; public: PastixLU() : Base() { init(); } explicit PastixLU(const MatrixType& matrix):Base() { init(); compute(matrix); } /** Compute the LU supernodal factorization of \p matrix. * iparm and dparm can be used to tune the PaStiX parameters. * see the PaStiX user's manual * \sa analyzePattern() factorize() */ void compute (const MatrixType& matrix) { m_structureIsUptodate = false; ColSpMatrix temp; grabMatrix(matrix, temp); Base::compute(temp); } /** Compute the LU symbolic factorization of \p matrix using its sparsity pattern. * Several ordering methods can be used at this step. See the PaStiX user's manual. * The result of this operation can be used with successive matrices having the same pattern as \p matrix * \sa factorize() */ void analyzePattern(const MatrixType& matrix) { m_structureIsUptodate = false; ColSpMatrix temp; grabMatrix(matrix, temp); Base::analyzePattern(temp); } /** Compute the LU supernodal factorization of \p matrix * WARNING The matrix \p matrix should have the same structural pattern * as the same used in the analysis phase. * \sa analyzePattern() */ void factorize(const MatrixType& matrix) { ColSpMatrix temp; grabMatrix(matrix, temp); Base::factorize(temp); } protected: void init() { m_structureIsUptodate = false; m_iparm(IPARM_SYM) = API_SYM_NO; m_iparm(IPARM_FACTORIZATION) = API_FACT_LU; } void grabMatrix(const MatrixType& matrix, ColSpMatrix& out) { if(IsStrSym) out = matrix; else { if(!m_structureIsUptodate) { // update the transposed structure m_transposedStructure = matrix.transpose(); // Set the elements of the matrix to zero for (Index j=0; j * \tparam UpLo The part of the matrix to use : Lower or Upper. 
The default is Lower as required by PaStiX * * \implsparsesolverconcept * * \sa \ref TutorialSparseSolverConcept, class SimplicialLLT */ template class PastixLLT : public PastixBase< PastixLLT<_MatrixType, _UpLo> > { public: typedef _MatrixType MatrixType; typedef PastixBase > Base; typedef typename Base::ColSpMatrix ColSpMatrix; public: enum { UpLo = _UpLo }; PastixLLT() : Base() { init(); } explicit PastixLLT(const MatrixType& matrix):Base() { init(); compute(matrix); } /** Compute the L factor of the LL^T supernodal factorization of \p matrix * \sa analyzePattern() factorize() */ void compute (const MatrixType& matrix) { ColSpMatrix temp; grabMatrix(matrix, temp); Base::compute(temp); } /** Compute the LL^T symbolic factorization of \p matrix using its sparsity pattern * The result of this operation can be used with successive matrices having the same pattern as \p matrix * \sa factorize() */ void analyzePattern(const MatrixType& matrix) { ColSpMatrix temp; grabMatrix(matrix, temp); Base::analyzePattern(temp); } /** Compute the LL^T supernodal numerical factorization of \p matrix * \sa analyzePattern() */ void factorize(const MatrixType& matrix) { ColSpMatrix temp; grabMatrix(matrix, temp); Base::factorize(temp); } protected: using Base::m_iparm; void init() { m_iparm(IPARM_SYM) = API_SYM_YES; m_iparm(IPARM_FACTORIZATION) = API_FACT_LLT; } void grabMatrix(const MatrixType& matrix, ColSpMatrix& out) { out.resize(matrix.rows(), matrix.cols()); // Pastix supports only lower, column-major matrices out.template selfadjointView() = matrix.template selfadjointView(); internal::c_to_fortran_numbering(out); } }; /** \ingroup PaStiXSupport_Module * \class PastixLDLT * \brief A sparse direct supernodal Cholesky (LLT) factorization and solver based on the PaStiX library * * This class is used to solve the linear systems A.X = B via a LDL^T supernodal Cholesky factorization * available in the PaStiX library. The matrix A should be symmetric and positive definite * WARNING Selfadjoint complex matrices are not supported in the current version of PaStiX * The vectors or matrices X and B can be either dense or sparse * * \tparam MatrixType the type of the sparse matrix A, it must be a SparseMatrix<> * \tparam UpLo The part of the matrix to use : Lower or Upper. 
The default is Lower as required by PaStiX * * \implsparsesolverconcept * * \sa \ref TutorialSparseSolverConcept, class SimplicialLDLT */ template class PastixLDLT : public PastixBase< PastixLDLT<_MatrixType, _UpLo> > { public: typedef _MatrixType MatrixType; typedef PastixBase > Base; typedef typename Base::ColSpMatrix ColSpMatrix; public: enum { UpLo = _UpLo }; PastixLDLT():Base() { init(); } explicit PastixLDLT(const MatrixType& matrix):Base() { init(); compute(matrix); } /** Compute the L and D factors of the LDL^T factorization of \p matrix * \sa analyzePattern() factorize() */ void compute (const MatrixType& matrix) { ColSpMatrix temp; grabMatrix(matrix, temp); Base::compute(temp); } /** Compute the LDL^T symbolic factorization of \p matrix using its sparsity pattern * The result of this operation can be used with successive matrices having the same pattern as \p matrix * \sa factorize() */ void analyzePattern(const MatrixType& matrix) { ColSpMatrix temp; grabMatrix(matrix, temp); Base::analyzePattern(temp); } /** Compute the LDL^T supernodal numerical factorization of \p matrix * */ void factorize(const MatrixType& matrix) { ColSpMatrix temp; grabMatrix(matrix, temp); Base::factorize(temp); } protected: using Base::m_iparm; void init() { m_iparm(IPARM_SYM) = API_SYM_YES; m_iparm(IPARM_FACTORIZATION) = API_FACT_LDLT; } void grabMatrix(const MatrixType& matrix, ColSpMatrix& out) { // Pastix supports only lower, column-major matrices out.resize(matrix.rows(), matrix.cols()); out.template selfadjointView() = matrix.template selfadjointView(); internal::c_to_fortran_numbering(out); } }; } // end namespace Eigen #endif piqp-octave/src/eigen/Eigen/src/Eigenvalues/0000755000175100001770000000000014107270226020617 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/Eigenvalues/RealSchur_LAPACKE.h0000644000175100001770000000710214107270226024000 0ustar runnerdocker/* Copyright (c) 2011, Intel Corporation. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Intel Corporation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
******************************************************************************** * Content : Eigen bindings to LAPACKe * Real Schur needed to real unsymmetrical eigenvalues/eigenvectors. ******************************************************************************** */ #ifndef EIGEN_REAL_SCHUR_LAPACKE_H #define EIGEN_REAL_SCHUR_LAPACKE_H namespace Eigen { /** \internal Specialization for the data types supported by LAPACKe */ #define EIGEN_LAPACKE_SCHUR_REAL(EIGTYPE, LAPACKE_TYPE, LAPACKE_PREFIX, LAPACKE_PREFIX_U, EIGCOLROW, LAPACKE_COLROW) \ template<> template inline \ RealSchur >& \ RealSchur >::compute(const EigenBase& matrix, bool computeU) \ { \ eigen_assert(matrix.cols() == matrix.rows()); \ \ lapack_int n = internal::convert_index(matrix.cols()), sdim, info; \ lapack_int matrix_order = LAPACKE_COLROW; \ char jobvs, sort='N'; \ LAPACK_##LAPACKE_PREFIX_U##_SELECT2 select = 0; \ jobvs = (computeU) ? 'V' : 'N'; \ m_matU.resize(n, n); \ lapack_int ldvs = internal::convert_index(m_matU.outerStride()); \ m_matT = matrix; \ lapack_int lda = internal::convert_index(m_matT.outerStride()); \ Matrix wr, wi; \ wr.resize(n, 1); wi.resize(n, 1); \ info = LAPACKE_##LAPACKE_PREFIX##gees( matrix_order, jobvs, sort, select, n, (LAPACKE_TYPE*)m_matT.data(), lda, &sdim, (LAPACKE_TYPE*)wr.data(), (LAPACKE_TYPE*)wi.data(), (LAPACKE_TYPE*)m_matU.data(), ldvs ); \ if(info == 0) \ m_info = Success; \ else \ m_info = NoConvergence; \ \ m_isInitialized = true; \ m_matUisUptodate = computeU; \ return *this; \ \ } EIGEN_LAPACKE_SCHUR_REAL(double, double, d, D, ColMajor, LAPACK_COL_MAJOR) EIGEN_LAPACKE_SCHUR_REAL(float, float, s, S, ColMajor, LAPACK_COL_MAJOR) EIGEN_LAPACKE_SCHUR_REAL(double, double, d, D, RowMajor, LAPACK_ROW_MAJOR) EIGEN_LAPACKE_SCHUR_REAL(float, float, s, S, RowMajor, LAPACK_ROW_MAJOR) } // end namespace Eigen #endif // EIGEN_REAL_SCHUR_LAPACKE_H piqp-octave/src/eigen/Eigen/src/Eigenvalues/SelfAdjointEigenSolver.h0000644000175100001770000010455614107270226025350 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2010 Gael Guennebaud // Copyright (C) 2010 Jitse Niesen // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SELFADJOINTEIGENSOLVER_H #define EIGEN_SELFADJOINTEIGENSOLVER_H #include "./Tridiagonalization.h" namespace Eigen { template class GeneralizedSelfAdjointEigenSolver; namespace internal { template struct direct_selfadjoint_eigenvalues; template EIGEN_DEVICE_FUNC ComputationInfo computeFromTridiagonal_impl(DiagType& diag, SubDiagType& subdiag, const Index maxIterations, bool computeEigenvectors, MatrixType& eivec); } /** \eigenvalues_module \ingroup Eigenvalues_Module * * * \class SelfAdjointEigenSolver * * \brief Computes eigenvalues and eigenvectors of selfadjoint matrices * * \tparam _MatrixType the type of the matrix of which we are computing the * eigendecomposition; this is expected to be an instantiation of the Matrix * class template. * * A matrix \f$ A \f$ is selfadjoint if it equals its adjoint. For real * matrices, this means that the matrix is symmetric: it equals its * transpose. This class computes the eigenvalues and eigenvectors of a * selfadjoint matrix. These are the scalars \f$ \lambda \f$ and vectors * \f$ v \f$ such that \f$ Av = \lambda v \f$. The eigenvalues of a * selfadjoint matrix are always real. 
If \f$ D \f$ is a diagonal matrix with * the eigenvalues on the diagonal, and \f$ V \f$ is a matrix with the * eigenvectors as its columns, then \f$ A = V D V^{-1} \f$. This is called the * eigendecomposition. * * For a selfadjoint matrix, \f$ V \f$ is unitary, meaning its inverse is equal * to its adjoint, \f$ V^{-1} = V^{\dagger} \f$. If \f$ A \f$ is real, then * \f$ V \f$ is also real and therefore orthogonal, meaning its inverse is * equal to its transpose, \f$ V^{-1} = V^T \f$. * * The algorithm exploits the fact that the matrix is selfadjoint, making it * faster and more accurate than the general purpose eigenvalue algorithms * implemented in EigenSolver and ComplexEigenSolver. * * Only the \b lower \b triangular \b part of the input matrix is referenced. * * Call the function compute() to compute the eigenvalues and eigenvectors of * a given matrix. Alternatively, you can use the * SelfAdjointEigenSolver(const MatrixType&, int) constructor which computes * the eigenvalues and eigenvectors at construction time. Once the eigenvalue * and eigenvectors are computed, they can be retrieved with the eigenvalues() * and eigenvectors() functions. * * The documentation for SelfAdjointEigenSolver(const MatrixType&, int) * contains an example of the typical use of this class. * * To solve the \em generalized eigenvalue problem \f$ Av = \lambda Bv \f$ and * the likes, see the class GeneralizedSelfAdjointEigenSolver. * * \sa MatrixBase::eigenvalues(), class EigenSolver, class ComplexEigenSolver */ template class SelfAdjointEigenSolver { public: typedef _MatrixType MatrixType; enum { Size = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, Options = MatrixType::Options, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; /** \brief Scalar type for matrices of type \p _MatrixType. */ typedef typename MatrixType::Scalar Scalar; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 typedef Matrix EigenvectorsType; /** \brief Real scalar type for \p _MatrixType. * * This is just \c Scalar if #Scalar is real (e.g., \c float or * \c double), and the type of the real part of \c Scalar if #Scalar is * complex. */ typedef typename NumTraits::Real RealScalar; friend struct internal::direct_selfadjoint_eigenvalues::IsComplex>; /** \brief Type for vector of eigenvalues as returned by eigenvalues(). * * This is a column vector with entries of type #RealScalar. * The length of the vector is the size of \p _MatrixType. */ typedef typename internal::plain_col_type::type RealVectorType; typedef Tridiagonalization TridiagonalizationType; typedef typename TridiagonalizationType::SubDiagonalType SubDiagonalType; /** \brief Default constructor for fixed-size matrices. * * The default constructor is useful in cases in which the user intends to * perform decompositions via compute(). This constructor * can only be used if \p _MatrixType is a fixed-size matrix; use * SelfAdjointEigenSolver(Index) for dynamic-size matrices. * * Example: \include SelfAdjointEigenSolver_SelfAdjointEigenSolver.cpp * Output: \verbinclude SelfAdjointEigenSolver_SelfAdjointEigenSolver.out */ EIGEN_DEVICE_FUNC SelfAdjointEigenSolver() : m_eivec(), m_eivalues(), m_subdiag(), m_hcoeffs(), m_info(InvalidInput), m_isInitialized(false), m_eigenvectorsOk(false) { } /** \brief Constructor, pre-allocates memory for dynamic-size matrices. * * \param [in] size Positive integer, size of the matrix whose * eigenvalues and eigenvectors will be computed. 
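 *
 * A short hedged sketch of this pre-allocation pattern (the 50x50 size and the
 * random matrix are illustrative assumptions, not part of the API contract):
 * \code
 * Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(50);   // pre-allocate for 50x50 problems
 * Eigen::MatrixXd A = Eigen::MatrixXd::Random(50,50);
 * es.compute(A);                                           // only the lower triangular part of A is referenced
 * \endcode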
* * This constructor is useful for dynamic-size matrices, when the user * intends to perform decompositions via compute(). The \p size * parameter is only used as a hint. It is not an error to give a wrong * \p size, but it may impair performance. * * \sa compute() for an example */ EIGEN_DEVICE_FUNC explicit SelfAdjointEigenSolver(Index size) : m_eivec(size, size), m_eivalues(size), m_subdiag(size > 1 ? size - 1 : 1), m_hcoeffs(size > 1 ? size - 1 : 1), m_isInitialized(false), m_eigenvectorsOk(false) {} /** \brief Constructor; computes eigendecomposition of given matrix. * * \param[in] matrix Selfadjoint matrix whose eigendecomposition is to * be computed. Only the lower triangular part of the matrix is referenced. * \param[in] options Can be #ComputeEigenvectors (default) or #EigenvaluesOnly. * * This constructor calls compute(const MatrixType&, int) to compute the * eigenvalues of the matrix \p matrix. The eigenvectors are computed if * \p options equals #ComputeEigenvectors. * * Example: \include SelfAdjointEigenSolver_SelfAdjointEigenSolver_MatrixType.cpp * Output: \verbinclude SelfAdjointEigenSolver_SelfAdjointEigenSolver_MatrixType.out * * \sa compute(const MatrixType&, int) */ template EIGEN_DEVICE_FUNC explicit SelfAdjointEigenSolver(const EigenBase& matrix, int options = ComputeEigenvectors) : m_eivec(matrix.rows(), matrix.cols()), m_eivalues(matrix.cols()), m_subdiag(matrix.rows() > 1 ? matrix.rows() - 1 : 1), m_hcoeffs(matrix.cols() > 1 ? matrix.cols() - 1 : 1), m_isInitialized(false), m_eigenvectorsOk(false) { compute(matrix.derived(), options); } /** \brief Computes eigendecomposition of given matrix. * * \param[in] matrix Selfadjoint matrix whose eigendecomposition is to * be computed. Only the lower triangular part of the matrix is referenced. * \param[in] options Can be #ComputeEigenvectors (default) or #EigenvaluesOnly. * \returns Reference to \c *this * * This function computes the eigenvalues of \p matrix. The eigenvalues() * function can be used to retrieve them. If \p options equals #ComputeEigenvectors, * then the eigenvectors are also computed and can be retrieved by * calling eigenvectors(). * * This implementation uses a symmetric QR algorithm. The matrix is first * reduced to tridiagonal form using the Tridiagonalization class. The * tridiagonal matrix is then brought to diagonal form with implicit * symmetric QR steps with Wilkinson shift. Details can be found in * Section 8.3 of Golub \& Van Loan, %Matrix Computations. * * The cost of the computation is about \f$ 9n^3 \f$ if the eigenvectors * are required and \f$ 4n^3/3 \f$ if they are not required. * * This method reuses the memory in the SelfAdjointEigenSolver object that * was allocated when the object was constructed, if the size of the * matrix does not change. * * Example: \include SelfAdjointEigenSolver_compute_MatrixType.cpp * Output: \verbinclude SelfAdjointEigenSolver_compute_MatrixType.out * * \sa SelfAdjointEigenSolver(const MatrixType&, int) */ template EIGEN_DEVICE_FUNC SelfAdjointEigenSolver& compute(const EigenBase& matrix, int options = ComputeEigenvectors); /** \brief Computes eigendecomposition of given matrix using a closed-form algorithm * * This is a variant of compute(const MatrixType&, int options) which * directly solves the underlying polynomial equation. * * Currently only 2x2 and 3x3 matrices for which the sizes are known at compile time are supported (e.g., Matrix3d). 
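 *
 * A minimal usage sketch (the 3x3 matrix below is an illustrative assumption):
 * \code
 * Eigen::Matrix3d A;
 * A << 2, -1,  0,
 *     -1,  2, -1,
 *      0, -1,  2;                                    // symmetric 3x3 matrix
 * Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es;
 * es.computeDirect(A);                               // closed-form path for fixed-size 2x2/3x3
 * Eigen::Vector3d evals = es.eigenvalues();
 * \endcode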
* * This method is usually significantly faster than the QR iterative algorithm * but it might also be less accurate. It is also worth noting that * for 3x3 matrices it involves trigonometric operations which are * not necessarily available for all scalar types. * * For the 3x3 case, we observed the following worst case relative error regarding the eigenvalues: * - double: 1e-8 * - float: 1e-3 * * \sa compute(const MatrixType&, int options) */ EIGEN_DEVICE_FUNC SelfAdjointEigenSolver& computeDirect(const MatrixType& matrix, int options = ComputeEigenvectors); /** *\brief Computes the eigen decomposition from a tridiagonal symmetric matrix * * \param[in] diag The vector containing the diagonal of the matrix. * \param[in] subdiag The subdiagonal of the matrix. * \param[in] options Can be #ComputeEigenvectors (default) or #EigenvaluesOnly. * \returns Reference to \c *this * * This function assumes that the matrix has been reduced to tridiagonal form. * * \sa compute(const MatrixType&, int) for more information */ SelfAdjointEigenSolver& computeFromTridiagonal(const RealVectorType& diag, const SubDiagonalType& subdiag , int options=ComputeEigenvectors); /** \brief Returns the eigenvectors of given matrix. * * \returns A const reference to the matrix whose columns are the eigenvectors. * * \pre The eigenvectors have been computed before. * * Column \f$ k \f$ of the returned matrix is an eigenvector corresponding * to eigenvalue number \f$ k \f$ as returned by eigenvalues(). The * eigenvectors are normalized to have (Euclidean) norm equal to one. If * this object was used to solve the eigenproblem for the selfadjoint * matrix \f$ A \f$, then the matrix returned by this function is the * matrix \f$ V \f$ in the eigendecomposition \f$ A = V D V^{-1} \f$. * * For a selfadjoint matrix, \f$ V \f$ is unitary, meaning its inverse is equal * to its adjoint, \f$ V^{-1} = V^{\dagger} \f$. If \f$ A \f$ is real, then * \f$ V \f$ is also real and therefore orthogonal, meaning its inverse is * equal to its transpose, \f$ V^{-1} = V^T \f$. * * Example: \include SelfAdjointEigenSolver_eigenvectors.cpp * Output: \verbinclude SelfAdjointEigenSolver_eigenvectors.out * * \sa eigenvalues() */ EIGEN_DEVICE_FUNC const EigenvectorsType& eigenvectors() const { eigen_assert(m_isInitialized && "SelfAdjointEigenSolver is not initialized."); eigen_assert(m_eigenvectorsOk && "The eigenvectors have not been computed together with the eigenvalues."); return m_eivec; } /** \brief Returns the eigenvalues of given matrix. * * \returns A const reference to the column vector containing the eigenvalues. * * \pre The eigenvalues have been computed before. * * The eigenvalues are repeated according to their algebraic multiplicity, * so there are as many eigenvalues as rows in the matrix. The eigenvalues * are sorted in increasing order. * * Example: \include SelfAdjointEigenSolver_eigenvalues.cpp * Output: \verbinclude SelfAdjointEigenSolver_eigenvalues.out * * \sa eigenvectors(), MatrixBase::eigenvalues() */ EIGEN_DEVICE_FUNC const RealVectorType& eigenvalues() const { eigen_assert(m_isInitialized && "SelfAdjointEigenSolver is not initialized."); return m_eivalues; } /** \brief Computes the positive-definite square root of the matrix. * * \returns the positive-definite square root of the matrix * * \pre The eigenvalues and eigenvectors of a positive-definite matrix * have been computed before. * * The square root of a positive-definite matrix \f$ A \f$ is the * positive-definite matrix whose square equals \f$ A \f$. 
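 *
 * A brief hedged illustration (the 2x2 matrix is an assumption chosen to be positive definite):
 * \code
 * Eigen::Matrix2d A;
 * A << 4, 1,
 *      1, 3;
 * Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> es(A);
 * Eigen::Matrix2d S = es.operatorSqrt();   // S is positive definite and S*S reproduces A up to rounding
 * \endcode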
This function * uses the eigendecomposition \f$ A = V D V^{-1} \f$ to compute the * square root as \f$ A^{1/2} = V D^{1/2} V^{-1} \f$. * * Example: \include SelfAdjointEigenSolver_operatorSqrt.cpp * Output: \verbinclude SelfAdjointEigenSolver_operatorSqrt.out * * \sa operatorInverseSqrt(), MatrixFunctions Module */ EIGEN_DEVICE_FUNC MatrixType operatorSqrt() const { eigen_assert(m_isInitialized && "SelfAdjointEigenSolver is not initialized."); eigen_assert(m_eigenvectorsOk && "The eigenvectors have not been computed together with the eigenvalues."); return m_eivec * m_eivalues.cwiseSqrt().asDiagonal() * m_eivec.adjoint(); } /** \brief Computes the inverse square root of the matrix. * * \returns the inverse positive-definite square root of the matrix * * \pre The eigenvalues and eigenvectors of a positive-definite matrix * have been computed before. * * This function uses the eigendecomposition \f$ A = V D V^{-1} \f$ to * compute the inverse square root as \f$ V D^{-1/2} V^{-1} \f$. This is * cheaper than first computing the square root with operatorSqrt() and * then its inverse with MatrixBase::inverse(). * * Example: \include SelfAdjointEigenSolver_operatorInverseSqrt.cpp * Output: \verbinclude SelfAdjointEigenSolver_operatorInverseSqrt.out * * \sa operatorSqrt(), MatrixBase::inverse(), MatrixFunctions Module */ EIGEN_DEVICE_FUNC MatrixType operatorInverseSqrt() const { eigen_assert(m_isInitialized && "SelfAdjointEigenSolver is not initialized."); eigen_assert(m_eigenvectorsOk && "The eigenvectors have not been computed together with the eigenvalues."); return m_eivec * m_eivalues.cwiseInverse().cwiseSqrt().asDiagonal() * m_eivec.adjoint(); } /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful, \c NoConvergence otherwise. */ EIGEN_DEVICE_FUNC ComputationInfo info() const { eigen_assert(m_isInitialized && "SelfAdjointEigenSolver is not initialized."); return m_info; } /** \brief Maximum number of iterations. * * The algorithm terminates if it does not converge within m_maxIterations * n iterations, where n * denotes the size of the matrix. This value is currently set to 30 (copied from LAPACK). */ static const int m_maxIterations = 30; protected: static EIGEN_DEVICE_FUNC void check_template_parameters() { EIGEN_STATIC_ASSERT_NON_INTEGER(Scalar); } EigenvectorsType m_eivec; RealVectorType m_eivalues; typename TridiagonalizationType::SubDiagonalType m_subdiag; typename TridiagonalizationType::CoeffVectorType m_hcoeffs; ComputationInfo m_info; bool m_isInitialized; bool m_eigenvectorsOk; }; namespace internal { /** \internal * * \eigenvalues_module \ingroup Eigenvalues_Module * * Performs a QR step on a tridiagonal symmetric matrix represented as a * pair of two vectors \a diag and \a subdiag. * * \param diag the diagonal part of the input selfadjoint tridiagonal matrix * \param subdiag the sub-diagonal part of the input selfadjoint tridiagonal matrix * \param start starting index of the submatrix to work on * \param end last+1 index of the submatrix to work on * \param matrixQ pointer to the column-major matrix holding the eigenvectors, can be 0 * \param n size of the input matrix * * For compilation efficiency reasons, this procedure does not use eigen expression * for its arguments. 
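 *
 * Conceptually, each sweep chases a bulge down the subdiagonal with Givens rotations.
 * A hedged, self-contained sketch of one such rotation (the two values are illustrative
 * assumptions, and this is not the routine's actual code path):
 * \code
 * double x = 1.5, z = 0.3;            // pair of entries to be rotated
 * Eigen::JacobiRotation<double> rot;
 * rot.makeGivens(x, z);               // choose c, s so that the second entry is annihilated
 * // the tridiagonal entries are then updated in place as T = G^T * T * G
 * \endcode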
* * Implemented from Golub's "Matrix Computations", algorithm 8.3.2: * "implicit symmetric QR step with Wilkinson shift" */ template EIGEN_DEVICE_FUNC static void tridiagonal_qr_step(RealScalar* diag, RealScalar* subdiag, Index start, Index end, Scalar* matrixQ, Index n); } template template EIGEN_DEVICE_FUNC SelfAdjointEigenSolver& SelfAdjointEigenSolver ::compute(const EigenBase& a_matrix, int options) { check_template_parameters(); const InputType &matrix(a_matrix.derived()); EIGEN_USING_STD(abs); eigen_assert(matrix.cols() == matrix.rows()); eigen_assert((options&~(EigVecMask|GenEigMask))==0 && (options&EigVecMask)!=EigVecMask && "invalid option parameter"); bool computeEigenvectors = (options&ComputeEigenvectors)==ComputeEigenvectors; Index n = matrix.cols(); m_eivalues.resize(n,1); if(n==1) { m_eivec = matrix; m_eivalues.coeffRef(0,0) = numext::real(m_eivec.coeff(0,0)); if(computeEigenvectors) m_eivec.setOnes(n,n); m_info = Success; m_isInitialized = true; m_eigenvectorsOk = computeEigenvectors; return *this; } // declare some aliases RealVectorType& diag = m_eivalues; EigenvectorsType& mat = m_eivec; // map the matrix coefficients to [-1:1] to avoid over- and underflow. mat = matrix.template triangularView(); RealScalar scale = mat.cwiseAbs().maxCoeff(); if(scale==RealScalar(0)) scale = RealScalar(1); mat.template triangularView() /= scale; m_subdiag.resize(n-1); m_hcoeffs.resize(n-1); internal::tridiagonalization_inplace(mat, diag, m_subdiag, m_hcoeffs, computeEigenvectors); m_info = internal::computeFromTridiagonal_impl(diag, m_subdiag, m_maxIterations, computeEigenvectors, m_eivec); // scale back the eigen values m_eivalues *= scale; m_isInitialized = true; m_eigenvectorsOk = computeEigenvectors; return *this; } template SelfAdjointEigenSolver& SelfAdjointEigenSolver ::computeFromTridiagonal(const RealVectorType& diag, const SubDiagonalType& subdiag , int options) { //TODO : Add an option to scale the values beforehand bool computeEigenvectors = (options&ComputeEigenvectors)==ComputeEigenvectors; m_eivalues = diag; m_subdiag = subdiag; if (computeEigenvectors) { m_eivec.setIdentity(diag.size(), diag.size()); } m_info = internal::computeFromTridiagonal_impl(m_eivalues, m_subdiag, m_maxIterations, computeEigenvectors, m_eivec); m_isInitialized = true; m_eigenvectorsOk = computeEigenvectors; return *this; } namespace internal { /** * \internal * \brief Compute the eigendecomposition from a tridiagonal matrix * * \param[in,out] diag : On input, the diagonal of the matrix, on output the eigenvalues * \param[in,out] subdiag : The subdiagonal part of the matrix (entries are modified during the decomposition) * \param[in] maxIterations : the maximum number of iterations * \param[in] computeEigenvectors : whether the eigenvectors have to be computed or not * \param[out] eivec : The matrix to store the eigenvectors if computeEigenvectors==true. Must be allocated on input. 
* \returns \c Success or \c NoConvergence */ template EIGEN_DEVICE_FUNC ComputationInfo computeFromTridiagonal_impl(DiagType& diag, SubDiagType& subdiag, const Index maxIterations, bool computeEigenvectors, MatrixType& eivec) { ComputationInfo info; typedef typename MatrixType::Scalar Scalar; Index n = diag.size(); Index end = n-1; Index start = 0; Index iter = 0; // total number of iterations typedef typename DiagType::RealScalar RealScalar; const RealScalar considerAsZero = (std::numeric_limits::min)(); const RealScalar precision_inv = RealScalar(1)/NumTraits::epsilon(); while (end>0) { for (Index i = start; i0 && subdiag[end-1]==RealScalar(0)) { end--; } if (end<=0) break; // if we spent too many iterations, we give up iter++; if(iter > maxIterations * n) break; start = end - 1; while (start>0 && subdiag[start-1]!=0) start--; internal::tridiagonal_qr_step(diag.data(), subdiag.data(), start, end, computeEigenvectors ? eivec.data() : (Scalar*)0, n); } if (iter <= maxIterations * n) info = Success; else info = NoConvergence; // Sort eigenvalues and corresponding vectors. // TODO make the sort optional ? // TODO use a better sort algorithm !! if (info == Success) { for (Index i = 0; i < n-1; ++i) { Index k; diag.segment(i,n-i).minCoeff(&k); if (k > 0) { numext::swap(diag[i], diag[k+i]); if(computeEigenvectors) eivec.col(i).swap(eivec.col(k+i)); } } } return info; } template struct direct_selfadjoint_eigenvalues { EIGEN_DEVICE_FUNC static inline void run(SolverType& eig, const typename SolverType::MatrixType& A, int options) { eig.compute(A,options); } }; template struct direct_selfadjoint_eigenvalues { typedef typename SolverType::MatrixType MatrixType; typedef typename SolverType::RealVectorType VectorType; typedef typename SolverType::Scalar Scalar; typedef typename SolverType::EigenvectorsType EigenvectorsType; /** \internal * Computes the roots of the characteristic polynomial of \a m. * For numerical stability m.trace() should be near zero and to avoid over- or underflow m should be normalized. */ EIGEN_DEVICE_FUNC static inline void computeRoots(const MatrixType& m, VectorType& roots) { EIGEN_USING_STD(sqrt) EIGEN_USING_STD(atan2) EIGEN_USING_STD(cos) EIGEN_USING_STD(sin) const Scalar s_inv3 = Scalar(1)/Scalar(3); const Scalar s_sqrt3 = sqrt(Scalar(3)); // The characteristic equation is x^3 - c2*x^2 + c1*x - c0 = 0. The // eigenvalues are the roots to this equation, all guaranteed to be // real-valued, because the matrix is symmetric. Scalar c0 = m(0,0)*m(1,1)*m(2,2) + Scalar(2)*m(1,0)*m(2,0)*m(2,1) - m(0,0)*m(2,1)*m(2,1) - m(1,1)*m(2,0)*m(2,0) - m(2,2)*m(1,0)*m(1,0); Scalar c1 = m(0,0)*m(1,1) - m(1,0)*m(1,0) + m(0,0)*m(2,2) - m(2,0)*m(2,0) + m(1,1)*m(2,2) - m(2,1)*m(2,1); Scalar c2 = m(0,0) + m(1,1) + m(2,2); // Construct the parameters used in classifying the roots of the equation // and in solving the equation for the roots in closed form. Scalar c2_over_3 = c2*s_inv3; Scalar a_over_3 = (c2*c2_over_3 - c1)*s_inv3; a_over_3 = numext::maxi(a_over_3, Scalar(0)); Scalar half_b = Scalar(0.5)*(c0 + c2_over_3*(Scalar(2)*c2_over_3*c2_over_3 - c1)); Scalar q = a_over_3*a_over_3*a_over_3 - half_b*half_b; q = numext::maxi(q, Scalar(0)); // Compute the eigenvalues by solving for the roots of the polynomial. 
Scalar rho = sqrt(a_over_3); Scalar theta = atan2(sqrt(q),half_b)*s_inv3; // since sqrt(q) > 0, atan2 is in [0, pi] and theta is in [0, pi/3] Scalar cos_theta = cos(theta); Scalar sin_theta = sin(theta); // roots are already sorted, since cos is monotonically decreasing on [0, pi] roots(0) = c2_over_3 - rho*(cos_theta + s_sqrt3*sin_theta); // == 2*rho*cos(theta+2pi/3) roots(1) = c2_over_3 - rho*(cos_theta - s_sqrt3*sin_theta); // == 2*rho*cos(theta+ pi/3) roots(2) = c2_over_3 + Scalar(2)*rho*cos_theta; } EIGEN_DEVICE_FUNC static inline bool extract_kernel(MatrixType& mat, Ref res, Ref representative) { EIGEN_USING_STD(abs); EIGEN_USING_STD(sqrt); Index i0; // Find non-zero column i0 (by construction, there must exist a non zero coefficient on the diagonal): mat.diagonal().cwiseAbs().maxCoeff(&i0); // mat.col(i0) is a good candidate for an orthogonal vector to the current eigenvector, // so let's save it: representative = mat.col(i0); Scalar n0, n1; VectorType c0, c1; n0 = (c0 = representative.cross(mat.col((i0+1)%3))).squaredNorm(); n1 = (c1 = representative.cross(mat.col((i0+2)%3))).squaredNorm(); if(n0>n1) res = c0/sqrt(n0); else res = c1/sqrt(n1); return true; } EIGEN_DEVICE_FUNC static inline void run(SolverType& solver, const MatrixType& mat, int options) { eigen_assert(mat.cols() == 3 && mat.cols() == mat.rows()); eigen_assert((options&~(EigVecMask|GenEigMask))==0 && (options&EigVecMask)!=EigVecMask && "invalid option parameter"); bool computeEigenvectors = (options&ComputeEigenvectors)==ComputeEigenvectors; EigenvectorsType& eivecs = solver.m_eivec; VectorType& eivals = solver.m_eivalues; // Shift the matrix to the mean eigenvalue and map the matrix coefficients to [-1:1] to avoid over- and underflow. Scalar shift = mat.trace() / Scalar(3); // TODO Avoid this copy. Currently it is necessary to suppress bogus values when determining maxCoeff and for computing the eigenvectors later MatrixType scaledMat = mat.template selfadjointView(); scaledMat.diagonal().array() -= shift; Scalar scale = scaledMat.cwiseAbs().maxCoeff(); if(scale > 0) scaledMat /= scale; // TODO for scale==0 we could save the remaining operations // compute the eigenvalues computeRoots(scaledMat,eivals); // compute the eigenvectors if(computeEigenvectors) { if((eivals(2)-eivals(0))<=Eigen::NumTraits::epsilon()) { // All three eigenvalues are numerically the same eivecs.setIdentity(); } else { MatrixType tmp; tmp = scaledMat; // Compute the eigenvector of the most distinct eigenvalue Scalar d0 = eivals(2) - eivals(1); Scalar d1 = eivals(1) - eivals(0); Index k(0), l(2); if(d0 > d1) { numext::swap(k,l); d0 = d1; } // Compute the eigenvector of index k { tmp.diagonal().array () -= eivals(k); // By construction, 'tmp' is of rank 2, and its kernel corresponds to the respective eigenvector. extract_kernel(tmp, eivecs.col(k), eivecs.col(l)); } // Compute eigenvector of index l if(d0<=2*Eigen::NumTraits::epsilon()*d1) { // If d0 is too small, then the two other eigenvalues are numerically the same, // and thus we only have to ortho-normalize the near orthogonal vector we saved above. eivecs.col(l) -= eivecs.col(k).dot(eivecs.col(l))*eivecs.col(l); eivecs.col(l).normalize(); } else { tmp = scaledMat; tmp.diagonal().array () -= eivals(l); VectorType dummy; extract_kernel(tmp, eivecs.col(l), dummy); } // Compute last eigenvector from the other two eivecs.col(1) = eivecs.col(2).cross(eivecs.col(0)).normalized(); } } // Rescale back to the original size. 
eivals *= scale; eivals.array() += shift; solver.m_info = Success; solver.m_isInitialized = true; solver.m_eigenvectorsOk = computeEigenvectors; } }; // 2x2 direct eigenvalues decomposition, code from Hauke Heibel template struct direct_selfadjoint_eigenvalues { typedef typename SolverType::MatrixType MatrixType; typedef typename SolverType::RealVectorType VectorType; typedef typename SolverType::Scalar Scalar; typedef typename SolverType::EigenvectorsType EigenvectorsType; EIGEN_DEVICE_FUNC static inline void computeRoots(const MatrixType& m, VectorType& roots) { EIGEN_USING_STD(sqrt); const Scalar t0 = Scalar(0.5) * sqrt( numext::abs2(m(0,0)-m(1,1)) + Scalar(4)*numext::abs2(m(1,0))); const Scalar t1 = Scalar(0.5) * (m(0,0) + m(1,1)); roots(0) = t1 - t0; roots(1) = t1 + t0; } EIGEN_DEVICE_FUNC static inline void run(SolverType& solver, const MatrixType& mat, int options) { EIGEN_USING_STD(sqrt); EIGEN_USING_STD(abs); eigen_assert(mat.cols() == 2 && mat.cols() == mat.rows()); eigen_assert((options&~(EigVecMask|GenEigMask))==0 && (options&EigVecMask)!=EigVecMask && "invalid option parameter"); bool computeEigenvectors = (options&ComputeEigenvectors)==ComputeEigenvectors; EigenvectorsType& eivecs = solver.m_eivec; VectorType& eivals = solver.m_eivalues; // Shift the matrix to the mean eigenvalue and map the matrix coefficients to [-1:1] to avoid over- and underflow. Scalar shift = mat.trace() / Scalar(2); MatrixType scaledMat = mat; scaledMat.coeffRef(0,1) = mat.coeff(1,0); scaledMat.diagonal().array() -= shift; Scalar scale = scaledMat.cwiseAbs().maxCoeff(); if(scale > Scalar(0)) scaledMat /= scale; // Compute the eigenvalues computeRoots(scaledMat,eivals); // compute the eigen vectors if(computeEigenvectors) { if((eivals(1)-eivals(0))<=abs(eivals(1))*Eigen::NumTraits::epsilon()) { eivecs.setIdentity(); } else { scaledMat.diagonal().array () -= eivals(1); Scalar a2 = numext::abs2(scaledMat(0,0)); Scalar c2 = numext::abs2(scaledMat(1,1)); Scalar b2 = numext::abs2(scaledMat(1,0)); if(a2>c2) { eivecs.col(1) << -scaledMat(1,0), scaledMat(0,0); eivecs.col(1) /= sqrt(a2+b2); } else { eivecs.col(1) << -scaledMat(1,1), scaledMat(1,0); eivecs.col(1) /= sqrt(c2+b2); } eivecs.col(0) << eivecs.col(1).unitOrthogonal(); } } // Rescale back to the original size. eivals *= scale; eivals.array() += shift; solver.m_info = Success; solver.m_isInitialized = true; solver.m_eigenvectorsOk = computeEigenvectors; } }; } template EIGEN_DEVICE_FUNC SelfAdjointEigenSolver& SelfAdjointEigenSolver ::computeDirect(const MatrixType& matrix, int options) { internal::direct_selfadjoint_eigenvalues::IsComplex>::run(*this,matrix,options); return *this; } namespace internal { // Francis implicit QR step. template EIGEN_DEVICE_FUNC static void tridiagonal_qr_step(RealScalar* diag, RealScalar* subdiag, Index start, Index end, Scalar* matrixQ, Index n) { // Wilkinson Shift. RealScalar td = (diag[end-1] - diag[end])*RealScalar(0.5); RealScalar e = subdiag[end-1]; // Note that thanks to scaling, e^2 or td^2 cannot overflow, however they can still // underflow thus leading to inf/NaN values when using the following commented code: // RealScalar e2 = numext::abs2(subdiag[end-1]); // RealScalar mu = diag[end] - e2 / (td + (td>0 ? 
1 : -1) * sqrt(td*td + e2)); // This explain the following, somewhat more complicated, version: RealScalar mu = diag[end]; if(td==RealScalar(0)) { mu -= numext::abs(e); } else if (e != RealScalar(0)) { const RealScalar e2 = numext::abs2(e); const RealScalar h = numext::hypot(td,e); if(e2 == RealScalar(0)) { mu -= e / ((td + (td>RealScalar(0) ? h : -h)) / e); } else { mu -= e2 / (td + (td>RealScalar(0) ? h : -h)); } } RealScalar x = diag[start] - mu; RealScalar z = subdiag[start]; // If z ever becomes zero, the Givens rotation will be the identity and // z will stay zero for all future iterations. for (Index k = start; k < end && z != RealScalar(0); ++k) { JacobiRotation rot; rot.makeGivens(x, z); // do T = G' T G RealScalar sdk = rot.s() * diag[k] + rot.c() * subdiag[k]; RealScalar dkp1 = rot.s() * subdiag[k] + rot.c() * diag[k+1]; diag[k] = rot.c() * (rot.c() * diag[k] - rot.s() * subdiag[k]) - rot.s() * (rot.c() * subdiag[k] - rot.s() * diag[k+1]); diag[k+1] = rot.s() * sdk + rot.c() * dkp1; subdiag[k] = rot.c() * sdk - rot.s() * dkp1; if (k > start) subdiag[k - 1] = rot.c() * subdiag[k-1] - rot.s() * z; // "Chasing the bulge" to return to triangular form. x = subdiag[k]; if (k < end - 1) { z = -rot.s() * subdiag[k+1]; subdiag[k + 1] = rot.c() * subdiag[k+1]; } // apply the givens rotation to the unit matrix Q = Q * G if (matrixQ) { // FIXME if StorageOrder == RowMajor this operation is not very efficient Map > q(matrixQ,n,n); q.applyOnTheRight(k,k+1,rot); } } } } // end namespace internal } // end namespace Eigen #endif // EIGEN_SELFADJOINTEIGENSOLVER_H piqp-octave/src/eigen/Eigen/src/Eigenvalues/GeneralizedSelfAdjointEigenSolver.h0000644000175100001770000002276414107270226027522 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2010 Gael Guennebaud // Copyright (C) 2010 Jitse Niesen // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_GENERALIZEDSELFADJOINTEIGENSOLVER_H #define EIGEN_GENERALIZEDSELFADJOINTEIGENSOLVER_H #include "./Tridiagonalization.h" namespace Eigen { /** \eigenvalues_module \ingroup Eigenvalues_Module * * * \class GeneralizedSelfAdjointEigenSolver * * \brief Computes eigenvalues and eigenvectors of the generalized selfadjoint eigen problem * * \tparam _MatrixType the type of the matrix of which we are computing the * eigendecomposition; this is expected to be an instantiation of the Matrix * class template. * * This class solves the generalized eigenvalue problem * \f$ Av = \lambda Bv \f$. In this case, the matrix \f$ A \f$ should be * selfadjoint and the matrix \f$ B \f$ should be positive definite. * * Only the \b lower \b triangular \b part of the input matrix is referenced. * * Call the function compute() to compute the eigenvalues and eigenvectors of * a given matrix. Alternatively, you can use the * GeneralizedSelfAdjointEigenSolver(const MatrixType&, const MatrixType&, int) * constructor which computes the eigenvalues and eigenvectors at construction time. * Once the eigenvalue and eigenvectors are computed, they can be retrieved with the eigenvalues() * and eigenvectors() functions. * * The documentation for GeneralizedSelfAdjointEigenSolver(const MatrixType&, const MatrixType&, int) * contains an example of the typical use of this class. 
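 *
 * A compact sketch of that typical use (the 2x2 matrices are illustrative assumptions):
 * \code
 * Eigen::MatrixXd A(2,2), B(2,2);
 * A << 2, 1,
 *      1, 3;                                   // selfadjoint
 * B << 4, 1,
 *      1, 2;                                   // positive definite
 * Eigen::GeneralizedSelfAdjointEigenSolver<Eigen::MatrixXd> es(A, B);
 * Eigen::VectorXd lambdas = es.eigenvalues();  // solutions of A x = lambda B x
 * \endcode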
* * \sa class SelfAdjointEigenSolver, class EigenSolver, class ComplexEigenSolver */ template class GeneralizedSelfAdjointEigenSolver : public SelfAdjointEigenSolver<_MatrixType> { typedef SelfAdjointEigenSolver<_MatrixType> Base; public: typedef _MatrixType MatrixType; /** \brief Default constructor for fixed-size matrices. * * The default constructor is useful in cases in which the user intends to * perform decompositions via compute(). This constructor * can only be used if \p _MatrixType is a fixed-size matrix; use * GeneralizedSelfAdjointEigenSolver(Index) for dynamic-size matrices. */ GeneralizedSelfAdjointEigenSolver() : Base() {} /** \brief Constructor, pre-allocates memory for dynamic-size matrices. * * \param [in] size Positive integer, size of the matrix whose * eigenvalues and eigenvectors will be computed. * * This constructor is useful for dynamic-size matrices, when the user * intends to perform decompositions via compute(). The \p size * parameter is only used as a hint. It is not an error to give a wrong * \p size, but it may impair performance. * * \sa compute() for an example */ explicit GeneralizedSelfAdjointEigenSolver(Index size) : Base(size) {} /** \brief Constructor; computes generalized eigendecomposition of given matrix pencil. * * \param[in] matA Selfadjoint matrix in matrix pencil. * Only the lower triangular part of the matrix is referenced. * \param[in] matB Positive-definite matrix in matrix pencil. * Only the lower triangular part of the matrix is referenced. * \param[in] options A or-ed set of flags {#ComputeEigenvectors,#EigenvaluesOnly} | {#Ax_lBx,#ABx_lx,#BAx_lx}. * Default is #ComputeEigenvectors|#Ax_lBx. * * This constructor calls compute(const MatrixType&, const MatrixType&, int) * to compute the eigenvalues and (if requested) the eigenvectors of the * generalized eigenproblem \f$ Ax = \lambda B x \f$ with \a matA the * selfadjoint matrix \f$ A \f$ and \a matB the positive definite matrix * \f$ B \f$. Each eigenvector \f$ x \f$ satisfies the property * \f$ x^* B x = 1 \f$. The eigenvectors are computed if * \a options contains ComputeEigenvectors. * * In addition, the two following variants can be solved via \p options: * - \c ABx_lx: \f$ ABx = \lambda x \f$ * - \c BAx_lx: \f$ BAx = \lambda x \f$ * * Example: \include SelfAdjointEigenSolver_SelfAdjointEigenSolver_MatrixType2.cpp * Output: \verbinclude SelfAdjointEigenSolver_SelfAdjointEigenSolver_MatrixType2.out * * \sa compute(const MatrixType&, const MatrixType&, int) */ GeneralizedSelfAdjointEigenSolver(const MatrixType& matA, const MatrixType& matB, int options = ComputeEigenvectors|Ax_lBx) : Base(matA.cols()) { compute(matA, matB, options); } /** \brief Computes generalized eigendecomposition of given matrix pencil. * * \param[in] matA Selfadjoint matrix in matrix pencil. * Only the lower triangular part of the matrix is referenced. * \param[in] matB Positive-definite matrix in matrix pencil. * Only the lower triangular part of the matrix is referenced. * \param[in] options A or-ed set of flags {#ComputeEigenvectors,#EigenvaluesOnly} | {#Ax_lBx,#ABx_lx,#BAx_lx}. * Default is #ComputeEigenvectors|#Ax_lBx. 
* * \returns Reference to \c *this * * According to \p options, this function computes eigenvalues and (if requested) * the eigenvectors of one of the following three generalized eigenproblems: * - \c Ax_lBx: \f$ Ax = \lambda B x \f$ * - \c ABx_lx: \f$ ABx = \lambda x \f$ * - \c BAx_lx: \f$ BAx = \lambda x \f$ * with \a matA the selfadjoint matrix \f$ A \f$ and \a matB the positive definite * matrix \f$ B \f$. * In addition, each eigenvector \f$ x \f$ satisfies the property \f$ x^* B x = 1 \f$. * * The eigenvalues() function can be used to retrieve * the eigenvalues. If \p options contains ComputeEigenvectors, then the * eigenvectors are also computed and can be retrieved by calling * eigenvectors(). * * The implementation uses LLT to compute the Cholesky decomposition * \f$ B = LL^* \f$ and computes the classical eigendecomposition * of the selfadjoint matrix \f$ L^{-1} A (L^*)^{-1} \f$ if \p options contains Ax_lBx * and of \f$ L^{*} A L \f$ otherwise. This solves the * generalized eigenproblem, because any solution of the generalized * eigenproblem \f$ Ax = \lambda B x \f$ corresponds to a solution * \f$ L^{-1} A (L^*)^{-1} (L^* x) = \lambda (L^* x) \f$ of the * eigenproblem for \f$ L^{-1} A (L^*)^{-1} \f$. Similar statements * can be made for the two other variants. * * Example: \include SelfAdjointEigenSolver_compute_MatrixType2.cpp * Output: \verbinclude SelfAdjointEigenSolver_compute_MatrixType2.out * * \sa GeneralizedSelfAdjointEigenSolver(const MatrixType&, const MatrixType&, int) */ GeneralizedSelfAdjointEigenSolver& compute(const MatrixType& matA, const MatrixType& matB, int options = ComputeEigenvectors|Ax_lBx); protected: }; template GeneralizedSelfAdjointEigenSolver& GeneralizedSelfAdjointEigenSolver:: compute(const MatrixType& matA, const MatrixType& matB, int options) { eigen_assert(matA.cols()==matA.rows() && matB.rows()==matA.rows() && matB.cols()==matB.rows()); eigen_assert((options&~(EigVecMask|GenEigMask))==0 && (options&EigVecMask)!=EigVecMask && ((options&GenEigMask)==0 || (options&GenEigMask)==Ax_lBx || (options&GenEigMask)==ABx_lx || (options&GenEigMask)==BAx_lx) && "invalid option parameter"); bool computeEigVecs = ((options&EigVecMask)==0) || ((options&EigVecMask)==ComputeEigenvectors); // Compute the cholesky decomposition of matB = L L' = U'U LLT cholB(matB); int type = (options&GenEigMask); if(type==0) type = Ax_lBx; if(type==Ax_lBx) { // compute C = inv(L) A inv(L') MatrixType matC = matA.template selfadjointView(); cholB.matrixL().template solveInPlace(matC); cholB.matrixU().template solveInPlace(matC); Base::compute(matC, computeEigVecs ? ComputeEigenvectors : EigenvaluesOnly ); // transform back the eigen vectors: evecs = inv(U) * evecs if(computeEigVecs) cholB.matrixU().solveInPlace(Base::m_eivec); } else if(type==ABx_lx) { // compute C = L' A L MatrixType matC = matA.template selfadjointView(); matC = matC * cholB.matrixL(); matC = cholB.matrixU() * matC; Base::compute(matC, computeEigVecs ? ComputeEigenvectors : EigenvaluesOnly); // transform back the eigen vectors: evecs = inv(U) * evecs if(computeEigVecs) cholB.matrixU().solveInPlace(Base::m_eivec); } else if(type==BAx_lx) { // compute C = L' A L MatrixType matC = matA.template selfadjointView(); matC = matC * cholB.matrixL(); matC = cholB.matrixU() * matC; Base::compute(matC, computeEigVecs ? 
ComputeEigenvectors : EigenvaluesOnly); // transform back the eigen vectors: evecs = L * evecs if(computeEigVecs) Base::m_eivec = cholB.matrixL() * Base::m_eivec; } return *this; } } // end namespace Eigen #endif // EIGEN_GENERALIZEDSELFADJOINTEIGENSOLVER_H piqp-octave/src/eigen/Eigen/src/Eigenvalues/GeneralizedEigenSolver.h0000644000175100001770000004143014107270226025366 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012-2016 Gael Guennebaud // Copyright (C) 2010,2012 Jitse Niesen // Copyright (C) 2016 Tobias Wood // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_GENERALIZEDEIGENSOLVER_H #define EIGEN_GENERALIZEDEIGENSOLVER_H #include "./RealQZ.h" namespace Eigen { /** \eigenvalues_module \ingroup Eigenvalues_Module * * * \class GeneralizedEigenSolver * * \brief Computes the generalized eigenvalues and eigenvectors of a pair of general matrices * * \tparam _MatrixType the type of the matrices of which we are computing the * eigen-decomposition; this is expected to be an instantiation of the Matrix * class template. Currently, only real matrices are supported. * * The generalized eigenvalues and eigenvectors of a matrix pair \f$ A \f$ and \f$ B \f$ are scalars * \f$ \lambda \f$ and vectors \f$ v \f$ such that \f$ Av = \lambda Bv \f$. If * \f$ D \f$ is a diagonal matrix with the eigenvalues on the diagonal, and * \f$ V \f$ is a matrix with the eigenvectors as its columns, then \f$ A V = * B V D \f$. The matrix \f$ V \f$ is almost always invertible, in which case we * have \f$ A = B V D V^{-1} \f$. This is called the generalized eigen-decomposition. * * The generalized eigenvalues and eigenvectors of a matrix pair may be complex, even when the * matrices are real. Moreover, the generalized eigenvalue might be infinite if the matrix B is * singular. To workaround this difficulty, the eigenvalues are provided as a pair of complex \f$ \alpha \f$ * and real \f$ \beta \f$ such that: \f$ \lambda_i = \alpha_i / \beta_i \f$. If \f$ \beta_i \f$ is (nearly) zero, * then one can consider the well defined left eigenvalue \f$ \mu = \beta_i / \alpha_i\f$ such that: * \f$ \mu_i A v_i = B v_i \f$, or even \f$ \mu_i u_i^T A = u_i^T B \f$ where \f$ u_i \f$ is * called the left eigenvector. * * Call the function compute() to compute the generalized eigenvalues and eigenvectors of * a given matrix pair. Alternatively, you can use the * GeneralizedEigenSolver(const MatrixType&, const MatrixType&, bool) constructor which computes the * eigenvalues and eigenvectors at construction time. Once the eigenvalue and * eigenvectors are computed, they can be retrieved with the eigenvalues() and * eigenvectors() functions. * * Here is an usage example of this class: * Example: \include GeneralizedEigenSolver.cpp * Output: \verbinclude GeneralizedEigenSolver.out * * \sa MatrixBase::eigenvalues(), class ComplexEigenSolver, class SelfAdjointEigenSolver */ template class GeneralizedEigenSolver { public: /** \brief Synonym for the template parameter \p _MatrixType. 
*/ typedef _MatrixType MatrixType; enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, Options = MatrixType::Options, MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; /** \brief Scalar type for matrices of type #MatrixType. */ typedef typename MatrixType::Scalar Scalar; typedef typename NumTraits::Real RealScalar; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 /** \brief Complex scalar type for #MatrixType. * * This is \c std::complex if #Scalar is real (e.g., * \c float or \c double) and just \c Scalar if #Scalar is * complex. */ typedef std::complex ComplexScalar; /** \brief Type for vector of real scalar values eigenvalues as returned by betas(). * * This is a column vector with entries of type #Scalar. * The length of the vector is the size of #MatrixType. */ typedef Matrix VectorType; /** \brief Type for vector of complex scalar values eigenvalues as returned by alphas(). * * This is a column vector with entries of type #ComplexScalar. * The length of the vector is the size of #MatrixType. */ typedef Matrix ComplexVectorType; /** \brief Expression type for the eigenvalues as returned by eigenvalues(). */ typedef CwiseBinaryOp,ComplexVectorType,VectorType> EigenvalueType; /** \brief Type for matrix of eigenvectors as returned by eigenvectors(). * * This is a square matrix with entries of type #ComplexScalar. * The size is the same as the size of #MatrixType. */ typedef Matrix EigenvectorsType; /** \brief Default constructor. * * The default constructor is useful in cases in which the user intends to * perform decompositions via EigenSolver::compute(const MatrixType&, bool). * * \sa compute() for an example. */ GeneralizedEigenSolver() : m_eivec(), m_alphas(), m_betas(), m_valuesOkay(false), m_vectorsOkay(false), m_realQZ() {} /** \brief Default constructor with memory preallocation * * Like the default constructor but with preallocation of the internal data * according to the specified problem \a size. * \sa GeneralizedEigenSolver() */ explicit GeneralizedEigenSolver(Index size) : m_eivec(size, size), m_alphas(size), m_betas(size), m_valuesOkay(false), m_vectorsOkay(false), m_realQZ(size), m_tmp(size) {} /** \brief Constructor; computes the generalized eigendecomposition of given matrix pair. * * \param[in] A Square matrix whose eigendecomposition is to be computed. * \param[in] B Square matrix whose eigendecomposition is to be computed. * \param[in] computeEigenvectors If true, both the eigenvectors and the * eigenvalues are computed; if false, only the eigenvalues are computed. * * This constructor calls compute() to compute the generalized eigenvalues * and eigenvectors. * * \sa compute() */ GeneralizedEigenSolver(const MatrixType& A, const MatrixType& B, bool computeEigenvectors = true) : m_eivec(A.rows(), A.cols()), m_alphas(A.cols()), m_betas(A.cols()), m_valuesOkay(false), m_vectorsOkay(false), m_realQZ(A.cols()), m_tmp(A.cols()) { compute(A, B, computeEigenvectors); } /* \brief Returns the computed generalized eigenvectors. * * \returns %Matrix whose columns are the (possibly complex) right eigenvectors. * i.e. the eigenvectors that solve (A - l*B)x = 0. The ordering matches the eigenvalues. 
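 *
 * A hedged residual check illustrating this property (the 4x4 random matrices are
 * illustrative assumptions):
 * \code
 * Eigen::MatrixXd A = Eigen::MatrixXd::Random(4,4);
 * Eigen::MatrixXd B = Eigen::MatrixXd::Random(4,4);
 * Eigen::GeneralizedEigenSolver<Eigen::MatrixXd> ges(A, B);
 * Eigen::VectorXcd v = ges.eigenvectors().col(0);
 * Eigen::VectorXcd lambdas = ges.eigenvalues();
 * // (A - lambda*B) * v is expected to be close to zero for the first eigenpair:
 * double residual = (A.cast<std::complex<double> >() * v
 *                    - lambdas(0) * (B.cast<std::complex<double> >() * v)).norm();
 * \endcode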
* * \pre Either the constructor * GeneralizedEigenSolver(const MatrixType&,const MatrixType&, bool) or the member function * compute(const MatrixType&, const MatrixType& bool) has been called before, and * \p computeEigenvectors was set to true (the default). * * \sa eigenvalues() */ EigenvectorsType eigenvectors() const { eigen_assert(m_vectorsOkay && "Eigenvectors for GeneralizedEigenSolver were not calculated."); return m_eivec; } /** \brief Returns an expression of the computed generalized eigenvalues. * * \returns An expression of the column vector containing the eigenvalues. * * It is a shortcut for \code this->alphas().cwiseQuotient(this->betas()); \endcode * Not that betas might contain zeros. It is therefore not recommended to use this function, * but rather directly deal with the alphas and betas vectors. * * \pre Either the constructor * GeneralizedEigenSolver(const MatrixType&,const MatrixType&,bool) or the member function * compute(const MatrixType&,const MatrixType&,bool) has been called before. * * The eigenvalues are repeated according to their algebraic multiplicity, * so there are as many eigenvalues as rows in the matrix. The eigenvalues * are not sorted in any particular order. * * \sa alphas(), betas(), eigenvectors() */ EigenvalueType eigenvalues() const { eigen_assert(m_valuesOkay && "GeneralizedEigenSolver is not initialized."); return EigenvalueType(m_alphas,m_betas); } /** \returns A const reference to the vectors containing the alpha values * * This vector permits to reconstruct the j-th eigenvalues as alphas(i)/betas(j). * * \sa betas(), eigenvalues() */ ComplexVectorType alphas() const { eigen_assert(m_valuesOkay && "GeneralizedEigenSolver is not initialized."); return m_alphas; } /** \returns A const reference to the vectors containing the beta values * * This vector permits to reconstruct the j-th eigenvalues as alphas(i)/betas(j). * * \sa alphas(), eigenvalues() */ VectorType betas() const { eigen_assert(m_valuesOkay && "GeneralizedEigenSolver is not initialized."); return m_betas; } /** \brief Computes generalized eigendecomposition of given matrix. * * \param[in] A Square matrix whose eigendecomposition is to be computed. * \param[in] B Square matrix whose eigendecomposition is to be computed. * \param[in] computeEigenvectors If true, both the eigenvectors and the * eigenvalues are computed; if false, only the eigenvalues are * computed. * \returns Reference to \c *this * * This function computes the eigenvalues of the real matrix \p matrix. * The eigenvalues() function can be used to retrieve them. If * \p computeEigenvectors is true, then the eigenvectors are also computed * and can be retrieved by calling eigenvectors(). * * The matrix is first reduced to real generalized Schur form using the RealQZ * class. The generalized Schur decomposition is then used to compute the eigenvalues * and eigenvectors. * * The cost of the computation is dominated by the cost of the * generalized Schur decomposition. * * This method reuses of the allocated data in the GeneralizedEigenSolver object. */ GeneralizedEigenSolver& compute(const MatrixType& A, const MatrixType& B, bool computeEigenvectors = true); ComputationInfo info() const { eigen_assert(m_valuesOkay && "EigenSolver is not initialized."); return m_realQZ.info(); } /** Sets the maximal number of iterations allowed. 
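 * For example, reusing the hypothetical \c ges, \c A and \c B from the eigenvectors()
 * sketch above (the iteration bound is an assumption, not a recommendation):
 * \code
 * ges.setMaxIterations(40 * A.rows()).compute(A, B);
 * \endcode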
*/ GeneralizedEigenSolver& setMaxIterations(Index maxIters) { m_realQZ.setMaxIterations(maxIters); return *this; } protected: static void check_template_parameters() { EIGEN_STATIC_ASSERT_NON_INTEGER(Scalar); EIGEN_STATIC_ASSERT(!NumTraits::IsComplex, NUMERIC_TYPE_MUST_BE_REAL); } EigenvectorsType m_eivec; ComplexVectorType m_alphas; VectorType m_betas; bool m_valuesOkay, m_vectorsOkay; RealQZ m_realQZ; ComplexVectorType m_tmp; }; template GeneralizedEigenSolver& GeneralizedEigenSolver::compute(const MatrixType& A, const MatrixType& B, bool computeEigenvectors) { check_template_parameters(); using std::sqrt; using std::abs; eigen_assert(A.cols() == A.rows() && B.cols() == A.rows() && B.cols() == B.rows()); Index size = A.cols(); m_valuesOkay = false; m_vectorsOkay = false; // Reduce to generalized real Schur form: // A = Q S Z and B = Q T Z m_realQZ.compute(A, B, computeEigenvectors); if (m_realQZ.info() == Success) { // Resize storage m_alphas.resize(size); m_betas.resize(size); if (computeEigenvectors) { m_eivec.resize(size,size); m_tmp.resize(size); } // Aliases: Map v(reinterpret_cast(m_tmp.data()), size); ComplexVectorType &cv = m_tmp; const MatrixType &mS = m_realQZ.matrixS(); const MatrixType &mT = m_realQZ.matrixT(); Index i = 0; while (i < size) { if (i == size - 1 || mS.coeff(i+1, i) == Scalar(0)) { // Real eigenvalue m_alphas.coeffRef(i) = mS.diagonal().coeff(i); m_betas.coeffRef(i) = mT.diagonal().coeff(i); if (computeEigenvectors) { v.setConstant(Scalar(0.0)); v.coeffRef(i) = Scalar(1.0); // For singular eigenvalues do nothing more if(abs(m_betas.coeffRef(i)) >= (std::numeric_limits::min)()) { // Non-singular eigenvalue const Scalar alpha = real(m_alphas.coeffRef(i)); const Scalar beta = m_betas.coeffRef(i); for (Index j = i-1; j >= 0; j--) { const Index st = j+1; const Index sz = i-j; if (j > 0 && mS.coeff(j, j-1) != Scalar(0)) { // 2x2 block Matrix rhs = (alpha*mT.template block<2,Dynamic>(j-1,st,2,sz) - beta*mS.template block<2,Dynamic>(j-1,st,2,sz)) .lazyProduct( v.segment(st,sz) ); Matrix lhs = beta * mS.template block<2,2>(j-1,j-1) - alpha * mT.template block<2,2>(j-1,j-1); v.template segment<2>(j-1) = lhs.partialPivLu().solve(rhs); j--; } else { v.coeffRef(j) = -v.segment(st,sz).transpose().cwiseProduct(beta*mS.block(j,st,1,sz) - alpha*mT.block(j,st,1,sz)).sum() / (beta*mS.coeffRef(j,j) - alpha*mT.coeffRef(j,j)); } } } m_eivec.col(i).real().noalias() = m_realQZ.matrixZ().transpose() * v; m_eivec.col(i).real().normalize(); m_eivec.col(i).imag().setConstant(0); } ++i; } else { // We need to extract the generalized eigenvalues of the pair of a general 2x2 block S and a positive diagonal 2x2 block T // Then taking beta=T_00*T_11, we can avoid any division, and alpha is the eigenvalues of A = (U^-1 * S * U) * diag(T_11,T_00): // T = [a 0] // [0 b] RealScalar a = mT.diagonal().coeff(i), b = mT.diagonal().coeff(i+1); const RealScalar beta = m_betas.coeffRef(i) = m_betas.coeffRef(i+1) = a*b; // ^^ NOTE: using diagonal()(i) instead of coeff(i,i) workarounds a MSVC bug. Matrix S2 = mS.template block<2,2>(i,i) * Matrix(b,a).asDiagonal(); Scalar p = Scalar(0.5) * (S2.coeff(0,0) - S2.coeff(1,1)); Scalar z = sqrt(abs(p * p + S2.coeff(1,0) * S2.coeff(0,1))); const ComplexScalar alpha = ComplexScalar(S2.coeff(1,1) + p, (beta > 0) ? 
z : -z); m_alphas.coeffRef(i) = conj(alpha); m_alphas.coeffRef(i+1) = alpha; if (computeEigenvectors) { // Compute eigenvector in position (i+1) and then position (i) is just the conjugate cv.setZero(); cv.coeffRef(i+1) = Scalar(1.0); // here, the "static_cast" workaound expression template issues. cv.coeffRef(i) = -(static_cast(beta*mS.coeffRef(i,i+1)) - alpha*mT.coeffRef(i,i+1)) / (static_cast(beta*mS.coeffRef(i,i)) - alpha*mT.coeffRef(i,i)); for (Index j = i-1; j >= 0; j--) { const Index st = j+1; const Index sz = i+1-j; if (j > 0 && mS.coeff(j, j-1) != Scalar(0)) { // 2x2 block Matrix rhs = (alpha*mT.template block<2,Dynamic>(j-1,st,2,sz) - beta*mS.template block<2,Dynamic>(j-1,st,2,sz)) .lazyProduct( cv.segment(st,sz) ); Matrix lhs = beta * mS.template block<2,2>(j-1,j-1) - alpha * mT.template block<2,2>(j-1,j-1); cv.template segment<2>(j-1) = lhs.partialPivLu().solve(rhs); j--; } else { cv.coeffRef(j) = cv.segment(st,sz).transpose().cwiseProduct(beta*mS.block(j,st,1,sz) - alpha*mT.block(j,st,1,sz)).sum() / (alpha*mT.coeffRef(j,j) - static_cast(beta*mS.coeffRef(j,j))); } } m_eivec.col(i+1).noalias() = (m_realQZ.matrixZ().transpose() * cv); m_eivec.col(i+1).normalize(); m_eivec.col(i) = m_eivec.col(i+1).conjugate(); } i += 2; } } m_valuesOkay = true; m_vectorsOkay = computeEigenvectors; } return *this; } } // end namespace Eigen #endif // EIGEN_GENERALIZEDEIGENSOLVER_H piqp-octave/src/eigen/Eigen/src/Eigenvalues/ComplexSchur_LAPACKE.h0000644000175100001770000001012214107270226024520 0ustar runnerdocker/* Copyright (c) 2011, Intel Corporation. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Intel Corporation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ******************************************************************************** * Content : Eigen bindings to LAPACKe * Complex Schur needed to complex unsymmetrical eigenvalues/eigenvectors. 
******************************************************************************** */ #ifndef EIGEN_COMPLEX_SCHUR_LAPACKE_H #define EIGEN_COMPLEX_SCHUR_LAPACKE_H namespace Eigen { /** \internal Specialization for the data types supported by LAPACKe */ #define EIGEN_LAPACKE_SCHUR_COMPLEX(EIGTYPE, LAPACKE_TYPE, LAPACKE_PREFIX, LAPACKE_PREFIX_U, EIGCOLROW, LAPACKE_COLROW) \ template<> template inline \ ComplexSchur >& \ ComplexSchur >::compute(const EigenBase& matrix, bool computeU) \ { \ typedef Matrix MatrixType; \ typedef MatrixType::RealScalar RealScalar; \ typedef std::complex ComplexScalar; \ \ eigen_assert(matrix.cols() == matrix.rows()); \ \ m_matUisUptodate = false; \ if(matrix.cols() == 1) \ { \ m_matT = matrix.derived().template cast(); \ if(computeU) m_matU = ComplexMatrixType::Identity(1,1); \ m_info = Success; \ m_isInitialized = true; \ m_matUisUptodate = computeU; \ return *this; \ } \ lapack_int n = internal::convert_index(matrix.cols()), sdim, info; \ lapack_int matrix_order = LAPACKE_COLROW; \ char jobvs, sort='N'; \ LAPACK_##LAPACKE_PREFIX_U##_SELECT1 select = 0; \ jobvs = (computeU) ? 'V' : 'N'; \ m_matU.resize(n, n); \ lapack_int ldvs = internal::convert_index(m_matU.outerStride()); \ m_matT = matrix; \ lapack_int lda = internal::convert_index(m_matT.outerStride()); \ Matrix w; \ w.resize(n, 1);\ info = LAPACKE_##LAPACKE_PREFIX##gees( matrix_order, jobvs, sort, select, n, (LAPACKE_TYPE*)m_matT.data(), lda, &sdim, (LAPACKE_TYPE*)w.data(), (LAPACKE_TYPE*)m_matU.data(), ldvs ); \ if(info == 0) \ m_info = Success; \ else \ m_info = NoConvergence; \ \ m_isInitialized = true; \ m_matUisUptodate = computeU; \ return *this; \ \ } EIGEN_LAPACKE_SCHUR_COMPLEX(dcomplex, lapack_complex_double, z, Z, ColMajor, LAPACK_COL_MAJOR) EIGEN_LAPACKE_SCHUR_COMPLEX(scomplex, lapack_complex_float, c, C, ColMajor, LAPACK_COL_MAJOR) EIGEN_LAPACKE_SCHUR_COMPLEX(dcomplex, lapack_complex_double, z, Z, RowMajor, LAPACK_ROW_MAJOR) EIGEN_LAPACKE_SCHUR_COMPLEX(scomplex, lapack_complex_float, c, C, RowMajor, LAPACK_ROW_MAJOR) } // end namespace Eigen #endif // EIGEN_COMPLEX_SCHUR_LAPACKE_H piqp-octave/src/eigen/Eigen/src/Eigenvalues/HessenbergDecomposition.h0000644000175100001770000003401514107270226025615 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2009 Gael Guennebaud // Copyright (C) 2010 Jitse Niesen // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_HESSENBERGDECOMPOSITION_H #define EIGEN_HESSENBERGDECOMPOSITION_H namespace Eigen { namespace internal { template struct HessenbergDecompositionMatrixHReturnType; template struct traits > { typedef MatrixType ReturnType; }; } /** \eigenvalues_module \ingroup Eigenvalues_Module * * * \class HessenbergDecomposition * * \brief Reduces a square matrix to Hessenberg form by an orthogonal similarity transformation * * \tparam _MatrixType the type of the matrix of which we are computing the Hessenberg decomposition * * This class performs an Hessenberg decomposition of a matrix \f$ A \f$. In * the real case, the Hessenberg decomposition consists of an orthogonal * matrix \f$ Q \f$ and a Hessenberg matrix \f$ H \f$ such that \f$ A = Q H * Q^T \f$. An orthogonal matrix is a matrix whose inverse equals its * transpose (\f$ Q^{-1} = Q^T \f$). 
A Hessenberg matrix has zeros below the * subdiagonal, so it is almost upper triangular. The Hessenberg decomposition * of a complex matrix is \f$ A = Q H Q^* \f$ with \f$ Q \f$ unitary (that is, * \f$ Q^{-1} = Q^* \f$). * * Call the function compute() to compute the Hessenberg decomposition of a * given matrix. Alternatively, you can use the * HessenbergDecomposition(const MatrixType&) constructor which computes the * Hessenberg decomposition at construction time. Once the decomposition is * computed, you can use the matrixH() and matrixQ() functions to construct * the matrices H and Q in the decomposition. * * The documentation for matrixH() contains an example of the typical use of * this class. * * \sa class ComplexSchur, class Tridiagonalization, \ref QR_Module "QR Module" */ template class HessenbergDecomposition { public: /** \brief Synonym for the template parameter \p _MatrixType. */ typedef _MatrixType MatrixType; enum { Size = MatrixType::RowsAtCompileTime, SizeMinusOne = Size == Dynamic ? Dynamic : Size - 1, Options = MatrixType::Options, MaxSize = MatrixType::MaxRowsAtCompileTime, MaxSizeMinusOne = MaxSize == Dynamic ? Dynamic : MaxSize - 1 }; /** \brief Scalar type for matrices of type #MatrixType. */ typedef typename MatrixType::Scalar Scalar; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 /** \brief Type for vector of Householder coefficients. * * This is column vector with entries of type #Scalar. The length of the * vector is one less than the size of #MatrixType, if it is a fixed-side * type. */ typedef Matrix CoeffVectorType; /** \brief Return type of matrixQ() */ typedef HouseholderSequence::type> HouseholderSequenceType; typedef internal::HessenbergDecompositionMatrixHReturnType MatrixHReturnType; /** \brief Default constructor; the decomposition will be computed later. * * \param [in] size The size of the matrix whose Hessenberg decomposition will be computed. * * The default constructor is useful in cases in which the user intends to * perform decompositions via compute(). The \p size parameter is only * used as a hint. It is not an error to give a wrong \p size, but it may * impair performance. * * \sa compute() for an example. */ explicit HessenbergDecomposition(Index size = Size==Dynamic ? 2 : Size) : m_matrix(size,size), m_temp(size), m_isInitialized(false) { if(size>1) m_hCoeffs.resize(size-1); } /** \brief Constructor; computes Hessenberg decomposition of given matrix. * * \param[in] matrix Square matrix whose Hessenberg decomposition is to be computed. * * This constructor calls compute() to compute the Hessenberg * decomposition. * * \sa matrixH() for an example. */ template explicit HessenbergDecomposition(const EigenBase& matrix) : m_matrix(matrix.derived()), m_temp(matrix.rows()), m_isInitialized(false) { if(matrix.rows()<2) { m_isInitialized = true; return; } m_hCoeffs.resize(matrix.rows()-1,1); _compute(m_matrix, m_hCoeffs, m_temp); m_isInitialized = true; } /** \brief Computes Hessenberg decomposition of given matrix. * * \param[in] matrix Square matrix whose Hessenberg decomposition is to be computed. * \returns Reference to \c *this * * The Hessenberg decomposition is computed by bringing the columns of the * matrix successively in the required form using Householder reflections * (see, e.g., Algorithm 7.4.2 in Golub \& Van Loan, %Matrix * Computations). The cost is \f$ 10n^3/3 \f$ flops, where \f$ n \f$ * denotes the size of the given matrix. * * This method reuses of the allocated data in the HessenbergDecomposition * object. 
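 * For instance (a sketch, not part of the original documentation; \c n is a
 * given matrix size), a single decomposition object can be reused for several
 * matrices of the same size so that the internal workspace is allocated only
 * once:
 * \code
 * Eigen::HessenbergDecomposition<Eigen::MatrixXd> hess(n);   // preallocate for n-by-n matrices
 * for (int trial = 0; trial < 10; ++trial) {
 *   Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);
 *   hess.compute(A);                                         // no reallocation after the first call
 *   Eigen::MatrixXd H = hess.matrixH();
 * }
 * \endcode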
* * Example: \include HessenbergDecomposition_compute.cpp * Output: \verbinclude HessenbergDecomposition_compute.out */ template HessenbergDecomposition& compute(const EigenBase& matrix) { m_matrix = matrix.derived(); if(matrix.rows()<2) { m_isInitialized = true; return *this; } m_hCoeffs.resize(matrix.rows()-1,1); _compute(m_matrix, m_hCoeffs, m_temp); m_isInitialized = true; return *this; } /** \brief Returns the Householder coefficients. * * \returns a const reference to the vector of Householder coefficients * * \pre Either the constructor HessenbergDecomposition(const MatrixType&) * or the member function compute(const MatrixType&) has been called * before to compute the Hessenberg decomposition of a matrix. * * The Householder coefficients allow the reconstruction of the matrix * \f$ Q \f$ in the Hessenberg decomposition from the packed data. * * \sa packedMatrix(), \ref Householder_Module "Householder module" */ const CoeffVectorType& householderCoefficients() const { eigen_assert(m_isInitialized && "HessenbergDecomposition is not initialized."); return m_hCoeffs; } /** \brief Returns the internal representation of the decomposition * * \returns a const reference to a matrix with the internal representation * of the decomposition. * * \pre Either the constructor HessenbergDecomposition(const MatrixType&) * or the member function compute(const MatrixType&) has been called * before to compute the Hessenberg decomposition of a matrix. * * The returned matrix contains the following information: * - the upper part and lower sub-diagonal represent the Hessenberg matrix H * - the rest of the lower part contains the Householder vectors that, combined with * Householder coefficients returned by householderCoefficients(), * allows to reconstruct the matrix Q as * \f$ Q = H_{N-1} \ldots H_1 H_0 \f$. * Here, the matrices \f$ H_i \f$ are the Householder transformations * \f$ H_i = (I - h_i v_i v_i^T) \f$ * where \f$ h_i \f$ is the \f$ i \f$th Householder coefficient and * \f$ v_i \f$ is the Householder vector defined by * \f$ v_i = [ 0, \ldots, 0, 1, M(i+2,i), \ldots, M(N-1,i) ]^T \f$ * with M the matrix returned by this function. * * See LAPACK for further details on this packed storage. * * Example: \include HessenbergDecomposition_packedMatrix.cpp * Output: \verbinclude HessenbergDecomposition_packedMatrix.out * * \sa householderCoefficients() */ const MatrixType& packedMatrix() const { eigen_assert(m_isInitialized && "HessenbergDecomposition is not initialized."); return m_matrix; } /** \brief Reconstructs the orthogonal matrix Q in the decomposition * * \returns object representing the matrix Q * * \pre Either the constructor HessenbergDecomposition(const MatrixType&) * or the member function compute(const MatrixType&) has been called * before to compute the Hessenberg decomposition of a matrix. * * This function returns a light-weight object of template class * HouseholderSequence. You can either apply it directly to a matrix or * you can convert it to a matrix of type #MatrixType. 
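 * For example (an illustrative sketch, not from the original documentation;
 * \c A is a square MatrixXd):
 * \code
 * Eigen::HessenbergDecomposition<Eigen::MatrixXd> hess(A);
 * Eigen::MatrixXd H = hess.matrixH();
 * Eigen::MatrixXd Q = hess.matrixQ();              // explicit conversion to a dense matrix
 * double residual = (A - Q * H * Q.transpose()).norm();  // should be of the order of rounding error
 * \endcode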
* * \sa matrixH() for an example, class HouseholderSequence */ HouseholderSequenceType matrixQ() const { eigen_assert(m_isInitialized && "HessenbergDecomposition is not initialized."); return HouseholderSequenceType(m_matrix, m_hCoeffs.conjugate()) .setLength(m_matrix.rows() - 1) .setShift(1); } /** \brief Constructs the Hessenberg matrix H in the decomposition * * \returns expression object representing the matrix H * * \pre Either the constructor HessenbergDecomposition(const MatrixType&) * or the member function compute(const MatrixType&) has been called * before to compute the Hessenberg decomposition of a matrix. * * The object returned by this function constructs the Hessenberg matrix H * when it is assigned to a matrix or otherwise evaluated. The matrix H is * constructed from the packed matrix as returned by packedMatrix(): The * upper part (including the subdiagonal) of the packed matrix contains * the matrix H. It may sometimes be better to directly use the packed * matrix instead of constructing the matrix H. * * Example: \include HessenbergDecomposition_matrixH.cpp * Output: \verbinclude HessenbergDecomposition_matrixH.out * * \sa matrixQ(), packedMatrix() */ MatrixHReturnType matrixH() const { eigen_assert(m_isInitialized && "HessenbergDecomposition is not initialized."); return MatrixHReturnType(*this); } private: typedef Matrix VectorType; typedef typename NumTraits::Real RealScalar; static void _compute(MatrixType& matA, CoeffVectorType& hCoeffs, VectorType& temp); protected: MatrixType m_matrix; CoeffVectorType m_hCoeffs; VectorType m_temp; bool m_isInitialized; }; /** \internal * Performs a tridiagonal decomposition of \a matA in place. * * \param matA the input selfadjoint matrix * \param hCoeffs returned Householder coefficients * * The result is written in the lower triangular part of \a matA. * * Implemented from Golub's "%Matrix Computations", algorithm 8.3.1. * * \sa packedMatrix() */ template void HessenbergDecomposition::_compute(MatrixType& matA, CoeffVectorType& hCoeffs, VectorType& temp) { eigen_assert(matA.rows()==matA.cols()); Index n = matA.rows(); temp.resize(n); for (Index i = 0; i struct HessenbergDecompositionMatrixHReturnType : public ReturnByValue > { public: /** \brief Constructor. * * \param[in] hess Hessenberg decomposition */ HessenbergDecompositionMatrixHReturnType(const HessenbergDecomposition& hess) : m_hess(hess) { } /** \brief Hessenberg matrix in decomposition. * * \param[out] result Hessenberg matrix in decomposition \p hess which * was passed to the constructor */ template inline void evalTo(ResultType& result) const { result = m_hess.packedMatrix(); Index n = result.rows(); if (n>2) result.bottomLeftCorner(n-2, n-2).template triangularView().setZero(); } Index rows() const { return m_hess.packedMatrix().rows(); } Index cols() const { return m_hess.packedMatrix().cols(); } protected: const HessenbergDecomposition& m_hess; }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_HESSENBERGDECOMPOSITION_H piqp-octave/src/eigen/Eigen/src/Eigenvalues/EigenSolver.h0000644000175100001770000005467214107270226023230 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // Copyright (C) 2010,2012 Jitse Niesen // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
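// Illustrative sketch (not part of the original sources): the Hessenberg matrix H can also be
// recovered from the packed storage of a computed HessenbergDecomposition by zeroing
// everything below the first subdiagonal, which is effectively what the evalTo() of the
// MatrixHReturnType above does. 'hess' is assumed to be a computed
// HessenbergDecomposition<Eigen::MatrixXd>.
Eigen::MatrixXd H = hess.packedMatrix();
Eigen::Index n = H.rows();
if (n > 2)
  H.bottomLeftCorner(n - 2, n - 2).triangularView<Eigen::Lower>().setZero();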
#ifndef EIGEN_EIGENSOLVER_H #define EIGEN_EIGENSOLVER_H #include "./RealSchur.h" namespace Eigen { /** \eigenvalues_module \ingroup Eigenvalues_Module * * * \class EigenSolver * * \brief Computes eigenvalues and eigenvectors of general matrices * * \tparam _MatrixType the type of the matrix of which we are computing the * eigendecomposition; this is expected to be an instantiation of the Matrix * class template. Currently, only real matrices are supported. * * The eigenvalues and eigenvectors of a matrix \f$ A \f$ are scalars * \f$ \lambda \f$ and vectors \f$ v \f$ such that \f$ Av = \lambda v \f$. If * \f$ D \f$ is a diagonal matrix with the eigenvalues on the diagonal, and * \f$ V \f$ is a matrix with the eigenvectors as its columns, then \f$ A V = * V D \f$. The matrix \f$ V \f$ is almost always invertible, in which case we * have \f$ A = V D V^{-1} \f$. This is called the eigendecomposition. * * The eigenvalues and eigenvectors of a matrix may be complex, even when the * matrix is real. However, we can choose real matrices \f$ V \f$ and \f$ D * \f$ satisfying \f$ A V = V D \f$, just like the eigendecomposition, if the * matrix \f$ D \f$ is not required to be diagonal, but if it is allowed to * have blocks of the form * \f[ \begin{bmatrix} u & v \\ -v & u \end{bmatrix} \f] * (where \f$ u \f$ and \f$ v \f$ are real numbers) on the diagonal. These * blocks correspond to complex eigenvalue pairs \f$ u \pm iv \f$. We call * this variant of the eigendecomposition the pseudo-eigendecomposition. * * Call the function compute() to compute the eigenvalues and eigenvectors of * a given matrix. Alternatively, you can use the * EigenSolver(const MatrixType&, bool) constructor which computes the * eigenvalues and eigenvectors at construction time. Once the eigenvalue and * eigenvectors are computed, they can be retrieved with the eigenvalues() and * eigenvectors() functions. The pseudoEigenvalueMatrix() and * pseudoEigenvectors() methods allow the construction of the * pseudo-eigendecomposition. * * The documentation for EigenSolver(const MatrixType&, bool) contains an * example of the typical use of this class. * * \note The implementation is adapted from * JAMA (public domain). * Their code is based on EISPACK. * * \sa MatrixBase::eigenvalues(), class ComplexEigenSolver, class SelfAdjointEigenSolver */ template class EigenSolver { public: /** \brief Synonym for the template parameter \p _MatrixType. */ typedef _MatrixType MatrixType; enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, Options = MatrixType::Options, MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; /** \brief Scalar type for matrices of type #MatrixType. */ typedef typename MatrixType::Scalar Scalar; typedef typename NumTraits::Real RealScalar; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 /** \brief Complex scalar type for #MatrixType. * * This is \c std::complex if #Scalar is real (e.g., * \c float or \c double) and just \c Scalar if #Scalar is * complex. */ typedef std::complex ComplexScalar; /** \brief Type for vector of eigenvalues as returned by eigenvalues(). * * This is a column vector with entries of type #ComplexScalar. * The length of the vector is the size of #MatrixType. */ typedef Matrix EigenvalueType; /** \brief Type for matrix of eigenvectors as returned by eigenvectors(). * * This is a square matrix with entries of type #ComplexScalar. 
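 * With the eigenvectors computed, the decomposition can be checked numerically
 * (a small sketch, not from the original documentation; \c A is a square MatrixXd):
 * \code
 * Eigen::EigenSolver<Eigen::MatrixXd> es(A);
 * Eigen::MatrixXcd V = es.eigenvectors();
 * Eigen::VectorXcd lambda = es.eigenvalues();
 * double residual = (A.cast<std::complex<double> >() * V - V * lambda.asDiagonal()).norm();
 * \endcode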
* The size is the same as the size of #MatrixType. */ typedef Matrix EigenvectorsType; /** \brief Default constructor. * * The default constructor is useful in cases in which the user intends to * perform decompositions via EigenSolver::compute(const MatrixType&, bool). * * \sa compute() for an example. */ EigenSolver() : m_eivec(), m_eivalues(), m_isInitialized(false), m_eigenvectorsOk(false), m_realSchur(), m_matT(), m_tmp() {} /** \brief Default constructor with memory preallocation * * Like the default constructor but with preallocation of the internal data * according to the specified problem \a size. * \sa EigenSolver() */ explicit EigenSolver(Index size) : m_eivec(size, size), m_eivalues(size), m_isInitialized(false), m_eigenvectorsOk(false), m_realSchur(size), m_matT(size, size), m_tmp(size) {} /** \brief Constructor; computes eigendecomposition of given matrix. * * \param[in] matrix Square matrix whose eigendecomposition is to be computed. * \param[in] computeEigenvectors If true, both the eigenvectors and the * eigenvalues are computed; if false, only the eigenvalues are * computed. * * This constructor calls compute() to compute the eigenvalues * and eigenvectors. * * Example: \include EigenSolver_EigenSolver_MatrixType.cpp * Output: \verbinclude EigenSolver_EigenSolver_MatrixType.out * * \sa compute() */ template explicit EigenSolver(const EigenBase& matrix, bool computeEigenvectors = true) : m_eivec(matrix.rows(), matrix.cols()), m_eivalues(matrix.cols()), m_isInitialized(false), m_eigenvectorsOk(false), m_realSchur(matrix.cols()), m_matT(matrix.rows(), matrix.cols()), m_tmp(matrix.cols()) { compute(matrix.derived(), computeEigenvectors); } /** \brief Returns the eigenvectors of given matrix. * * \returns %Matrix whose columns are the (possibly complex) eigenvectors. * * \pre Either the constructor * EigenSolver(const MatrixType&,bool) or the member function * compute(const MatrixType&, bool) has been called before, and * \p computeEigenvectors was set to true (the default). * * Column \f$ k \f$ of the returned matrix is an eigenvector corresponding * to eigenvalue number \f$ k \f$ as returned by eigenvalues(). The * eigenvectors are normalized to have (Euclidean) norm equal to one. The * matrix returned by this function is the matrix \f$ V \f$ in the * eigendecomposition \f$ A = V D V^{-1} \f$, if it exists. * * Example: \include EigenSolver_eigenvectors.cpp * Output: \verbinclude EigenSolver_eigenvectors.out * * \sa eigenvalues(), pseudoEigenvectors() */ EigenvectorsType eigenvectors() const; /** \brief Returns the pseudo-eigenvectors of given matrix. * * \returns Const reference to matrix whose columns are the pseudo-eigenvectors. * * \pre Either the constructor * EigenSolver(const MatrixType&,bool) or the member function * compute(const MatrixType&, bool) has been called before, and * \p computeEigenvectors was set to true (the default). * * The real matrix \f$ V \f$ returned by this function and the * block-diagonal matrix \f$ D \f$ returned by pseudoEigenvalueMatrix() * satisfy \f$ AV = VD \f$. 
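 * A short sketch of the real pseudo-eigendecomposition (illustrative only,
 * not from the original documentation; \c A is a square MatrixXd):
 * \code
 * Eigen::EigenSolver<Eigen::MatrixXd> es(A);
 * Eigen::MatrixXd D = es.pseudoEigenvalueMatrix();   // block-diagonal, real
 * Eigen::MatrixXd V = es.pseudoEigenvectors();       // real pseudo-eigenvectors
 * double residual = (A * V - V * D).norm();          // should be small
 * \endcode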
* * Example: \include EigenSolver_pseudoEigenvectors.cpp * Output: \verbinclude EigenSolver_pseudoEigenvectors.out * * \sa pseudoEigenvalueMatrix(), eigenvectors() */ const MatrixType& pseudoEigenvectors() const { eigen_assert(m_isInitialized && "EigenSolver is not initialized."); eigen_assert(m_eigenvectorsOk && "The eigenvectors have not been computed together with the eigenvalues."); return m_eivec; } /** \brief Returns the block-diagonal matrix in the pseudo-eigendecomposition. * * \returns A block-diagonal matrix. * * \pre Either the constructor * EigenSolver(const MatrixType&,bool) or the member function * compute(const MatrixType&, bool) has been called before. * * The matrix \f$ D \f$ returned by this function is real and * block-diagonal. The blocks on the diagonal are either 1-by-1 or 2-by-2 * blocks of the form * \f$ \begin{bmatrix} u & v \\ -v & u \end{bmatrix} \f$. * These blocks are not sorted in any particular order. * The matrix \f$ D \f$ and the matrix \f$ V \f$ returned by * pseudoEigenvectors() satisfy \f$ AV = VD \f$. * * \sa pseudoEigenvectors() for an example, eigenvalues() */ MatrixType pseudoEigenvalueMatrix() const; /** \brief Returns the eigenvalues of given matrix. * * \returns A const reference to the column vector containing the eigenvalues. * * \pre Either the constructor * EigenSolver(const MatrixType&,bool) or the member function * compute(const MatrixType&, bool) has been called before. * * The eigenvalues are repeated according to their algebraic multiplicity, * so there are as many eigenvalues as rows in the matrix. The eigenvalues * are not sorted in any particular order. * * Example: \include EigenSolver_eigenvalues.cpp * Output: \verbinclude EigenSolver_eigenvalues.out * * \sa eigenvectors(), pseudoEigenvalueMatrix(), * MatrixBase::eigenvalues() */ const EigenvalueType& eigenvalues() const { eigen_assert(m_isInitialized && "EigenSolver is not initialized."); return m_eivalues; } /** \brief Computes eigendecomposition of given matrix. * * \param[in] matrix Square matrix whose eigendecomposition is to be computed. * \param[in] computeEigenvectors If true, both the eigenvectors and the * eigenvalues are computed; if false, only the eigenvalues are * computed. * \returns Reference to \c *this * * This function computes the eigenvalues of the real matrix \p matrix. * The eigenvalues() function can be used to retrieve them. If * \p computeEigenvectors is true, then the eigenvectors are also computed * and can be retrieved by calling eigenvectors(). * * The matrix is first reduced to real Schur form using the RealSchur * class. The Schur decomposition is then used to compute the eigenvalues * and eigenvectors. * * The cost of the computation is dominated by the cost of the * Schur decomposition, which is very approximately \f$ 25n^3 \f$ * (where \f$ n \f$ is the size of the matrix) if \p computeEigenvectors * is true, and \f$ 10n^3 \f$ if \p computeEigenvectors is false. * * This method reuses of the allocated data in the EigenSolver object. * * Example: \include EigenSolver_compute.cpp * Output: \verbinclude EigenSolver_compute.out */ template EigenSolver& compute(const EigenBase& matrix, bool computeEigenvectors = true); /** \returns NumericalIssue if the input contains INF or NaN values or overflow occurred. Returns Success otherwise. */ ComputationInfo info() const { eigen_assert(m_isInitialized && "EigenSolver is not initialized."); return m_info; } /** \brief Sets the maximum number of iterations allowed. 
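 *
 * For example (a sketch, assuming \c A is a square MatrixXd), the limit can be
 * tightened before computing and the outcome checked through info():
 * \code
 * Eigen::EigenSolver<Eigen::MatrixXd> es;
 * es.setMaxIterations(20).compute(A);
 * if (es.info() != Eigen::Success) {
 *   // either the Schur iteration did not converge or a NaN/Inf was encountered
 * }
 * \endcode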
*/ EigenSolver& setMaxIterations(Index maxIters) { m_realSchur.setMaxIterations(maxIters); return *this; } /** \brief Returns the maximum number of iterations. */ Index getMaxIterations() { return m_realSchur.getMaxIterations(); } private: void doComputeEigenvectors(); protected: static void check_template_parameters() { EIGEN_STATIC_ASSERT_NON_INTEGER(Scalar); EIGEN_STATIC_ASSERT(!NumTraits::IsComplex, NUMERIC_TYPE_MUST_BE_REAL); } MatrixType m_eivec; EigenvalueType m_eivalues; bool m_isInitialized; bool m_eigenvectorsOk; ComputationInfo m_info; RealSchur m_realSchur; MatrixType m_matT; typedef Matrix ColumnVectorType; ColumnVectorType m_tmp; }; template MatrixType EigenSolver::pseudoEigenvalueMatrix() const { eigen_assert(m_isInitialized && "EigenSolver is not initialized."); const RealScalar precision = RealScalar(2)*NumTraits::epsilon(); Index n = m_eivalues.rows(); MatrixType matD = MatrixType::Zero(n,n); for (Index i=0; i(i,i) << numext::real(m_eivalues.coeff(i)), numext::imag(m_eivalues.coeff(i)), -numext::imag(m_eivalues.coeff(i)), numext::real(m_eivalues.coeff(i)); ++i; } } return matD; } template typename EigenSolver::EigenvectorsType EigenSolver::eigenvectors() const { eigen_assert(m_isInitialized && "EigenSolver is not initialized."); eigen_assert(m_eigenvectorsOk && "The eigenvectors have not been computed together with the eigenvalues."); const RealScalar precision = RealScalar(2)*NumTraits::epsilon(); Index n = m_eivec.cols(); EigenvectorsType matV(n,n); for (Index j=0; j(); matV.col(j).normalize(); } else { // we have a pair of complex eigen values for (Index i=0; i template EigenSolver& EigenSolver::compute(const EigenBase& matrix, bool computeEigenvectors) { check_template_parameters(); using std::sqrt; using std::abs; using numext::isfinite; eigen_assert(matrix.cols() == matrix.rows()); // Reduce to real Schur form. m_realSchur.compute(matrix.derived(), computeEigenvectors); m_info = m_realSchur.info(); if (m_info == Success) { m_matT = m_realSchur.matrixT(); if (computeEigenvectors) m_eivec = m_realSchur.matrixU(); // Compute eigenvalues from matT m_eivalues.resize(matrix.cols()); Index i = 0; while (i < matrix.cols()) { if (i == matrix.cols() - 1 || m_matT.coeff(i+1, i) == Scalar(0)) { m_eivalues.coeffRef(i) = m_matT.coeff(i, i); if(!(isfinite)(m_eivalues.coeffRef(i))) { m_isInitialized = true; m_eigenvectorsOk = false; m_info = NumericalIssue; return *this; } ++i; } else { Scalar p = Scalar(0.5) * (m_matT.coeff(i, i) - m_matT.coeff(i+1, i+1)); Scalar z; // Compute z = sqrt(abs(p * p + m_matT.coeff(i+1, i) * m_matT.coeff(i, i+1))); // without overflow { Scalar t0 = m_matT.coeff(i+1, i); Scalar t1 = m_matT.coeff(i, i+1); Scalar maxval = numext::maxi(abs(p),numext::maxi(abs(t0),abs(t1))); t0 /= maxval; t1 /= maxval; Scalar p0 = p/maxval; z = maxval * sqrt(abs(p0 * p0 + t0 * t1)); } m_eivalues.coeffRef(i) = ComplexScalar(m_matT.coeff(i+1, i+1) + p, z); m_eivalues.coeffRef(i+1) = ComplexScalar(m_matT.coeff(i+1, i+1) + p, -z); if(!((isfinite)(m_eivalues.coeffRef(i)) && (isfinite)(m_eivalues.coeffRef(i+1)))) { m_isInitialized = true; m_eigenvectorsOk = false; m_info = NumericalIssue; return *this; } i += 2; } } // Compute eigenvectors. if (computeEigenvectors) doComputeEigenvectors(); } m_isInitialized = true; m_eigenvectorsOk = computeEigenvectors; return *this; } template void EigenSolver::doComputeEigenvectors() { using std::abs; const Index size = m_eivec.cols(); const Scalar eps = NumTraits::epsilon(); // inefficient! 
this is already computed in RealSchur Scalar norm(0); for (Index j = 0; j < size; ++j) { norm += m_matT.row(j).segment((std::max)(j-1,Index(0)), size-(std::max)(j-1,Index(0))).cwiseAbs().sum(); } // Backsubstitute to find vectors of upper triangular form if (norm == Scalar(0)) { return; } for (Index n = size-1; n >= 0; n--) { Scalar p = m_eivalues.coeff(n).real(); Scalar q = m_eivalues.coeff(n).imag(); // Scalar vector if (q == Scalar(0)) { Scalar lastr(0), lastw(0); Index l = n; m_matT.coeffRef(n,n) = Scalar(1); for (Index i = n-1; i >= 0; i--) { Scalar w = m_matT.coeff(i,i) - p; Scalar r = m_matT.row(i).segment(l,n-l+1).dot(m_matT.col(n).segment(l, n-l+1)); if (m_eivalues.coeff(i).imag() < Scalar(0)) { lastw = w; lastr = r; } else { l = i; if (m_eivalues.coeff(i).imag() == Scalar(0)) { if (w != Scalar(0)) m_matT.coeffRef(i,n) = -r / w; else m_matT.coeffRef(i,n) = -r / (eps * norm); } else // Solve real equations { Scalar x = m_matT.coeff(i,i+1); Scalar y = m_matT.coeff(i+1,i); Scalar denom = (m_eivalues.coeff(i).real() - p) * (m_eivalues.coeff(i).real() - p) + m_eivalues.coeff(i).imag() * m_eivalues.coeff(i).imag(); Scalar t = (x * lastr - lastw * r) / denom; m_matT.coeffRef(i,n) = t; if (abs(x) > abs(lastw)) m_matT.coeffRef(i+1,n) = (-r - w * t) / x; else m_matT.coeffRef(i+1,n) = (-lastr - y * t) / lastw; } // Overflow control Scalar t = abs(m_matT.coeff(i,n)); if ((eps * t) * t > Scalar(1)) m_matT.col(n).tail(size-i) /= t; } } } else if (q < Scalar(0) && n > 0) // Complex vector { Scalar lastra(0), lastsa(0), lastw(0); Index l = n-1; // Last vector component imaginary so matrix is triangular if (abs(m_matT.coeff(n,n-1)) > abs(m_matT.coeff(n-1,n))) { m_matT.coeffRef(n-1,n-1) = q / m_matT.coeff(n,n-1); m_matT.coeffRef(n-1,n) = -(m_matT.coeff(n,n) - p) / m_matT.coeff(n,n-1); } else { ComplexScalar cc = ComplexScalar(Scalar(0),-m_matT.coeff(n-1,n)) / ComplexScalar(m_matT.coeff(n-1,n-1)-p,q); m_matT.coeffRef(n-1,n-1) = numext::real(cc); m_matT.coeffRef(n-1,n) = numext::imag(cc); } m_matT.coeffRef(n,n-1) = Scalar(0); m_matT.coeffRef(n,n) = Scalar(1); for (Index i = n-2; i >= 0; i--) { Scalar ra = m_matT.row(i).segment(l, n-l+1).dot(m_matT.col(n-1).segment(l, n-l+1)); Scalar sa = m_matT.row(i).segment(l, n-l+1).dot(m_matT.col(n).segment(l, n-l+1)); Scalar w = m_matT.coeff(i,i) - p; if (m_eivalues.coeff(i).imag() < Scalar(0)) { lastw = w; lastra = ra; lastsa = sa; } else { l = i; if (m_eivalues.coeff(i).imag() == RealScalar(0)) { ComplexScalar cc = ComplexScalar(-ra,-sa) / ComplexScalar(w,q); m_matT.coeffRef(i,n-1) = numext::real(cc); m_matT.coeffRef(i,n) = numext::imag(cc); } else { // Solve complex equations Scalar x = m_matT.coeff(i,i+1); Scalar y = m_matT.coeff(i+1,i); Scalar vr = (m_eivalues.coeff(i).real() - p) * (m_eivalues.coeff(i).real() - p) + m_eivalues.coeff(i).imag() * m_eivalues.coeff(i).imag() - q * q; Scalar vi = (m_eivalues.coeff(i).real() - p) * Scalar(2) * q; if ((vr == Scalar(0)) && (vi == Scalar(0))) vr = eps * norm * (abs(w) + abs(q) + abs(x) + abs(y) + abs(lastw)); ComplexScalar cc = ComplexScalar(x*lastra-lastw*ra+q*sa,x*lastsa-lastw*sa-q*ra) / ComplexScalar(vr,vi); m_matT.coeffRef(i,n-1) = numext::real(cc); m_matT.coeffRef(i,n) = numext::imag(cc); if (abs(x) > (abs(lastw) + abs(q))) { m_matT.coeffRef(i+1,n-1) = (-ra - w * m_matT.coeff(i,n-1) + q * m_matT.coeff(i,n)) / x; m_matT.coeffRef(i+1,n) = (-sa - w * m_matT.coeff(i,n) - q * m_matT.coeff(i,n-1)) / x; } else { cc = ComplexScalar(-lastra-y*m_matT.coeff(i,n-1),-lastsa-y*m_matT.coeff(i,n)) / ComplexScalar(lastw,q); 
m_matT.coeffRef(i+1,n-1) = numext::real(cc); m_matT.coeffRef(i+1,n) = numext::imag(cc); } } // Overflow control Scalar t = numext::maxi(abs(m_matT.coeff(i,n-1)),abs(m_matT.coeff(i,n))); if ((eps * t) * t > Scalar(1)) m_matT.block(i, n-1, size-i, 2) /= t; } } // We handled a pair of complex conjugate eigenvalues, so need to skip them both n--; } else { eigen_assert(0 && "Internal bug in EigenSolver (INF or NaN has not been detected)"); // this should not happen } } // Back transformation to get eigenvectors of original matrix for (Index j = size-1; j >= 0; j--) { m_tmp.noalias() = m_eivec.leftCols(j+1) * m_matT.col(j).segment(0, j+1); m_eivec.col(j) = m_tmp; } } } // end namespace Eigen #endif // EIGEN_EIGENSOLVER_H piqp-octave/src/eigen/Eigen/src/Eigenvalues/SelfAdjointEigenSolver_LAPACKE.h0000644000175100001770000001001014107270226026445 0ustar runnerdocker/* Copyright (c) 2011, Intel Corporation. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Intel Corporation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ******************************************************************************** * Content : Eigen bindings to LAPACKe * Self-adjoint eigenvalues/eigenvectors. 
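 *
 * Usage sketch (an illustration, not part of the original header): these
 * specializations take effect when Eigen is compiled with LAPACKE support
 * (typically by defining EIGEN_USE_LAPACKE and linking a LAPACKE/BLAS
 * implementation); SelfAdjointEigenSolver::compute() is then routed to the
 * LAPACK ?syev/?heev drivers for float, double, scomplex and dcomplex.
 *
 *   Eigen::MatrixXd A = Eigen::MatrixXd::Random(100, 100);
 *   Eigen::MatrixXd S = A + A.transpose();                   // self-adjoint test matrix
 *   Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> saes(S);  // dispatches to LAPACKE_dsyev
 *   Eigen::VectorXd  evals = saes.eigenvalues();
 *   Eigen::MatrixXd  evecs = saes.eigenvectors();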
******************************************************************************** */ #ifndef EIGEN_SAEIGENSOLVER_LAPACKE_H #define EIGEN_SAEIGENSOLVER_LAPACKE_H namespace Eigen { /** \internal Specialization for the data types supported by LAPACKe */ #define EIGEN_LAPACKE_EIG_SELFADJ_2(EIGTYPE, LAPACKE_TYPE, LAPACKE_RTYPE, LAPACKE_NAME, EIGCOLROW ) \ template<> template inline \ SelfAdjointEigenSolver >& \ SelfAdjointEigenSolver >::compute(const EigenBase& matrix, int options) \ { \ eigen_assert(matrix.cols() == matrix.rows()); \ eigen_assert((options&~(EigVecMask|GenEigMask))==0 \ && (options&EigVecMask)!=EigVecMask \ && "invalid option parameter"); \ bool computeEigenvectors = (options&ComputeEigenvectors)==ComputeEigenvectors; \ lapack_int n = internal::convert_index(matrix.cols()), lda, info; \ m_eivalues.resize(n,1); \ m_subdiag.resize(n-1); \ m_eivec = matrix; \ \ if(n==1) \ { \ m_eivalues.coeffRef(0,0) = numext::real(m_eivec.coeff(0,0)); \ if(computeEigenvectors) m_eivec.setOnes(n,n); \ m_info = Success; \ m_isInitialized = true; \ m_eigenvectorsOk = computeEigenvectors; \ return *this; \ } \ \ lda = internal::convert_index(m_eivec.outerStride()); \ char jobz, uplo='L'/*, range='A'*/; \ jobz = computeEigenvectors ? 'V' : 'N'; \ \ info = LAPACKE_##LAPACKE_NAME( LAPACK_COL_MAJOR, jobz, uplo, n, (LAPACKE_TYPE*)m_eivec.data(), lda, (LAPACKE_RTYPE*)m_eivalues.data() ); \ m_info = (info==0) ? Success : NoConvergence; \ m_isInitialized = true; \ m_eigenvectorsOk = computeEigenvectors; \ return *this; \ } #define EIGEN_LAPACKE_EIG_SELFADJ(EIGTYPE, LAPACKE_TYPE, LAPACKE_RTYPE, LAPACKE_NAME ) \ EIGEN_LAPACKE_EIG_SELFADJ_2(EIGTYPE, LAPACKE_TYPE, LAPACKE_RTYPE, LAPACKE_NAME, ColMajor ) \ EIGEN_LAPACKE_EIG_SELFADJ_2(EIGTYPE, LAPACKE_TYPE, LAPACKE_RTYPE, LAPACKE_NAME, RowMajor ) EIGEN_LAPACKE_EIG_SELFADJ(double, double, double, dsyev) EIGEN_LAPACKE_EIG_SELFADJ(float, float, float, ssyev) EIGEN_LAPACKE_EIG_SELFADJ(dcomplex, lapack_complex_double, double, zheev) EIGEN_LAPACKE_EIG_SELFADJ(scomplex, lapack_complex_float, float, cheev) } // end namespace Eigen #endif // EIGEN_SAEIGENSOLVER_H piqp-octave/src/eigen/Eigen/src/Eigenvalues/ComplexSchur.h0000644000175100001770000004157214107270226023415 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009 Claire Maurice // Copyright (C) 2009 Gael Guennebaud // Copyright (C) 2010,2012 Jitse Niesen // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_COMPLEX_SCHUR_H #define EIGEN_COMPLEX_SCHUR_H #include "./HessenbergDecomposition.h" namespace Eigen { namespace internal { template struct complex_schur_reduce_to_hessenberg; } /** \eigenvalues_module \ingroup Eigenvalues_Module * * * \class ComplexSchur * * \brief Performs a complex Schur decomposition of a real or complex square matrix * * \tparam _MatrixType the type of the matrix of which we are * computing the Schur decomposition; this is expected to be an * instantiation of the Matrix class template. * * Given a real or complex square matrix A, this class computes the * Schur decomposition: \f$ A = U T U^*\f$ where U is a unitary * complex matrix, and T is a complex upper triangular matrix. The * diagonal of the matrix T corresponds to the eigenvalues of the * matrix A. * * Call the function compute() to compute the Schur decomposition of * a given matrix. 
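 * For instance (a brief sketch, not taken from the original documentation;
 * \c A is a square MatrixXcd):
 * \code
 * Eigen::ComplexSchur<Eigen::MatrixXcd> schur(A.rows());
 * schur.compute(A);
 * Eigen::MatrixXcd U = schur.matrixU();
 * Eigen::MatrixXcd T = schur.matrixT();            // upper triangular, eigenvalues on the diagonal
 * double residual = (A - U * T * U.adjoint()).norm();
 * \endcode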
Alternatively, you can use the * ComplexSchur(const MatrixType&, bool) constructor which computes * the Schur decomposition at construction time. Once the * decomposition is computed, you can use the matrixU() and matrixT() * functions to retrieve the matrices U and V in the decomposition. * * \note This code is inspired from Jampack * * \sa class RealSchur, class EigenSolver, class ComplexEigenSolver */ template class ComplexSchur { public: typedef _MatrixType MatrixType; enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, Options = MatrixType::Options, MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; /** \brief Scalar type for matrices of type \p _MatrixType. */ typedef typename MatrixType::Scalar Scalar; typedef typename NumTraits::Real RealScalar; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 /** \brief Complex scalar type for \p _MatrixType. * * This is \c std::complex if #Scalar is real (e.g., * \c float or \c double) and just \c Scalar if #Scalar is * complex. */ typedef std::complex ComplexScalar; /** \brief Type for the matrices in the Schur decomposition. * * This is a square matrix with entries of type #ComplexScalar. * The size is the same as the size of \p _MatrixType. */ typedef Matrix ComplexMatrixType; /** \brief Default constructor. * * \param [in] size Positive integer, size of the matrix whose Schur decomposition will be computed. * * The default constructor is useful in cases in which the user * intends to perform decompositions via compute(). The \p size * parameter is only used as a hint. It is not an error to give a * wrong \p size, but it may impair performance. * * \sa compute() for an example. */ explicit ComplexSchur(Index size = RowsAtCompileTime==Dynamic ? 1 : RowsAtCompileTime) : m_matT(size,size), m_matU(size,size), m_hess(size), m_isInitialized(false), m_matUisUptodate(false), m_maxIters(-1) {} /** \brief Constructor; computes Schur decomposition of given matrix. * * \param[in] matrix Square matrix whose Schur decomposition is to be computed. * \param[in] computeU If true, both T and U are computed; if false, only T is computed. * * This constructor calls compute() to compute the Schur decomposition. * * \sa matrixT() and matrixU() for examples. */ template explicit ComplexSchur(const EigenBase& matrix, bool computeU = true) : m_matT(matrix.rows(),matrix.cols()), m_matU(matrix.rows(),matrix.cols()), m_hess(matrix.rows()), m_isInitialized(false), m_matUisUptodate(false), m_maxIters(-1) { compute(matrix.derived(), computeU); } /** \brief Returns the unitary matrix in the Schur decomposition. * * \returns A const reference to the matrix U. * * It is assumed that either the constructor * ComplexSchur(const MatrixType& matrix, bool computeU) or the * member function compute(const MatrixType& matrix, bool computeU) * has been called before to compute the Schur decomposition of a * matrix, and that \p computeU was set to true (the default * value). * * Example: \include ComplexSchur_matrixU.cpp * Output: \verbinclude ComplexSchur_matrixU.out */ const ComplexMatrixType& matrixU() const { eigen_assert(m_isInitialized && "ComplexSchur is not initialized."); eigen_assert(m_matUisUptodate && "The matrix U has not been computed during the ComplexSchur decomposition."); return m_matU; } /** \brief Returns the triangular matrix in the Schur decomposition. * * \returns A const reference to the matrix T. 
* * It is assumed that either the constructor * ComplexSchur(const MatrixType& matrix, bool computeU) or the * member function compute(const MatrixType& matrix, bool computeU) * has been called before to compute the Schur decomposition of a * matrix. * * Note that this function returns a plain square matrix. If you want to reference * only the upper triangular part, use: * \code schur.matrixT().triangularView() \endcode * * Example: \include ComplexSchur_matrixT.cpp * Output: \verbinclude ComplexSchur_matrixT.out */ const ComplexMatrixType& matrixT() const { eigen_assert(m_isInitialized && "ComplexSchur is not initialized."); return m_matT; } /** \brief Computes Schur decomposition of given matrix. * * \param[in] matrix Square matrix whose Schur decomposition is to be computed. * \param[in] computeU If true, both T and U are computed; if false, only T is computed. * \returns Reference to \c *this * * The Schur decomposition is computed by first reducing the * matrix to Hessenberg form using the class * HessenbergDecomposition. The Hessenberg matrix is then reduced * to triangular form by performing QR iterations with a single * shift. The cost of computing the Schur decomposition depends * on the number of iterations; as a rough guide, it may be taken * on the number of iterations; as a rough guide, it may be taken * to be \f$25n^3\f$ complex flops, or \f$10n^3\f$ complex flops * if \a computeU is false. * * Example: \include ComplexSchur_compute.cpp * Output: \verbinclude ComplexSchur_compute.out * * \sa compute(const MatrixType&, bool, Index) */ template ComplexSchur& compute(const EigenBase& matrix, bool computeU = true); /** \brief Compute Schur decomposition from a given Hessenberg matrix * \param[in] matrixH Matrix in Hessenberg form H * \param[in] matrixQ orthogonal matrix Q that transform a matrix A to H : A = Q H Q^T * \param computeU Computes the matriX U of the Schur vectors * \return Reference to \c *this * * This routine assumes that the matrix is already reduced in Hessenberg form matrixH * using either the class HessenbergDecomposition or another mean. * It computes the upper quasi-triangular matrix T of the Schur decomposition of H * When computeU is true, this routine computes the matrix U such that * A = U T U^T = (QZ) T (QZ)^T = Q H Q^T where A is the initial matrix * * NOTE Q is referenced if computeU is true; so, if the initial orthogonal matrix * is not available, the user should give an identity matrix (Q.setIdentity()) * * \sa compute(const MatrixType&, bool) */ template ComplexSchur& computeFromHessenberg(const HessMatrixType& matrixH, const OrthMatrixType& matrixQ, bool computeU=true); /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful, \c NoConvergence otherwise. */ ComputationInfo info() const { eigen_assert(m_isInitialized && "ComplexSchur is not initialized."); return m_info; } /** \brief Sets the maximum number of iterations allowed. * * If not specified by the user, the maximum number of iterations is m_maxIterationsPerRow times the size * of the matrix. */ ComplexSchur& setMaxIterations(Index maxIters) { m_maxIters = maxIters; return *this; } /** \brief Returns the maximum number of iterations. */ Index getMaxIterations() { return m_maxIters; } /** \brief Maximum number of iterations per row. * * If not otherwise specified, the maximum number of iterations is this number times the size of the * matrix. It is currently set to 30. 
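 *
 * As a sketch (not from the original documentation; \c A is a square
 * MatrixXcd), the default limit for an n-by-n matrix is therefore 30*n QR
 * iterations, and it can be overridden explicitly:
 * \code
 * Eigen::ComplexSchur<Eigen::MatrixXcd> schur(A.rows());
 * schur.setMaxIterations(30 * A.rows());   // same bound as the default
 * schur.compute(A, false);                 // computeU = false: only T is computed
 * if (schur.info() == Eigen::NoConvergence) {
 *   // the iteration limit was reached before the matrix became triangular
 * }
 * \endcode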
*/ static const int m_maxIterationsPerRow = 30; protected: ComplexMatrixType m_matT, m_matU; HessenbergDecomposition m_hess; ComputationInfo m_info; bool m_isInitialized; bool m_matUisUptodate; Index m_maxIters; private: bool subdiagonalEntryIsNeglegible(Index i); ComplexScalar computeShift(Index iu, Index iter); void reduceToTriangularForm(bool computeU); friend struct internal::complex_schur_reduce_to_hessenberg::IsComplex>; }; /** If m_matT(i+1,i) is neglegible in floating point arithmetic * compared to m_matT(i,i) and m_matT(j,j), then set it to zero and * return true, else return false. */ template inline bool ComplexSchur::subdiagonalEntryIsNeglegible(Index i) { RealScalar d = numext::norm1(m_matT.coeff(i,i)) + numext::norm1(m_matT.coeff(i+1,i+1)); RealScalar sd = numext::norm1(m_matT.coeff(i+1,i)); if (internal::isMuchSmallerThan(sd, d, NumTraits::epsilon())) { m_matT.coeffRef(i+1,i) = ComplexScalar(0); return true; } return false; } /** Compute the shift in the current QR iteration. */ template typename ComplexSchur::ComplexScalar ComplexSchur::computeShift(Index iu, Index iter) { using std::abs; if (iter == 10 || iter == 20) { // exceptional shift, taken from http://www.netlib.org/eispack/comqr.f return abs(numext::real(m_matT.coeff(iu,iu-1))) + abs(numext::real(m_matT.coeff(iu-1,iu-2))); } // compute the shift as one of the eigenvalues of t, the 2x2 // diagonal block on the bottom of the active submatrix Matrix t = m_matT.template block<2,2>(iu-1,iu-1); RealScalar normt = t.cwiseAbs().sum(); t /= normt; // the normalization by sf is to avoid under/overflow ComplexScalar b = t.coeff(0,1) * t.coeff(1,0); ComplexScalar c = t.coeff(0,0) - t.coeff(1,1); ComplexScalar disc = sqrt(c*c + RealScalar(4)*b); ComplexScalar det = t.coeff(0,0) * t.coeff(1,1) - b; ComplexScalar trace = t.coeff(0,0) + t.coeff(1,1); ComplexScalar eival1 = (trace + disc) / RealScalar(2); ComplexScalar eival2 = (trace - disc) / RealScalar(2); RealScalar eival1_norm = numext::norm1(eival1); RealScalar eival2_norm = numext::norm1(eival2); // A division by zero can only occur if eival1==eival2==0. 
// In this case, det==0, and all we have to do is checking that eival2_norm!=0 if(eival1_norm > eival2_norm) eival2 = det / eival1; else if(eival2_norm!=RealScalar(0)) eival1 = det / eival2; // choose the eigenvalue closest to the bottom entry of the diagonal if(numext::norm1(eival1-t.coeff(1,1)) < numext::norm1(eival2-t.coeff(1,1))) return normt * eival1; else return normt * eival2; } template template ComplexSchur& ComplexSchur::compute(const EigenBase& matrix, bool computeU) { m_matUisUptodate = false; eigen_assert(matrix.cols() == matrix.rows()); if(matrix.cols() == 1) { m_matT = matrix.derived().template cast(); if(computeU) m_matU = ComplexMatrixType::Identity(1,1); m_info = Success; m_isInitialized = true; m_matUisUptodate = computeU; return *this; } internal::complex_schur_reduce_to_hessenberg::IsComplex>::run(*this, matrix.derived(), computeU); computeFromHessenberg(m_matT, m_matU, computeU); return *this; } template template ComplexSchur& ComplexSchur::computeFromHessenberg(const HessMatrixType& matrixH, const OrthMatrixType& matrixQ, bool computeU) { m_matT = matrixH; if(computeU) m_matU = matrixQ; reduceToTriangularForm(computeU); return *this; } namespace internal { /* Reduce given matrix to Hessenberg form */ template struct complex_schur_reduce_to_hessenberg { // this is the implementation for the case IsComplex = true static void run(ComplexSchur& _this, const MatrixType& matrix, bool computeU) { _this.m_hess.compute(matrix); _this.m_matT = _this.m_hess.matrixH(); if(computeU) _this.m_matU = _this.m_hess.matrixQ(); } }; template struct complex_schur_reduce_to_hessenberg { static void run(ComplexSchur& _this, const MatrixType& matrix, bool computeU) { typedef typename ComplexSchur::ComplexScalar ComplexScalar; // Note: m_hess is over RealScalar; m_matT and m_matU is over ComplexScalar _this.m_hess.compute(matrix); _this.m_matT = _this.m_hess.matrixH().template cast(); if(computeU) { // This may cause an allocation which seems to be avoidable MatrixType Q = _this.m_hess.matrixQ(); _this.m_matU = Q.template cast(); } } }; } // end namespace internal // Reduce the Hessenberg matrix m_matT to triangular form by QR iteration. template void ComplexSchur::reduceToTriangularForm(bool computeU) { Index maxIters = m_maxIters; if (maxIters == -1) maxIters = m_maxIterationsPerRow * m_matT.rows(); // The matrix m_matT is divided in three parts. // Rows 0,...,il-1 are decoupled from the rest because m_matT(il,il-1) is zero. // Rows il,...,iu is the part we are working on (the active submatrix). // Rows iu+1,...,end are already brought in triangular form. Index iu = m_matT.cols() - 1; Index il; Index iter = 0; // number of iterations we are working on the (iu,iu) element Index totalIter = 0; // number of iterations for whole matrix while(true) { // find iu, the bottom row of the active submatrix while(iu > 0) { if(!subdiagonalEntryIsNeglegible(iu-1)) break; iter = 0; --iu; } // if iu is zero then we are done; the whole matrix is triangularized if(iu==0) break; // if we spent too many iterations, we give up iter++; totalIter++; if(totalIter > maxIters) break; // find il, the top row of the active submatrix il = iu-1; while(il > 0 && !subdiagonalEntryIsNeglegible(il-1)) { --il; } /* perform the QR step using Givens rotations. The first rotation creates a bulge; the (il+2,il) element becomes nonzero. This bulge is chased down to the bottom of the active submatrix. 
*/ ComplexScalar shift = computeShift(iu, iter); JacobiRotation rot; rot.makeGivens(m_matT.coeff(il,il) - shift, m_matT.coeff(il+1,il)); m_matT.rightCols(m_matT.cols()-il).applyOnTheLeft(il, il+1, rot.adjoint()); m_matT.topRows((std::min)(il+2,iu)+1).applyOnTheRight(il, il+1, rot); if(computeU) m_matU.applyOnTheRight(il, il+1, rot); for(Index i=il+1 ; i // Copyright (C) 2010 Jitse Niesen // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_MATRIXBASEEIGENVALUES_H #define EIGEN_MATRIXBASEEIGENVALUES_H namespace Eigen { namespace internal { template struct eigenvalues_selector { // this is the implementation for the case IsComplex = true static inline typename MatrixBase::EigenvaluesReturnType const run(const MatrixBase& m) { typedef typename Derived::PlainObject PlainObject; PlainObject m_eval(m); return ComplexEigenSolver(m_eval, false).eigenvalues(); } }; template struct eigenvalues_selector { static inline typename MatrixBase::EigenvaluesReturnType const run(const MatrixBase& m) { typedef typename Derived::PlainObject PlainObject; PlainObject m_eval(m); return EigenSolver(m_eval, false).eigenvalues(); } }; } // end namespace internal /** \brief Computes the eigenvalues of a matrix * \returns Column vector containing the eigenvalues. * * \eigenvalues_module * This function computes the eigenvalues with the help of the EigenSolver * class (for real matrices) or the ComplexEigenSolver class (for complex * matrices). * * The eigenvalues are repeated according to their algebraic multiplicity, * so there are as many eigenvalues as rows in the matrix. * * The SelfAdjointView class provides a better algorithm for selfadjoint * matrices. * * Example: \include MatrixBase_eigenvalues.cpp * Output: \verbinclude MatrixBase_eigenvalues.out * * \sa EigenSolver::eigenvalues(), ComplexEigenSolver::eigenvalues(), * SelfAdjointView::eigenvalues() */ template inline typename MatrixBase::EigenvaluesReturnType MatrixBase::eigenvalues() const { return internal::eigenvalues_selector::IsComplex>::run(derived()); } /** \brief Computes the eigenvalues of a matrix * \returns Column vector containing the eigenvalues. * * \eigenvalues_module * This function computes the eigenvalues with the help of the * SelfAdjointEigenSolver class. The eigenvalues are repeated according to * their algebraic multiplicity, so there are as many eigenvalues as rows in * the matrix. * * Example: \include SelfAdjointView_eigenvalues.cpp * Output: \verbinclude SelfAdjointView_eigenvalues.out * * \sa SelfAdjointEigenSolver::eigenvalues(), MatrixBase::eigenvalues() */ template EIGEN_DEVICE_FUNC inline typename SelfAdjointView::EigenvaluesReturnType SelfAdjointView::eigenvalues() const { PlainObject thisAsMatrix(*this); return SelfAdjointEigenSolver(thisAsMatrix, false).eigenvalues(); } /** \brief Computes the L2 operator norm * \returns Operator norm of the matrix. * * \eigenvalues_module * This function computes the L2 operator norm of a matrix, which is also * known as the spectral norm. The norm of a matrix \f$ A \f$ is defined to be * \f[ \|A\|_2 = \max_x \frac{\|Ax\|_2}{\|x\|_2} \f] * where the maximum is over all vectors and the norm on the right is the * Euclidean vector norm. The norm equals the largest singular value, which is * the square root of the largest eigenvalue of the positive semi-definite * matrix \f$ A^*A \f$. 
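 * For example (a short sketch, not from the original documentation):
 * \code
 * Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 4);
 * Eigen::VectorXcd ev = A.eigenvalues();                                  // general (complex) eigenvalues
 * Eigen::MatrixXd S = A + A.transpose();                                  // a selfadjoint matrix
 * Eigen::VectorXd sev = S.selfadjointView<Eigen::Lower>().eigenvalues();  // faster, real eigenvalues
 * double nrm = A.operatorNorm();                                          // largest singular value of A
 * \endcode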
* * The current implementation uses the eigenvalues of \f$ A^*A \f$, as computed * by SelfAdjointView::eigenvalues(), to compute the operator norm of a * matrix. The SelfAdjointView class provides a better algorithm for * selfadjoint matrices. * * Example: \include MatrixBase_operatorNorm.cpp * Output: \verbinclude MatrixBase_operatorNorm.out * * \sa SelfAdjointView::eigenvalues(), SelfAdjointView::operatorNorm() */ template inline typename MatrixBase::RealScalar MatrixBase::operatorNorm() const { using std::sqrt; typename Derived::PlainObject m_eval(derived()); // FIXME if it is really guaranteed that the eigenvalues are already sorted, // then we don't need to compute a maxCoeff() here, comparing the 1st and last ones is enough. return sqrt((m_eval*m_eval.adjoint()) .eval() .template selfadjointView() .eigenvalues() .maxCoeff() ); } /** \brief Computes the L2 operator norm * \returns Operator norm of the matrix. * * \eigenvalues_module * This function computes the L2 operator norm of a self-adjoint matrix. For a * self-adjoint matrix, the operator norm is the largest eigenvalue. * * The current implementation uses the eigenvalues of the matrix, as computed * by eigenvalues(), to compute the operator norm of the matrix. * * Example: \include SelfAdjointView_operatorNorm.cpp * Output: \verbinclude SelfAdjointView_operatorNorm.out * * \sa eigenvalues(), MatrixBase::operatorNorm() */ template EIGEN_DEVICE_FUNC inline typename SelfAdjointView::RealScalar SelfAdjointView::operatorNorm() const { return eigenvalues().cwiseAbs().maxCoeff(); } } // end namespace Eigen #endif piqp-octave/src/eigen/Eigen/src/Eigenvalues/RealSchur.h0000644000175100001770000005112614107270226022665 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // Copyright (C) 2010,2012 Jitse Niesen // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_REAL_SCHUR_H #define EIGEN_REAL_SCHUR_H #include "./HessenbergDecomposition.h" namespace Eigen { /** \eigenvalues_module \ingroup Eigenvalues_Module * * * \class RealSchur * * \brief Performs a real Schur decomposition of a square matrix * * \tparam _MatrixType the type of the matrix of which we are computing the * real Schur decomposition; this is expected to be an instantiation of the * Matrix class template. * * Given a real square matrix A, this class computes the real Schur * decomposition: \f$ A = U T U^T \f$ where U is a real orthogonal matrix and * T is a real quasi-triangular matrix. An orthogonal matrix is a matrix whose * inverse is equal to its transpose, \f$ U^{-1} = U^T \f$. A quasi-triangular * matrix is a block-triangular matrix whose diagonal consists of 1-by-1 * blocks and 2-by-2 blocks with complex eigenvalues. The eigenvalues of the * blocks on the diagonal of T are the same as the eigenvalues of the matrix * A, and thus the real Schur decomposition is used in EigenSolver to compute * the eigendecomposition of a matrix. * * Call the function compute() to compute the real Schur decomposition of a * given matrix. Alternatively, you can use the RealSchur(const MatrixType&, bool) * constructor which computes the real Schur decomposition at construction * time. Once the decomposition is computed, you can use the matrixU() and * matrixT() functions to retrieve the matrices U and T in the decomposition. 
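 * For instance (a brief sketch, not taken from the original documentation;
 * \c A is a square MatrixXd):
 * \code
 * Eigen::RealSchur<Eigen::MatrixXd> schur(A.rows());
 * schur.compute(A);
 * Eigen::MatrixXd U = schur.matrixU();
 * Eigen::MatrixXd T = schur.matrixT();             // quasi-triangular: 1x1 and 2x2 diagonal blocks
 * double residual = (A - U * T * U.transpose()).norm();
 * \endcode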
* * The documentation of RealSchur(const MatrixType&, bool) contains an example * of the typical use of this class. * * \note The implementation is adapted from * JAMA (public domain). * Their code is based on EISPACK. * * \sa class ComplexSchur, class EigenSolver, class ComplexEigenSolver */ template class RealSchur { public: typedef _MatrixType MatrixType; enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, Options = MatrixType::Options, MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; typedef typename MatrixType::Scalar Scalar; typedef std::complex::Real> ComplexScalar; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 typedef Matrix EigenvalueType; typedef Matrix ColumnVectorType; /** \brief Default constructor. * * \param [in] size Positive integer, size of the matrix whose Schur decomposition will be computed. * * The default constructor is useful in cases in which the user intends to * perform decompositions via compute(). The \p size parameter is only * used as a hint. It is not an error to give a wrong \p size, but it may * impair performance. * * \sa compute() for an example. */ explicit RealSchur(Index size = RowsAtCompileTime==Dynamic ? 1 : RowsAtCompileTime) : m_matT(size, size), m_matU(size, size), m_workspaceVector(size), m_hess(size), m_isInitialized(false), m_matUisUptodate(false), m_maxIters(-1) { } /** \brief Constructor; computes real Schur decomposition of given matrix. * * \param[in] matrix Square matrix whose Schur decomposition is to be computed. * \param[in] computeU If true, both T and U are computed; if false, only T is computed. * * This constructor calls compute() to compute the Schur decomposition. * * Example: \include RealSchur_RealSchur_MatrixType.cpp * Output: \verbinclude RealSchur_RealSchur_MatrixType.out */ template explicit RealSchur(const EigenBase& matrix, bool computeU = true) : m_matT(matrix.rows(),matrix.cols()), m_matU(matrix.rows(),matrix.cols()), m_workspaceVector(matrix.rows()), m_hess(matrix.rows()), m_isInitialized(false), m_matUisUptodate(false), m_maxIters(-1) { compute(matrix.derived(), computeU); } /** \brief Returns the orthogonal matrix in the Schur decomposition. * * \returns A const reference to the matrix U. * * \pre Either the constructor RealSchur(const MatrixType&, bool) or the * member function compute(const MatrixType&, bool) has been called before * to compute the Schur decomposition of a matrix, and \p computeU was set * to true (the default value). * * \sa RealSchur(const MatrixType&, bool) for an example */ const MatrixType& matrixU() const { eigen_assert(m_isInitialized && "RealSchur is not initialized."); eigen_assert(m_matUisUptodate && "The matrix U has not been computed during the RealSchur decomposition."); return m_matU; } /** \brief Returns the quasi-triangular matrix in the Schur decomposition. * * \returns A const reference to the matrix T. * * \pre Either the constructor RealSchur(const MatrixType&, bool) or the * member function compute(const MatrixType&, bool) has been called before * to compute the Schur decomposition of a matrix. * * \sa RealSchur(const MatrixType&, bool) for an example */ const MatrixType& matrixT() const { eigen_assert(m_isInitialized && "RealSchur is not initialized."); return m_matT; } /** \brief Computes Schur decomposition of given matrix. * * \param[in] matrix Square matrix whose Schur decomposition is to be computed. 
* \param[in] computeU If true, both T and U are computed; if false, only T is computed. * \returns Reference to \c *this * * The Schur decomposition is computed by first reducing the matrix to * Hessenberg form using the class HessenbergDecomposition. The Hessenberg * matrix is then reduced to triangular form by performing Francis QR * iterations with implicit double shift. The cost of computing the Schur * decomposition depends on the number of iterations; as a rough guide, it * may be taken to be \f$25n^3\f$ flops if \a computeU is true and * \f$10n^3\f$ flops if \a computeU is false. * * Example: \include RealSchur_compute.cpp * Output: \verbinclude RealSchur_compute.out * * \sa compute(const MatrixType&, bool, Index) */ template RealSchur& compute(const EigenBase& matrix, bool computeU = true); /** \brief Computes Schur decomposition of a Hessenberg matrix H = Z T Z^T * \param[in] matrixH Matrix in Hessenberg form H * \param[in] matrixQ orthogonal matrix Q that transform a matrix A to H : A = Q H Q^T * \param computeU Computes the matriX U of the Schur vectors * \return Reference to \c *this * * This routine assumes that the matrix is already reduced in Hessenberg form matrixH * using either the class HessenbergDecomposition or another mean. * It computes the upper quasi-triangular matrix T of the Schur decomposition of H * When computeU is true, this routine computes the matrix U such that * A = U T U^T = (QZ) T (QZ)^T = Q H Q^T where A is the initial matrix * * NOTE Q is referenced if computeU is true; so, if the initial orthogonal matrix * is not available, the user should give an identity matrix (Q.setIdentity()) * * \sa compute(const MatrixType&, bool) */ template RealSchur& computeFromHessenberg(const HessMatrixType& matrixH, const OrthMatrixType& matrixQ, bool computeU); /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful, \c NoConvergence otherwise. */ ComputationInfo info() const { eigen_assert(m_isInitialized && "RealSchur is not initialized."); return m_info; } /** \brief Sets the maximum number of iterations allowed. * * If not specified by the user, the maximum number of iterations is m_maxIterationsPerRow times the size * of the matrix. */ RealSchur& setMaxIterations(Index maxIters) { m_maxIters = maxIters; return *this; } /** \brief Returns the maximum number of iterations. */ Index getMaxIterations() { return m_maxIters; } /** \brief Maximum number of iterations per row. * * If not otherwise specified, the maximum number of iterations is this number times the size of the * matrix. It is currently set to 40. 
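 *
 * For instance, with this default a 100-by-100 matrix gets a budget of
 * 40 * 100 = 4000 iterations in total; it can be overridden through
 * setMaxIterations() (e.g., a hypothetical schur.setMaxIterations(500)).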
*/ static const int m_maxIterationsPerRow = 40; private: MatrixType m_matT; MatrixType m_matU; ColumnVectorType m_workspaceVector; HessenbergDecomposition m_hess; ComputationInfo m_info; bool m_isInitialized; bool m_matUisUptodate; Index m_maxIters; typedef Matrix Vector3s; Scalar computeNormOfT(); Index findSmallSubdiagEntry(Index iu, const Scalar& considerAsZero); void splitOffTwoRows(Index iu, bool computeU, const Scalar& exshift); void computeShift(Index iu, Index iter, Scalar& exshift, Vector3s& shiftInfo); void initFrancisQRStep(Index il, Index iu, const Vector3s& shiftInfo, Index& im, Vector3s& firstHouseholderVector); void performFrancisQRStep(Index il, Index im, Index iu, bool computeU, const Vector3s& firstHouseholderVector, Scalar* workspace); }; template template RealSchur& RealSchur::compute(const EigenBase& matrix, bool computeU) { const Scalar considerAsZero = (std::numeric_limits::min)(); eigen_assert(matrix.cols() == matrix.rows()); Index maxIters = m_maxIters; if (maxIters == -1) maxIters = m_maxIterationsPerRow * matrix.rows(); Scalar scale = matrix.derived().cwiseAbs().maxCoeff(); if(scale template RealSchur& RealSchur::computeFromHessenberg(const HessMatrixType& matrixH, const OrthMatrixType& matrixQ, bool computeU) { using std::abs; m_matT = matrixH; m_workspaceVector.resize(m_matT.cols()); if(computeU && !internal::is_same_dense(m_matU,matrixQ)) m_matU = matrixQ; Index maxIters = m_maxIters; if (maxIters == -1) maxIters = m_maxIterationsPerRow * matrixH.rows(); Scalar* workspace = &m_workspaceVector.coeffRef(0); // The matrix m_matT is divided in three parts. // Rows 0,...,il-1 are decoupled from the rest because m_matT(il,il-1) is zero. // Rows il,...,iu is the part we are working on (the active window). // Rows iu+1,...,end are already brought in triangular form. Index iu = m_matT.cols() - 1; Index iter = 0; // iteration count for current eigenvalue Index totalIter = 0; // iteration count for whole matrix Scalar exshift(0); // sum of exceptional shifts Scalar norm = computeNormOfT(); // sub-diagonal entries smaller than considerAsZero will be treated as zero. // We use eps^2 to enable more precision in small eigenvalues. 
Scalar considerAsZero = numext::maxi( norm * numext::abs2(NumTraits::epsilon()), (std::numeric_limits::min)() ); if(norm!=Scalar(0)) { while (iu >= 0) { Index il = findSmallSubdiagEntry(iu,considerAsZero); // Check for convergence if (il == iu) // One root found { m_matT.coeffRef(iu,iu) = m_matT.coeff(iu,iu) + exshift; if (iu > 0) m_matT.coeffRef(iu, iu-1) = Scalar(0); iu--; iter = 0; } else if (il == iu-1) // Two roots found { splitOffTwoRows(iu, computeU, exshift); iu -= 2; iter = 0; } else // No convergence yet { // The firstHouseholderVector vector has to be initialized to something to get rid of a silly GCC warning (-O1 -Wall -DNDEBUG ) Vector3s firstHouseholderVector = Vector3s::Zero(), shiftInfo; computeShift(iu, iter, exshift, shiftInfo); iter = iter + 1; totalIter = totalIter + 1; if (totalIter > maxIters) break; Index im; initFrancisQRStep(il, iu, shiftInfo, im, firstHouseholderVector); performFrancisQRStep(il, im, iu, computeU, firstHouseholderVector, workspace); } } } if(totalIter <= maxIters) m_info = Success; else m_info = NoConvergence; m_isInitialized = true; m_matUisUptodate = computeU; return *this; } /** \internal Computes and returns vector L1 norm of T */ template inline typename MatrixType::Scalar RealSchur::computeNormOfT() { const Index size = m_matT.cols(); // FIXME to be efficient the following would requires a triangular reduxion code // Scalar norm = m_matT.upper().cwiseAbs().sum() // + m_matT.bottomLeftCorner(size-1,size-1).diagonal().cwiseAbs().sum(); Scalar norm(0); for (Index j = 0; j < size; ++j) norm += m_matT.col(j).segment(0, (std::min)(size,j+2)).cwiseAbs().sum(); return norm; } /** \internal Look for single small sub-diagonal element and returns its index */ template inline Index RealSchur::findSmallSubdiagEntry(Index iu, const Scalar& considerAsZero) { using std::abs; Index res = iu; while (res > 0) { Scalar s = abs(m_matT.coeff(res-1,res-1)) + abs(m_matT.coeff(res,res)); s = numext::maxi(s * NumTraits::epsilon(), considerAsZero); if (abs(m_matT.coeff(res,res-1)) <= s) break; res--; } return res; } /** \internal Update T given that rows iu-1 and iu decouple from the rest. */ template inline void RealSchur::splitOffTwoRows(Index iu, bool computeU, const Scalar& exshift) { using std::sqrt; using std::abs; const Index size = m_matT.cols(); // The eigenvalues of the 2x2 matrix [a b; c d] are // trace +/- sqrt(discr/4) where discr = tr^2 - 4*det, tr = a + d, det = ad - bc Scalar p = Scalar(0.5) * (m_matT.coeff(iu-1,iu-1) - m_matT.coeff(iu,iu)); Scalar q = p * p + m_matT.coeff(iu,iu-1) * m_matT.coeff(iu-1,iu); // q = tr^2 / 4 - det = discr/4 m_matT.coeffRef(iu,iu) += exshift; m_matT.coeffRef(iu-1,iu-1) += exshift; if (q >= Scalar(0)) // Two real eigenvalues { Scalar z = sqrt(abs(q)); JacobiRotation rot; if (p >= Scalar(0)) rot.makeGivens(p + z, m_matT.coeff(iu, iu-1)); else rot.makeGivens(p - z, m_matT.coeff(iu, iu-1)); m_matT.rightCols(size-iu+1).applyOnTheLeft(iu-1, iu, rot.adjoint()); m_matT.topRows(iu+1).applyOnTheRight(iu-1, iu, rot); m_matT.coeffRef(iu, iu-1) = Scalar(0); if (computeU) m_matU.applyOnTheRight(iu-1, iu, rot); } if (iu > 1) m_matT.coeffRef(iu-1, iu-2) = Scalar(0); } /** \internal Form shift in shiftInfo, and update exshift if an exceptional shift is performed. 
*/ template inline void RealSchur::computeShift(Index iu, Index iter, Scalar& exshift, Vector3s& shiftInfo) { using std::sqrt; using std::abs; shiftInfo.coeffRef(0) = m_matT.coeff(iu,iu); shiftInfo.coeffRef(1) = m_matT.coeff(iu-1,iu-1); shiftInfo.coeffRef(2) = m_matT.coeff(iu,iu-1) * m_matT.coeff(iu-1,iu); // Wilkinson's original ad hoc shift if (iter == 10) { exshift += shiftInfo.coeff(0); for (Index i = 0; i <= iu; ++i) m_matT.coeffRef(i,i) -= shiftInfo.coeff(0); Scalar s = abs(m_matT.coeff(iu,iu-1)) + abs(m_matT.coeff(iu-1,iu-2)); shiftInfo.coeffRef(0) = Scalar(0.75) * s; shiftInfo.coeffRef(1) = Scalar(0.75) * s; shiftInfo.coeffRef(2) = Scalar(-0.4375) * s * s; } // MATLAB's new ad hoc shift if (iter == 30) { Scalar s = (shiftInfo.coeff(1) - shiftInfo.coeff(0)) / Scalar(2.0); s = s * s + shiftInfo.coeff(2); if (s > Scalar(0)) { s = sqrt(s); if (shiftInfo.coeff(1) < shiftInfo.coeff(0)) s = -s; s = s + (shiftInfo.coeff(1) - shiftInfo.coeff(0)) / Scalar(2.0); s = shiftInfo.coeff(0) - shiftInfo.coeff(2) / s; exshift += s; for (Index i = 0; i <= iu; ++i) m_matT.coeffRef(i,i) -= s; shiftInfo.setConstant(Scalar(0.964)); } } } /** \internal Compute index im at which Francis QR step starts and the first Householder vector. */ template inline void RealSchur::initFrancisQRStep(Index il, Index iu, const Vector3s& shiftInfo, Index& im, Vector3s& firstHouseholderVector) { using std::abs; Vector3s& v = firstHouseholderVector; // alias to save typing for (im = iu-2; im >= il; --im) { const Scalar Tmm = m_matT.coeff(im,im); const Scalar r = shiftInfo.coeff(0) - Tmm; const Scalar s = shiftInfo.coeff(1) - Tmm; v.coeffRef(0) = (r * s - shiftInfo.coeff(2)) / m_matT.coeff(im+1,im) + m_matT.coeff(im,im+1); v.coeffRef(1) = m_matT.coeff(im+1,im+1) - Tmm - r - s; v.coeffRef(2) = m_matT.coeff(im+2,im+1); if (im == il) { break; } const Scalar lhs = m_matT.coeff(im,im-1) * (abs(v.coeff(1)) + abs(v.coeff(2))); const Scalar rhs = v.coeff(0) * (abs(m_matT.coeff(im-1,im-1)) + abs(Tmm) + abs(m_matT.coeff(im+1,im+1))); if (abs(lhs) < NumTraits::epsilon() * rhs) break; } } /** \internal Perform a Francis QR step involving rows il:iu and columns im:iu. 
*/ template inline void RealSchur::performFrancisQRStep(Index il, Index im, Index iu, bool computeU, const Vector3s& firstHouseholderVector, Scalar* workspace) { eigen_assert(im >= il); eigen_assert(im <= iu-2); const Index size = m_matT.cols(); for (Index k = im; k <= iu-2; ++k) { bool firstIteration = (k == im); Vector3s v; if (firstIteration) v = firstHouseholderVector; else v = m_matT.template block<3,1>(k,k-1); Scalar tau, beta; Matrix ess; v.makeHouseholder(ess, tau, beta); if (beta != Scalar(0)) // if v is not zero { if (firstIteration && k > il) m_matT.coeffRef(k,k-1) = -m_matT.coeff(k,k-1); else if (!firstIteration) m_matT.coeffRef(k,k-1) = beta; // These Householder transformations form the O(n^3) part of the algorithm m_matT.block(k, k, 3, size-k).applyHouseholderOnTheLeft(ess, tau, workspace); m_matT.block(0, k, (std::min)(iu,k+3) + 1, 3).applyHouseholderOnTheRight(ess, tau, workspace); if (computeU) m_matU.block(0, k, size, 3).applyHouseholderOnTheRight(ess, tau, workspace); } } Matrix v = m_matT.template block<2,1>(iu-1, iu-2); Scalar tau, beta; Matrix ess; v.makeHouseholder(ess, tau, beta); if (beta != Scalar(0)) // if v is not zero { m_matT.coeffRef(iu-1, iu-2) = beta; m_matT.block(iu-1, iu-1, 2, size-iu+1).applyHouseholderOnTheLeft(ess, tau, workspace); m_matT.block(0, iu-1, iu+1, 2).applyHouseholderOnTheRight(ess, tau, workspace); if (computeU) m_matU.block(0, iu-1, size, 2).applyHouseholderOnTheRight(ess, tau, workspace); } // clean up pollution due to round-off errors for (Index i = im+2; i <= iu; ++i) { m_matT.coeffRef(i,i-2) = Scalar(0); if (i > im+2) m_matT.coeffRef(i,i-3) = Scalar(0); } } } // end namespace Eigen #endif // EIGEN_REAL_SCHUR_H piqp-octave/src/eigen/Eigen/src/Eigenvalues/ComplexEigenSolver.h0000644000175100001770000003041714107270226024547 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009 Claire Maurice // Copyright (C) 2009 Gael Guennebaud // Copyright (C) 2010,2012 Jitse Niesen // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_COMPLEX_EIGEN_SOLVER_H #define EIGEN_COMPLEX_EIGEN_SOLVER_H #include "./ComplexSchur.h" namespace Eigen { /** \eigenvalues_module \ingroup Eigenvalues_Module * * * \class ComplexEigenSolver * * \brief Computes eigenvalues and eigenvectors of general complex matrices * * \tparam _MatrixType the type of the matrix of which we are * computing the eigendecomposition; this is expected to be an * instantiation of the Matrix class template. * * The eigenvalues and eigenvectors of a matrix \f$ A \f$ are scalars * \f$ \lambda \f$ and vectors \f$ v \f$ such that \f$ Av = \lambda v * \f$. If \f$ D \f$ is a diagonal matrix with the eigenvalues on * the diagonal, and \f$ V \f$ is a matrix with the eigenvectors as * its columns, then \f$ A V = V D \f$. The matrix \f$ V \f$ is * almost always invertible, in which case we have \f$ A = V D V^{-1} * \f$. This is called the eigendecomposition. * * The main function in this class is compute(), which computes the * eigenvalues and eigenvectors of a given function. The * documentation for that function contains an example showing the * main features of the class. * * \sa class EigenSolver, class SelfAdjointEigenSolver */ template class ComplexEigenSolver { public: /** \brief Synonym for the template parameter \p _MatrixType. 
*/ typedef _MatrixType MatrixType; enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, Options = MatrixType::Options, MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; /** \brief Scalar type for matrices of type #MatrixType. */ typedef typename MatrixType::Scalar Scalar; typedef typename NumTraits::Real RealScalar; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 /** \brief Complex scalar type for #MatrixType. * * This is \c std::complex if #Scalar is real (e.g., * \c float or \c double) and just \c Scalar if #Scalar is * complex. */ typedef std::complex ComplexScalar; /** \brief Type for vector of eigenvalues as returned by eigenvalues(). * * This is a column vector with entries of type #ComplexScalar. * The length of the vector is the size of #MatrixType. */ typedef Matrix EigenvalueType; /** \brief Type for matrix of eigenvectors as returned by eigenvectors(). * * This is a square matrix with entries of type #ComplexScalar. * The size is the same as the size of #MatrixType. */ typedef Matrix EigenvectorType; /** \brief Default constructor. * * The default constructor is useful in cases in which the user intends to * perform decompositions via compute(). */ ComplexEigenSolver() : m_eivec(), m_eivalues(), m_schur(), m_isInitialized(false), m_eigenvectorsOk(false), m_matX() {} /** \brief Default Constructor with memory preallocation * * Like the default constructor but with preallocation of the internal data * according to the specified problem \a size. * \sa ComplexEigenSolver() */ explicit ComplexEigenSolver(Index size) : m_eivec(size, size), m_eivalues(size), m_schur(size), m_isInitialized(false), m_eigenvectorsOk(false), m_matX(size, size) {} /** \brief Constructor; computes eigendecomposition of given matrix. * * \param[in] matrix Square matrix whose eigendecomposition is to be computed. * \param[in] computeEigenvectors If true, both the eigenvectors and the * eigenvalues are computed; if false, only the eigenvalues are * computed. * * This constructor calls compute() to compute the eigendecomposition. */ template explicit ComplexEigenSolver(const EigenBase& matrix, bool computeEigenvectors = true) : m_eivec(matrix.rows(),matrix.cols()), m_eivalues(matrix.cols()), m_schur(matrix.rows()), m_isInitialized(false), m_eigenvectorsOk(false), m_matX(matrix.rows(),matrix.cols()) { compute(matrix.derived(), computeEigenvectors); } /** \brief Returns the eigenvectors of given matrix. * * \returns A const reference to the matrix whose columns are the eigenvectors. * * \pre Either the constructor * ComplexEigenSolver(const MatrixType& matrix, bool) or the member * function compute(const MatrixType& matrix, bool) has been called before * to compute the eigendecomposition of a matrix, and * \p computeEigenvectors was set to true (the default). * * This function returns a matrix whose columns are the eigenvectors. Column * \f$ k \f$ is an eigenvector corresponding to eigenvalue number \f$ k * \f$ as returned by eigenvalues(). The eigenvectors are normalized to * have (Euclidean) norm equal to one. The matrix returned by this * function is the matrix \f$ V \f$ in the eigendecomposition \f$ A = V D * V^{-1} \f$, if it exists. 
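 *
 * A small usage sketch (hypothetical example with a 2-by-2 complex matrix):
 * \code
 * Eigen::Matrix2cd A;
 * A << std::complex<double>(0, 1), std::complex<double>(0, 0),
 *      std::complex<double>(0, 0), std::complex<double>(0, -1);
 * Eigen::ComplexEigenSolver<Eigen::Matrix2cd> ces(A);
 * // ces.eigenvalues() holds i and -i; the columns of ces.eigenvectors()
 * // are the corresponding unit-norm eigenvectors.
 * \endcode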
* * Example: \include ComplexEigenSolver_eigenvectors.cpp * Output: \verbinclude ComplexEigenSolver_eigenvectors.out */ const EigenvectorType& eigenvectors() const { eigen_assert(m_isInitialized && "ComplexEigenSolver is not initialized."); eigen_assert(m_eigenvectorsOk && "The eigenvectors have not been computed together with the eigenvalues."); return m_eivec; } /** \brief Returns the eigenvalues of given matrix. * * \returns A const reference to the column vector containing the eigenvalues. * * \pre Either the constructor * ComplexEigenSolver(const MatrixType& matrix, bool) or the member * function compute(const MatrixType& matrix, bool) has been called before * to compute the eigendecomposition of a matrix. * * This function returns a column vector containing the * eigenvalues. Eigenvalues are repeated according to their * algebraic multiplicity, so there are as many eigenvalues as * rows in the matrix. The eigenvalues are not sorted in any particular * order. * * Example: \include ComplexEigenSolver_eigenvalues.cpp * Output: \verbinclude ComplexEigenSolver_eigenvalues.out */ const EigenvalueType& eigenvalues() const { eigen_assert(m_isInitialized && "ComplexEigenSolver is not initialized."); return m_eivalues; } /** \brief Computes eigendecomposition of given matrix. * * \param[in] matrix Square matrix whose eigendecomposition is to be computed. * \param[in] computeEigenvectors If true, both the eigenvectors and the * eigenvalues are computed; if false, only the eigenvalues are * computed. * \returns Reference to \c *this * * This function computes the eigenvalues of the complex matrix \p matrix. * The eigenvalues() function can be used to retrieve them. If * \p computeEigenvectors is true, then the eigenvectors are also computed * and can be retrieved by calling eigenvectors(). * * The matrix is first reduced to Schur form using the * ComplexSchur class. The Schur decomposition is then used to * compute the eigenvalues and eigenvectors. * * The cost of the computation is dominated by the cost of the * Schur decomposition, which is \f$ O(n^3) \f$ where \f$ n \f$ * is the size of the matrix. * * Example: \include ComplexEigenSolver_compute.cpp * Output: \verbinclude ComplexEigenSolver_compute.out */ template ComplexEigenSolver& compute(const EigenBase& matrix, bool computeEigenvectors = true); /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful, \c NoConvergence otherwise. */ ComputationInfo info() const { eigen_assert(m_isInitialized && "ComplexEigenSolver is not initialized."); return m_schur.info(); } /** \brief Sets the maximum number of iterations allowed. */ ComplexEigenSolver& setMaxIterations(Index maxIters) { m_schur.setMaxIterations(maxIters); return *this; } /** \brief Returns the maximum number of iterations. 
*/ Index getMaxIterations() { return m_schur.getMaxIterations(); } protected: static void check_template_parameters() { EIGEN_STATIC_ASSERT_NON_INTEGER(Scalar); } EigenvectorType m_eivec; EigenvalueType m_eivalues; ComplexSchur m_schur; bool m_isInitialized; bool m_eigenvectorsOk; EigenvectorType m_matX; private: void doComputeEigenvectors(RealScalar matrixnorm); void sortEigenvalues(bool computeEigenvectors); }; template template ComplexEigenSolver& ComplexEigenSolver::compute(const EigenBase& matrix, bool computeEigenvectors) { check_template_parameters(); // this code is inspired from Jampack eigen_assert(matrix.cols() == matrix.rows()); // Do a complex Schur decomposition, A = U T U^* // The eigenvalues are on the diagonal of T. m_schur.compute(matrix.derived(), computeEigenvectors); if(m_schur.info() == Success) { m_eivalues = m_schur.matrixT().diagonal(); if(computeEigenvectors) doComputeEigenvectors(m_schur.matrixT().norm()); sortEigenvalues(computeEigenvectors); } m_isInitialized = true; m_eigenvectorsOk = computeEigenvectors; return *this; } template void ComplexEigenSolver::doComputeEigenvectors(RealScalar matrixnorm) { const Index n = m_eivalues.size(); matrixnorm = numext::maxi(matrixnorm,(std::numeric_limits::min)()); // Compute X such that T = X D X^(-1), where D is the diagonal of T. // The matrix X is unit triangular. m_matX = EigenvectorType::Zero(n, n); for(Index k=n-1 ; k>=0 ; k--) { m_matX.coeffRef(k,k) = ComplexScalar(1.0,0.0); // Compute X(i,k) using the (i,k) entry of the equation X T = D X for(Index i=k-1 ; i>=0 ; i--) { m_matX.coeffRef(i,k) = -m_schur.matrixT().coeff(i,k); if(k-i-1>0) m_matX.coeffRef(i,k) -= (m_schur.matrixT().row(i).segment(i+1,k-i-1) * m_matX.col(k).segment(i+1,k-i-1)).value(); ComplexScalar z = m_schur.matrixT().coeff(i,i) - m_schur.matrixT().coeff(k,k); if(z==ComplexScalar(0)) { // If the i-th and k-th eigenvalue are equal, then z equals 0. // Use a small value instead, to prevent division by zero. numext::real_ref(z) = NumTraits::epsilon() * matrixnorm; } m_matX.coeffRef(i,k) = m_matX.coeff(i,k) / z; } } // Compute V as V = U X; now A = U T U^* = U X D X^(-1) U^* = V D V^(-1) m_eivec.noalias() = m_schur.matrixU() * m_matX; // .. and normalize the eigenvectors for(Index k=0 ; k void ComplexEigenSolver::sortEigenvalues(bool computeEigenvectors) { const Index n = m_eivalues.size(); for (Index i=0; i // Copyright (C) 2010 Jitse Niesen // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_TRIDIAGONALIZATION_H #define EIGEN_TRIDIAGONALIZATION_H namespace Eigen { namespace internal { template struct TridiagonalizationMatrixTReturnType; template struct traits > : public traits { typedef typename MatrixType::PlainObject ReturnType; // FIXME shall it be a BandMatrix? enum { Flags = 0 }; }; template EIGEN_DEVICE_FUNC void tridiagonalization_inplace(MatrixType& matA, CoeffVectorType& hCoeffs); } /** \eigenvalues_module \ingroup Eigenvalues_Module * * * \class Tridiagonalization * * \brief Tridiagonal decomposition of a selfadjoint matrix * * \tparam _MatrixType the type of the matrix of which we are computing the * tridiagonal decomposition; this is expected to be an instantiation of the * Matrix class template. 
* * This class performs a tridiagonal decomposition of a selfadjoint matrix \f$ A \f$ such that: * \f$ A = Q T Q^* \f$ where \f$ Q \f$ is unitary and \f$ T \f$ a real symmetric tridiagonal matrix. * * A tridiagonal matrix is a matrix which has nonzero elements only on the * main diagonal and the first diagonal below and above it. The Hessenberg * decomposition of a selfadjoint matrix is in fact a tridiagonal * decomposition. This class is used in SelfAdjointEigenSolver to compute the * eigenvalues and eigenvectors of a selfadjoint matrix. * * Call the function compute() to compute the tridiagonal decomposition of a * given matrix. Alternatively, you can use the Tridiagonalization(const MatrixType&) * constructor which computes the tridiagonal Schur decomposition at * construction time. Once the decomposition is computed, you can use the * matrixQ() and matrixT() functions to retrieve the matrices Q and T in the * decomposition. * * The documentation of Tridiagonalization(const MatrixType&) contains an * example of the typical use of this class. * * \sa class HessenbergDecomposition, class SelfAdjointEigenSolver */ template class Tridiagonalization { public: /** \brief Synonym for the template parameter \p _MatrixType. */ typedef _MatrixType MatrixType; typedef typename MatrixType::Scalar Scalar; typedef typename NumTraits::Real RealScalar; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 enum { Size = MatrixType::RowsAtCompileTime, SizeMinusOne = Size == Dynamic ? Dynamic : (Size > 1 ? Size - 1 : 1), Options = MatrixType::Options, MaxSize = MatrixType::MaxRowsAtCompileTime, MaxSizeMinusOne = MaxSize == Dynamic ? Dynamic : (MaxSize > 1 ? MaxSize - 1 : 1) }; typedef Matrix CoeffVectorType; typedef typename internal::plain_col_type::type DiagonalType; typedef Matrix SubDiagonalType; typedef typename internal::remove_all::type MatrixTypeRealView; typedef internal::TridiagonalizationMatrixTReturnType MatrixTReturnType; typedef typename internal::conditional::IsComplex, typename internal::add_const_on_value_type::RealReturnType>::type, const Diagonal >::type DiagonalReturnType; typedef typename internal::conditional::IsComplex, typename internal::add_const_on_value_type::RealReturnType>::type, const Diagonal >::type SubDiagonalReturnType; /** \brief Return type of matrixQ() */ typedef HouseholderSequence::type> HouseholderSequenceType; /** \brief Default constructor. * * \param [in] size Positive integer, size of the matrix whose tridiagonal * decomposition will be computed. * * The default constructor is useful in cases in which the user intends to * perform decompositions via compute(). The \p size parameter is only * used as a hint. It is not an error to give a wrong \p size, but it may * impair performance. * * \sa compute() for an example. */ explicit Tridiagonalization(Index size = Size==Dynamic ? 2 : Size) : m_matrix(size,size), m_hCoeffs(size > 1 ? size-1 : 1), m_isInitialized(false) {} /** \brief Constructor; computes tridiagonal decomposition of given matrix. * * \param[in] matrix Selfadjoint matrix whose tridiagonal decomposition * is to be computed. * * This constructor calls compute() to compute the tridiagonal decomposition. * * Example: \include Tridiagonalization_Tridiagonalization_MatrixType.cpp * Output: \verbinclude Tridiagonalization_Tridiagonalization_MatrixType.out */ template explicit Tridiagonalization(const EigenBase& matrix) : m_matrix(matrix.derived()), m_hCoeffs(matrix.cols() > 1 ? 
matrix.cols()-1 : 1), m_isInitialized(false) { internal::tridiagonalization_inplace(m_matrix, m_hCoeffs); m_isInitialized = true; } /** \brief Computes tridiagonal decomposition of given matrix. * * \param[in] matrix Selfadjoint matrix whose tridiagonal decomposition * is to be computed. * \returns Reference to \c *this * * The tridiagonal decomposition is computed by bringing the columns of * the matrix successively in the required form using Householder * reflections. The cost is \f$ 4n^3/3 \f$ flops, where \f$ n \f$ denotes * the size of the given matrix. * * This method reuses of the allocated data in the Tridiagonalization * object, if the size of the matrix does not change. * * Example: \include Tridiagonalization_compute.cpp * Output: \verbinclude Tridiagonalization_compute.out */ template Tridiagonalization& compute(const EigenBase& matrix) { m_matrix = matrix.derived(); m_hCoeffs.resize(matrix.rows()-1, 1); internal::tridiagonalization_inplace(m_matrix, m_hCoeffs); m_isInitialized = true; return *this; } /** \brief Returns the Householder coefficients. * * \returns a const reference to the vector of Householder coefficients * * \pre Either the constructor Tridiagonalization(const MatrixType&) or * the member function compute(const MatrixType&) has been called before * to compute the tridiagonal decomposition of a matrix. * * The Householder coefficients allow the reconstruction of the matrix * \f$ Q \f$ in the tridiagonal decomposition from the packed data. * * Example: \include Tridiagonalization_householderCoefficients.cpp * Output: \verbinclude Tridiagonalization_householderCoefficients.out * * \sa packedMatrix(), \ref Householder_Module "Householder module" */ inline CoeffVectorType householderCoefficients() const { eigen_assert(m_isInitialized && "Tridiagonalization is not initialized."); return m_hCoeffs; } /** \brief Returns the internal representation of the decomposition * * \returns a const reference to a matrix with the internal representation * of the decomposition. * * \pre Either the constructor Tridiagonalization(const MatrixType&) or * the member function compute(const MatrixType&) has been called before * to compute the tridiagonal decomposition of a matrix. * * The returned matrix contains the following information: * - the strict upper triangular part is equal to the input matrix A. * - the diagonal and lower sub-diagonal represent the real tridiagonal * symmetric matrix T. * - the rest of the lower part contains the Householder vectors that, * combined with Householder coefficients returned by * householderCoefficients(), allows to reconstruct the matrix Q as * \f$ Q = H_{N-1} \ldots H_1 H_0 \f$. * Here, the matrices \f$ H_i \f$ are the Householder transformations * \f$ H_i = (I - h_i v_i v_i^T) \f$ * where \f$ h_i \f$ is the \f$ i \f$th Householder coefficient and * \f$ v_i \f$ is the Householder vector defined by * \f$ v_i = [ 0, \ldots, 0, 1, M(i+2,i), \ldots, M(N-1,i) ]^T \f$ * with M the matrix returned by this function. * * See LAPACK for further details on this packed storage. 
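 *
 * As a quick orientation (hypothetical usage sketch), the pieces relate as follows:
 * \code
 * Eigen::MatrixXd X = Eigen::MatrixXd::Random(5,5);
 * Eigen::MatrixXd A = X + X.transpose();   // selfadjoint input
 * Eigen::Tridiagonalization<Eigen::MatrixXd> tri(A);
 * Eigen::MatrixXd Q = tri.matrixQ();
 * Eigen::MatrixXd T = tri.matrixT();
 * // up to rounding, A == Q * T * Q.adjoint(); packedMatrix() stores T together
 * // with the Householder vectors that generate Q in one compact matrix.
 * \endcode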
* * Example: \include Tridiagonalization_packedMatrix.cpp * Output: \verbinclude Tridiagonalization_packedMatrix.out * * \sa householderCoefficients() */ inline const MatrixType& packedMatrix() const { eigen_assert(m_isInitialized && "Tridiagonalization is not initialized."); return m_matrix; } /** \brief Returns the unitary matrix Q in the decomposition * * \returns object representing the matrix Q * * \pre Either the constructor Tridiagonalization(const MatrixType&) or * the member function compute(const MatrixType&) has been called before * to compute the tridiagonal decomposition of a matrix. * * This function returns a light-weight object of template class * HouseholderSequence. You can either apply it directly to a matrix or * you can convert it to a matrix of type #MatrixType. * * \sa Tridiagonalization(const MatrixType&) for an example, * matrixT(), class HouseholderSequence */ HouseholderSequenceType matrixQ() const { eigen_assert(m_isInitialized && "Tridiagonalization is not initialized."); return HouseholderSequenceType(m_matrix, m_hCoeffs.conjugate()) .setLength(m_matrix.rows() - 1) .setShift(1); } /** \brief Returns an expression of the tridiagonal matrix T in the decomposition * * \returns expression object representing the matrix T * * \pre Either the constructor Tridiagonalization(const MatrixType&) or * the member function compute(const MatrixType&) has been called before * to compute the tridiagonal decomposition of a matrix. * * Currently, this function can be used to extract the matrix T from internal * data and copy it to a dense matrix object. In most cases, it may be * sufficient to directly use the packed matrix or the vector expressions * returned by diagonal() and subDiagonal() instead of creating a new * dense copy matrix with this function. * * \sa Tridiagonalization(const MatrixType&) for an example, * matrixQ(), packedMatrix(), diagonal(), subDiagonal() */ MatrixTReturnType matrixT() const { eigen_assert(m_isInitialized && "Tridiagonalization is not initialized."); return MatrixTReturnType(m_matrix.real()); } /** \brief Returns the diagonal of the tridiagonal matrix T in the decomposition. * * \returns expression representing the diagonal of T * * \pre Either the constructor Tridiagonalization(const MatrixType&) or * the member function compute(const MatrixType&) has been called before * to compute the tridiagonal decomposition of a matrix. * * Example: \include Tridiagonalization_diagonal.cpp * Output: \verbinclude Tridiagonalization_diagonal.out * * \sa matrixT(), subDiagonal() */ DiagonalReturnType diagonal() const; /** \brief Returns the subdiagonal of the tridiagonal matrix T in the decomposition. * * \returns expression representing the subdiagonal of T * * \pre Either the constructor Tridiagonalization(const MatrixType&) or * the member function compute(const MatrixType&) has been called before * to compute the tridiagonal decomposition of a matrix. 
* * \sa diagonal() for an example, matrixT() */ SubDiagonalReturnType subDiagonal() const; protected: MatrixType m_matrix; CoeffVectorType m_hCoeffs; bool m_isInitialized; }; template typename Tridiagonalization::DiagonalReturnType Tridiagonalization::diagonal() const { eigen_assert(m_isInitialized && "Tridiagonalization is not initialized."); return m_matrix.diagonal().real(); } template typename Tridiagonalization::SubDiagonalReturnType Tridiagonalization::subDiagonal() const { eigen_assert(m_isInitialized && "Tridiagonalization is not initialized."); return m_matrix.template diagonal<-1>().real(); } namespace internal { /** \internal * Performs a tridiagonal decomposition of the selfadjoint matrix \a matA in-place. * * \param[in,out] matA On input the selfadjoint matrix. Only the \b lower triangular part is referenced. * On output, the strict upper part is left unchanged, and the lower triangular part * represents the T and Q matrices in packed format has detailed below. * \param[out] hCoeffs returned Householder coefficients (see below) * * On output, the tridiagonal selfadjoint matrix T is stored in the diagonal * and lower sub-diagonal of the matrix \a matA. * The unitary matrix Q is represented in a compact way as a product of * Householder reflectors \f$ H_i \f$ such that: * \f$ Q = H_{N-1} \ldots H_1 H_0 \f$. * The Householder reflectors are defined as * \f$ H_i = (I - h_i v_i v_i^T) \f$ * where \f$ h_i = hCoeffs[i]\f$ is the \f$ i \f$th Householder coefficient and * \f$ v_i \f$ is the Householder vector defined by * \f$ v_i = [ 0, \ldots, 0, 1, matA(i+2,i), \ldots, matA(N-1,i) ]^T \f$. * * Implemented from Golub's "Matrix Computations", algorithm 8.3.1. * * \sa Tridiagonalization::packedMatrix() */ template EIGEN_DEVICE_FUNC void tridiagonalization_inplace(MatrixType& matA, CoeffVectorType& hCoeffs) { using numext::conj; typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; Index n = matA.rows(); eigen_assert(n==matA.cols()); eigen_assert(n==hCoeffs.size()+1 || n==1); for (Index i = 0; i() * (conj(h) * matA.col(i).tail(remainingSize))); hCoeffs.tail(n-i-1) += (conj(h)*RealScalar(-0.5)*(hCoeffs.tail(remainingSize).dot(matA.col(i).tail(remainingSize)))) * matA.col(i).tail(n-i-1); matA.bottomRightCorner(remainingSize, remainingSize).template selfadjointView() .rankUpdate(matA.col(i).tail(remainingSize), hCoeffs.tail(remainingSize), Scalar(-1)); matA.col(i).coeffRef(i+1) = beta; hCoeffs.coeffRef(i) = h; } } // forward declaration, implementation at the end of this file template::IsComplex> struct tridiagonalization_inplace_selector; /** \brief Performs a full tridiagonalization in place * * \param[in,out] mat On input, the selfadjoint matrix whose tridiagonal * decomposition is to be computed. Only the lower triangular part referenced. * The rest is left unchanged. On output, the orthogonal matrix Q * in the decomposition if \p extractQ is true. * \param[out] diag The diagonal of the tridiagonal matrix T in the * decomposition. * \param[out] subdiag The subdiagonal of the tridiagonal matrix T in * the decomposition. * \param[in] extractQ If true, the orthogonal matrix Q in the * decomposition is computed and stored in \p mat. * * Computes the tridiagonal decomposition of the selfadjoint matrix \p mat in place * such that \f$ mat = Q T Q^* \f$ where \f$ Q \f$ is unitary and \f$ T \f$ a real * symmetric tridiagonal matrix. * * The tridiagonal matrix T is passed to the output parameters \p diag and \p subdiag. 
If * \p extractQ is true, then the orthogonal matrix Q is passed to \p mat. Otherwise the lower * part of the matrix \p mat is destroyed. * * The vectors \p diag and \p subdiag are not resized. The function * assumes that they are already of the correct size. The length of the * vector \p diag should equal the number of rows in \p mat, and the * length of the vector \p subdiag should be one left. * * This implementation contains an optimized path for 3-by-3 matrices * which is especially useful for plane fitting. * * \note Currently, it requires two temporary vectors to hold the intermediate * Householder coefficients, and to reconstruct the matrix Q from the Householder * reflectors. * * Example (this uses the same matrix as the example in * Tridiagonalization::Tridiagonalization(const MatrixType&)): * \include Tridiagonalization_decomposeInPlace.cpp * Output: \verbinclude Tridiagonalization_decomposeInPlace.out * * \sa class Tridiagonalization */ template EIGEN_DEVICE_FUNC void tridiagonalization_inplace(MatrixType& mat, DiagonalType& diag, SubDiagonalType& subdiag, CoeffVectorType& hcoeffs, bool extractQ) { eigen_assert(mat.cols()==mat.rows() && diag.size()==mat.rows() && subdiag.size()==mat.rows()-1); tridiagonalization_inplace_selector::run(mat, diag, subdiag, hcoeffs, extractQ); } /** \internal * General full tridiagonalization */ template struct tridiagonalization_inplace_selector { typedef typename Tridiagonalization::CoeffVectorType CoeffVectorType; typedef typename Tridiagonalization::HouseholderSequenceType HouseholderSequenceType; template static EIGEN_DEVICE_FUNC void run(MatrixType& mat, DiagonalType& diag, SubDiagonalType& subdiag, CoeffVectorType& hCoeffs, bool extractQ) { tridiagonalization_inplace(mat, hCoeffs); diag = mat.diagonal().real(); subdiag = mat.template diagonal<-1>().real(); if(extractQ) mat = HouseholderSequenceType(mat, hCoeffs.conjugate()) .setLength(mat.rows() - 1) .setShift(1); } }; /** \internal * Specialization for 3x3 real matrices. * Especially useful for plane fitting. 
*/ template struct tridiagonalization_inplace_selector { typedef typename MatrixType::Scalar Scalar; typedef typename MatrixType::RealScalar RealScalar; template static void run(MatrixType& mat, DiagonalType& diag, SubDiagonalType& subdiag, CoeffVectorType&, bool extractQ) { using std::sqrt; const RealScalar tol = (std::numeric_limits::min)(); diag[0] = mat(0,0); RealScalar v1norm2 = numext::abs2(mat(2,0)); if(v1norm2 <= tol) { diag[1] = mat(1,1); diag[2] = mat(2,2); subdiag[0] = mat(1,0); subdiag[1] = mat(2,1); if (extractQ) mat.setIdentity(); } else { RealScalar beta = sqrt(numext::abs2(mat(1,0)) + v1norm2); RealScalar invBeta = RealScalar(1)/beta; Scalar m01 = mat(1,0) * invBeta; Scalar m02 = mat(2,0) * invBeta; Scalar q = RealScalar(2)*m01*mat(2,1) + m02*(mat(2,2) - mat(1,1)); diag[1] = mat(1,1) + m02*q; diag[2] = mat(2,2) - m02*q; subdiag[0] = beta; subdiag[1] = mat(2,1) - m01 * q; if (extractQ) { mat << 1, 0, 0, 0, m01, m02, 0, m02, -m01; } } } }; /** \internal * Trivial specialization for 1x1 matrices */ template struct tridiagonalization_inplace_selector { typedef typename MatrixType::Scalar Scalar; template static EIGEN_DEVICE_FUNC void run(MatrixType& mat, DiagonalType& diag, SubDiagonalType&, CoeffVectorType&, bool extractQ) { diag(0,0) = numext::real(mat(0,0)); if(extractQ) mat(0,0) = Scalar(1); } }; /** \internal * \eigenvalues_module \ingroup Eigenvalues_Module * * \brief Expression type for return value of Tridiagonalization::matrixT() * * \tparam MatrixType type of underlying dense matrix */ template struct TridiagonalizationMatrixTReturnType : public ReturnByValue > { public: /** \brief Constructor. * * \param[in] mat The underlying dense matrix */ TridiagonalizationMatrixTReturnType(const MatrixType& mat) : m_matrix(mat) { } template inline void evalTo(ResultType& result) const { result.setZero(); result.template diagonal<1>() = m_matrix.template diagonal<-1>().conjugate(); result.diagonal() = m_matrix.diagonal(); result.template diagonal<-1>() = m_matrix.template diagonal<-1>(); } EIGEN_CONSTEXPR Index rows() const EIGEN_NOEXCEPT { return m_matrix.rows(); } EIGEN_CONSTEXPR Index cols() const EIGEN_NOEXCEPT { return m_matrix.cols(); } protected: typename MatrixType::Nested m_matrix; }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_TRIDIAGONALIZATION_H piqp-octave/src/eigen/Eigen/src/Eigenvalues/RealQZ.h0000644000175100001770000005613014107270226022133 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Alexey Korepanov // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_REAL_QZ_H #define EIGEN_REAL_QZ_H namespace Eigen { /** \eigenvalues_module \ingroup Eigenvalues_Module * * * \class RealQZ * * \brief Performs a real QZ decomposition of a pair of square matrices * * \tparam _MatrixType the type of the matrix of which we are computing the * real QZ decomposition; this is expected to be an instantiation of the * Matrix class template. * * Given a real square matrices A and B, this class computes the real QZ * decomposition: \f$ A = Q S Z \f$, \f$ B = Q T Z \f$ where Q and Z are * real orthogonal matrixes, T is upper-triangular matrix, and S is upper * quasi-triangular matrix. An orthogonal matrix is a matrix whose * inverse is equal to its transpose, \f$ U^{-1} = U^T \f$. 
A quasi-triangular * matrix is a block-triangular matrix whose diagonal consists of 1-by-1 * blocks and 2-by-2 blocks where further reduction is impossible due to * complex eigenvalues. * * The eigenvalues of the pencil \f$ A - z B \f$ can be obtained from * 1x1 and 2x2 blocks on the diagonals of S and T. * * Call the function compute() to compute the real QZ decomposition of a * given pair of matrices. Alternatively, you can use the * RealQZ(const MatrixType& B, const MatrixType& B, bool computeQZ) * constructor which computes the real QZ decomposition at construction * time. Once the decomposition is computed, you can use the matrixS(), * matrixT(), matrixQ() and matrixZ() functions to retrieve the matrices * S, T, Q and Z in the decomposition. If computeQZ==false, some time * is saved by not computing matrices Q and Z. * * Example: \include RealQZ_compute.cpp * Output: \include RealQZ_compute.out * * \note The implementation is based on the algorithm in "Matrix Computations" * by Gene H. Golub and Charles F. Van Loan, and a paper "An algorithm for * generalized eigenvalue problems" by C.B.Moler and G.W.Stewart. * * \sa class RealSchur, class ComplexSchur, class EigenSolver, class ComplexEigenSolver */ template class RealQZ { public: typedef _MatrixType MatrixType; enum { RowsAtCompileTime = MatrixType::RowsAtCompileTime, ColsAtCompileTime = MatrixType::ColsAtCompileTime, Options = MatrixType::Options, MaxRowsAtCompileTime = MatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime }; typedef typename MatrixType::Scalar Scalar; typedef std::complex::Real> ComplexScalar; typedef Eigen::Index Index; ///< \deprecated since Eigen 3.3 typedef Matrix EigenvalueType; typedef Matrix ColumnVectorType; /** \brief Default constructor. * * \param [in] size Positive integer, size of the matrix whose QZ decomposition will be computed. * * The default constructor is useful in cases in which the user intends to * perform decompositions via compute(). The \p size parameter is only * used as a hint. It is not an error to give a wrong \p size, but it may * impair performance. * * \sa compute() for an example. */ explicit RealQZ(Index size = RowsAtCompileTime==Dynamic ? 1 : RowsAtCompileTime) : m_S(size, size), m_T(size, size), m_Q(size, size), m_Z(size, size), m_workspace(size*2), m_maxIters(400), m_isInitialized(false), m_computeQZ(true) {} /** \brief Constructor; computes real QZ decomposition of given matrices * * \param[in] A Matrix A. * \param[in] B Matrix B. * \param[in] computeQZ If false, A and Z are not computed. * * This constructor calls compute() to compute the QZ decomposition. */ RealQZ(const MatrixType& A, const MatrixType& B, bool computeQZ = true) : m_S(A.rows(),A.cols()), m_T(A.rows(),A.cols()), m_Q(A.rows(),A.cols()), m_Z(A.rows(),A.cols()), m_workspace(A.rows()*2), m_maxIters(400), m_isInitialized(false), m_computeQZ(true) { compute(A, B, computeQZ); } /** \brief Returns matrix Q in the QZ decomposition. * * \returns A const reference to the matrix Q. */ const MatrixType& matrixQ() const { eigen_assert(m_isInitialized && "RealQZ is not initialized."); eigen_assert(m_computeQZ && "The matrices Q and Z have not been computed during the QZ decomposition."); return m_Q; } /** \brief Returns matrix Z in the QZ decomposition. * * \returns A const reference to the matrix Z. 
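 *
 * A compact usage sketch of the full decomposition (hypothetical example):
 * \code
 * Eigen::MatrixXd A = Eigen::MatrixXd::Random(4,4);
 * Eigen::MatrixXd B = Eigen::MatrixXd::Random(4,4);
 * Eigen::RealQZ<Eigen::MatrixXd> qz(A, B);
 * // up to rounding, A == qz.matrixQ() * qz.matrixS() * qz.matrixZ() and
 * //                 B == qz.matrixQ() * qz.matrixT() * qz.matrixZ()
 * \endcode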
*/ const MatrixType& matrixZ() const { eigen_assert(m_isInitialized && "RealQZ is not initialized."); eigen_assert(m_computeQZ && "The matrices Q and Z have not been computed during the QZ decomposition."); return m_Z; } /** \brief Returns matrix S in the QZ decomposition. * * \returns A const reference to the matrix S. */ const MatrixType& matrixS() const { eigen_assert(m_isInitialized && "RealQZ is not initialized."); return m_S; } /** \brief Returns matrix S in the QZ decomposition. * * \returns A const reference to the matrix S. */ const MatrixType& matrixT() const { eigen_assert(m_isInitialized && "RealQZ is not initialized."); return m_T; } /** \brief Computes QZ decomposition of given matrix. * * \param[in] A Matrix A. * \param[in] B Matrix B. * \param[in] computeQZ If false, A and Z are not computed. * \returns Reference to \c *this */ RealQZ& compute(const MatrixType& A, const MatrixType& B, bool computeQZ = true); /** \brief Reports whether previous computation was successful. * * \returns \c Success if computation was successful, \c NoConvergence otherwise. */ ComputationInfo info() const { eigen_assert(m_isInitialized && "RealQZ is not initialized."); return m_info; } /** \brief Returns number of performed QR-like iterations. */ Index iterations() const { eigen_assert(m_isInitialized && "RealQZ is not initialized."); return m_global_iter; } /** Sets the maximal number of iterations allowed to converge to one eigenvalue * or decouple the problem. */ RealQZ& setMaxIterations(Index maxIters) { m_maxIters = maxIters; return *this; } private: MatrixType m_S, m_T, m_Q, m_Z; Matrix m_workspace; ComputationInfo m_info; Index m_maxIters; bool m_isInitialized; bool m_computeQZ; Scalar m_normOfT, m_normOfS; Index m_global_iter; typedef Matrix Vector3s; typedef Matrix Vector2s; typedef Matrix Matrix2s; typedef JacobiRotation JRs; void hessenbergTriangular(); void computeNorms(); Index findSmallSubdiagEntry(Index iu); Index findSmallDiagEntry(Index f, Index l); void splitOffTwoRows(Index i); void pushDownZero(Index z, Index f, Index l); void step(Index f, Index l, Index iter); }; // RealQZ /** \internal Reduces S and T to upper Hessenberg - triangular form */ template void RealQZ::hessenbergTriangular() { const Index dim = m_S.cols(); // perform QR decomposition of T, overwrite T with R, save Q HouseholderQR qrT(m_T); m_T = qrT.matrixQR(); m_T.template triangularView().setZero(); m_Q = qrT.householderQ(); // overwrite S with Q* S m_S.applyOnTheLeft(m_Q.adjoint()); // init Z as Identity if (m_computeQZ) m_Z = MatrixType::Identity(dim,dim); // reduce S to upper Hessenberg with Givens rotations for (Index j=0; j<=dim-3; j++) { for (Index i=dim-1; i>=j+2; i--) { JRs G; // kill S(i,j) if(m_S.coeff(i,j) != 0) { G.makeGivens(m_S.coeff(i-1,j), m_S.coeff(i,j), &m_S.coeffRef(i-1, j)); m_S.coeffRef(i,j) = Scalar(0.0); m_S.rightCols(dim-j-1).applyOnTheLeft(i-1,i,G.adjoint()); m_T.rightCols(dim-i+1).applyOnTheLeft(i-1,i,G.adjoint()); // update Q if (m_computeQZ) m_Q.applyOnTheRight(i-1,i,G); } // kill T(i,i-1) if(m_T.coeff(i,i-1)!=Scalar(0)) { G.makeGivens(m_T.coeff(i,i), m_T.coeff(i,i-1), &m_T.coeffRef(i,i)); m_T.coeffRef(i,i-1) = Scalar(0.0); m_S.applyOnTheRight(i,i-1,G); m_T.topRows(i).applyOnTheRight(i,i-1,G); // update Z if (m_computeQZ) m_Z.applyOnTheLeft(i,i-1,G.adjoint()); } } } } /** \internal Computes vector L1 norms of S and T when in Hessenberg-Triangular form already */ template inline void RealQZ::computeNorms() { const Index size = m_S.cols(); m_normOfS = Scalar(0.0); m_normOfT = 
Scalar(0.0); for (Index j = 0; j < size; ++j) { m_normOfS += m_S.col(j).segment(0, (std::min)(size,j+2)).cwiseAbs().sum(); m_normOfT += m_T.row(j).segment(j, size - j).cwiseAbs().sum(); } } /** \internal Look for single small sub-diagonal element S(res, res-1) and return res (or 0) */ template inline Index RealQZ::findSmallSubdiagEntry(Index iu) { using std::abs; Index res = iu; while (res > 0) { Scalar s = abs(m_S.coeff(res-1,res-1)) + abs(m_S.coeff(res,res)); if (s == Scalar(0.0)) s = m_normOfS; if (abs(m_S.coeff(res,res-1)) < NumTraits::epsilon() * s) break; res--; } return res; } /** \internal Look for single small diagonal element T(res, res) for res between f and l, and return res (or f-1) */ template inline Index RealQZ::findSmallDiagEntry(Index f, Index l) { using std::abs; Index res = l; while (res >= f) { if (abs(m_T.coeff(res,res)) <= NumTraits::epsilon() * m_normOfT) break; res--; } return res; } /** \internal decouple 2x2 diagonal block in rows i, i+1 if eigenvalues are real */ template inline void RealQZ::splitOffTwoRows(Index i) { using std::abs; using std::sqrt; const Index dim=m_S.cols(); if (abs(m_S.coeff(i+1,i))==Scalar(0)) return; Index j = findSmallDiagEntry(i,i+1); if (j==i-1) { // block of (S T^{-1}) Matrix2s STi = m_T.template block<2,2>(i,i).template triangularView(). template solve(m_S.template block<2,2>(i,i)); Scalar p = Scalar(0.5)*(STi(0,0)-STi(1,1)); Scalar q = p*p + STi(1,0)*STi(0,1); if (q>=0) { Scalar z = sqrt(q); // one QR-like iteration for ABi - lambda I // is enough - when we know exact eigenvalue in advance, // convergence is immediate JRs G; if (p>=0) G.makeGivens(p + z, STi(1,0)); else G.makeGivens(p - z, STi(1,0)); m_S.rightCols(dim-i).applyOnTheLeft(i,i+1,G.adjoint()); m_T.rightCols(dim-i).applyOnTheLeft(i,i+1,G.adjoint()); // update Q if (m_computeQZ) m_Q.applyOnTheRight(i,i+1,G); G.makeGivens(m_T.coeff(i+1,i+1), m_T.coeff(i+1,i)); m_S.topRows(i+2).applyOnTheRight(i+1,i,G); m_T.topRows(i+2).applyOnTheRight(i+1,i,G); // update Z if (m_computeQZ) m_Z.applyOnTheLeft(i+1,i,G.adjoint()); m_S.coeffRef(i+1,i) = Scalar(0.0); m_T.coeffRef(i+1,i) = Scalar(0.0); } } else { pushDownZero(j,i,i+1); } } /** \internal use zero in T(z,z) to zero S(l,l-1), working in block f..l */ template inline void RealQZ::pushDownZero(Index z, Index f, Index l) { JRs G; const Index dim = m_S.cols(); for (Index zz=z; zzf ? 
(zz-1) : zz; G.makeGivens(m_T.coeff(zz, zz+1), m_T.coeff(zz+1, zz+1)); m_S.rightCols(dim-firstColS).applyOnTheLeft(zz,zz+1,G.adjoint()); m_T.rightCols(dim-zz).applyOnTheLeft(zz,zz+1,G.adjoint()); m_T.coeffRef(zz+1,zz+1) = Scalar(0.0); // update Q if (m_computeQZ) m_Q.applyOnTheRight(zz,zz+1,G); // kill S(zz+1, zz-1) if (zz>f) { G.makeGivens(m_S.coeff(zz+1, zz), m_S.coeff(zz+1,zz-1)); m_S.topRows(zz+2).applyOnTheRight(zz, zz-1,G); m_T.topRows(zz+1).applyOnTheRight(zz, zz-1,G); m_S.coeffRef(zz+1,zz-1) = Scalar(0.0); // update Z if (m_computeQZ) m_Z.applyOnTheLeft(zz,zz-1,G.adjoint()); } } // finally kill S(l,l-1) G.makeGivens(m_S.coeff(l,l), m_S.coeff(l,l-1)); m_S.applyOnTheRight(l,l-1,G); m_T.applyOnTheRight(l,l-1,G); m_S.coeffRef(l,l-1)=Scalar(0.0); // update Z if (m_computeQZ) m_Z.applyOnTheLeft(l,l-1,G.adjoint()); } /** \internal QR-like iterative step for block f..l */ template inline void RealQZ::step(Index f, Index l, Index iter) { using std::abs; const Index dim = m_S.cols(); // x, y, z Scalar x, y, z; if (iter==10) { // Wilkinson ad hoc shift const Scalar a11=m_S.coeff(f+0,f+0), a12=m_S.coeff(f+0,f+1), a21=m_S.coeff(f+1,f+0), a22=m_S.coeff(f+1,f+1), a32=m_S.coeff(f+2,f+1), b12=m_T.coeff(f+0,f+1), b11i=Scalar(1.0)/m_T.coeff(f+0,f+0), b22i=Scalar(1.0)/m_T.coeff(f+1,f+1), a87=m_S.coeff(l-1,l-2), a98=m_S.coeff(l-0,l-1), b77i=Scalar(1.0)/m_T.coeff(l-2,l-2), b88i=Scalar(1.0)/m_T.coeff(l-1,l-1); Scalar ss = abs(a87*b77i) + abs(a98*b88i), lpl = Scalar(1.5)*ss, ll = ss*ss; x = ll + a11*a11*b11i*b11i - lpl*a11*b11i + a12*a21*b11i*b22i - a11*a21*b12*b11i*b11i*b22i; y = a11*a21*b11i*b11i - lpl*a21*b11i + a21*a22*b11i*b22i - a21*a21*b12*b11i*b11i*b22i; z = a21*a32*b11i*b22i; } else if (iter==16) { // another exceptional shift x = m_S.coeff(f,f)/m_T.coeff(f,f)-m_S.coeff(l,l)/m_T.coeff(l,l) + m_S.coeff(l,l-1)*m_T.coeff(l-1,l) / (m_T.coeff(l-1,l-1)*m_T.coeff(l,l)); y = m_S.coeff(f+1,f)/m_T.coeff(f,f); z = 0; } else if (iter>23 && !(iter%8)) { // extremely exceptional shift x = internal::random(-1.0,1.0); y = internal::random(-1.0,1.0); z = internal::random(-1.0,1.0); } else { // Compute the shifts: (x,y,z,0...) = (AB^-1 - l1 I) (AB^-1 - l2 I) e1 // where l1 and l2 are the eigenvalues of the 2x2 matrix C = U V^-1 where // U and V are 2x2 bottom right sub matrices of A and B. 
Thus: // = AB^-1AB^-1 + l1 l2 I - (l1+l2)(AB^-1) // = AB^-1AB^-1 + det(M) - tr(M)(AB^-1) // Since we are only interested in having x, y, z with a correct ratio, we have: const Scalar a11 = m_S.coeff(f,f), a12 = m_S.coeff(f,f+1), a21 = m_S.coeff(f+1,f), a22 = m_S.coeff(f+1,f+1), a32 = m_S.coeff(f+2,f+1), a88 = m_S.coeff(l-1,l-1), a89 = m_S.coeff(l-1,l), a98 = m_S.coeff(l,l-1), a99 = m_S.coeff(l,l), b11 = m_T.coeff(f,f), b12 = m_T.coeff(f,f+1), b22 = m_T.coeff(f+1,f+1), b88 = m_T.coeff(l-1,l-1), b89 = m_T.coeff(l-1,l), b99 = m_T.coeff(l,l); x = ( (a88/b88 - a11/b11)*(a99/b99 - a11/b11) - (a89/b99)*(a98/b88) + (a98/b88)*(b89/b99)*(a11/b11) ) * (b11/a21) + a12/b22 - (a11/b11)*(b12/b22); y = (a22/b22-a11/b11) - (a21/b11)*(b12/b22) - (a88/b88-a11/b11) - (a99/b99-a11/b11) + (a98/b88)*(b89/b99); z = a32/b22; } JRs G; for (Index k=f; k<=l-2; k++) { // variables for Householder reflections Vector2s essential2; Scalar tau, beta; Vector3s hr(x,y,z); // Q_k to annihilate S(k+1,k-1) and S(k+2,k-1) hr.makeHouseholderInPlace(tau, beta); essential2 = hr.template bottomRows<2>(); Index fc=(std::max)(k-1,Index(0)); // first col to update m_S.template middleRows<3>(k).rightCols(dim-fc).applyHouseholderOnTheLeft(essential2, tau, m_workspace.data()); m_T.template middleRows<3>(k).rightCols(dim-fc).applyHouseholderOnTheLeft(essential2, tau, m_workspace.data()); if (m_computeQZ) m_Q.template middleCols<3>(k).applyHouseholderOnTheRight(essential2, tau, m_workspace.data()); if (k>f) m_S.coeffRef(k+2,k-1) = m_S.coeffRef(k+1,k-1) = Scalar(0.0); // Z_{k1} to annihilate T(k+2,k+1) and T(k+2,k) hr << m_T.coeff(k+2,k+2),m_T.coeff(k+2,k),m_T.coeff(k+2,k+1); hr.makeHouseholderInPlace(tau, beta); essential2 = hr.template bottomRows<2>(); { Index lr = (std::min)(k+4,dim); // last row to update Map > tmp(m_workspace.data(),lr); // S tmp = m_S.template middleCols<2>(k).topRows(lr) * essential2; tmp += m_S.col(k+2).head(lr); m_S.col(k+2).head(lr) -= tau*tmp; m_S.template middleCols<2>(k).topRows(lr) -= (tau*tmp) * essential2.adjoint(); // T tmp = m_T.template middleCols<2>(k).topRows(lr) * essential2; tmp += m_T.col(k+2).head(lr); m_T.col(k+2).head(lr) -= tau*tmp; m_T.template middleCols<2>(k).topRows(lr) -= (tau*tmp) * essential2.adjoint(); } if (m_computeQZ) { // Z Map > tmp(m_workspace.data(),dim); tmp = essential2.adjoint()*(m_Z.template middleRows<2>(k)); tmp += m_Z.row(k+2); m_Z.row(k+2) -= tau*tmp; m_Z.template middleRows<2>(k) -= essential2 * (tau*tmp); } m_T.coeffRef(k+2,k) = m_T.coeffRef(k+2,k+1) = Scalar(0.0); // Z_{k2} to annihilate T(k+1,k) G.makeGivens(m_T.coeff(k+1,k+1), m_T.coeff(k+1,k)); m_S.applyOnTheRight(k+1,k,G); m_T.applyOnTheRight(k+1,k,G); // update Z if (m_computeQZ) m_Z.applyOnTheLeft(k+1,k,G.adjoint()); m_T.coeffRef(k+1,k) = Scalar(0.0); // update x,y,z x = m_S.coeff(k+1,k); y = m_S.coeff(k+2,k); if (k < l-2) z = m_S.coeff(k+3,k); } // loop over k // Q_{n-1} to annihilate y = S(l,l-2) G.makeGivens(x,y); m_S.applyOnTheLeft(l-1,l,G.adjoint()); m_T.applyOnTheLeft(l-1,l,G.adjoint()); if (m_computeQZ) m_Q.applyOnTheRight(l-1,l,G); m_S.coeffRef(l,l-2) = Scalar(0.0); // Z_{n-1} to annihilate T(l,l-1) G.makeGivens(m_T.coeff(l,l),m_T.coeff(l,l-1)); m_S.applyOnTheRight(l,l-1,G); m_T.applyOnTheRight(l,l-1,G); if (m_computeQZ) m_Z.applyOnTheLeft(l,l-1,G.adjoint()); m_T.coeffRef(l,l-1) = Scalar(0.0); } template RealQZ& RealQZ::compute(const MatrixType& A_in, const MatrixType& B_in, bool computeQZ) { const Index dim = A_in.cols(); eigen_assert (A_in.rows()==dim && A_in.cols()==dim && B_in.rows()==dim && 
B_in.cols()==dim && "Need square matrices of the same dimension"); m_isInitialized = true; m_computeQZ = computeQZ; m_S = A_in; m_T = B_in; m_workspace.resize(dim*2); m_global_iter = 0; // entrance point: hessenberg triangular decomposition hessenbergTriangular(); // compute L1 vector norms of T, S into m_normOfS, m_normOfT computeNorms(); Index l = dim-1, f, local_iter = 0; while (l>0 && local_iter0) m_S.coeffRef(f,f-1) = Scalar(0.0); if (f == l) // One root found { l--; local_iter = 0; } else if (f == l-1) // Two roots found { splitOffTwoRows(f); l -= 2; local_iter = 0; } else // No convergence yet { // if there's zero on diagonal of T, we can isolate an eigenvalue with Givens rotations Index z = findSmallDiagEntry(f,l); if (z>=f) { // zero found pushDownZero(z,f,l); } else { // We are sure now that S.block(f,f, l-f+1,l-f+1) is underuced upper-Hessenberg // and T.block(f,f, l-f+1,l-f+1) is invertible uper-triangular, which allows to // apply a QR-like iteration to rows and columns f..l. step(f,l, local_iter); local_iter++; m_global_iter++; } } } // check if we converged before reaching iterations limit m_info = (local_iter j_left, j_right; internal::real_2x2_jacobi_svd(m_T, i, i+1, &j_left, &j_right); // Apply resulting Jacobi rotations m_S.applyOnTheLeft(i,i+1,j_left); m_S.applyOnTheRight(i,i+1,j_right); m_T.applyOnTheLeft(i,i+1,j_left); m_T.applyOnTheRight(i,i+1,j_right); m_T(i+1,i) = m_T(i,i+1) = Scalar(0); if(m_computeQZ) { m_Q.applyOnTheRight(i,i+1,j_left.transpose()); m_Z.applyOnTheLeft(i,i+1,j_right.transpose()); } i++; } } } return *this; } // end compute } // end namespace Eigen #endif //EIGEN_REAL_QZ piqp-octave/src/eigen/Eigen/src/Core/0000755000175100001770000000000014107270226017240 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/Core/MathFunctionsImpl.h0000644000175100001770000001576414107270226023032 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2014 Pedro Gonnet (pedro.gonnet@gmail.com) // Copyright (C) 2016 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_MATHFUNCTIONSIMPL_H #define EIGEN_MATHFUNCTIONSIMPL_H namespace Eigen { namespace internal { /** \internal \returns the hyperbolic tan of \a a (coeff-wise) Doesn't do anything fancy, just a 13/6-degree rational interpolant which is accurate up to a couple of ulps in the (approximate) range [-8, 8], outside of which tanh(x) = +/-1 in single precision. The input is clamped to the range [-c, c]. The value c is chosen as the smallest value where the approximation evaluates to exactly 1. In the reange [-0.0004, 0.0004] the approxmation tanh(x) ~= x is used for better accuracy as x tends to zero. This implementation works on both scalars and packets. */ template T generic_fast_tanh_float(const T& a_x) { // Clamp the inputs to the range [-c, c] #ifdef EIGEN_VECTORIZE_FMA const T plus_clamp = pset1(7.99881172180175781f); const T minus_clamp = pset1(-7.99881172180175781f); #else const T plus_clamp = pset1(7.90531110763549805f); const T minus_clamp = pset1(-7.90531110763549805f); #endif const T tiny = pset1(0.0004f); const T x = pmax(pmin(a_x, plus_clamp), minus_clamp); const T tiny_mask = pcmp_lt(pabs(a_x), tiny); // The monomial coefficients of the numerator polynomial (odd). 
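    // Editor's note -- illustrative scalar sketch only, not part of the upstream
    // Eigen source: for a single float the vectorized evaluation below reduces to
    // the rational form tanh(x) ~= x * P(x^2) / Q(x^2), where P uses the odd
    // alpha_* coefficients and Q the even beta_* coefficients, i.e. roughly
    //   float x2 = x * x;  // x is the clamped input
    //   float p  = x * (alpha_1 + x2*(alpha_3 + x2*(alpha_5 + x2*(alpha_7
    //                 + x2*(alpha_9 + x2*(alpha_11 + x2*alpha_13))))));
    //   float q  = beta_0 + x2*(beta_2 + x2*(beta_4 + x2*beta_6));
    //   return (fabsf(a_x) < 0.0004f) ? x : p / q;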
const T alpha_1 = pset1(4.89352455891786e-03f); const T alpha_3 = pset1(6.37261928875436e-04f); const T alpha_5 = pset1(1.48572235717979e-05f); const T alpha_7 = pset1(5.12229709037114e-08f); const T alpha_9 = pset1(-8.60467152213735e-11f); const T alpha_11 = pset1(2.00018790482477e-13f); const T alpha_13 = pset1(-2.76076847742355e-16f); // The monomial coefficients of the denominator polynomial (even). const T beta_0 = pset1(4.89352518554385e-03f); const T beta_2 = pset1(2.26843463243900e-03f); const T beta_4 = pset1(1.18534705686654e-04f); const T beta_6 = pset1(1.19825839466702e-06f); // Since the polynomials are odd/even, we need x^2. const T x2 = pmul(x, x); // Evaluate the numerator polynomial p. T p = pmadd(x2, alpha_13, alpha_11); p = pmadd(x2, p, alpha_9); p = pmadd(x2, p, alpha_7); p = pmadd(x2, p, alpha_5); p = pmadd(x2, p, alpha_3); p = pmadd(x2, p, alpha_1); p = pmul(x, p); // Evaluate the denominator polynomial q. T q = pmadd(x2, beta_6, beta_4); q = pmadd(x2, q, beta_2); q = pmadd(x2, q, beta_0); // Divide the numerator by the denominator. return pselect(tiny_mask, x, pdiv(p, q)); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE RealScalar positive_real_hypot(const RealScalar& x, const RealScalar& y) { // IEEE IEC 6059 special cases. if ((numext::isinf)(x) || (numext::isinf)(y)) return NumTraits::infinity(); if ((numext::isnan)(x) || (numext::isnan)(y)) return NumTraits::quiet_NaN(); EIGEN_USING_STD(sqrt); RealScalar p, qp; p = numext::maxi(x,y); if(p==RealScalar(0)) return RealScalar(0); qp = numext::mini(y,x) / p; return p * sqrt(RealScalar(1) + qp*qp); } template struct hypot_impl { typedef typename NumTraits::Real RealScalar; static EIGEN_DEVICE_FUNC inline RealScalar run(const Scalar& x, const Scalar& y) { EIGEN_USING_STD(abs); return positive_real_hypot(abs(x), abs(y)); } }; // Generic complex sqrt implementation that correctly handles corner cases // according to https://en.cppreference.com/w/cpp/numeric/complex/sqrt template EIGEN_DEVICE_FUNC std::complex complex_sqrt(const std::complex& z) { // Computes the principal sqrt of the input. // // For a complex square root of the number x + i*y. We want to find real // numbers u and v such that // (u + i*v)^2 = x + i*y <=> // u^2 - v^2 + i*2*u*v = x + i*v. // By equating the real and imaginary parts we get: // u^2 - v^2 = x // 2*u*v = y. // // For x >= 0, this has the numerically stable solution // u = sqrt(0.5 * (x + sqrt(x^2 + y^2))) // v = y / (2 * u) // and for x < 0, // v = sign(y) * sqrt(0.5 * (-x + sqrt(x^2 + y^2))) // u = y / (2 * v) // // Letting w = sqrt(0.5 * (|x| + |z|)), // if x == 0: u = w, v = sign(y) * w // if x > 0: u = w, v = y / (2 * w) // if x < 0: u = |y| / (2 * w), v = sign(y) * w const T x = numext::real(z); const T y = numext::imag(z); const T zero = T(0); const T w = numext::sqrt(T(0.5) * (numext::abs(x) + numext::hypot(x, y))); return (numext::isinf)(y) ? std::complex(NumTraits::infinity(), y) : x == zero ? std::complex(w, y < zero ? -w : w) : x > zero ? std::complex(w, y / (2 * w)) : std::complex(numext::abs(y) / (2 * w), y < zero ? -w : w ); } // Generic complex rsqrt implementation. template EIGEN_DEVICE_FUNC std::complex complex_rsqrt(const std::complex& z) { // Computes the principal reciprocal sqrt of the input. // // For a complex reciprocal square root of the number z = x + i*y. We want to // find real numbers u and v such that // (u + i*v)^2 = 1 / (x + i*y) <=> // u^2 - v^2 + i*2*u*v = x/|z|^2 - i*v/|z|^2. 
// By equating the real and imaginary parts we get: // u^2 - v^2 = x/|z|^2 // 2*u*v = y/|z|^2. // // For x >= 0, this has the numerically stable solution // u = sqrt(0.5 * (x + |z|)) / |z| // v = -y / (2 * u * |z|) // and for x < 0, // v = -sign(y) * sqrt(0.5 * (-x + |z|)) / |z| // u = -y / (2 * v * |z|) // // Letting w = sqrt(0.5 * (|x| + |z|)), // if x == 0: u = w / |z|, v = -sign(y) * w / |z| // if x > 0: u = w / |z|, v = -y / (2 * w * |z|) // if x < 0: u = |y| / (2 * w * |z|), v = -sign(y) * w / |z| const T x = numext::real(z); const T y = numext::imag(z); const T zero = T(0); const T abs_z = numext::hypot(x, y); const T w = numext::sqrt(T(0.5) * (numext::abs(x) + abs_z)); const T woz = w / abs_z; // Corner cases consistent with 1/sqrt(z) on gcc/clang. return abs_z == zero ? std::complex(NumTraits::infinity(), NumTraits::quiet_NaN()) : ((numext::isinf)(x) || (numext::isinf)(y)) ? std::complex(zero, zero) : x == zero ? std::complex(woz, y < zero ? woz : -woz) : x > zero ? std::complex(woz, -y / (2 * w * abs_z)) : std::complex(numext::abs(y) / (2 * w * abs_z), y < zero ? woz : -woz ); } template EIGEN_DEVICE_FUNC std::complex complex_log(const std::complex& z) { // Computes complex log. T a = numext::abs(z); EIGEN_USING_STD(atan2); T b = atan2(z.imag(), z.real()); return std::complex(numext::log(a), b); } } // end namespace internal } // end namespace Eigen #endif // EIGEN_MATHFUNCTIONSIMPL_H piqp-octave/src/eigen/Eigen/src/Core/Product.h0000644000175100001770000001625014107270226021035 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2011 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_PRODUCT_H #define EIGEN_PRODUCT_H namespace Eigen { template class ProductImpl; namespace internal { template struct traits > { typedef typename remove_all::type LhsCleaned; typedef typename remove_all::type RhsCleaned; typedef traits LhsTraits; typedef traits RhsTraits; typedef MatrixXpr XprKind; typedef typename ScalarBinaryOpTraits::Scalar, typename traits::Scalar>::ReturnType Scalar; typedef typename product_promote_storage_type::ret>::ret StorageKind; typedef typename promote_index_type::type StorageIndex; enum { RowsAtCompileTime = LhsTraits::RowsAtCompileTime, ColsAtCompileTime = RhsTraits::ColsAtCompileTime, MaxRowsAtCompileTime = LhsTraits::MaxRowsAtCompileTime, MaxColsAtCompileTime = RhsTraits::MaxColsAtCompileTime, // FIXME: only needed by GeneralMatrixMatrixTriangular InnerSize = EIGEN_SIZE_MIN_PREFER_FIXED(LhsTraits::ColsAtCompileTime, RhsTraits::RowsAtCompileTime), // The storage order is somewhat arbitrary here. The correct one will be determined through the evaluator. Flags = (MaxRowsAtCompileTime==1 && MaxColsAtCompileTime!=1) ? RowMajorBit : (MaxColsAtCompileTime==1 && MaxRowsAtCompileTime!=1) ? 0 : ( ((LhsTraits::Flags&NoPreferredStorageOrderBit) && (RhsTraits::Flags&RowMajorBit)) || ((RhsTraits::Flags&NoPreferredStorageOrderBit) && (LhsTraits::Flags&RowMajorBit)) ) ? 
RowMajorBit : NoPreferredStorageOrderBit }; }; } // end namespace internal /** \class Product * \ingroup Core_Module * * \brief Expression of the product of two arbitrary matrices or vectors * * \tparam _Lhs the type of the left-hand side expression * \tparam _Rhs the type of the right-hand side expression * * This class represents an expression of the product of two arbitrary matrices. * * The other template parameters are: * \tparam Option can be DefaultProduct, AliasFreeProduct, or LazyProduct * */ template class Product : public ProductImpl<_Lhs,_Rhs,Option, typename internal::product_promote_storage_type::StorageKind, typename internal::traits<_Rhs>::StorageKind, internal::product_type<_Lhs,_Rhs>::ret>::ret> { public: typedef _Lhs Lhs; typedef _Rhs Rhs; typedef typename ProductImpl< Lhs, Rhs, Option, typename internal::product_promote_storage_type::StorageKind, typename internal::traits::StorageKind, internal::product_type::ret>::ret>::Base Base; EIGEN_GENERIC_PUBLIC_INTERFACE(Product) typedef typename internal::ref_selector::type LhsNested; typedef typename internal::ref_selector::type RhsNested; typedef typename internal::remove_all::type LhsNestedCleaned; typedef typename internal::remove_all::type RhsNestedCleaned; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Product(const Lhs& lhs, const Rhs& rhs) : m_lhs(lhs), m_rhs(rhs) { eigen_assert(lhs.cols() == rhs.rows() && "invalid matrix product" && "if you wanted a coeff-wise or a dot product use the respective explicit functions"); } EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE EIGEN_CONSTEXPR Index rows() const EIGEN_NOEXCEPT { return m_lhs.rows(); } EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE EIGEN_CONSTEXPR Index cols() const EIGEN_NOEXCEPT { return m_rhs.cols(); } EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const LhsNestedCleaned& lhs() const { return m_lhs; } EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const RhsNestedCleaned& rhs() const { return m_rhs; } protected: LhsNested m_lhs; RhsNested m_rhs; }; namespace internal { template::ret> class dense_product_base : public internal::dense_xpr_base >::type {}; /** Conversion to scalar for inner-products */ template class dense_product_base : public internal::dense_xpr_base >::type { typedef Product ProductXpr; typedef typename internal::dense_xpr_base::type Base; public: using Base::derived; typedef typename Base::Scalar Scalar; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE operator const Scalar() const { return internal::evaluator(derived()).coeff(0,0); } }; } // namespace internal // Generic API dispatcher template class ProductImpl : public internal::generic_xpr_base, MatrixXpr, StorageKind>::type { public: typedef typename internal::generic_xpr_base, MatrixXpr, StorageKind>::type Base; }; template class ProductImpl : public internal::dense_product_base { typedef Product Derived; public: typedef typename internal::dense_product_base Base; EIGEN_DENSE_PUBLIC_INTERFACE(Derived) protected: enum { IsOneByOne = (RowsAtCompileTime == 1 || RowsAtCompileTime == Dynamic) && (ColsAtCompileTime == 1 || ColsAtCompileTime == Dynamic), EnableCoeff = IsOneByOne || Option==LazyProduct }; public: EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Scalar coeff(Index row, Index col) const { EIGEN_STATIC_ASSERT(EnableCoeff, THIS_METHOD_IS_ONLY_FOR_INNER_OR_LAZY_PRODUCTS); eigen_assert( (Option==LazyProduct) || (this->rows() == 1 && this->cols() == 1) ); return internal::evaluator(derived()).coeff(row,col); } EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Scalar coeff(Index i) const { EIGEN_STATIC_ASSERT(EnableCoeff, 
THIS_METHOD_IS_ONLY_FOR_INNER_OR_LAZY_PRODUCTS); eigen_assert( (Option==LazyProduct) || (this->rows() == 1 && this->cols() == 1) ); return internal::evaluator(derived()).coeff(i); } }; } // end namespace Eigen #endif // EIGEN_PRODUCT_H piqp-octave/src/eigen/Eigen/src/Core/StlIterators.h0000644000175100001770000005221114107270226022051 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2018 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_STLITERATORS_H #define EIGEN_STLITERATORS_H namespace Eigen { namespace internal { template struct indexed_based_stl_iterator_traits; template class indexed_based_stl_iterator_base { protected: typedef indexed_based_stl_iterator_traits traits; typedef typename traits::XprType XprType; typedef indexed_based_stl_iterator_base non_const_iterator; typedef indexed_based_stl_iterator_base const_iterator; typedef typename internal::conditional::value,non_const_iterator,const_iterator>::type other_iterator; // NOTE: in C++03 we cannot declare friend classes through typedefs because we need to write friend class: friend class indexed_based_stl_iterator_base; friend class indexed_based_stl_iterator_base; public: typedef Index difference_type; typedef std::random_access_iterator_tag iterator_category; indexed_based_stl_iterator_base() EIGEN_NO_THROW : mp_xpr(0), m_index(0) {} indexed_based_stl_iterator_base(XprType& xpr, Index index) EIGEN_NO_THROW : mp_xpr(&xpr), m_index(index) {} indexed_based_stl_iterator_base(const non_const_iterator& other) EIGEN_NO_THROW : mp_xpr(other.mp_xpr), m_index(other.m_index) {} indexed_based_stl_iterator_base& operator=(const non_const_iterator& other) { mp_xpr = other.mp_xpr; m_index = other.m_index; return *this; } Derived& operator++() { ++m_index; return derived(); } Derived& operator--() { --m_index; return derived(); } Derived operator++(int) { Derived prev(derived()); operator++(); return prev;} Derived operator--(int) { Derived prev(derived()); operator--(); return prev;} friend Derived operator+(const indexed_based_stl_iterator_base& a, Index b) { Derived ret(a.derived()); ret += b; return ret; } friend Derived operator-(const indexed_based_stl_iterator_base& a, Index b) { Derived ret(a.derived()); ret -= b; return ret; } friend Derived operator+(Index a, const indexed_based_stl_iterator_base& b) { Derived ret(b.derived()); ret += a; return ret; } friend Derived operator-(Index a, const indexed_based_stl_iterator_base& b) { Derived ret(b.derived()); ret -= a; return ret; } Derived& operator+=(Index b) { m_index += b; return derived(); } Derived& operator-=(Index b) { m_index -= b; return derived(); } difference_type operator-(const indexed_based_stl_iterator_base& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index - other.m_index; } difference_type operator-(const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index - other.m_index; } bool operator==(const indexed_based_stl_iterator_base& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index == other.m_index; } bool operator!=(const indexed_based_stl_iterator_base& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index != other.m_index; } bool operator< (const indexed_based_stl_iterator_base& other) const { eigen_assert(mp_xpr == other.mp_xpr); 
return m_index < other.m_index; } bool operator<=(const indexed_based_stl_iterator_base& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index <= other.m_index; } bool operator> (const indexed_based_stl_iterator_base& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index > other.m_index; } bool operator>=(const indexed_based_stl_iterator_base& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index >= other.m_index; } bool operator==(const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index == other.m_index; } bool operator!=(const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index != other.m_index; } bool operator< (const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index < other.m_index; } bool operator<=(const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index <= other.m_index; } bool operator> (const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index > other.m_index; } bool operator>=(const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index >= other.m_index; } protected: Derived& derived() { return static_cast(*this); } const Derived& derived() const { return static_cast(*this); } XprType *mp_xpr; Index m_index; }; template class indexed_based_stl_reverse_iterator_base { protected: typedef indexed_based_stl_iterator_traits traits; typedef typename traits::XprType XprType; typedef indexed_based_stl_reverse_iterator_base non_const_iterator; typedef indexed_based_stl_reverse_iterator_base const_iterator; typedef typename internal::conditional::value,non_const_iterator,const_iterator>::type other_iterator; // NOTE: in C++03 we cannot declare friend classes through typedefs because we need to write friend class: friend class indexed_based_stl_reverse_iterator_base; friend class indexed_based_stl_reverse_iterator_base; public: typedef Index difference_type; typedef std::random_access_iterator_tag iterator_category; indexed_based_stl_reverse_iterator_base() : mp_xpr(0), m_index(0) {} indexed_based_stl_reverse_iterator_base(XprType& xpr, Index index) : mp_xpr(&xpr), m_index(index) {} indexed_based_stl_reverse_iterator_base(const non_const_iterator& other) : mp_xpr(other.mp_xpr), m_index(other.m_index) {} indexed_based_stl_reverse_iterator_base& operator=(const non_const_iterator& other) { mp_xpr = other.mp_xpr; m_index = other.m_index; return *this; } Derived& operator++() { --m_index; return derived(); } Derived& operator--() { ++m_index; return derived(); } Derived operator++(int) { Derived prev(derived()); operator++(); return prev;} Derived operator--(int) { Derived prev(derived()); operator--(); return prev;} friend Derived operator+(const indexed_based_stl_reverse_iterator_base& a, Index b) { Derived ret(a.derived()); ret += b; return ret; } friend Derived operator-(const indexed_based_stl_reverse_iterator_base& a, Index b) { Derived ret(a.derived()); ret -= b; return ret; } friend Derived operator+(Index a, const indexed_based_stl_reverse_iterator_base& b) { Derived ret(b.derived()); ret += a; return ret; } friend Derived operator-(Index a, const indexed_based_stl_reverse_iterator_base& b) { Derived ret(b.derived()); ret -= a; return ret; } Derived& operator+=(Index b) { m_index -= b; return derived(); } Derived& operator-=(Index b) { m_index += b; return derived(); } difference_type operator-(const indexed_based_stl_reverse_iterator_base& other) const { 
eigen_assert(mp_xpr == other.mp_xpr); return other.m_index - m_index; } difference_type operator-(const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return other.m_index - m_index; } bool operator==(const indexed_based_stl_reverse_iterator_base& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index == other.m_index; } bool operator!=(const indexed_based_stl_reverse_iterator_base& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index != other.m_index; } bool operator< (const indexed_based_stl_reverse_iterator_base& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index > other.m_index; } bool operator<=(const indexed_based_stl_reverse_iterator_base& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index >= other.m_index; } bool operator> (const indexed_based_stl_reverse_iterator_base& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index < other.m_index; } bool operator>=(const indexed_based_stl_reverse_iterator_base& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index <= other.m_index; } bool operator==(const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index == other.m_index; } bool operator!=(const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index != other.m_index; } bool operator< (const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index > other.m_index; } bool operator<=(const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index >= other.m_index; } bool operator> (const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index < other.m_index; } bool operator>=(const other_iterator& other) const { eigen_assert(mp_xpr == other.mp_xpr); return m_index <= other.m_index; } protected: Derived& derived() { return static_cast(*this); } const Derived& derived() const { return static_cast(*this); } XprType *mp_xpr; Index m_index; }; template class pointer_based_stl_iterator { enum { is_lvalue = internal::is_lvalue::value }; typedef pointer_based_stl_iterator::type> non_const_iterator; typedef pointer_based_stl_iterator::type> const_iterator; typedef typename internal::conditional::value,non_const_iterator,const_iterator>::type other_iterator; // NOTE: in C++03 we cannot declare friend classes through typedefs because we need to write friend class: friend class pointer_based_stl_iterator::type>; friend class pointer_based_stl_iterator::type>; public: typedef Index difference_type; typedef typename XprType::Scalar value_type; typedef std::random_access_iterator_tag iterator_category; typedef typename internal::conditional::type pointer; typedef typename internal::conditional::type reference; pointer_based_stl_iterator() EIGEN_NO_THROW : m_ptr(0) {} pointer_based_stl_iterator(XprType& xpr, Index index) EIGEN_NO_THROW : m_incr(xpr.innerStride()) { m_ptr = xpr.data() + index * m_incr.value(); } pointer_based_stl_iterator(const non_const_iterator& other) EIGEN_NO_THROW : m_ptr(other.m_ptr), m_incr(other.m_incr) {} pointer_based_stl_iterator& operator=(const non_const_iterator& other) EIGEN_NO_THROW { m_ptr = other.m_ptr; m_incr.setValue(other.m_incr); return *this; } reference operator*() const { return *m_ptr; } reference operator[](Index i) const { return *(m_ptr+i*m_incr.value()); } pointer operator->() const { return m_ptr; } pointer_based_stl_iterator& operator++() { m_ptr += m_incr.value(); return *this; } pointer_based_stl_iterator& 
operator--() { m_ptr -= m_incr.value(); return *this; } pointer_based_stl_iterator operator++(int) { pointer_based_stl_iterator prev(*this); operator++(); return prev;} pointer_based_stl_iterator operator--(int) { pointer_based_stl_iterator prev(*this); operator--(); return prev;} friend pointer_based_stl_iterator operator+(const pointer_based_stl_iterator& a, Index b) { pointer_based_stl_iterator ret(a); ret += b; return ret; } friend pointer_based_stl_iterator operator-(const pointer_based_stl_iterator& a, Index b) { pointer_based_stl_iterator ret(a); ret -= b; return ret; } friend pointer_based_stl_iterator operator+(Index a, const pointer_based_stl_iterator& b) { pointer_based_stl_iterator ret(b); ret += a; return ret; } friend pointer_based_stl_iterator operator-(Index a, const pointer_based_stl_iterator& b) { pointer_based_stl_iterator ret(b); ret -= a; return ret; } pointer_based_stl_iterator& operator+=(Index b) { m_ptr += b*m_incr.value(); return *this; } pointer_based_stl_iterator& operator-=(Index b) { m_ptr -= b*m_incr.value(); return *this; } difference_type operator-(const pointer_based_stl_iterator& other) const { return (m_ptr - other.m_ptr)/m_incr.value(); } difference_type operator-(const other_iterator& other) const { return (m_ptr - other.m_ptr)/m_incr.value(); } bool operator==(const pointer_based_stl_iterator& other) const { return m_ptr == other.m_ptr; } bool operator!=(const pointer_based_stl_iterator& other) const { return m_ptr != other.m_ptr; } bool operator< (const pointer_based_stl_iterator& other) const { return m_ptr < other.m_ptr; } bool operator<=(const pointer_based_stl_iterator& other) const { return m_ptr <= other.m_ptr; } bool operator> (const pointer_based_stl_iterator& other) const { return m_ptr > other.m_ptr; } bool operator>=(const pointer_based_stl_iterator& other) const { return m_ptr >= other.m_ptr; } bool operator==(const other_iterator& other) const { return m_ptr == other.m_ptr; } bool operator!=(const other_iterator& other) const { return m_ptr != other.m_ptr; } bool operator< (const other_iterator& other) const { return m_ptr < other.m_ptr; } bool operator<=(const other_iterator& other) const { return m_ptr <= other.m_ptr; } bool operator> (const other_iterator& other) const { return m_ptr > other.m_ptr; } bool operator>=(const other_iterator& other) const { return m_ptr >= other.m_ptr; } protected: pointer m_ptr; internal::variable_if_dynamic m_incr; }; template struct indexed_based_stl_iterator_traits > { typedef _XprType XprType; typedef generic_randaccess_stl_iterator::type> non_const_iterator; typedef generic_randaccess_stl_iterator::type> const_iterator; }; template class generic_randaccess_stl_iterator : public indexed_based_stl_iterator_base > { public: typedef typename XprType::Scalar value_type; protected: enum { has_direct_access = (internal::traits::Flags & DirectAccessBit) ? 1 : 0, is_lvalue = internal::is_lvalue::value }; typedef indexed_based_stl_iterator_base Base; using Base::m_index; using Base::mp_xpr; // TODO currently const Transpose/Reshape expressions never returns const references, // so lets return by value too. 
//typedef typename internal::conditional::type read_only_ref_t; typedef const value_type read_only_ref_t; public: typedef typename internal::conditional::type pointer; typedef typename internal::conditional::type reference; generic_randaccess_stl_iterator() : Base() {} generic_randaccess_stl_iterator(XprType& xpr, Index index) : Base(xpr,index) {} generic_randaccess_stl_iterator(const typename Base::non_const_iterator& other) : Base(other) {} using Base::operator=; reference operator*() const { return (*mp_xpr)(m_index); } reference operator[](Index i) const { return (*mp_xpr)(m_index+i); } pointer operator->() const { return &((*mp_xpr)(m_index)); } }; template struct indexed_based_stl_iterator_traits > { typedef _XprType XprType; typedef subvector_stl_iterator::type, Direction> non_const_iterator; typedef subvector_stl_iterator::type, Direction> const_iterator; }; template class subvector_stl_iterator : public indexed_based_stl_iterator_base > { protected: enum { is_lvalue = internal::is_lvalue::value }; typedef indexed_based_stl_iterator_base Base; using Base::m_index; using Base::mp_xpr; typedef typename internal::conditional::type SubVectorType; typedef typename internal::conditional::type ConstSubVectorType; public: typedef typename internal::conditional::type reference; typedef typename reference::PlainObject value_type; private: class subvector_stl_iterator_ptr { public: subvector_stl_iterator_ptr(const reference &subvector) : m_subvector(subvector) {} reference* operator->() { return &m_subvector; } private: reference m_subvector; }; public: typedef subvector_stl_iterator_ptr pointer; subvector_stl_iterator() : Base() {} subvector_stl_iterator(XprType& xpr, Index index) : Base(xpr,index) {} reference operator*() const { return (*mp_xpr).template subVector(m_index); } reference operator[](Index i) const { return (*mp_xpr).template subVector(m_index+i); } pointer operator->() const { return (*mp_xpr).template subVector(m_index); } }; template struct indexed_based_stl_iterator_traits > { typedef _XprType XprType; typedef subvector_stl_reverse_iterator::type, Direction> non_const_iterator; typedef subvector_stl_reverse_iterator::type, Direction> const_iterator; }; template class subvector_stl_reverse_iterator : public indexed_based_stl_reverse_iterator_base > { protected: enum { is_lvalue = internal::is_lvalue::value }; typedef indexed_based_stl_reverse_iterator_base Base; using Base::m_index; using Base::mp_xpr; typedef typename internal::conditional::type SubVectorType; typedef typename internal::conditional::type ConstSubVectorType; public: typedef typename internal::conditional::type reference; typedef typename reference::PlainObject value_type; private: class subvector_stl_reverse_iterator_ptr { public: subvector_stl_reverse_iterator_ptr(const reference &subvector) : m_subvector(subvector) {} reference* operator->() { return &m_subvector; } private: reference m_subvector; }; public: typedef subvector_stl_reverse_iterator_ptr pointer; subvector_stl_reverse_iterator() : Base() {} subvector_stl_reverse_iterator(XprType& xpr, Index index) : Base(xpr,index) {} reference operator*() const { return (*mp_xpr).template subVector(m_index); } reference operator[](Index i) const { return (*mp_xpr).template subVector(m_index+i); } pointer operator->() const { return (*mp_xpr).template subVector(m_index); } }; } // namespace internal /** returns an iterator to the first element of the 1D vector or array * \only_for_vectors * \sa end(), cbegin() */ template inline typename DenseBase::iterator 
DenseBase::begin() { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived); return iterator(derived(), 0); } /** const version of begin() */ template inline typename DenseBase::const_iterator DenseBase::begin() const { return cbegin(); } /** returns a read-only const_iterator to the first element of the 1D vector or array * \only_for_vectors * \sa cend(), begin() */ template inline typename DenseBase::const_iterator DenseBase::cbegin() const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived); return const_iterator(derived(), 0); } /** returns an iterator to the element following the last element of the 1D vector or array * \only_for_vectors * \sa begin(), cend() */ template inline typename DenseBase::iterator DenseBase::end() { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived); return iterator(derived(), size()); } /** const version of end() */ template inline typename DenseBase::const_iterator DenseBase::end() const { return cend(); } /** returns a read-only const_iterator to the element following the last element of the 1D vector or array * \only_for_vectors * \sa begin(), cend() */ template inline typename DenseBase::const_iterator DenseBase::cend() const { EIGEN_STATIC_ASSERT_VECTOR_ONLY(Derived); return const_iterator(derived(), size()); } } // namespace Eigen #endif // EIGEN_STLITERATORS_H piqp-octave/src/eigen/Eigen/src/Core/CommaInitializer.h0000644000175100001770000001353514107270226022660 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // Copyright (C) 2006-2008 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_COMMAINITIALIZER_H #define EIGEN_COMMAINITIALIZER_H namespace Eigen { /** \class CommaInitializer * \ingroup Core_Module * * \brief Helper class used by the comma initializer operator * * This class is internally used to implement the comma initializer feature. It is * the return type of MatrixBase::operator<<, and most of the time this is the only * way it is used. * * \sa \blank \ref MatrixBaseCommaInitRef "MatrixBase::operator<<", CommaInitializer::finished() */ template struct CommaInitializer { typedef typename XprType::Scalar Scalar; EIGEN_DEVICE_FUNC inline CommaInitializer(XprType& xpr, const Scalar& s) : m_xpr(xpr), m_row(0), m_col(1), m_currentBlockRows(1) { eigen_assert(m_xpr.rows() > 0 && m_xpr.cols() > 0 && "Cannot comma-initialize a 0x0 matrix (operator<<)"); m_xpr.coeffRef(0,0) = s; } template EIGEN_DEVICE_FUNC inline CommaInitializer(XprType& xpr, const DenseBase& other) : m_xpr(xpr), m_row(0), m_col(other.cols()), m_currentBlockRows(other.rows()) { eigen_assert(m_xpr.rows() >= other.rows() && m_xpr.cols() >= other.cols() && "Cannot comma-initialize a 0x0 matrix (operator<<)"); m_xpr.block(0, 0, other.rows(), other.cols()) = other; } /* Copy/Move constructor which transfers ownership. This is crucial in * absence of return value optimization to avoid assertions during destruction. */ // FIXME in C++11 mode this could be replaced by a proper RValue constructor EIGEN_DEVICE_FUNC inline CommaInitializer(const CommaInitializer& o) : m_xpr(o.m_xpr), m_row(o.m_row), m_col(o.m_col), m_currentBlockRows(o.m_currentBlockRows) { // Mark original object as finished. 
In absence of R-value references we need to const_cast: const_cast(o).m_row = m_xpr.rows(); const_cast(o).m_col = m_xpr.cols(); const_cast(o).m_currentBlockRows = 0; } /* inserts a scalar value in the target matrix */ EIGEN_DEVICE_FUNC CommaInitializer& operator,(const Scalar& s) { if (m_col==m_xpr.cols()) { m_row+=m_currentBlockRows; m_col = 0; m_currentBlockRows = 1; eigen_assert(m_row EIGEN_DEVICE_FUNC CommaInitializer& operator,(const DenseBase& other) { if (m_col==m_xpr.cols() && (other.cols()!=0 || other.rows()!=m_currentBlockRows)) { m_row+=m_currentBlockRows; m_col = 0; m_currentBlockRows = other.rows(); eigen_assert(m_row+m_currentBlockRows<=m_xpr.rows() && "Too many rows passed to comma initializer (operator<<)"); } eigen_assert((m_col + other.cols() <= m_xpr.cols()) && "Too many coefficients passed to comma initializer (operator<<)"); eigen_assert(m_currentBlockRows==other.rows()); m_xpr.template block (m_row, m_col, other.rows(), other.cols()) = other; m_col += other.cols(); return *this; } EIGEN_DEVICE_FUNC inline ~CommaInitializer() #if defined VERIFY_RAISES_ASSERT && (!defined EIGEN_NO_ASSERTION_CHECKING) && defined EIGEN_EXCEPTIONS EIGEN_EXCEPTION_SPEC(Eigen::eigen_assert_exception) #endif { finished(); } /** \returns the built matrix once all its coefficients have been set. * Calling finished is 100% optional. Its purpose is to write expressions * like this: * \code * quaternion.fromRotationMatrix((Matrix3f() << axis0, axis1, axis2).finished()); * \endcode */ EIGEN_DEVICE_FUNC inline XprType& finished() { eigen_assert(((m_row+m_currentBlockRows) == m_xpr.rows() || m_xpr.cols() == 0) && m_col == m_xpr.cols() && "Too few coefficients passed to comma initializer (operator<<)"); return m_xpr; } XprType& m_xpr; // target expression Index m_row; // current row id Index m_col; // current col id Index m_currentBlockRows; // current block height }; /** \anchor MatrixBaseCommaInitRef * Convenient operator to set the coefficients of a matrix. * * The coefficients must be provided in a row major order and exactly match * the size of the matrix. Otherwise an assertion is raised. * * Example: \include MatrixBase_set.cpp * Output: \verbinclude MatrixBase_set.out * * \note According the c++ standard, the argument expressions of this comma initializer are evaluated in arbitrary order. * * \sa CommaInitializer::finished(), class CommaInitializer */ template EIGEN_DEVICE_FUNC inline CommaInitializer DenseBase::operator<< (const Scalar& s) { return CommaInitializer(*static_cast(this), s); } /** \sa operator<<(const Scalar&) */ template template EIGEN_DEVICE_FUNC inline CommaInitializer DenseBase::operator<<(const DenseBase& other) { return CommaInitializer(*static_cast(this), other); } } // end namespace Eigen #endif // EIGEN_COMMAINITIALIZER_H piqp-octave/src/eigen/Eigen/src/Core/functors/0000755000175100001770000000000014107270226021103 5ustar runnerdockerpiqp-octave/src/eigen/Eigen/src/Core/functors/BinaryFunctors.h0000644000175100001770000005067114107270226024235 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2010 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
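// Editor's note -- illustrative usage sketch for the comma initializer documented
// above (variable names are examples, not part of this header):
//   Eigen::Matrix3f m;
//   m << 1, 2, 3,
//        4, 5, 6,
//        7, 8, 9;   // coefficients are consumed row by row and must fill the matrix
//   Eigen::Matrix3f p = (Eigen::Matrix3f() << m.col(1), m.col(2), m.col(0)).finished();
// finished() is only needed when the comma-initialized temporary is itself used
// as an expression, as in the second statement.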
#ifndef EIGEN_BINARY_FUNCTORS_H #define EIGEN_BINARY_FUNCTORS_H namespace Eigen { namespace internal { //---------- associative binary functors ---------- template struct binary_op_base { typedef Arg1 first_argument_type; typedef Arg2 second_argument_type; }; /** \internal * \brief Template functor to compute the sum of two scalars * * \sa class CwiseBinaryOp, MatrixBase::operator+, class VectorwiseOp, DenseBase::sum() */ template struct scalar_sum_op : binary_op_base { typedef typename ScalarBinaryOpTraits::ReturnType result_type; #ifndef EIGEN_SCALAR_BINARY_OP_PLUGIN EIGEN_EMPTY_STRUCT_CTOR(scalar_sum_op) #else scalar_sum_op() { EIGEN_SCALAR_BINARY_OP_PLUGIN } #endif EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type operator() (const LhsScalar& a, const RhsScalar& b) const { return a + b; } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet packetOp(const Packet& a, const Packet& b) const { return internal::padd(a,b); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type predux(const Packet& a) const { return internal::predux(a); } }; template struct functor_traits > { enum { Cost = (int(NumTraits::AddCost) + int(NumTraits::AddCost)) / 2, // rough estimate! PacketAccess = is_same::value && packet_traits::HasAdd && packet_traits::HasAdd // TODO vectorize mixed sum }; }; template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool scalar_sum_op::operator() (const bool& a, const bool& b) const { return a || b; } /** \internal * \brief Template functor to compute the product of two scalars * * \sa class CwiseBinaryOp, Cwise::operator*(), class VectorwiseOp, MatrixBase::redux() */ template struct scalar_product_op : binary_op_base { typedef typename ScalarBinaryOpTraits::ReturnType result_type; #ifndef EIGEN_SCALAR_BINARY_OP_PLUGIN EIGEN_EMPTY_STRUCT_CTOR(scalar_product_op) #else scalar_product_op() { EIGEN_SCALAR_BINARY_OP_PLUGIN } #endif EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type operator() (const LhsScalar& a, const RhsScalar& b) const { return a * b; } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet packetOp(const Packet& a, const Packet& b) const { return internal::pmul(a,b); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type predux(const Packet& a) const { return internal::predux_mul(a); } }; template struct functor_traits > { enum { Cost = (int(NumTraits::MulCost) + int(NumTraits::MulCost))/2, // rough estimate! 
PacketAccess = is_same::value && packet_traits::HasMul && packet_traits::HasMul // TODO vectorize mixed product }; }; template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool scalar_product_op::operator() (const bool& a, const bool& b) const { return a && b; } /** \internal * \brief Template functor to compute the conjugate product of two scalars * * This is a short cut for conj(x) * y which is needed for optimization purpose; in Eigen2 support mode, this becomes x * conj(y) */ template struct scalar_conj_product_op : binary_op_base { enum { Conj = NumTraits::IsComplex }; typedef typename ScalarBinaryOpTraits::ReturnType result_type; EIGEN_EMPTY_STRUCT_CTOR(scalar_conj_product_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type operator() (const LhsScalar& a, const RhsScalar& b) const { return conj_helper().pmul(a,b); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet packetOp(const Packet& a, const Packet& b) const { return conj_helper().pmul(a,b); } }; template struct functor_traits > { enum { Cost = NumTraits::MulCost, PacketAccess = internal::is_same::value && packet_traits::HasMul }; }; /** \internal * \brief Template functor to compute the min of two scalars * * \sa class CwiseBinaryOp, MatrixBase::cwiseMin, class VectorwiseOp, MatrixBase::minCoeff() */ template struct scalar_min_op : binary_op_base { typedef typename ScalarBinaryOpTraits::ReturnType result_type; EIGEN_EMPTY_STRUCT_CTOR(scalar_min_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type operator() (const LhsScalar& a, const RhsScalar& b) const { return internal::pmin(a, b); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet packetOp(const Packet& a, const Packet& b) const { return internal::pmin(a,b); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type predux(const Packet& a) const { return internal::predux_min(a); } }; template struct functor_traits > { enum { Cost = (NumTraits::AddCost+NumTraits::AddCost)/2, PacketAccess = internal::is_same::value && packet_traits::HasMin }; }; /** \internal * \brief Template functor to compute the max of two scalars * * \sa class CwiseBinaryOp, MatrixBase::cwiseMax, class VectorwiseOp, MatrixBase::maxCoeff() */ template struct scalar_max_op : binary_op_base { typedef typename ScalarBinaryOpTraits::ReturnType result_type; EIGEN_EMPTY_STRUCT_CTOR(scalar_max_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type operator() (const LhsScalar& a, const RhsScalar& b) const { return internal::pmax(a,b); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet packetOp(const Packet& a, const Packet& b) const { return internal::pmax(a,b); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type predux(const Packet& a) const { return internal::predux_max(a); } }; template struct functor_traits > { enum { Cost = (NumTraits::AddCost+NumTraits::AddCost)/2, PacketAccess = internal::is_same::value && packet_traits::HasMax }; }; /** \internal * \brief Template functors for comparison of two scalars * \todo Implement packet-comparisons */ template struct scalar_cmp_op; template struct functor_traits > { enum { Cost = (NumTraits::AddCost+NumTraits::AddCost)/2, PacketAccess = false }; }; template struct result_of(LhsScalar,RhsScalar)> { typedef bool type; }; template struct scalar_cmp_op : binary_op_base { typedef bool result_type; EIGEN_EMPTY_STRUCT_CTOR(scalar_cmp_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool operator()(const LhsScalar& a, const RhsScalar& b) const {return a==b;} }; template struct scalar_cmp_op : binary_op_base { typedef bool result_type; 
EIGEN_EMPTY_STRUCT_CTOR(scalar_cmp_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool operator()(const LhsScalar& a, const RhsScalar& b) const {return a struct scalar_cmp_op : binary_op_base { typedef bool result_type; EIGEN_EMPTY_STRUCT_CTOR(scalar_cmp_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool operator()(const LhsScalar& a, const RhsScalar& b) const {return a<=b;} }; template struct scalar_cmp_op : binary_op_base { typedef bool result_type; EIGEN_EMPTY_STRUCT_CTOR(scalar_cmp_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool operator()(const LhsScalar& a, const RhsScalar& b) const {return a>b;} }; template struct scalar_cmp_op : binary_op_base { typedef bool result_type; EIGEN_EMPTY_STRUCT_CTOR(scalar_cmp_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool operator()(const LhsScalar& a, const RhsScalar& b) const {return a>=b;} }; template struct scalar_cmp_op : binary_op_base { typedef bool result_type; EIGEN_EMPTY_STRUCT_CTOR(scalar_cmp_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool operator()(const LhsScalar& a, const RhsScalar& b) const {return !(a<=b || b<=a);} }; template struct scalar_cmp_op : binary_op_base { typedef bool result_type; EIGEN_EMPTY_STRUCT_CTOR(scalar_cmp_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool operator()(const LhsScalar& a, const RhsScalar& b) const {return a!=b;} }; /** \internal * \brief Template functor to compute the hypot of two \b positive \b and \b real scalars * * \sa MatrixBase::stableNorm(), class Redux */ template struct scalar_hypot_op : binary_op_base { EIGEN_EMPTY_STRUCT_CTOR(scalar_hypot_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() (const Scalar &x, const Scalar &y) const { // This functor is used by hypotNorm only for which it is faster to first apply abs // on all coefficients prior to reduction through hypot. // This way we avoid calling abs on positive and real entries, and this also permits // to seamlessly handle complexes. Otherwise we would have to handle both real and complexes // through the same functor... return internal::positive_real_hypot(x,y); } }; template struct functor_traits > { enum { Cost = 3 * NumTraits::AddCost + 2 * NumTraits::MulCost + 2 * scalar_div_cost::value, PacketAccess = false }; }; /** \internal * \brief Template functor to compute the pow of two scalars * See the specification of pow in https://en.cppreference.com/w/cpp/numeric/math/pow */ template struct scalar_pow_op : binary_op_base { typedef typename ScalarBinaryOpTraits::ReturnType result_type; #ifndef EIGEN_SCALAR_BINARY_OP_PLUGIN EIGEN_EMPTY_STRUCT_CTOR(scalar_pow_op) #else scalar_pow_op() { typedef Scalar LhsScalar; typedef Exponent RhsScalar; EIGEN_SCALAR_BINARY_OP_PLUGIN } #endif EIGEN_DEVICE_FUNC inline result_type operator() (const Scalar& a, const Exponent& b) const { return numext::pow(a, b); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a, const Packet& b) const { return generic_pow(a,b); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = (!NumTraits::IsComplex && !NumTraits::IsInteger && packet_traits::HasExp && packet_traits::HasLog && packet_traits::HasRound && packet_traits::HasCmp && // Temporarly disable packet access for half/bfloat16 until // accuracy is improved. 
!is_same::value && !is_same::value ) }; }; //---------- non associative binary functors ---------- /** \internal * \brief Template functor to compute the difference of two scalars * * \sa class CwiseBinaryOp, MatrixBase::operator- */ template struct scalar_difference_op : binary_op_base { typedef typename ScalarBinaryOpTraits::ReturnType result_type; #ifndef EIGEN_SCALAR_BINARY_OP_PLUGIN EIGEN_EMPTY_STRUCT_CTOR(scalar_difference_op) #else scalar_difference_op() { EIGEN_SCALAR_BINARY_OP_PLUGIN } #endif EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const result_type operator() (const LhsScalar& a, const RhsScalar& b) const { return a - b; } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a, const Packet& b) const { return internal::psub(a,b); } }; template struct functor_traits > { enum { Cost = (int(NumTraits::AddCost) + int(NumTraits::AddCost)) / 2, PacketAccess = is_same::value && packet_traits::HasSub && packet_traits::HasSub }; }; /** \internal * \brief Template functor to compute the quotient of two scalars * * \sa class CwiseBinaryOp, Cwise::operator/() */ template struct scalar_quotient_op : binary_op_base { typedef typename ScalarBinaryOpTraits::ReturnType result_type; #ifndef EIGEN_SCALAR_BINARY_OP_PLUGIN EIGEN_EMPTY_STRUCT_CTOR(scalar_quotient_op) #else scalar_quotient_op() { EIGEN_SCALAR_BINARY_OP_PLUGIN } #endif EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const result_type operator() (const LhsScalar& a, const RhsScalar& b) const { return a / b; } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a, const Packet& b) const { return internal::pdiv(a,b); } }; template struct functor_traits > { typedef typename scalar_quotient_op::result_type result_type; enum { PacketAccess = is_same::value && packet_traits::HasDiv && packet_traits::HasDiv, Cost = scalar_div_cost::value }; }; /** \internal * \brief Template functor to compute the and of two booleans * * \sa class CwiseBinaryOp, ArrayBase::operator&& */ struct scalar_boolean_and_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_boolean_and_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool operator() (const bool& a, const bool& b) const { return a && b; } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a, const Packet& b) const { return internal::pand(a,b); } }; template<> struct functor_traits { enum { Cost = NumTraits::AddCost, PacketAccess = true }; }; /** \internal * \brief Template functor to compute the or of two booleans * * \sa class CwiseBinaryOp, ArrayBase::operator|| */ struct scalar_boolean_or_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_boolean_or_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool operator() (const bool& a, const bool& b) const { return a || b; } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a, const Packet& b) const { return internal::por(a,b); } }; template<> struct functor_traits { enum { Cost = NumTraits::AddCost, PacketAccess = true }; }; /** \internal * \brief Template functor to compute the xor of two booleans * * \sa class CwiseBinaryOp, ArrayBase::operator^ */ struct scalar_boolean_xor_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_boolean_xor_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool operator() (const bool& a, const bool& b) const { return a ^ b; } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a, const Packet& b) const { return internal::pxor(a,b); } }; template<> struct functor_traits { enum { Cost = NumTraits::AddCost, PacketAccess = true }; }; /** \internal * \brief 
Template functor to compute the absolute difference of two scalars * * \sa class CwiseBinaryOp, MatrixBase::absolute_difference */ template struct scalar_absolute_difference_op : binary_op_base { typedef typename ScalarBinaryOpTraits::ReturnType result_type; #ifndef EIGEN_SCALAR_BINARY_OP_PLUGIN EIGEN_EMPTY_STRUCT_CTOR(scalar_absolute_difference_op) #else scalar_absolute_difference_op() { EIGEN_SCALAR_BINARY_OP_PLUGIN } #endif EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const result_type operator() (const LhsScalar& a, const RhsScalar& b) const { return numext::absdiff(a,b); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a, const Packet& b) const { return internal::pabsdiff(a,b); } }; template struct functor_traits > { enum { Cost = (NumTraits::AddCost+NumTraits::AddCost)/2, PacketAccess = is_same::value && packet_traits::HasAbsDiff }; }; //---------- binary functors bound to a constant, thus appearing as a unary functor ---------- // The following two classes permits to turn any binary functor into a unary one with one argument bound to a constant value. // They are analogues to std::binder1st/binder2nd but with the following differences: // - they are compatible with packetOp // - they are portable across C++ versions (the std::binder* are deprecated in C++11) template struct bind1st_op : BinaryOp { typedef typename BinaryOp::first_argument_type first_argument_type; typedef typename BinaryOp::second_argument_type second_argument_type; typedef typename BinaryOp::result_type result_type; EIGEN_DEVICE_FUNC explicit bind1st_op(const first_argument_type &val) : m_value(val) {} EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const result_type operator() (const second_argument_type& b) const { return BinaryOp::operator()(m_value,b); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& b) const { return BinaryOp::packetOp(internal::pset1(m_value), b); } first_argument_type m_value; }; template struct functor_traits > : functor_traits {}; template struct bind2nd_op : BinaryOp { typedef typename BinaryOp::first_argument_type first_argument_type; typedef typename BinaryOp::second_argument_type second_argument_type; typedef typename BinaryOp::result_type result_type; EIGEN_DEVICE_FUNC explicit bind2nd_op(const second_argument_type &val) : m_value(val) {} EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const result_type operator() (const first_argument_type& a) const { return BinaryOp::operator()(a,m_value); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a) const { return BinaryOp::packetOp(a,internal::pset1(m_value)); } second_argument_type m_value; }; template struct functor_traits > : functor_traits {}; } // end namespace internal } // end namespace Eigen #endif // EIGEN_BINARY_FUNCTORS_H piqp-octave/src/eigen/Eigen/src/Core/functors/TernaryFunctors.h0000644000175100001770000000113714107270226024426 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2016 Eugene Brevdo // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
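// Editor's note -- illustrative sketch for the bind1st_op/bind2nd_op helpers
// described above (a minimal example, assuming the internal functor names shown there):
//   typedef Eigen::internal::scalar_sum_op<float, float> AddOp;
//   Eigen::internal::bind2nd_op<AddOp> add_five(5.0f); // right operand bound to 5
//   float r = add_five(2.0f);                          // r == 7.0f
// packetOp() forwards to AddOp::packetOp with the bound value broadcast via pset1,
// so such bound functors remain usable in vectorized expressions.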
#ifndef EIGEN_TERNARY_FUNCTORS_H #define EIGEN_TERNARY_FUNCTORS_H namespace Eigen { namespace internal { //---------- associative ternary functors ---------- } // end namespace internal } // end namespace Eigen #endif // EIGEN_TERNARY_FUNCTORS_H piqp-octave/src/eigen/Eigen/src/Core/functors/StlFunctors.h0000644000175100001770000001160614107270226023546 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2010 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_STL_FUNCTORS_H #define EIGEN_STL_FUNCTORS_H namespace Eigen { // Portable replacements for certain functors. namespace numext { template struct equal_to { typedef bool result_type; EIGEN_DEVICE_FUNC bool operator()(const T& lhs, const T& rhs) const { return lhs == rhs; } }; template struct not_equal_to { typedef bool result_type; EIGEN_DEVICE_FUNC bool operator()(const T& lhs, const T& rhs) const { return lhs != rhs; } }; } namespace internal { // default functor traits for STL functors: template struct functor_traits > { enum { Cost = NumTraits::MulCost, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = NumTraits::MulCost, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = NumTraits::AddCost, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = NumTraits::AddCost, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = NumTraits::AddCost, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = 1, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = 1, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = 1, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = 1, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = 1, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = 1, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = 1, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = 1, PacketAccess = false }; }; template struct functor_traits > : functor_traits > {}; template struct functor_traits > { enum { Cost = 1, PacketAccess = false }; }; template struct functor_traits > : functor_traits > {}; #if (EIGEN_COMP_CXXVER < 11) // std::binder* are deprecated since c++11 and will be removed in c++17 template struct functor_traits > { enum { Cost = functor_traits::Cost, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = functor_traits::Cost, PacketAccess = false }; }; #endif #if (EIGEN_COMP_CXXVER < 17) // std::unary_negate is deprecated since c++17 and will be removed in c++20 template struct functor_traits > { enum { Cost = 1 + functor_traits::Cost, PacketAccess = false }; }; // std::binary_negate is deprecated since c++17 and will be removed in c++20 template struct functor_traits > { enum { Cost = 1 + functor_traits::Cost, PacketAccess = false }; }; #endif #ifdef EIGEN_STDEXT_SUPPORT template struct functor_traits > { enum { Cost = 0, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = 0, PacketAccess = false }; }; template struct functor_traits > > { enum { Cost = 0, PacketAccess = false }; }; template struct functor_traits > > { enum { Cost = 0, PacketAccess 
= false }; }; template struct functor_traits > { enum { Cost = functor_traits::Cost + functor_traits::Cost, PacketAccess = false }; }; template struct functor_traits > { enum { Cost = functor_traits::Cost + functor_traits::Cost + functor_traits::Cost, PacketAccess = false }; }; #endif // EIGEN_STDEXT_SUPPORT // allow to add new functors and specializations of functor_traits from outside Eigen. // this macro is really needed because functor_traits must be specialized after it is declared but before it is used... #ifdef EIGEN_FUNCTORS_PLUGIN #include EIGEN_FUNCTORS_PLUGIN #endif } // end namespace internal } // end namespace Eigen #endif // EIGEN_STL_FUNCTORS_H piqp-octave/src/eigen/Eigen/src/Core/functors/NullaryFunctors.h0000644000175100001770000002021614107270226024427 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2016 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_NULLARY_FUNCTORS_H #define EIGEN_NULLARY_FUNCTORS_H namespace Eigen { namespace internal { template struct scalar_constant_op { EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE scalar_constant_op(const scalar_constant_op& other) : m_other(other.m_other) { } EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE scalar_constant_op(const Scalar& other) : m_other(other) { } EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() () const { return m_other; } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const PacketType packetOp() const { return internal::pset1(m_other); } const Scalar m_other; }; template struct functor_traits > { enum { Cost = 0 /* as the constant value should be loaded in register only once for the whole expression */, PacketAccess = packet_traits::Vectorizable, IsRepeatable = true }; }; template struct scalar_identity_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_identity_op) template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() (IndexType row, IndexType col) const { return row==col ? Scalar(1) : Scalar(0); } }; template struct functor_traits > { enum { Cost = NumTraits::AddCost, PacketAccess = false, IsRepeatable = true }; }; template struct linspaced_op_impl; template struct linspaced_op_impl { typedef typename NumTraits::Real RealScalar; EIGEN_DEVICE_FUNC linspaced_op_impl(const Scalar& low, const Scalar& high, Index num_steps) : m_low(low), m_high(high), m_size1(num_steps==1 ? 1 : num_steps-1), m_step(num_steps==1 ? Scalar() : Scalar((high-low)/RealScalar(num_steps-1))), m_flip(numext::abs(high) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() (IndexType i) const { if(m_flip) return (i==0)? m_low : Scalar(m_high - RealScalar(m_size1-i)*m_step); else return (i==m_size1)? 
m_high : Scalar(m_low + RealScalar(i)*m_step); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(IndexType i) const { // Principle: // [low, ..., low] + ( [step, ..., step] * ( [i, ..., i] + [0, ..., size] ) ) if(m_flip) { Packet pi = plset(Scalar(i-m_size1)); Packet res = padd(pset1(m_high), pmul(pset1(m_step), pi)); if (EIGEN_PREDICT_TRUE(i != 0)) return res; Packet mask = pcmp_lt(pset1(0), plset(0)); return pselect(mask, res, pset1(m_low)); } else { Packet pi = plset(Scalar(i)); Packet res = padd(pset1(m_low), pmul(pset1(m_step), pi)); if(EIGEN_PREDICT_TRUE(i != m_size1-unpacket_traits::size+1)) return res; Packet mask = pcmp_lt(plset(0), pset1(unpacket_traits::size-1)); return pselect(mask, res, pset1(m_high)); } } const Scalar m_low; const Scalar m_high; const Index m_size1; const Scalar m_step; const bool m_flip; }; template struct linspaced_op_impl { EIGEN_DEVICE_FUNC linspaced_op_impl(const Scalar& low, const Scalar& high, Index num_steps) : m_low(low), m_multiplier((high-low)/convert_index(num_steps<=1 ? 1 : num_steps-1)), m_divisor(convert_index((high>=low?num_steps:-num_steps)+(high-low))/((numext::abs(high-low)+1)==0?1:(numext::abs(high-low)+1))), m_use_divisor(num_steps>1 && (numext::abs(high-low)+1) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() (IndexType i) const { if(m_use_divisor) return m_low + convert_index(i)/m_divisor; else return m_low + convert_index(i)*m_multiplier; } const Scalar m_low; const Scalar m_multiplier; const Scalar m_divisor; const bool m_use_divisor; }; // ----- Linspace functor ---------------------------------------------------------------- // Forward declaration (we default to random access which does not really give // us a speed gain when using packet access but it allows to use the functor in // nested expressions). template struct linspaced_op; template struct functor_traits< linspaced_op > { enum { Cost = 1, PacketAccess = (!NumTraits::IsInteger) && packet_traits::HasSetLinear && packet_traits::HasBlend, /*&& ((!NumTraits::IsInteger) || packet_traits::HasDiv),*/ // <- vectorization for integer is currently disabled IsRepeatable = true }; }; template struct linspaced_op { EIGEN_DEVICE_FUNC linspaced_op(const Scalar& low, const Scalar& high, Index num_steps) : impl((num_steps==1 ? high : low),high,num_steps) {} template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() (IndexType i) const { return impl(i); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(IndexType i) const { return impl.template packetOp(i); } // This proxy object handles the actual required temporaries and the different // implementations (integer vs. floating point). const linspaced_op_impl::IsInteger> impl; }; // Linear access is automatically determined from the operator() prototypes available for the given functor. // If it exposes an operator()(i,j), then we assume the i and j coefficients are required independently // and linear access is not possible. In all other cases, linear access is enabled. // Users should not have to deal with this structure. template struct functor_has_linear_access { enum { ret = !has_binary_operator::value }; }; // For unreliable compilers, let's specialize the has_*ary_operator // helpers so that at least built-in nullary functors work fine. 
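// Illustrative usage sketch (not part of this header): linspaced_op and
// scalar_constant_op above are the functors behind the user-facing nullary
// factories, e.g. (assuming <Eigen/Dense>):
// \code
//   Eigen::VectorXf v = Eigen::VectorXf::LinSpaced(5, 0.f, 1.f); // 0, 0.25, 0.5, 0.75, 1
//   Eigen::MatrixXd C = Eigen::MatrixXd::Constant(2, 2, 3.0);    // scalar_constant_op
//   Eigen::MatrixXd I = Eigen::MatrixXd::Identity(3, 3);         // scalar_identity_op
// \endcode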
#if !( (EIGEN_COMP_MSVC>1600) || (EIGEN_GNUC_AT_LEAST(4,8)) || (EIGEN_COMP_ICC>=1600)) template struct has_nullary_operator,IndexType> { enum { value = 1}; }; template struct has_unary_operator,IndexType> { enum { value = 0}; }; template struct has_binary_operator,IndexType> { enum { value = 0}; }; template struct has_nullary_operator,IndexType> { enum { value = 0}; }; template struct has_unary_operator,IndexType> { enum { value = 0}; }; template struct has_binary_operator,IndexType> { enum { value = 1}; }; template struct has_nullary_operator,IndexType> { enum { value = 0}; }; template struct has_unary_operator,IndexType> { enum { value = 1}; }; template struct has_binary_operator,IndexType> { enum { value = 0}; }; template struct has_nullary_operator,IndexType> { enum { value = 1}; }; template struct has_unary_operator,IndexType> { enum { value = 0}; }; template struct has_binary_operator,IndexType> { enum { value = 0}; }; #endif } // end namespace internal } // end namespace Eigen #endif // EIGEN_NULLARY_FUNCTORS_H piqp-octave/src/eigen/Eigen/src/Core/functors/UnaryFunctors.h0000644000175100001770000011632214107270226024103 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2016 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_UNARY_FUNCTORS_H #define EIGEN_UNARY_FUNCTORS_H namespace Eigen { namespace internal { /** \internal * \brief Template functor to compute the opposite of a scalar * * \sa class CwiseUnaryOp, MatrixBase::operator- */ template struct scalar_opposite_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_opposite_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() (const Scalar& a) const { return -a; } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a) const { return internal::pnegate(a); } }; template struct functor_traits > { enum { Cost = NumTraits::AddCost, PacketAccess = packet_traits::HasNegate }; }; /** \internal * \brief Template functor to compute the absolute value of a scalar * * \sa class CwiseUnaryOp, Cwise::abs */ template struct scalar_abs_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_abs_op) typedef typename NumTraits::Real result_type; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const result_type operator() (const Scalar& a) const { return numext::abs(a); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a) const { return internal::pabs(a); } }; template struct functor_traits > { enum { Cost = NumTraits::AddCost, PacketAccess = packet_traits::HasAbs }; }; /** \internal * \brief Template functor to compute the score of a scalar, to chose a pivot * * \sa class CwiseUnaryOp */ template struct scalar_score_coeff_op : scalar_abs_op { typedef void Score_is_abs; }; template struct functor_traits > : functor_traits > {}; /* Avoid recomputing abs when we know the score and they are the same. Not a true Eigen functor. 
*/ template struct abs_knowing_score { EIGEN_EMPTY_STRUCT_CTOR(abs_knowing_score) typedef typename NumTraits::Real result_type; template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const result_type operator() (const Scalar& a, const Score&) const { return numext::abs(a); } }; template struct abs_knowing_score::Score_is_abs> { EIGEN_EMPTY_STRUCT_CTOR(abs_knowing_score) typedef typename NumTraits::Real result_type; template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const result_type operator() (const Scal&, const result_type& a) const { return a; } }; /** \internal * \brief Template functor to compute the squared absolute value of a scalar * * \sa class CwiseUnaryOp, Cwise::abs2 */ template struct scalar_abs2_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_abs2_op) typedef typename NumTraits::Real result_type; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const result_type operator() (const Scalar& a) const { return numext::abs2(a); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a) const { return internal::pmul(a,a); } }; template struct functor_traits > { enum { Cost = NumTraits::MulCost, PacketAccess = packet_traits::HasAbs2 }; }; /** \internal * \brief Template functor to compute the conjugate of a complex value * * \sa class CwiseUnaryOp, MatrixBase::conjugate() */ template struct scalar_conjugate_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_conjugate_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() (const Scalar& a) const { return numext::conj(a); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a) const { return internal::pconj(a); } }; template struct functor_traits > { enum { Cost = 0, // Yes the cost is zero even for complexes because in most cases for which // the cost is used, conjugation turns to be a no-op. Some examples: // cost(a*conj(b)) == cost(a*b) // cost(a+conj(b)) == cost(a+b) // ::HasConj }; }; /** \internal * \brief Template functor to compute the phase angle of a complex * * \sa class CwiseUnaryOp, Cwise::arg */ template struct scalar_arg_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_arg_op) typedef typename NumTraits::Real result_type; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const result_type operator() (const Scalar& a) const { return numext::arg(a); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a) const { return internal::parg(a); } }; template struct functor_traits > { enum { Cost = NumTraits::IsComplex ? 5 * NumTraits::MulCost : NumTraits::AddCost, PacketAccess = packet_traits::HasArg }; }; /** \internal * \brief Template functor to cast a scalar to another type * * \sa class CwiseUnaryOp, MatrixBase::cast() */ template struct scalar_cast_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_cast_op) typedef NewType result_type; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const NewType operator() (const Scalar& a) const { return cast(a); } }; template struct functor_traits > { enum { Cost = is_same::value ? 
0 : NumTraits::AddCost, PacketAccess = false }; }; /** \internal * \brief Template functor to arithmetically shift a scalar right by a number of bits * * \sa class CwiseUnaryOp, MatrixBase::shift_right() */ template struct scalar_shift_right_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_shift_right_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() (const Scalar& a) const { return a >> N; } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a) const { return internal::parithmetic_shift_right(a); } }; template struct functor_traits > { enum { Cost = NumTraits::AddCost, PacketAccess = packet_traits::HasShift }; }; /** \internal * \brief Template functor to logically shift a scalar left by a number of bits * * \sa class CwiseUnaryOp, MatrixBase::shift_left() */ template struct scalar_shift_left_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_shift_left_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() (const Scalar& a) const { return a << N; } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Packet packetOp(const Packet& a) const { return internal::plogical_shift_left(a); } }; template struct functor_traits > { enum { Cost = NumTraits::AddCost, PacketAccess = packet_traits::HasShift }; }; /** \internal * \brief Template functor to extract the real part of a complex * * \sa class CwiseUnaryOp, MatrixBase::real() */ template struct scalar_real_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_real_op) typedef typename NumTraits::Real result_type; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type operator() (const Scalar& a) const { return numext::real(a); } }; template struct functor_traits > { enum { Cost = 0, PacketAccess = false }; }; /** \internal * \brief Template functor to extract the imaginary part of a complex * * \sa class CwiseUnaryOp, MatrixBase::imag() */ template struct scalar_imag_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_imag_op) typedef typename NumTraits::Real result_type; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type operator() (const Scalar& a) const { return numext::imag(a); } }; template struct functor_traits > { enum { Cost = 0, PacketAccess = false }; }; /** \internal * \brief Template functor to extract the real part of a complex as a reference * * \sa class CwiseUnaryOp, MatrixBase::real() */ template struct scalar_real_ref_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_real_ref_op) typedef typename NumTraits::Real result_type; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type& operator() (const Scalar& a) const { return numext::real_ref(*const_cast(&a)); } }; template struct functor_traits > { enum { Cost = 0, PacketAccess = false }; }; /** \internal * \brief Template functor to extract the imaginary part of a complex as a reference * * \sa class CwiseUnaryOp, MatrixBase::imag() */ template struct scalar_imag_ref_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_imag_ref_op) typedef typename NumTraits::Real result_type; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type& operator() (const Scalar& a) const { return numext::imag_ref(*const_cast(&a)); } }; template struct functor_traits > { enum { Cost = 0, PacketAccess = false }; }; /** \internal * * \brief Template functor to compute the exponential of a scalar * * \sa class CwiseUnaryOp, Cwise::exp() */ template struct scalar_exp_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_exp_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return numext::exp(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::pexp(a); } }; template struct functor_traits > { enum 
{ PacketAccess = packet_traits::HasExp, // The following numbers are based on the AVX implementation. #ifdef EIGEN_VECTORIZE_FMA // Haswell can issue 2 add/mul/madd per cycle. Cost = (sizeof(Scalar) == 4 // float: 8 pmadd, 4 pmul, 2 padd/psub, 6 other ? (8 * NumTraits::AddCost + 6 * NumTraits::MulCost) // double: 7 pmadd, 5 pmul, 3 padd/psub, 1 div, 13 other : (14 * NumTraits::AddCost + 6 * NumTraits::MulCost + scalar_div_cost::HasDiv>::value)) #else Cost = (sizeof(Scalar) == 4 // float: 7 pmadd, 6 pmul, 4 padd/psub, 10 other ? (21 * NumTraits::AddCost + 13 * NumTraits::MulCost) // double: 7 pmadd, 5 pmul, 3 padd/psub, 1 div, 13 other : (23 * NumTraits::AddCost + 12 * NumTraits::MulCost + scalar_div_cost::HasDiv>::value)) #endif }; }; /** \internal * * \brief Template functor to compute the exponential of a scalar - 1. * * \sa class CwiseUnaryOp, ArrayBase::expm1() */ template struct scalar_expm1_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_expm1_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return numext::expm1(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::pexpm1(a); } }; template struct functor_traits > { enum { PacketAccess = packet_traits::HasExpm1, Cost = functor_traits >::Cost // TODO measure cost of expm1 }; }; /** \internal * * \brief Template functor to compute the logarithm of a scalar * * \sa class CwiseUnaryOp, ArrayBase::log() */ template struct scalar_log_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_log_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return numext::log(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::plog(a); } }; template struct functor_traits > { enum { PacketAccess = packet_traits::HasLog, Cost = (PacketAccess // The following numbers are based on the AVX implementation. #ifdef EIGEN_VECTORIZE_FMA // 8 pmadd, 6 pmul, 8 padd/psub, 16 other, can issue 2 add/mul/madd per cycle. ? (20 * NumTraits::AddCost + 7 * NumTraits::MulCost) #else // 8 pmadd, 6 pmul, 8 padd/psub, 20 other ? (36 * NumTraits::AddCost + 14 * NumTraits::MulCost) #endif // Measured cost of std::log. : sizeof(Scalar)==4 ? 
40 : 85) }; }; /** \internal * * \brief Template functor to compute the logarithm of 1 plus a scalar value * * \sa class CwiseUnaryOp, ArrayBase::log1p() */ template struct scalar_log1p_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_log1p_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return numext::log1p(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::plog1p(a); } }; template struct functor_traits > { enum { PacketAccess = packet_traits::HasLog1p, Cost = functor_traits >::Cost // TODO measure cost of log1p }; }; /** \internal * * \brief Template functor to compute the base-10 logarithm of a scalar * * \sa class CwiseUnaryOp, Cwise::log10() */ template struct scalar_log10_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_log10_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { EIGEN_USING_STD(log10) return log10(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::plog10(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = packet_traits::HasLog10 }; }; /** \internal * * \brief Template functor to compute the base-2 logarithm of a scalar * * \sa class CwiseUnaryOp, Cwise::log2() */ template struct scalar_log2_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_log2_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return Scalar(EIGEN_LOG2E) * numext::log(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::plog2(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = packet_traits::HasLog }; }; /** \internal * \brief Template functor to compute the square root of a scalar * \sa class CwiseUnaryOp, Cwise::sqrt() */ template struct scalar_sqrt_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_sqrt_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return numext::sqrt(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::psqrt(a); } }; template struct functor_traits > { enum { #if EIGEN_FAST_MATH // The following numbers are based on the AVX implementation. Cost = (sizeof(Scalar) == 8 ? 28 // 4 pmul, 1 pmadd, 3 other : (3 * NumTraits::AddCost + 5 * NumTraits::MulCost)), #else // The following numbers are based on min VSQRT throughput on Haswell. Cost = (sizeof(Scalar) == 8 ? 28 : 14), #endif PacketAccess = packet_traits::HasSqrt }; }; // Boolean specialization to eliminate -Wimplicit-conversion-floating-point-to-bool warnings. 
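// Illustrative usage sketch (not part of Eigen): the unary math functors above
// (exp, log, log1p, log10, log2, sqrt, ...) back the element-wise Array API:
// \code
//   Eigen::ArrayXd x = Eigen::ArrayXd::LinSpaced(3, 1.0, 3.0);
//   Eigen::ArrayXd y = x.sqrt() + x.log();   // scalar_sqrt_op, scalar_log_op
//   Eigen::ArrayXd z = (x - 1.0).log1p();    // scalar_log1p_op
// \endcode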
template<> struct scalar_sqrt_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_sqrt_op) EIGEN_DEPRECATED EIGEN_DEVICE_FUNC inline bool operator() (const bool& a) const { return a; } template EIGEN_DEPRECATED EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return a; } }; template <> struct functor_traits > { enum { Cost = 1, PacketAccess = packet_traits::Vectorizable }; }; /** \internal * \brief Template functor to compute the reciprocal square root of a scalar * \sa class CwiseUnaryOp, Cwise::rsqrt() */ template struct scalar_rsqrt_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_rsqrt_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return numext::rsqrt(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::prsqrt(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = packet_traits::HasRsqrt }; }; /** \internal * \brief Template functor to compute the cosine of a scalar * \sa class CwiseUnaryOp, ArrayBase::cos() */ template struct scalar_cos_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_cos_op) EIGEN_DEVICE_FUNC inline Scalar operator() (const Scalar& a) const { return numext::cos(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::pcos(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = packet_traits::HasCos }; }; /** \internal * \brief Template functor to compute the sine of a scalar * \sa class CwiseUnaryOp, ArrayBase::sin() */ template struct scalar_sin_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_sin_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return numext::sin(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::psin(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = packet_traits::HasSin }; }; /** \internal * \brief Template functor to compute the tan of a scalar * \sa class CwiseUnaryOp, ArrayBase::tan() */ template struct scalar_tan_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_tan_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return numext::tan(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::ptan(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = packet_traits::HasTan }; }; /** \internal * \brief Template functor to compute the arc cosine of a scalar * \sa class CwiseUnaryOp, ArrayBase::acos() */ template struct scalar_acos_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_acos_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return numext::acos(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::pacos(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = packet_traits::HasACos }; }; /** \internal * \brief Template functor to compute the arc sine of a scalar * \sa class CwiseUnaryOp, ArrayBase::asin() */ template struct scalar_asin_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_asin_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return numext::asin(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::pasin(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = packet_traits::HasASin }; }; /** \internal * \brief Template functor to compute the atan of a 
scalar * \sa class CwiseUnaryOp, ArrayBase::atan() */ template struct scalar_atan_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_atan_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return numext::atan(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::patan(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = packet_traits::HasATan }; }; /** \internal * \brief Template functor to compute the tanh of a scalar * \sa class CwiseUnaryOp, ArrayBase::tanh() */ template struct scalar_tanh_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_tanh_op) EIGEN_DEVICE_FUNC inline const Scalar operator()(const Scalar& a) const { return numext::tanh(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& x) const { return ptanh(x); } }; template struct functor_traits > { enum { PacketAccess = packet_traits::HasTanh, Cost = ( (EIGEN_FAST_MATH && is_same::value) // The following numbers are based on the AVX implementation, #ifdef EIGEN_VECTORIZE_FMA // Haswell can issue 2 add/mul/madd per cycle. // 9 pmadd, 2 pmul, 1 div, 2 other ? (2 * NumTraits::AddCost + 6 * NumTraits::MulCost + scalar_div_cost::HasDiv>::value) #else ? (11 * NumTraits::AddCost + 11 * NumTraits::MulCost + scalar_div_cost::HasDiv>::value) #endif // This number assumes a naive implementation of tanh : (6 * NumTraits::AddCost + 3 * NumTraits::MulCost + 2 * scalar_div_cost::HasDiv>::value + functor_traits >::Cost)) }; }; #if EIGEN_HAS_CXX11_MATH /** \internal * \brief Template functor to compute the atanh of a scalar * \sa class CwiseUnaryOp, ArrayBase::atanh() */ template struct scalar_atanh_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_atanh_op) EIGEN_DEVICE_FUNC inline const Scalar operator()(const Scalar& a) const { return numext::atanh(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = false }; }; #endif /** \internal * \brief Template functor to compute the sinh of a scalar * \sa class CwiseUnaryOp, ArrayBase::sinh() */ template struct scalar_sinh_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_sinh_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return numext::sinh(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::psinh(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = packet_traits::HasSinh }; }; #if EIGEN_HAS_CXX11_MATH /** \internal * \brief Template functor to compute the asinh of a scalar * \sa class CwiseUnaryOp, ArrayBase::asinh() */ template struct scalar_asinh_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_asinh_op) EIGEN_DEVICE_FUNC inline const Scalar operator()(const Scalar& a) const { return numext::asinh(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = false }; }; #endif /** \internal * \brief Template functor to compute the cosh of a scalar * \sa class CwiseUnaryOp, ArrayBase::cosh() */ template struct scalar_cosh_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_cosh_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return numext::cosh(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::pcosh(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = packet_traits::HasCosh }; }; #if EIGEN_HAS_CXX11_MATH /** \internal * \brief Template functor to compute the acosh of a scalar * \sa class CwiseUnaryOp, ArrayBase::acosh() */ template 
struct scalar_acosh_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_acosh_op) EIGEN_DEVICE_FUNC inline const Scalar operator()(const Scalar& a) const { return numext::acosh(a); } }; template struct functor_traits > { enum { Cost = 5 * NumTraits::MulCost, PacketAccess = false }; }; #endif /** \internal * \brief Template functor to compute the inverse of a scalar * \sa class CwiseUnaryOp, Cwise::inverse() */ template struct scalar_inverse_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_inverse_op) EIGEN_DEVICE_FUNC inline Scalar operator() (const Scalar& a) const { return Scalar(1)/a; } template EIGEN_DEVICE_FUNC inline const Packet packetOp(const Packet& a) const { return internal::pdiv(pset1(Scalar(1)),a); } }; template struct functor_traits > { enum { PacketAccess = packet_traits::HasDiv, Cost = scalar_div_cost::value }; }; /** \internal * \brief Template functor to compute the square of a scalar * \sa class CwiseUnaryOp, Cwise::square() */ template struct scalar_square_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_square_op) EIGEN_DEVICE_FUNC inline Scalar operator() (const Scalar& a) const { return a*a; } template EIGEN_DEVICE_FUNC inline const Packet packetOp(const Packet& a) const { return internal::pmul(a,a); } }; template struct functor_traits > { enum { Cost = NumTraits::MulCost, PacketAccess = packet_traits::HasMul }; }; // Boolean specialization to avoid -Wint-in-bool-context warnings on GCC. template<> struct scalar_square_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_square_op) EIGEN_DEPRECATED EIGEN_DEVICE_FUNC inline bool operator() (const bool& a) const { return a; } template EIGEN_DEPRECATED EIGEN_DEVICE_FUNC inline const Packet packetOp(const Packet& a) const { return a; } }; template<> struct functor_traits > { enum { Cost = 0, PacketAccess = packet_traits::Vectorizable }; }; /** \internal * \brief Template functor to compute the cube of a scalar * \sa class CwiseUnaryOp, Cwise::cube() */ template struct scalar_cube_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_cube_op) EIGEN_DEVICE_FUNC inline Scalar operator() (const Scalar& a) const { return a*a*a; } template EIGEN_DEVICE_FUNC inline const Packet packetOp(const Packet& a) const { return internal::pmul(a,pmul(a,a)); } }; template struct functor_traits > { enum { Cost = 2*NumTraits::MulCost, PacketAccess = packet_traits::HasMul }; }; // Boolean specialization to avoid -Wint-in-bool-context warnings on GCC. 
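// Illustrative usage sketch (not part of Eigen): inverse(), square() and cube()
// on Array expressions map to the functors defined above, e.g.
// \code
//   Eigen::ArrayXf x(3); x << 1.f, 2.f, 3.f;
//   Eigen::ArrayXf y = x.square() + x.cube(); // scalar_square_op, scalar_cube_op
//   Eigen::ArrayXf r = x.inverse();           // scalar_inverse_op: element-wise 1/x
// \endcode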
template<> struct scalar_cube_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_cube_op) EIGEN_DEPRECATED EIGEN_DEVICE_FUNC inline bool operator() (const bool& a) const { return a; } template EIGEN_DEPRECATED EIGEN_DEVICE_FUNC inline const Packet packetOp(const Packet& a) const { return a; } }; template<> struct functor_traits > { enum { Cost = 0, PacketAccess = packet_traits::Vectorizable }; }; /** \internal * \brief Template functor to compute the rounded value of a scalar * \sa class CwiseUnaryOp, ArrayBase::round() */ template struct scalar_round_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_round_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() (const Scalar& a) const { return numext::round(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::pround(a); } }; template struct functor_traits > { enum { Cost = NumTraits::MulCost, PacketAccess = packet_traits::HasRound }; }; /** \internal * \brief Template functor to compute the floor of a scalar * \sa class CwiseUnaryOp, ArrayBase::floor() */ template struct scalar_floor_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_floor_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() (const Scalar& a) const { return numext::floor(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::pfloor(a); } }; template struct functor_traits > { enum { Cost = NumTraits::MulCost, PacketAccess = packet_traits::HasFloor }; }; /** \internal * \brief Template functor to compute the rounded (with current rounding mode) value of a scalar * \sa class CwiseUnaryOp, ArrayBase::rint() */ template struct scalar_rint_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_rint_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() (const Scalar& a) const { return numext::rint(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::print(a); } }; template struct functor_traits > { enum { Cost = NumTraits::MulCost, PacketAccess = packet_traits::HasRint }; }; /** \internal * \brief Template functor to compute the ceil of a scalar * \sa class CwiseUnaryOp, ArrayBase::ceil() */ template struct scalar_ceil_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_ceil_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Scalar operator() (const Scalar& a) const { return numext::ceil(a); } template EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::pceil(a); } }; template struct functor_traits > { enum { Cost = NumTraits::MulCost, PacketAccess = packet_traits::HasCeil }; }; /** \internal * \brief Template functor to compute whether a scalar is NaN * \sa class CwiseUnaryOp, ArrayBase::isnan() */ template struct scalar_isnan_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_isnan_op) typedef bool result_type; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type operator() (const Scalar& a) const { #if defined(SYCL_DEVICE_ONLY) return numext::isnan(a); #else return (numext::isnan)(a); #endif } }; template struct functor_traits > { enum { Cost = NumTraits::MulCost, PacketAccess = false }; }; /** \internal * \brief Template functor to check whether a scalar is +/-inf * \sa class CwiseUnaryOp, ArrayBase::isinf() */ template struct scalar_isinf_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_isinf_op) typedef bool result_type; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type operator() (const Scalar& a) const { #if defined(SYCL_DEVICE_ONLY) return numext::isinf(a); #else return (numext::isinf)(a); #endif } }; template struct functor_traits > { enum { Cost = NumTraits::MulCost, PacketAccess = 
false }; }; /** \internal * \brief Template functor to check whether a scalar has a finite value * \sa class CwiseUnaryOp, ArrayBase::isfinite() */ template struct scalar_isfinite_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_isfinite_op) typedef bool result_type; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE result_type operator() (const Scalar& a) const { #if defined(SYCL_DEVICE_ONLY) return numext::isfinite(a); #else return (numext::isfinite)(a); #endif } }; template struct functor_traits > { enum { Cost = NumTraits::MulCost, PacketAccess = false }; }; /** \internal * \brief Template functor to compute the logical not of a boolean * * \sa class CwiseUnaryOp, ArrayBase::operator! */ template struct scalar_boolean_not_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_boolean_not_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool operator() (const bool& a) const { return !a; } }; template struct functor_traits > { enum { Cost = NumTraits::AddCost, PacketAccess = false }; }; /** \internal * \brief Template functor to compute the signum of a scalar * \sa class CwiseUnaryOp, Cwise::sign() */ template::IsComplex!=0), bool is_integer=(NumTraits::IsInteger!=0) > struct scalar_sign_op; template struct scalar_sign_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_sign_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return Scalar( (a>Scalar(0)) - (a //EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::psign(a); } }; template struct scalar_sign_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_sign_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { return (numext::isnan)(a) ? a : Scalar( (a>Scalar(0)) - (a //EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::psign(a); } }; template struct scalar_sign_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_sign_op) EIGEN_DEVICE_FUNC inline const Scalar operator() (const Scalar& a) const { typedef typename NumTraits::Real real_type; real_type aa = numext::abs(a); if (aa==real_type(0)) return Scalar(0); aa = real_type(1)/aa; return Scalar(a.real()*aa, a.imag()*aa ); } //TODO //template //EIGEN_DEVICE_FUNC inline Packet packetOp(const Packet& a) const { return internal::psign(a); } }; template struct functor_traits > { enum { Cost = NumTraits::IsComplex ? ( 8*NumTraits::MulCost ) // roughly : ( 3*NumTraits::AddCost), PacketAccess = packet_traits::HasSign }; }; /** \internal * \brief Template functor to compute the logistic function of a scalar * \sa class CwiseUnaryOp, ArrayBase::logistic() */ template struct scalar_logistic_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_logistic_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE T operator()(const T& x) const { return packetOp(x); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet packetOp(const Packet& x) const { const Packet one = pset1(T(1)); return pdiv(one, padd(one, pexp(pnegate(x)))); } }; #ifndef EIGEN_GPU_COMPILE_PHASE /** \internal * \brief Template specialization of the logistic function for float. * * Uses just a 9/10-degree rational interpolant which * interpolates 1/(1+exp(-x)) - 0.5 up to a couple of ulps in the range * [-9, 18]. Below -9 we use the more accurate approximation * 1/(1+exp(-x)) ~= exp(x), and above 18 the logistic function is 1 withing * one ulp. The shifted logistic is interpolated because it was easier to * make the fit converge. 
* */ template <> struct scalar_logistic_op { EIGEN_EMPTY_STRUCT_CTOR(scalar_logistic_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE float operator()(const float& x) const { return packetOp(x); } template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet packetOp(const Packet& _x) const { const Packet cutoff_lower = pset1(-9.f); const Packet lt_mask = pcmp_lt(_x, cutoff_lower); const bool any_small = predux_any(lt_mask); // The upper cut-off is the smallest x for which the rational approximation evaluates to 1. // Choosing this value saves us a few instructions clamping the results at the end. #ifdef EIGEN_VECTORIZE_FMA const Packet cutoff_upper = pset1(15.7243833541870117f); #else const Packet cutoff_upper = pset1(15.6437711715698242f); #endif const Packet x = pmin(_x, cutoff_upper); // The monomial coefficients of the numerator polynomial (odd). const Packet alpha_1 = pset1(2.48287947061529e-01f); const Packet alpha_3 = pset1(8.51377133304701e-03f); const Packet alpha_5 = pset1(6.08574864600143e-05f); const Packet alpha_7 = pset1(1.15627324459942e-07f); const Packet alpha_9 = pset1(4.37031012579801e-11f); // The monomial coefficients of the denominator polynomial (even). const Packet beta_0 = pset1(9.93151921023180e-01f); const Packet beta_2 = pset1(1.16817656904453e-01f); const Packet beta_4 = pset1(1.70198817374094e-03f); const Packet beta_6 = pset1(6.29106785017040e-06f); const Packet beta_8 = pset1(5.76102136993427e-09f); const Packet beta_10 = pset1(6.10247389755681e-13f); // Since the polynomials are odd/even, we need x^2. const Packet x2 = pmul(x, x); // Evaluate the numerator polynomial p. Packet p = pmadd(x2, alpha_9, alpha_7); p = pmadd(x2, p, alpha_5); p = pmadd(x2, p, alpha_3); p = pmadd(x2, p, alpha_1); p = pmul(x, p); // Evaluate the denominator polynomial q. Packet q = pmadd(x2, beta_10, beta_8); q = pmadd(x2, q, beta_6); q = pmadd(x2, q, beta_4); q = pmadd(x2, q, beta_2); q = pmadd(x2, q, beta_0); // Divide the numerator by the denominator and shift it up. const Packet logistic = padd(pdiv(p, q), pset1(0.5f)); if (EIGEN_PREDICT_FALSE(any_small)) { const Packet exponential = pexp(_x); return pselect(lt_mask, exponential, logistic); } else { return logistic; } } }; #endif // #ifndef EIGEN_GPU_COMPILE_PHASE template struct functor_traits > { enum { // The cost estimate for float here here is for the common(?) case where // all arguments are greater than -9. Cost = scalar_div_cost::HasDiv>::value + (internal::is_same::value ? NumTraits::AddCost * 15 + NumTraits::MulCost * 11 : NumTraits::AddCost * 2 + functor_traits >::Cost), PacketAccess = packet_traits::HasAdd && packet_traits::HasDiv && (internal::is_same::value ? packet_traits::HasMul && packet_traits::HasMax && packet_traits::HasMin : packet_traits::HasNegate && packet_traits::HasExp) }; }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_FUNCTORS_H piqp-octave/src/eigen/Eigen/src/Core/functors/AssignmentFunctors.h0000644000175100001770000001503614107270226025115 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2010 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
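// Illustrative sketch (not part of this header): the assignment functors
// defined below (assign_op, add_assign_op, sub_assign_op, mul_assign_op,
// div_assign_op, swap_assign_op) are what Eigen's assignment kernels invoke;
// at the user level they correspond to =, +=, -=, *=, /= and swap(), e.g.
// \code
//   Eigen::MatrixXd A = Eigen::MatrixXd::Random(2, 2);
//   Eigen::MatrixXd B = Eigen::MatrixXd::Random(2, 2);
//   A += B;      // evaluated through add_assign_op
//   A.swap(B);   // evaluated through swap_assign_op
// \endcode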
#ifndef EIGEN_ASSIGNMENT_FUNCTORS_H #define EIGEN_ASSIGNMENT_FUNCTORS_H namespace Eigen { namespace internal { /** \internal * \brief Template functor for scalar/packet assignment * */ template struct assign_op { EIGEN_EMPTY_STRUCT_CTOR(assign_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void assignCoeff(DstScalar& a, const SrcScalar& b) const { a = b; } template EIGEN_STRONG_INLINE void assignPacket(DstScalar* a, const Packet& b) const { internal::pstoret(a,b); } }; // Empty overload for void type (used by PermutationMatrix) template struct assign_op {}; template struct functor_traits > { enum { Cost = NumTraits::ReadCost, PacketAccess = is_same::value && packet_traits::Vectorizable && packet_traits::Vectorizable }; }; /** \internal * \brief Template functor for scalar/packet assignment with addition * */ template struct add_assign_op { EIGEN_EMPTY_STRUCT_CTOR(add_assign_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void assignCoeff(DstScalar& a, const SrcScalar& b) const { a += b; } template EIGEN_STRONG_INLINE void assignPacket(DstScalar* a, const Packet& b) const { internal::pstoret(a,internal::padd(internal::ploadt(a),b)); } }; template struct functor_traits > { enum { Cost = NumTraits::ReadCost + NumTraits::AddCost, PacketAccess = is_same::value && packet_traits::HasAdd }; }; /** \internal * \brief Template functor for scalar/packet assignment with subtraction * */ template struct sub_assign_op { EIGEN_EMPTY_STRUCT_CTOR(sub_assign_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void assignCoeff(DstScalar& a, const SrcScalar& b) const { a -= b; } template EIGEN_STRONG_INLINE void assignPacket(DstScalar* a, const Packet& b) const { internal::pstoret(a,internal::psub(internal::ploadt(a),b)); } }; template struct functor_traits > { enum { Cost = NumTraits::ReadCost + NumTraits::AddCost, PacketAccess = is_same::value && packet_traits::HasSub }; }; /** \internal * \brief Template functor for scalar/packet assignment with multiplication * */ template struct mul_assign_op { EIGEN_EMPTY_STRUCT_CTOR(mul_assign_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void assignCoeff(DstScalar& a, const SrcScalar& b) const { a *= b; } template EIGEN_STRONG_INLINE void assignPacket(DstScalar* a, const Packet& b) const { internal::pstoret(a,internal::pmul(internal::ploadt(a),b)); } }; template struct functor_traits > { enum { Cost = NumTraits::ReadCost + NumTraits::MulCost, PacketAccess = is_same::value && packet_traits::HasMul }; }; /** \internal * \brief Template functor for scalar/packet assignment with diviving * */ template struct div_assign_op { EIGEN_EMPTY_STRUCT_CTOR(div_assign_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void assignCoeff(DstScalar& a, const SrcScalar& b) const { a /= b; } template EIGEN_STRONG_INLINE void assignPacket(DstScalar* a, const Packet& b) const { internal::pstoret(a,internal::pdiv(internal::ploadt(a),b)); } }; template struct functor_traits > { enum { Cost = NumTraits::ReadCost + NumTraits::MulCost, PacketAccess = is_same::value && packet_traits::HasDiv }; }; /** \internal * \brief Template functor for scalar/packet assignment with swapping * * It works as follow. For a non-vectorized evaluation loop, we have: * for(i) func(A.coeffRef(i), B.coeff(i)); * where B is a SwapWrapper expression. The trick is to make SwapWrapper::coeff behaves like a non-const coeffRef. * Actually, SwapWrapper might not even be needed since even if B is a plain expression, since it has to be writable * B.coeff already returns a const reference to the underlying scalar value. 
* * The case of a vectorized loop is more tricky: * for(i,j) func.assignPacket(&A.coeffRef(i,j), B.packet(i,j)); * Here, B must be a SwapWrapper whose packet function actually returns a proxy object holding a Scalar*, * the actual alignment and Packet type. * */ template struct swap_assign_op { EIGEN_EMPTY_STRUCT_CTOR(swap_assign_op) EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void assignCoeff(Scalar& a, const Scalar& b) const { #ifdef EIGEN_GPUCC // FIXME is there some kind of cuda::swap? Scalar t=b; const_cast(b)=a; a=t; #else using std::swap; swap(a,const_cast(b)); #endif } }; template struct functor_traits > { enum { Cost = 3 * NumTraits::ReadCost, PacketAccess = #if defined(EIGEN_VECTORIZE_AVX) && EIGEN_COMP_CLANG && (EIGEN_COMP_CLANG<800 || defined(__apple_build_version__)) // This is a partial workaround for a bug in clang generating bad code // when mixing 256/512 bits loads and 128 bits moves. // See http://eigen.tuxfamily.org/bz/show_bug.cgi?id=1684 // https://bugs.llvm.org/show_bug.cgi?id=40815 0 #else packet_traits::Vectorizable #endif }; }; } // namespace internal } // namespace Eigen #endif // EIGEN_ASSIGNMENT_FUNCTORS_H piqp-octave/src/eigen/Eigen/src/Core/DenseCoeffsBase.h0000644000175100001770000005764414107270226022410 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2006-2010 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_DENSECOEFFSBASE_H #define EIGEN_DENSECOEFFSBASE_H namespace Eigen { namespace internal { template struct add_const_on_value_type_if_arithmetic { typedef typename conditional::value, T, typename add_const_on_value_type::type>::type type; }; } /** \brief Base class providing read-only coefficient access to matrices and arrays. * \ingroup Core_Module * \tparam Derived Type of the derived class * * \note #ReadOnlyAccessors Constant indicating read-only access * * This class defines the \c operator() \c const function and friends, which can be used to read specific * entries of a matrix or array. * * \sa DenseCoeffsBase, DenseCoeffsBase, * \ref TopicClassHierarchy */ template class DenseCoeffsBase : public EigenBase { public: typedef typename internal::traits::StorageKind StorageKind; typedef typename internal::traits::Scalar Scalar; typedef typename internal::packet_traits::type PacketScalar; // Explanation for this CoeffReturnType typedef. // - This is the return type of the coeff() method. // - The LvalueBit means exactly that we can offer a coeffRef() method, which means exactly that we can get references // to coeffs, which means exactly that we can have coeff() return a const reference (as opposed to returning a value). // - The is_artihmetic check is required since "const int", "const double", etc. will cause warnings on some systems // while the declaration of "const T", where T is a non arithmetic type does not. Always returning "const Scalar&" is // not possible, since the underlying expressions might not offer a valid address the reference could be referring to. 
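// Illustrative note (not part of Eigen): in practice this means coeff() and
// operator() on a plain object yield a reference into storage, while on a
// computed expression they yield a value, e.g.
// \code
//   Eigen::Matrix2d A = Eigen::Matrix2d::Random(), B = Eigen::Matrix2d::Random();
//   const double& r = A.coeff(0, 0);   // CoeffReturnType is const double& (LvalueBit set)
//   double v = (A + B).coeff(0, 0);    // CoeffReturnType is double (no addressable storage)
// \endcode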
typedef typename internal::conditional::Flags&LvalueBit), const Scalar&, typename internal::conditional::value, Scalar, const Scalar>::type >::type CoeffReturnType; typedef typename internal::add_const_on_value_type_if_arithmetic< typename internal::packet_traits::type >::type PacketReturnType; typedef EigenBase Base; using Base::rows; using Base::cols; using Base::size; using Base::derived; EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Index rowIndexByOuterInner(Index outer, Index inner) const { return int(Derived::RowsAtCompileTime) == 1 ? 0 : int(Derived::ColsAtCompileTime) == 1 ? inner : int(Derived::Flags)&RowMajorBit ? outer : inner; } EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Index colIndexByOuterInner(Index outer, Index inner) const { return int(Derived::ColsAtCompileTime) == 1 ? 0 : int(Derived::RowsAtCompileTime) == 1 ? inner : int(Derived::Flags)&RowMajorBit ? inner : outer; } /** Short version: don't use this function, use * \link operator()(Index,Index) const \endlink instead. * * Long version: this function is similar to * \link operator()(Index,Index) const \endlink, but without the assertion. * Use this for limiting the performance cost of debugging code when doing * repeated coefficient access. Only use this when it is guaranteed that the * parameters \a row and \a col are in range. * * If EIGEN_INTERNAL_DEBUGGING is defined, an assertion will be made, making this * function equivalent to \link operator()(Index,Index) const \endlink. * * \sa operator()(Index,Index) const, coeffRef(Index,Index), coeff(Index) const */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE CoeffReturnType coeff(Index row, Index col) const { eigen_internal_assert(row >= 0 && row < rows() && col >= 0 && col < cols()); return internal::evaluator(derived()).coeff(row,col); } EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE CoeffReturnType coeffByOuterInner(Index outer, Index inner) const { return coeff(rowIndexByOuterInner(outer, inner), colIndexByOuterInner(outer, inner)); } /** \returns the coefficient at given the given row and column. * * \sa operator()(Index,Index), operator[](Index) */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE CoeffReturnType operator()(Index row, Index col) const { eigen_assert(row >= 0 && row < rows() && col >= 0 && col < cols()); return coeff(row, col); } /** Short version: don't use this function, use * \link operator[](Index) const \endlink instead. * * Long version: this function is similar to * \link operator[](Index) const \endlink, but without the assertion. * Use this for limiting the performance cost of debugging code when doing * repeated coefficient access. Only use this when it is guaranteed that the * parameter \a index is in range. * * If EIGEN_INTERNAL_DEBUGGING is defined, an assertion will be made, making this * function equivalent to \link operator[](Index) const \endlink. * * \sa operator[](Index) const, coeffRef(Index), coeff(Index,Index) const */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE CoeffReturnType coeff(Index index) const { EIGEN_STATIC_ASSERT(internal::evaluator::Flags & LinearAccessBit, THIS_COEFFICIENT_ACCESSOR_TAKING_ONE_ACCESS_IS_ONLY_FOR_EXPRESSIONS_ALLOWING_LINEAR_ACCESS) eigen_internal_assert(index >= 0 && index < size()); return internal::evaluator(derived()).coeff(index); } /** \returns the coefficient at given index. * * This method is allowed only for vector expressions, and for matrix expressions having the LinearAccessBit. 
* * \sa operator[](Index), operator()(Index,Index) const, x() const, y() const, * z() const, w() const */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE CoeffReturnType operator[](Index index) const { EIGEN_STATIC_ASSERT(Derived::IsVectorAtCompileTime, THE_BRACKET_OPERATOR_IS_ONLY_FOR_VECTORS__USE_THE_PARENTHESIS_OPERATOR_INSTEAD) eigen_assert(index >= 0 && index < size()); return coeff(index); } /** \returns the coefficient at given index. * * This is synonymous to operator[](Index) const. * * This method is allowed only for vector expressions, and for matrix expressions having the LinearAccessBit. * * \sa operator[](Index), operator()(Index,Index) const, x() const, y() const, * z() const, w() const */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE CoeffReturnType operator()(Index index) const { eigen_assert(index >= 0 && index < size()); return coeff(index); } /** equivalent to operator[](0). */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE CoeffReturnType x() const { return (*this)[0]; } /** equivalent to operator[](1). */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE CoeffReturnType y() const { EIGEN_STATIC_ASSERT(Derived::SizeAtCompileTime==-1 || Derived::SizeAtCompileTime>=2, OUT_OF_RANGE_ACCESS); return (*this)[1]; } /** equivalent to operator[](2). */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE CoeffReturnType z() const { EIGEN_STATIC_ASSERT(Derived::SizeAtCompileTime==-1 || Derived::SizeAtCompileTime>=3, OUT_OF_RANGE_ACCESS); return (*this)[2]; } /** equivalent to operator[](3). */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE CoeffReturnType w() const { EIGEN_STATIC_ASSERT(Derived::SizeAtCompileTime==-1 || Derived::SizeAtCompileTime>=4, OUT_OF_RANGE_ACCESS); return (*this)[3]; } /** \internal * \returns the packet of coefficients starting at the given row and column. It is your responsibility * to ensure that a packet really starts there. This method is only available on expressions having the * PacketAccessBit. * * The \a LoadMode parameter may have the value \a #Aligned or \a #Unaligned. Its effect is to select * the appropriate vectorization instruction. Aligned access is faster, but is only possible for packets * starting at an address which is a multiple of the packet size. */ template EIGEN_STRONG_INLINE PacketReturnType packet(Index row, Index col) const { typedef typename internal::packet_traits::type DefaultPacketType; eigen_internal_assert(row >= 0 && row < rows() && col >= 0 && col < cols()); return internal::evaluator(derived()).template packet(row,col); } /** \internal */ template EIGEN_STRONG_INLINE PacketReturnType packetByOuterInner(Index outer, Index inner) const { return packet(rowIndexByOuterInner(outer, inner), colIndexByOuterInner(outer, inner)); } /** \internal * \returns the packet of coefficients starting at the given index. It is your responsibility * to ensure that a packet really starts there. This method is only available on expressions having the * PacketAccessBit and the LinearAccessBit. * * The \a LoadMode parameter may have the value \a #Aligned or \a #Unaligned. Its effect is to select * the appropriate vectorization instruction. Aligned access is faster, but is only possible for packets * starting at an address which is a multiple of the packet size. 
*/ template EIGEN_STRONG_INLINE PacketReturnType packet(Index index) const { EIGEN_STATIC_ASSERT(internal::evaluator::Flags & LinearAccessBit, THIS_COEFFICIENT_ACCESSOR_TAKING_ONE_ACCESS_IS_ONLY_FOR_EXPRESSIONS_ALLOWING_LINEAR_ACCESS) typedef typename internal::packet_traits::type DefaultPacketType; eigen_internal_assert(index >= 0 && index < size()); return internal::evaluator(derived()).template packet(index); } protected: // explanation: DenseBase is doing "using ..." on the methods from DenseCoeffsBase. // But some methods are only available in the DirectAccess case. // So we add dummy methods here with these names, so that "using... " doesn't fail. // It's not private so that the child class DenseBase can access them, and it's not public // either since it's an implementation detail, so has to be protected. void coeffRef(); void coeffRefByOuterInner(); void writePacket(); void writePacketByOuterInner(); void copyCoeff(); void copyCoeffByOuterInner(); void copyPacket(); void copyPacketByOuterInner(); void stride(); void innerStride(); void outerStride(); void rowStride(); void colStride(); }; /** \brief Base class providing read/write coefficient access to matrices and arrays. * \ingroup Core_Module * \tparam Derived Type of the derived class * * \note #WriteAccessors Constant indicating read/write access * * This class defines the non-const \c operator() function and friends, which can be used to write specific * entries of a matrix or array. This class inherits DenseCoeffsBase which * defines the const variant for reading specific entries. * * \sa DenseCoeffsBase, \ref TopicClassHierarchy */ template class DenseCoeffsBase : public DenseCoeffsBase { public: typedef DenseCoeffsBase Base; typedef typename internal::traits::StorageKind StorageKind; typedef typename internal::traits::Scalar Scalar; typedef typename internal::packet_traits::type PacketScalar; typedef typename NumTraits::Real RealScalar; using Base::coeff; using Base::rows; using Base::cols; using Base::size; using Base::derived; using Base::rowIndexByOuterInner; using Base::colIndexByOuterInner; using Base::operator[]; using Base::operator(); using Base::x; using Base::y; using Base::z; using Base::w; /** Short version: don't use this function, use * \link operator()(Index,Index) \endlink instead. * * Long version: this function is similar to * \link operator()(Index,Index) \endlink, but without the assertion. * Use this for limiting the performance cost of debugging code when doing * repeated coefficient access. Only use this when it is guaranteed that the * parameters \a row and \a col are in range. * * If EIGEN_INTERNAL_DEBUGGING is defined, an assertion will be made, making this * function equivalent to \link operator()(Index,Index) \endlink. * * \sa operator()(Index,Index), coeff(Index, Index) const, coeffRef(Index) */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Scalar& coeffRef(Index row, Index col) { eigen_internal_assert(row >= 0 && row < rows() && col >= 0 && col < cols()); return internal::evaluator(derived()).coeffRef(row,col); } EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Scalar& coeffRefByOuterInner(Index outer, Index inner) { return coeffRef(rowIndexByOuterInner(outer, inner), colIndexByOuterInner(outer, inner)); } /** \returns a reference to the coefficient at given the given row and column. 
* * \sa operator[](Index) */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Scalar& operator()(Index row, Index col) { eigen_assert(row >= 0 && row < rows() && col >= 0 && col < cols()); return coeffRef(row, col); } /** Short version: don't use this function, use * \link operator[](Index) \endlink instead. * * Long version: this function is similar to * \link operator[](Index) \endlink, but without the assertion. * Use this for limiting the performance cost of debugging code when doing * repeated coefficient access. Only use this when it is guaranteed that the * parameters \a row and \a col are in range. * * If EIGEN_INTERNAL_DEBUGGING is defined, an assertion will be made, making this * function equivalent to \link operator[](Index) \endlink. * * \sa operator[](Index), coeff(Index) const, coeffRef(Index,Index) */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Scalar& coeffRef(Index index) { EIGEN_STATIC_ASSERT(internal::evaluator::Flags & LinearAccessBit, THIS_COEFFICIENT_ACCESSOR_TAKING_ONE_ACCESS_IS_ONLY_FOR_EXPRESSIONS_ALLOWING_LINEAR_ACCESS) eigen_internal_assert(index >= 0 && index < size()); return internal::evaluator(derived()).coeffRef(index); } /** \returns a reference to the coefficient at given index. * * This method is allowed only for vector expressions, and for matrix expressions having the LinearAccessBit. * * \sa operator[](Index) const, operator()(Index,Index), x(), y(), z(), w() */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Scalar& operator[](Index index) { EIGEN_STATIC_ASSERT(Derived::IsVectorAtCompileTime, THE_BRACKET_OPERATOR_IS_ONLY_FOR_VECTORS__USE_THE_PARENTHESIS_OPERATOR_INSTEAD) eigen_assert(index >= 0 && index < size()); return coeffRef(index); } /** \returns a reference to the coefficient at given index. * * This is synonymous to operator[](Index). * * This method is allowed only for vector expressions, and for matrix expressions having the LinearAccessBit. * * \sa operator[](Index) const, operator()(Index,Index), x(), y(), z(), w() */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Scalar& operator()(Index index) { eigen_assert(index >= 0 && index < size()); return coeffRef(index); } /** equivalent to operator[](0). */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Scalar& x() { return (*this)[0]; } /** equivalent to operator[](1). */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Scalar& y() { EIGEN_STATIC_ASSERT(Derived::SizeAtCompileTime==-1 || Derived::SizeAtCompileTime>=2, OUT_OF_RANGE_ACCESS); return (*this)[1]; } /** equivalent to operator[](2). */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Scalar& z() { EIGEN_STATIC_ASSERT(Derived::SizeAtCompileTime==-1 || Derived::SizeAtCompileTime>=3, OUT_OF_RANGE_ACCESS); return (*this)[2]; } /** equivalent to operator[](3). */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Scalar& w() { EIGEN_STATIC_ASSERT(Derived::SizeAtCompileTime==-1 || Derived::SizeAtCompileTime>=4, OUT_OF_RANGE_ACCESS); return (*this)[3]; } }; /** \brief Base class providing direct read-only coefficient access to matrices and arrays. * \ingroup Core_Module * \tparam Derived Type of the derived class * * \note #DirectAccessors Constant indicating direct access * * This class defines functions to work with strides which can be used to access entries directly. This class * inherits DenseCoeffsBase which defines functions to access entries read-only using * \c operator() . 
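*
* A minimal sketch of how these strides can be combined with data() for raw element access,
* assuming a plain column-major \c Eigen::MatrixXd named \c m (illustrative only):
* \code
* Eigen::MatrixXd m(3, 4);
* const double* p = m.data();
* // element (i, j) sits at p[i * m.rowStride() + j * m.colStride()]
* double mij = p[1 * m.rowStride() + 2 * m.colStride()];   // same value as m(1, 2)
* \endcode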
* * \sa \blank \ref TopicClassHierarchy */ template class DenseCoeffsBase : public DenseCoeffsBase { public: typedef DenseCoeffsBase Base; typedef typename internal::traits::Scalar Scalar; typedef typename NumTraits::Real RealScalar; using Base::rows; using Base::cols; using Base::size; using Base::derived; /** \returns the pointer increment between two consecutive elements within a slice in the inner direction. * * \sa outerStride(), rowStride(), colStride() */ EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index innerStride() const { return derived().innerStride(); } /** \returns the pointer increment between two consecutive inner slices (for example, between two consecutive columns * in a column-major matrix). * * \sa innerStride(), rowStride(), colStride() */ EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index outerStride() const { return derived().outerStride(); } // FIXME shall we remove it ? EIGEN_CONSTEXPR inline Index stride() const { return Derived::IsVectorAtCompileTime ? innerStride() : outerStride(); } /** \returns the pointer increment between two consecutive rows. * * \sa innerStride(), outerStride(), colStride() */ EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index rowStride() const { return Derived::IsRowMajor ? outerStride() : innerStride(); } /** \returns the pointer increment between two consecutive columns. * * \sa innerStride(), outerStride(), rowStride() */ EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index colStride() const { return Derived::IsRowMajor ? innerStride() : outerStride(); } }; /** \brief Base class providing direct read/write coefficient access to matrices and arrays. * \ingroup Core_Module * \tparam Derived Type of the derived class * * \note #DirectWriteAccessors Constant indicating direct access * * This class defines functions to work with strides which can be used to access entries directly. This class * inherits DenseCoeffsBase which defines functions to access entries read/write using * \c operator(). * * \sa \blank \ref TopicClassHierarchy */ template class DenseCoeffsBase : public DenseCoeffsBase { public: typedef DenseCoeffsBase Base; typedef typename internal::traits::Scalar Scalar; typedef typename NumTraits::Real RealScalar; using Base::rows; using Base::cols; using Base::size; using Base::derived; /** \returns the pointer increment between two consecutive elements within a slice in the inner direction. * * \sa outerStride(), rowStride(), colStride() */ EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index innerStride() const EIGEN_NOEXCEPT { return derived().innerStride(); } /** \returns the pointer increment between two consecutive inner slices (for example, between two consecutive columns * in a column-major matrix). * * \sa innerStride(), rowStride(), colStride() */ EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index outerStride() const EIGEN_NOEXCEPT { return derived().outerStride(); } // FIXME shall we remove it ? EIGEN_CONSTEXPR inline Index stride() const EIGEN_NOEXCEPT { return Derived::IsVectorAtCompileTime ? innerStride() : outerStride(); } /** \returns the pointer increment between two consecutive rows. * * \sa innerStride(), outerStride(), colStride() */ EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index rowStride() const EIGEN_NOEXCEPT { return Derived::IsRowMajor ? outerStride() : innerStride(); } /** \returns the pointer increment between two consecutive columns. * * \sa innerStride(), outerStride(), rowStride() */ EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index colStride() const EIGEN_NOEXCEPT { return Derived::IsRowMajor ? 
innerStride() : outerStride(); } }; namespace internal { template struct first_aligned_impl { static EIGEN_CONSTEXPR inline Index run(const Derived&) EIGEN_NOEXCEPT { return 0; } }; template struct first_aligned_impl { static inline Index run(const Derived& m) { return internal::first_aligned(m.data(), m.size()); } }; /** \internal \returns the index of the first element of the array stored by \a m that is properly aligned with respect to \a Alignment for vectorization. * * \tparam Alignment requested alignment in Bytes. * * There is also the variant first_aligned(const Scalar*, Integer) defined in Memory.h. See it for more * documentation. */ template static inline Index first_aligned(const DenseBase& m) { enum { ReturnZero = (int(evaluator::Alignment) >= Alignment) || !(Derived::Flags & DirectAccessBit) }; return first_aligned_impl::run(m.derived()); } template static inline Index first_default_aligned(const DenseBase& m) { typedef typename Derived::Scalar Scalar; typedef typename packet_traits::type DefaultPacketType; return internal::first_aligned::alignment),Derived>(m); } template::ret> struct inner_stride_at_compile_time { enum { ret = traits::InnerStrideAtCompileTime }; }; template struct inner_stride_at_compile_time { enum { ret = 0 }; }; template::ret> struct outer_stride_at_compile_time { enum { ret = traits::OuterStrideAtCompileTime }; }; template struct outer_stride_at_compile_time { enum { ret = 0 }; }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_DENSECOEFFSBASE_H piqp-octave/src/eigen/Eigen/src/Core/MatrixBase.h0000644000175100001770000005646014107270226021463 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2006-2009 Benoit Jacob // Copyright (C) 2008 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_MATRIXBASE_H #define EIGEN_MATRIXBASE_H namespace Eigen { /** \class MatrixBase * \ingroup Core_Module * * \brief Base class for all dense matrices, vectors, and expressions * * This class is the base that is inherited by all matrix, vector, and related expression * types. Most of the Eigen API is contained in this class, and its base classes. Other important * classes for the Eigen API are Matrix, and VectorwiseOp. * * Note that some methods are defined in other modules such as the \ref LU_Module LU module * for all functions related to matrix inversions. * * \tparam Derived is the derived type, e.g. a matrix type, or an expression, etc. * * When writing a function taking Eigen objects as argument, if you want your function * to take as argument any matrix, vector, or expression, just let it take a * MatrixBase argument. As an example, here is a function printFirstRow which, given * a matrix, vector, or expression \a x, prints the first row of \a x. * * \code template void printFirstRow(const Eigen::MatrixBase& x) { cout << x.row(0) << endl; } * \endcode * * This class can be extended with the help of the plugin mechanism described on the page * \ref TopicCustomizing_Plugins by defining the preprocessor symbol \c EIGEN_MATRIXBASE_PLUGIN. 
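*
* A minimal sketch of that plugin mechanism, assuming a hypothetical header
* \c my_matrixbase_plugin.h whose contents are injected into this class body:
* \code
* // my_matrixbase_plugin.h -- extra member made available on every MatrixBase expression
* inline Scalar firstCoeff() const { return coeff(0, 0); }
* \endcode
* \code
* #define EIGEN_MATRIXBASE_PLUGIN "my_matrixbase_plugin.h"
* #include <Eigen/Dense>   // expressions now expose firstCoeff()
* \endcode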
* * \sa \blank \ref TopicClassHierarchy */ template class MatrixBase : public DenseBase { public: #ifndef EIGEN_PARSED_BY_DOXYGEN typedef MatrixBase StorageBaseType; typedef typename internal::traits::StorageKind StorageKind; typedef typename internal::traits::StorageIndex StorageIndex; typedef typename internal::traits::Scalar Scalar; typedef typename internal::packet_traits::type PacketScalar; typedef typename NumTraits::Real RealScalar; typedef DenseBase Base; using Base::RowsAtCompileTime; using Base::ColsAtCompileTime; using Base::SizeAtCompileTime; using Base::MaxRowsAtCompileTime; using Base::MaxColsAtCompileTime; using Base::MaxSizeAtCompileTime; using Base::IsVectorAtCompileTime; using Base::Flags; using Base::derived; using Base::const_cast_derived; using Base::rows; using Base::cols; using Base::size; using Base::coeff; using Base::coeffRef; using Base::lazyAssign; using Base::eval; using Base::operator-; using Base::operator+=; using Base::operator-=; using Base::operator*=; using Base::operator/=; typedef typename Base::CoeffReturnType CoeffReturnType; typedef typename Base::ConstTransposeReturnType ConstTransposeReturnType; typedef typename Base::RowXpr RowXpr; typedef typename Base::ColXpr ColXpr; #endif // not EIGEN_PARSED_BY_DOXYGEN #ifndef EIGEN_PARSED_BY_DOXYGEN /** type of the equivalent square matrix */ typedef Matrix SquareMatrixType; #endif // not EIGEN_PARSED_BY_DOXYGEN /** \returns the size of the main diagonal, which is min(rows(),cols()). * \sa rows(), cols(), SizeAtCompileTime. */ EIGEN_DEVICE_FUNC inline Index diagonalSize() const { return (numext::mini)(rows(),cols()); } typedef typename Base::PlainObject PlainObject; #ifndef EIGEN_PARSED_BY_DOXYGEN /** \internal Represents a matrix with all coefficients equal to one another*/ typedef CwiseNullaryOp,PlainObject> ConstantReturnType; /** \internal the return type of MatrixBase::adjoint() */ typedef typename internal::conditional::IsComplex, CwiseUnaryOp, ConstTransposeReturnType>, ConstTransposeReturnType >::type AdjointReturnType; /** \internal Return type of eigenvalues() */ typedef Matrix, internal::traits::ColsAtCompileTime, 1, ColMajor> EigenvaluesReturnType; /** \internal the return type of identity */ typedef CwiseNullaryOp,PlainObject> IdentityReturnType; /** \internal the return type of unit vectors */ typedef Block, SquareMatrixType>, internal::traits::RowsAtCompileTime, internal::traits::ColsAtCompileTime> BasisReturnType; #endif // not EIGEN_PARSED_BY_DOXYGEN #define EIGEN_CURRENT_STORAGE_BASE_CLASS Eigen::MatrixBase #define EIGEN_DOC_UNARY_ADDONS(X,Y) # include "../plugins/CommonCwiseBinaryOps.h" # include "../plugins/MatrixCwiseUnaryOps.h" # include "../plugins/MatrixCwiseBinaryOps.h" # ifdef EIGEN_MATRIXBASE_PLUGIN # include EIGEN_MATRIXBASE_PLUGIN # endif #undef EIGEN_CURRENT_STORAGE_BASE_CLASS #undef EIGEN_DOC_UNARY_ADDONS /** Special case of the template operator=, in order to prevent the compiler * from generating a default operator= (issue hit with g++ 4.1) */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Derived& operator=(const MatrixBase& other); // We cannot inherit here via Base::operator= since it is causing // trouble with MSVC. 
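// A short, purely illustrative sketch of the assignment operators declared here
// (the matrices a and b are hypothetical example objects):
//   Eigen::Matrix3d a = Eigen::Matrix3d::Random();
//   Eigen::Matrix3d b;
//   b = a;               // same-type assignment, handled by the special-case operator= above
//   b = a.transpose();   // assignment from another matrix expression (overloads declared below)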
template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Derived& operator=(const DenseBase& other); template EIGEN_DEVICE_FUNC Derived& operator=(const EigenBase& other); template EIGEN_DEVICE_FUNC Derived& operator=(const ReturnByValue& other); template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Derived& operator+=(const MatrixBase& other); template EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Derived& operator-=(const MatrixBase& other); template EIGEN_DEVICE_FUNC const Product operator*(const MatrixBase &other) const; template EIGEN_DEVICE_FUNC const Product lazyProduct(const MatrixBase &other) const; template Derived& operator*=(const EigenBase& other); template void applyOnTheLeft(const EigenBase& other); template void applyOnTheRight(const EigenBase& other); template EIGEN_DEVICE_FUNC const Product operator*(const DiagonalBase &diagonal) const; template EIGEN_DEVICE_FUNC typename ScalarBinaryOpTraits::Scalar,typename internal::traits::Scalar>::ReturnType dot(const MatrixBase& other) const; EIGEN_DEVICE_FUNC RealScalar squaredNorm() const; EIGEN_DEVICE_FUNC RealScalar norm() const; RealScalar stableNorm() const; RealScalar blueNorm() const; RealScalar hypotNorm() const; EIGEN_DEVICE_FUNC const PlainObject normalized() const; EIGEN_DEVICE_FUNC const PlainObject stableNormalized() const; EIGEN_DEVICE_FUNC void normalize(); EIGEN_DEVICE_FUNC void stableNormalize(); EIGEN_DEVICE_FUNC const AdjointReturnType adjoint() const; EIGEN_DEVICE_FUNC void adjointInPlace(); typedef Diagonal DiagonalReturnType; EIGEN_DEVICE_FUNC DiagonalReturnType diagonal(); typedef typename internal::add_const >::type ConstDiagonalReturnType; EIGEN_DEVICE_FUNC ConstDiagonalReturnType diagonal() const; template struct DiagonalIndexReturnType { typedef Diagonal Type; }; template struct ConstDiagonalIndexReturnType { typedef const Diagonal Type; }; template EIGEN_DEVICE_FUNC typename DiagonalIndexReturnType::Type diagonal(); template EIGEN_DEVICE_FUNC typename ConstDiagonalIndexReturnType::Type diagonal() const; typedef Diagonal DiagonalDynamicIndexReturnType; typedef typename internal::add_const >::type ConstDiagonalDynamicIndexReturnType; EIGEN_DEVICE_FUNC DiagonalDynamicIndexReturnType diagonal(Index index); EIGEN_DEVICE_FUNC ConstDiagonalDynamicIndexReturnType diagonal(Index index) const; template struct TriangularViewReturnType { typedef TriangularView Type; }; template struct ConstTriangularViewReturnType { typedef const TriangularView Type; }; template EIGEN_DEVICE_FUNC typename TriangularViewReturnType::Type triangularView(); template EIGEN_DEVICE_FUNC typename ConstTriangularViewReturnType::Type triangularView() const; template struct SelfAdjointViewReturnType { typedef SelfAdjointView Type; }; template struct ConstSelfAdjointViewReturnType { typedef const SelfAdjointView Type; }; template EIGEN_DEVICE_FUNC typename SelfAdjointViewReturnType::Type selfadjointView(); template EIGEN_DEVICE_FUNC typename ConstSelfAdjointViewReturnType::Type selfadjointView() const; const SparseView sparseView(const Scalar& m_reference = Scalar(0), const typename NumTraits::Real& m_epsilon = NumTraits::dummy_precision()) const; EIGEN_DEVICE_FUNC static const IdentityReturnType Identity(); EIGEN_DEVICE_FUNC static const IdentityReturnType Identity(Index rows, Index cols); EIGEN_DEVICE_FUNC static const BasisReturnType Unit(Index size, Index i); EIGEN_DEVICE_FUNC static const BasisReturnType Unit(Index i); EIGEN_DEVICE_FUNC static const BasisReturnType UnitX(); EIGEN_DEVICE_FUNC static const BasisReturnType UnitY(); EIGEN_DEVICE_FUNC static const 
BasisReturnType UnitZ(); EIGEN_DEVICE_FUNC static const BasisReturnType UnitW(); EIGEN_DEVICE_FUNC const DiagonalWrapper asDiagonal() const; const PermutationWrapper asPermutation() const; EIGEN_DEVICE_FUNC Derived& setIdentity(); EIGEN_DEVICE_FUNC Derived& setIdentity(Index rows, Index cols); EIGEN_DEVICE_FUNC Derived& setUnit(Index i); EIGEN_DEVICE_FUNC Derived& setUnit(Index newSize, Index i); bool isIdentity(const RealScalar& prec = NumTraits::dummy_precision()) const; bool isDiagonal(const RealScalar& prec = NumTraits::dummy_precision()) const; bool isUpperTriangular(const RealScalar& prec = NumTraits::dummy_precision()) const; bool isLowerTriangular(const RealScalar& prec = NumTraits::dummy_precision()) const; template bool isOrthogonal(const MatrixBase& other, const RealScalar& prec = NumTraits::dummy_precision()) const; bool isUnitary(const RealScalar& prec = NumTraits::dummy_precision()) const; /** \returns true if each coefficients of \c *this and \a other are all exactly equal. * \warning When using floating point scalar values you probably should rather use a * fuzzy comparison such as isApprox() * \sa isApprox(), operator!= */ template EIGEN_DEVICE_FUNC inline bool operator==(const MatrixBase& other) const { return cwiseEqual(other).all(); } /** \returns true if at least one pair of coefficients of \c *this and \a other are not exactly equal to each other. * \warning When using floating point scalar values you probably should rather use a * fuzzy comparison such as isApprox() * \sa isApprox(), operator== */ template EIGEN_DEVICE_FUNC inline bool operator!=(const MatrixBase& other) const { return cwiseNotEqual(other).any(); } NoAlias EIGEN_DEVICE_FUNC noalias(); // TODO forceAlignedAccess is temporarily disabled // Need to find a nicer workaround. 
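// A brief sketch of noalias(), declared just above (illustrative only): it tells Eigen that
// the destination does not overlap the operands, so a product assignment can skip its temporary.
//   Eigen::MatrixXd a = Eigen::MatrixXd::Random(4, 4);
//   Eigen::MatrixXd b = Eigen::MatrixXd::Random(4, 4);
//   Eigen::MatrixXd c(4, 4);
//   c.noalias() = a * b;   // safe: c aliases neither a nor b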
inline const Derived& forceAlignedAccess() const { return derived(); } inline Derived& forceAlignedAccess() { return derived(); } template inline const Derived& forceAlignedAccessIf() const { return derived(); } template inline Derived& forceAlignedAccessIf() { return derived(); } EIGEN_DEVICE_FUNC Scalar trace() const; template EIGEN_DEVICE_FUNC RealScalar lpNorm() const; EIGEN_DEVICE_FUNC MatrixBase& matrix() { return *this; } EIGEN_DEVICE_FUNC const MatrixBase& matrix() const { return *this; } /** \returns an \link Eigen::ArrayBase Array \endlink expression of this matrix * \sa ArrayBase::matrix() */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE ArrayWrapper array() { return ArrayWrapper(derived()); } /** \returns a const \link Eigen::ArrayBase Array \endlink expression of this matrix * \sa ArrayBase::matrix() */ EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const ArrayWrapper array() const { return ArrayWrapper(derived()); } /////////// LU module /////////// inline const FullPivLU fullPivLu() const; inline const PartialPivLU partialPivLu() const; inline const PartialPivLU lu() const; EIGEN_DEVICE_FUNC inline const Inverse inverse() const; template inline void computeInverseAndDetWithCheck( ResultType& inverse, typename ResultType::Scalar& determinant, bool& invertible, const RealScalar& absDeterminantThreshold = NumTraits::dummy_precision() ) const; template inline void computeInverseWithCheck( ResultType& inverse, bool& invertible, const RealScalar& absDeterminantThreshold = NumTraits::dummy_precision() ) const; EIGEN_DEVICE_FUNC Scalar determinant() const; /////////// Cholesky module /////////// inline const LLT llt() const; inline const LDLT ldlt() const; /////////// QR module /////////// inline const HouseholderQR householderQr() const; inline const ColPivHouseholderQR colPivHouseholderQr() const; inline const FullPivHouseholderQR fullPivHouseholderQr() const; inline const CompleteOrthogonalDecomposition completeOrthogonalDecomposition() const; /////////// Eigenvalues module /////////// inline EigenvaluesReturnType eigenvalues() const; inline RealScalar operatorNorm() const; /////////// SVD module /////////// inline JacobiSVD jacobiSvd(unsigned int computationOptions = 0) const; inline BDCSVD bdcSvd(unsigned int computationOptions = 0) const; /////////// Geometry module /////////// #ifndef EIGEN_PARSED_BY_DOXYGEN /// \internal helper struct to form the return type of the cross product template struct cross_product_return_type { typedef typename ScalarBinaryOpTraits::Scalar,typename internal::traits::Scalar>::ReturnType Scalar; typedef Matrix type; }; #endif // EIGEN_PARSED_BY_DOXYGEN template EIGEN_DEVICE_FUNC #ifndef EIGEN_PARSED_BY_DOXYGEN inline typename cross_product_return_type::type #else inline PlainObject #endif cross(const MatrixBase& other) const; template EIGEN_DEVICE_FUNC inline PlainObject cross3(const MatrixBase& other) const; EIGEN_DEVICE_FUNC inline PlainObject unitOrthogonal(void) const; EIGEN_DEVICE_FUNC inline Matrix eulerAngles(Index a0, Index a1, Index a2) const; // put this as separate enum value to work around possible GCC 4.3 bug (?) enum { HomogeneousReturnTypeDirection = ColsAtCompileTime==1&&RowsAtCompileTime==1 ? ((internal::traits::Flags&RowMajorBit)==RowMajorBit ? Horizontal : Vertical) : ColsAtCompileTime==1 ? Vertical : Horizontal }; typedef Homogeneous HomogeneousReturnType; EIGEN_DEVICE_FUNC inline HomogeneousReturnType homogeneous() const; enum { SizeMinusOne = SizeAtCompileTime==Dynamic ? Dynamic : SizeAtCompileTime-1 }; typedef Block::ColsAtCompileTime==1 ? 
SizeMinusOne : 1, internal::traits::ColsAtCompileTime==1 ? 1 : SizeMinusOne> ConstStartMinusOne; typedef EIGEN_EXPR_BINARYOP_SCALAR_RETURN_TYPE(ConstStartMinusOne,Scalar,quotient) HNormalizedReturnType; EIGEN_DEVICE_FUNC inline const HNormalizedReturnType hnormalized() const; ////////// Householder module /////////// EIGEN_DEVICE_FUNC void makeHouseholderInPlace(Scalar& tau, RealScalar& beta); template EIGEN_DEVICE_FUNC void makeHouseholder(EssentialPart& essential, Scalar& tau, RealScalar& beta) const; template EIGEN_DEVICE_FUNC void applyHouseholderOnTheLeft(const EssentialPart& essential, const Scalar& tau, Scalar* workspace); template EIGEN_DEVICE_FUNC void applyHouseholderOnTheRight(const EssentialPart& essential, const Scalar& tau, Scalar* workspace); ///////// Jacobi module ///////// template EIGEN_DEVICE_FUNC void applyOnTheLeft(Index p, Index q, const JacobiRotation& j); template EIGEN_DEVICE_FUNC void applyOnTheRight(Index p, Index q, const JacobiRotation& j); ///////// SparseCore module ///////// template EIGEN_STRONG_INLINE const typename SparseMatrixBase::template CwiseProductDenseReturnType::Type cwiseProduct(const SparseMatrixBase &other) const { return other.cwiseProduct(derived()); } ///////// MatrixFunctions module ///////// typedef typename internal::stem_function::type StemFunction; #define EIGEN_MATRIX_FUNCTION(ReturnType, Name, Description) \ /** \returns an expression of the matrix Description of \c *this. \brief This function requires the unsupported MatrixFunctions module. To compute the coefficient-wise Description use ArrayBase::##Name . */ \ const ReturnType Name() const; #define EIGEN_MATRIX_FUNCTION_1(ReturnType, Name, Description, Argument) \ /** \returns an expression of the matrix Description of \c *this. \brief This function requires the unsupported MatrixFunctions module. To compute the coefficient-wise Description use ArrayBase::##Name . 
*/ \ const ReturnType Name(Argument) const; EIGEN_MATRIX_FUNCTION(MatrixExponentialReturnValue, exp, exponential) /** \brief Helper function for the unsupported MatrixFunctions module.*/ const MatrixFunctionReturnValue matrixFunction(StemFunction f) const; EIGEN_MATRIX_FUNCTION(MatrixFunctionReturnValue, cosh, hyperbolic cosine) EIGEN_MATRIX_FUNCTION(MatrixFunctionReturnValue, sinh, hyperbolic sine) #if EIGEN_HAS_CXX11_MATH EIGEN_MATRIX_FUNCTION(MatrixFunctionReturnValue, atanh, inverse hyperbolic cosine) EIGEN_MATRIX_FUNCTION(MatrixFunctionReturnValue, acosh, inverse hyperbolic cosine) EIGEN_MATRIX_FUNCTION(MatrixFunctionReturnValue, asinh, inverse hyperbolic sine) #endif EIGEN_MATRIX_FUNCTION(MatrixFunctionReturnValue, cos, cosine) EIGEN_MATRIX_FUNCTION(MatrixFunctionReturnValue, sin, sine) EIGEN_MATRIX_FUNCTION(MatrixSquareRootReturnValue, sqrt, square root) EIGEN_MATRIX_FUNCTION(MatrixLogarithmReturnValue, log, logarithm) EIGEN_MATRIX_FUNCTION_1(MatrixPowerReturnValue, pow, power to \c p, const RealScalar& p) EIGEN_MATRIX_FUNCTION_1(MatrixComplexPowerReturnValue, pow, power to \c p, const std::complex& p) protected: EIGEN_DEFAULT_COPY_CONSTRUCTOR(MatrixBase) EIGEN_DEFAULT_EMPTY_CONSTRUCTOR_AND_DESTRUCTOR(MatrixBase) private: EIGEN_DEVICE_FUNC explicit MatrixBase(int); EIGEN_DEVICE_FUNC MatrixBase(int,int); template EIGEN_DEVICE_FUNC explicit MatrixBase(const MatrixBase&); protected: // mixing arrays and matrices is not legal template Derived& operator+=(const ArrayBase& ) {EIGEN_STATIC_ASSERT(std::ptrdiff_t(sizeof(typename OtherDerived::Scalar))==-1,YOU_CANNOT_MIX_ARRAYS_AND_MATRICES); return *this;} // mixing arrays and matrices is not legal template Derived& operator-=(const ArrayBase& ) {EIGEN_STATIC_ASSERT(std::ptrdiff_t(sizeof(typename OtherDerived::Scalar))==-1,YOU_CANNOT_MIX_ARRAYS_AND_MATRICES); return *this;} }; /*************************************************************************** * Implementation of matrix base methods ***************************************************************************/ /** replaces \c *this by \c *this * \a other. * * \returns a reference to \c *this * * Example: \include MatrixBase_applyOnTheRight.cpp * Output: \verbinclude MatrixBase_applyOnTheRight.out */ template template inline Derived& MatrixBase::operator*=(const EigenBase &other) { other.derived().applyThisOnTheRight(derived()); return derived(); } /** replaces \c *this by \c *this * \a other. It is equivalent to MatrixBase::operator*=(). * * Example: \include MatrixBase_applyOnTheRight.cpp * Output: \verbinclude MatrixBase_applyOnTheRight.out */ template template inline void MatrixBase::applyOnTheRight(const EigenBase &other) { other.derived().applyThisOnTheRight(derived()); } /** replaces \c *this by \a other * \c *this. * * Example: \include MatrixBase_applyOnTheLeft.cpp * Output: \verbinclude MatrixBase_applyOnTheLeft.out */ template template inline void MatrixBase::applyOnTheLeft(const EigenBase &other) { other.derived().applyThisOnTheLeft(derived()); } } // end namespace Eigen #endif // EIGEN_MATRIXBASE_H piqp-octave/src/eigen/Eigen/src/Core/Reshaped.h0000644000175100001770000004121114107270226021143 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008-2017 Gael Guennebaud // Copyright (C) 2014 yoco // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
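// A short usage sketch for the Reshaped expression implemented below (illustrative only;
// the matrix m is a hypothetical example). DenseBase::reshaped() is the usual entry point:
//   Eigen::MatrixXd m(2, 6);
//   auto r = m.reshaped(3, 4);   // 3x4 view over the same 12 coefficients, column-major order
//   auto v = m.reshaped();       // the whole matrix flattened into a size-12 column vector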
#ifndef EIGEN_RESHAPED_H #define EIGEN_RESHAPED_H namespace Eigen { /** \class Reshaped * \ingroup Core_Module * * \brief Expression of a fixed-size or dynamic-size reshape * * \tparam XprType the type of the expression in which we are taking a reshape * \tparam Rows the number of rows of the reshape we are taking at compile time (optional) * \tparam Cols the number of columns of the reshape we are taking at compile time (optional) * \tparam Order can be ColMajor or RowMajor, default is ColMajor. * * This class represents an expression of either a fixed-size or dynamic-size reshape. * It is the return type of DenseBase::reshaped(NRowsType,NColsType) and * most of the time this is the only way it is used. * * However, in C++98, if you want to directly maniputate reshaped expressions, * for instance if you want to write a function returning such an expression, you * will need to use this class. In C++11, it is advised to use the \em auto * keyword for such use cases. * * Here is an example illustrating the dynamic case: * \include class_Reshaped.cpp * Output: \verbinclude class_Reshaped.out * * Here is an example illustrating the fixed-size case: * \include class_FixedReshaped.cpp * Output: \verbinclude class_FixedReshaped.out * * \sa DenseBase::reshaped(NRowsType,NColsType) */ namespace internal { template struct traits > : traits { typedef typename traits::Scalar Scalar; typedef typename traits::StorageKind StorageKind; typedef typename traits::XprKind XprKind; enum{ MatrixRows = traits::RowsAtCompileTime, MatrixCols = traits::ColsAtCompileTime, RowsAtCompileTime = Rows, ColsAtCompileTime = Cols, MaxRowsAtCompileTime = Rows, MaxColsAtCompileTime = Cols, XpxStorageOrder = ((int(traits::Flags) & RowMajorBit) == RowMajorBit) ? RowMajor : ColMajor, ReshapedStorageOrder = (RowsAtCompileTime == 1 && ColsAtCompileTime != 1) ? RowMajor : (ColsAtCompileTime == 1 && RowsAtCompileTime != 1) ? ColMajor : XpxStorageOrder, HasSameStorageOrderAsXprType = (ReshapedStorageOrder == XpxStorageOrder), InnerSize = (ReshapedStorageOrder==int(RowMajor)) ? int(ColsAtCompileTime) : int(RowsAtCompileTime), InnerStrideAtCompileTime = HasSameStorageOrderAsXprType ? int(inner_stride_at_compile_time::ret) : Dynamic, OuterStrideAtCompileTime = Dynamic, HasDirectAccess = internal::has_direct_access::ret && (Order==int(XpxStorageOrder)) && ((evaluator::Flags&LinearAccessBit)==LinearAccessBit), MaskPacketAccessBit = (InnerSize == Dynamic || (InnerSize % packet_traits::size) == 0) && (InnerStrideAtCompileTime == 1) ? PacketAccessBit : 0, //MaskAlignedBit = ((OuterStrideAtCompileTime!=Dynamic) && (((OuterStrideAtCompileTime * int(sizeof(Scalar))) % 16) == 0)) ? AlignedBit : 0, FlagsLinearAccessBit = (RowsAtCompileTime == 1 || ColsAtCompileTime == 1) ? LinearAccessBit : 0, FlagsLvalueBit = is_lvalue::value ? LvalueBit : 0, FlagsRowMajorBit = (ReshapedStorageOrder==int(RowMajor)) ? RowMajorBit : 0, FlagsDirectAccessBit = HasDirectAccess ? 
DirectAccessBit : 0, Flags0 = traits::Flags & ( (HereditaryBits & ~RowMajorBit) | MaskPacketAccessBit), Flags = (Flags0 | FlagsLinearAccessBit | FlagsLvalueBit | FlagsRowMajorBit | FlagsDirectAccessBit) }; }; template class ReshapedImpl_dense; } // end namespace internal template class ReshapedImpl; template class Reshaped : public ReshapedImpl::StorageKind> { typedef ReshapedImpl::StorageKind> Impl; public: //typedef typename Impl::Base Base; typedef Impl Base; EIGEN_GENERIC_PUBLIC_INTERFACE(Reshaped) EIGEN_INHERIT_ASSIGNMENT_OPERATORS(Reshaped) /** Fixed-size constructor */ EIGEN_DEVICE_FUNC inline Reshaped(XprType& xpr) : Impl(xpr) { EIGEN_STATIC_ASSERT(RowsAtCompileTime!=Dynamic && ColsAtCompileTime!=Dynamic,THIS_METHOD_IS_ONLY_FOR_FIXED_SIZE) eigen_assert(Rows * Cols == xpr.rows() * xpr.cols()); } /** Dynamic-size constructor */ EIGEN_DEVICE_FUNC inline Reshaped(XprType& xpr, Index reshapeRows, Index reshapeCols) : Impl(xpr, reshapeRows, reshapeCols) { eigen_assert((RowsAtCompileTime==Dynamic || RowsAtCompileTime==reshapeRows) && (ColsAtCompileTime==Dynamic || ColsAtCompileTime==reshapeCols)); eigen_assert(reshapeRows * reshapeCols == xpr.rows() * xpr.cols()); } }; // The generic default implementation for dense reshape simply forward to the internal::ReshapedImpl_dense // that must be specialized for direct and non-direct access... template class ReshapedImpl : public internal::ReshapedImpl_dense >::HasDirectAccess> { typedef internal::ReshapedImpl_dense >::HasDirectAccess> Impl; public: typedef Impl Base; EIGEN_INHERIT_ASSIGNMENT_OPERATORS(ReshapedImpl) EIGEN_DEVICE_FUNC inline ReshapedImpl(XprType& xpr) : Impl(xpr) {} EIGEN_DEVICE_FUNC inline ReshapedImpl(XprType& xpr, Index reshapeRows, Index reshapeCols) : Impl(xpr, reshapeRows, reshapeCols) {} }; namespace internal { /** \internal Internal implementation of dense Reshaped in the general case. */ template class ReshapedImpl_dense : public internal::dense_xpr_base >::type { typedef Reshaped ReshapedType; public: typedef typename internal::dense_xpr_base::type Base; EIGEN_DENSE_PUBLIC_INTERFACE(ReshapedType) EIGEN_INHERIT_ASSIGNMENT_OPERATORS(ReshapedImpl_dense) typedef typename internal::ref_selector::non_const_type MatrixTypeNested; typedef typename internal::remove_all::type NestedExpression; class InnerIterator; /** Fixed-size constructor */ EIGEN_DEVICE_FUNC inline ReshapedImpl_dense(XprType& xpr) : m_xpr(xpr), m_rows(Rows), m_cols(Cols) {} /** Dynamic-size constructor */ EIGEN_DEVICE_FUNC inline ReshapedImpl_dense(XprType& xpr, Index nRows, Index nCols) : m_xpr(xpr), m_rows(nRows), m_cols(nCols) {} EIGEN_DEVICE_FUNC Index rows() const { return m_rows; } EIGEN_DEVICE_FUNC Index cols() const { return m_cols; } #ifdef EIGEN_PARSED_BY_DOXYGEN /** \sa MapBase::data() */ EIGEN_DEVICE_FUNC inline const Scalar* data() const; EIGEN_DEVICE_FUNC inline Index innerStride() const; EIGEN_DEVICE_FUNC inline Index outerStride() const; #endif /** \returns the nested expression */ EIGEN_DEVICE_FUNC const typename internal::remove_all::type& nestedExpression() const { return m_xpr; } /** \returns the nested expression */ EIGEN_DEVICE_FUNC typename internal::remove_reference::type& nestedExpression() { return m_xpr; } protected: MatrixTypeNested m_xpr; const internal::variable_if_dynamic m_rows; const internal::variable_if_dynamic m_cols; }; /** \internal Internal implementation of dense Reshaped in the direct access case. 
*/ template class ReshapedImpl_dense : public MapBase > { typedef Reshaped ReshapedType; typedef typename internal::ref_selector::non_const_type XprTypeNested; public: typedef MapBase Base; EIGEN_DENSE_PUBLIC_INTERFACE(ReshapedType) EIGEN_INHERIT_ASSIGNMENT_OPERATORS(ReshapedImpl_dense) /** Fixed-size constructor */ EIGEN_DEVICE_FUNC inline ReshapedImpl_dense(XprType& xpr) : Base(xpr.data()), m_xpr(xpr) {} /** Dynamic-size constructor */ EIGEN_DEVICE_FUNC inline ReshapedImpl_dense(XprType& xpr, Index nRows, Index nCols) : Base(xpr.data(), nRows, nCols), m_xpr(xpr) {} EIGEN_DEVICE_FUNC const typename internal::remove_all::type& nestedExpression() const { return m_xpr; } EIGEN_DEVICE_FUNC XprType& nestedExpression() { return m_xpr; } /** \sa MapBase::innerStride() */ EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index innerStride() const { return m_xpr.innerStride(); } /** \sa MapBase::outerStride() */ EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index outerStride() const { return ((Flags&RowMajorBit)==RowMajorBit) ? this->cols() : this->rows(); } protected: XprTypeNested m_xpr; }; // Evaluators template struct reshaped_evaluator; template struct evaluator > : reshaped_evaluator >::HasDirectAccess> { typedef Reshaped XprType; typedef typename XprType::Scalar Scalar; // TODO: should check for smaller packet types typedef typename packet_traits::type PacketScalar; enum { CoeffReadCost = evaluator::CoeffReadCost, HasDirectAccess = traits::HasDirectAccess, // RowsAtCompileTime = traits::RowsAtCompileTime, // ColsAtCompileTime = traits::ColsAtCompileTime, // MaxRowsAtCompileTime = traits::MaxRowsAtCompileTime, // MaxColsAtCompileTime = traits::MaxColsAtCompileTime, // // InnerStrideAtCompileTime = traits::HasSameStorageOrderAsXprType // ? int(inner_stride_at_compile_time::ret) // : Dynamic, // OuterStrideAtCompileTime = Dynamic, FlagsLinearAccessBit = (traits::RowsAtCompileTime == 1 || traits::ColsAtCompileTime == 1 || HasDirectAccess) ? LinearAccessBit : 0, FlagsRowMajorBit = (traits::ReshapedStorageOrder==int(RowMajor)) ? RowMajorBit : 0, FlagsDirectAccessBit = HasDirectAccess ? 
DirectAccessBit : 0, Flags0 = evaluator::Flags & (HereditaryBits & ~RowMajorBit), Flags = Flags0 | FlagsLinearAccessBit | FlagsRowMajorBit | FlagsDirectAccessBit, PacketAlignment = unpacket_traits::alignment, Alignment = evaluator::Alignment }; typedef reshaped_evaluator reshaped_evaluator_type; EIGEN_DEVICE_FUNC explicit evaluator(const XprType& xpr) : reshaped_evaluator_type(xpr) { EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } }; template struct reshaped_evaluator : evaluator_base > { typedef Reshaped XprType; enum { CoeffReadCost = evaluator::CoeffReadCost /* TODO + cost of index computations */, Flags = (evaluator::Flags & (HereditaryBits /*| LinearAccessBit | DirectAccessBit*/)), Alignment = 0 }; EIGEN_DEVICE_FUNC explicit reshaped_evaluator(const XprType& xpr) : m_argImpl(xpr.nestedExpression()), m_xpr(xpr) { EIGEN_INTERNAL_CHECK_COST_VALUE(CoeffReadCost); } typedef typename XprType::Scalar Scalar; typedef typename XprType::CoeffReturnType CoeffReturnType; typedef std::pair RowCol; inline RowCol index_remap(Index rowId, Index colId) const { if(Order==ColMajor) { const Index nth_elem_idx = colId * m_xpr.rows() + rowId; return RowCol(nth_elem_idx % m_xpr.nestedExpression().rows(), nth_elem_idx / m_xpr.nestedExpression().rows()); } else { const Index nth_elem_idx = colId + rowId * m_xpr.cols(); return RowCol(nth_elem_idx / m_xpr.nestedExpression().cols(), nth_elem_idx % m_xpr.nestedExpression().cols()); } } EIGEN_DEVICE_FUNC inline Scalar& coeffRef(Index rowId, Index colId) { EIGEN_STATIC_ASSERT_LVALUE(XprType) const RowCol row_col = index_remap(rowId, colId); return m_argImpl.coeffRef(row_col.first, row_col.second); } EIGEN_DEVICE_FUNC inline const Scalar& coeffRef(Index rowId, Index colId) const { const RowCol row_col = index_remap(rowId, colId); return m_argImpl.coeffRef(row_col.first, row_col.second); } EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const CoeffReturnType coeff(Index rowId, Index colId) const { const RowCol row_col = index_remap(rowId, colId); return m_argImpl.coeff(row_col.first, row_col.second); } EIGEN_DEVICE_FUNC inline Scalar& coeffRef(Index index) { EIGEN_STATIC_ASSERT_LVALUE(XprType) const RowCol row_col = index_remap(Rows == 1 ? 0 : index, Rows == 1 ? index : 0); return m_argImpl.coeffRef(row_col.first, row_col.second); } EIGEN_DEVICE_FUNC inline const Scalar& coeffRef(Index index) const { const RowCol row_col = index_remap(Rows == 1 ? 0 : index, Rows == 1 ? index : 0); return m_argImpl.coeffRef(row_col.first, row_col.second); } EIGEN_DEVICE_FUNC inline const CoeffReturnType coeff(Index index) const { const RowCol row_col = index_remap(Rows == 1 ? 0 : index, Rows == 1 ? index : 0); return m_argImpl.coeff(row_col.first, row_col.second); } #if 0 EIGEN_DEVICE_FUNC template inline PacketScalar packet(Index rowId, Index colId) const { const RowCol row_col = index_remap(rowId, colId); return m_argImpl.template packet(row_col.first, row_col.second); } template EIGEN_DEVICE_FUNC inline void writePacket(Index rowId, Index colId, const PacketScalar& val) { const RowCol row_col = index_remap(rowId, colId); m_argImpl.const_cast_derived().template writePacket (row_col.first, row_col.second, val); } template EIGEN_DEVICE_FUNC inline PacketScalar packet(Index index) const { const RowCol row_col = index_remap(RowsAtCompileTime == 1 ? 0 : index, RowsAtCompileTime == 1 ? 
index : 0); return m_argImpl.template packet(row_col.first, row_col.second); } template EIGEN_DEVICE_FUNC inline void writePacket(Index index, const PacketScalar& val) { const RowCol row_col = index_remap(RowsAtCompileTime == 1 ? 0 : index, RowsAtCompileTime == 1 ? index : 0); return m_argImpl.template packet(row_col.first, row_col.second, val); } #endif protected: evaluator m_argImpl; const XprType& m_xpr; }; template struct reshaped_evaluator : mapbase_evaluator, typename Reshaped::PlainObject> { typedef Reshaped XprType; typedef typename XprType::Scalar Scalar; EIGEN_DEVICE_FUNC explicit reshaped_evaluator(const XprType& xpr) : mapbase_evaluator(xpr) { // TODO: for the 3.4 release, this should be turned to an internal assertion, but let's keep it as is for the beta lifetime eigen_assert(((internal::UIntPtr(xpr.data()) % EIGEN_PLAIN_ENUM_MAX(1,evaluator::Alignment)) == 0) && "data is not aligned"); } }; } // end namespace internal } // end namespace Eigen #endif // EIGEN_RESHAPED_H piqp-octave/src/eigen/Eigen/src/Core/NestByValue.h0000644000175100001770000000473014107270226021616 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2008 Gael Guennebaud // Copyright (C) 2006-2008 Benoit Jacob // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_NESTBYVALUE_H #define EIGEN_NESTBYVALUE_H namespace Eigen { namespace internal { template struct traits > : public traits { enum { Flags = traits::Flags & ~NestByRefBit }; }; } /** \class NestByValue * \ingroup Core_Module * * \brief Expression which must be nested by value * * \tparam ExpressionType the type of the object of which we are requiring nesting-by-value * * This class is the return type of MatrixBase::nestByValue() * and most of the time this is the only way it is used. * * \sa MatrixBase::nestByValue() */ template class NestByValue : public internal::dense_xpr_base< NestByValue >::type { public: typedef typename internal::dense_xpr_base::type Base; EIGEN_DENSE_PUBLIC_INTERFACE(NestByValue) EIGEN_DEVICE_FUNC explicit inline NestByValue(const ExpressionType& matrix) : m_expression(matrix) {} EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index rows() const EIGEN_NOEXCEPT { return m_expression.rows(); } EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR inline Index cols() const EIGEN_NOEXCEPT { return m_expression.cols(); } EIGEN_DEVICE_FUNC operator const ExpressionType&() const { return m_expression; } EIGEN_DEVICE_FUNC const ExpressionType& nestedExpression() const { return m_expression; } protected: const ExpressionType m_expression; }; /** \returns an expression of the temporary version of *this. */ template EIGEN_DEVICE_FUNC inline const NestByValue DenseBase::nestByValue() const { return NestByValue(derived()); } namespace internal { // Evaluator of Solve -> eval into a temporary template struct evaluator > : public evaluator { typedef evaluator Base; EIGEN_DEVICE_FUNC explicit evaluator(const NestByValue& xpr) : Base(xpr.nestedExpression()) {} }; } } // end namespace Eigen #endif // EIGEN_NESTBYVALUE_H piqp-octave/src/eigen/Eigen/src/Core/Select.h0000644000175100001770000001377714107270226020647 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. 
// // Copyright (C) 2008-2010 Gael Guennebaud // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #ifndef EIGEN_SELECT_H #define EIGEN_SELECT_H namespace Eigen { /** \class Select * \ingroup Core_Module * * \brief Expression of a coefficient wise version of the C++ ternary operator ?: * * \param ConditionMatrixType the type of the \em condition expression which must be a boolean matrix * \param ThenMatrixType the type of the \em then expression * \param ElseMatrixType the type of the \em else expression * * This class represents an expression of a coefficient wise version of the C++ ternary operator ?:. * It is the return type of DenseBase::select() and most of the time this is the only way it is used. * * \sa DenseBase::select(const DenseBase&, const DenseBase&) const */ namespace internal { template struct traits > : traits { typedef typename traits::Scalar Scalar; typedef Dense StorageKind; typedef typename traits::XprKind XprKind; typedef typename ConditionMatrixType::Nested ConditionMatrixNested; typedef typename ThenMatrixType::Nested ThenMatrixNested; typedef typename ElseMatrixType::Nested ElseMatrixNested; enum { RowsAtCompileTime = ConditionMatrixType::RowsAtCompileTime, ColsAtCompileTime = ConditionMatrixType::ColsAtCompileTime, MaxRowsAtCompileTime = ConditionMatrixType::MaxRowsAtCompileTime, MaxColsAtCompileTime = ConditionMatrixType::MaxColsAtCompileTime, Flags = (unsigned int)ThenMatrixType::Flags & ElseMatrixType::Flags & RowMajorBit }; }; } template class Select : public internal::dense_xpr_base< Select >::type, internal::no_assignment_operator { public: typedef typename internal::dense_xpr_base" << endl; cerr << "available actions:" << endl; for (auto it = available_actions.begin(); it != available_actions.end(); ++it) { cerr << " " << (*it)->invokation_name() << endl; } cerr << "the input files should each contain an output of benchmark-blocking-sizes" << endl; exit(1); } int main(int argc, char* argv[]) { cout.precision(default_precision); cerr.precision(default_precision); vector> available_actions; available_actions.emplace_back(new partition_action_t); available_actions.emplace_back(new evaluate_defaults_action_t); vector input_filenames; action_t* action = nullptr; if (argc < 2) { show_usage_and_exit(argc, argv, available_actions); } for (int i = 1; i < argc; i++) { bool arg_handled = false; // Step 1. Try to match action invocation names. for (auto it = available_actions.begin(); it != available_actions.end(); ++it) { if (!strcmp(argv[i], (*it)->invokation_name())) { if (!action) { action = it->get(); arg_handled = true; break; } else { cerr << "can't specify more than one action!" << endl; show_usage_and_exit(argc, argv, available_actions); } } } if (arg_handled) { continue; } // Step 2. Try to match option names. if (argv[i][0] == '-') { if (!strcmp(argv[i], "--only-cubic-sizes")) { only_cubic_sizes = true; arg_handled = true; } if (!strcmp(argv[i], "--dump-tables")) { dump_tables = true; arg_handled = true; } if (!arg_handled) { cerr << "Unrecognized option: " << argv[i] << endl; show_usage_and_exit(argc, argv, available_actions); } } if (arg_handled) { continue; } // Step 3. Default to interpreting args as input filenames. input_filenames.emplace_back(argv[i]); } if (dump_tables && only_cubic_sizes) { cerr << "Incompatible options: --only-cubic-sizes and --dump-tables." 
<< endl; show_usage_and_exit(argc, argv, available_actions); } if (!action) { show_usage_and_exit(argc, argv, available_actions); } action->run(input_filenames); } piqp-octave/src/eigen/bench/benchBlasGemm.cpp0000644000175100001770000001425114107270226021007 0ustar runnerdocker// g++ -O3 -DNDEBUG -I.. -L /usr/lib64/atlas/ benchBlasGemm.cpp -o benchBlasGemm -lrt -lcblas // possible options: // -DEIGEN_DONT_VECTORIZE // -msse2 // #define EIGEN_DEFAULT_TO_ROW_MAJOR #define _FLOAT #include #include #include "BenchTimer.h" // include the BLAS headers extern "C" { #include } #include #ifdef _FLOAT typedef float Scalar; #define CBLAS_GEMM cblas_sgemm #else typedef double Scalar; #define CBLAS_GEMM cblas_dgemm #endif typedef Eigen::Matrix MyMatrix; void bench_eigengemm(MyMatrix& mc, const MyMatrix& ma, const MyMatrix& mb, int nbloops); void check_product(int M, int N, int K); void check_product(void); int main(int argc, char *argv[]) { // disable SSE exceptions #ifdef __GNUC__ { int aux; asm( "stmxcsr %[aux] \n\t" "orl $32832, %[aux] \n\t" "ldmxcsr %[aux] \n\t" : : [aux] "m" (aux)); } #endif int nbtries=1, nbloops=1, M, N, K; if (argc==2) { if (std::string(argv[1])=="check") check_product(); else M = N = K = atoi(argv[1]); } else if ((argc==3) && (std::string(argv[1])=="auto")) { M = N = K = atoi(argv[2]); nbloops = 1000000000/(M*M*M); if (nbloops<1) nbloops = 1; nbtries = 6; } else if (argc==4) { M = N = K = atoi(argv[1]); nbloops = atoi(argv[2]); nbtries = atoi(argv[3]); } else if (argc==6) { M = atoi(argv[1]); N = atoi(argv[2]); K = atoi(argv[3]); nbloops = atoi(argv[4]); nbtries = atoi(argv[5]); } else { std::cout << "Usage: " << argv[0] << " size \n"; std::cout << "Usage: " << argv[0] << " auto size\n"; std::cout << "Usage: " << argv[0] << " size nbloops nbtries\n"; std::cout << "Usage: " << argv[0] << " M N K nbloops nbtries\n"; std::cout << "Usage: " << argv[0] << " check\n"; std::cout << "Options:\n"; std::cout << " size unique size of the 2 matrices (integer)\n"; std::cout << " auto automatically set the number of repetitions and tries\n"; std::cout << " nbloops number of times the GEMM routines is executed\n"; std::cout << " nbtries number of times the loop is benched (return the best try)\n"; std::cout << " M N K sizes of the matrices: MxN = MxK * KxN (integers)\n"; std::cout << " check check eigen product using cblas as a reference\n"; exit(1); } double nbmad = double(M) * double(N) * double(K) * double(nbloops); if (!(std::string(argv[1])=="auto")) std::cout << M << " x " << N << " x " << K << "\n"; Scalar alpha, beta; MyMatrix ma(M,K), mb(K,N), mc(M,N); ma = MyMatrix::Random(M,K); mb = MyMatrix::Random(K,N); mc = MyMatrix::Random(M,N); Eigen::BenchTimer timer; // we simply compute c += a*b, so: alpha = 1; beta = 1; // bench cblas // ROWS_A, COLS_B, COLS_A, 1.0, A, COLS_A, B, COLS_B, 0.0, C, COLS_B); if (!(std::string(argv[1])=="auto")) { timer.reset(); for (uint k=0 ; k(1,64); N = internal::random(1,768); K = internal::random(1,768); M = (0 + M) * 1; std::cout << M << " x " << N << " x " << K << "\n"; check_product(M, N, K); } } piqp-octave/src/eigen/bench/benchFFT.cpp0000644000175100001770000000536614107270226017746 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2009 Mark Borgerding mark a borgerding net // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. 
If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #include #include #include #include #include #include using namespace Eigen; using namespace std; template string nameof(); template <> string nameof() {return "float";} template <> string nameof() {return "double";} template <> string nameof() {return "long double";} #ifndef TYPE #define TYPE float #endif #ifndef NFFT #define NFFT 1024 #endif #ifndef NDATA #define NDATA 1000000 #endif using namespace Eigen; template void bench(int nfft,bool fwd,bool unscaled=false, bool halfspec=false) { typedef typename NumTraits::Real Scalar; typedef typename std::complex Complex; int nits = NDATA/nfft; vector inbuf(nfft); vector outbuf(nfft); FFT< Scalar > fft; if (unscaled) { fft.SetFlag(fft.Unscaled); cout << "unscaled "; } if (halfspec) { fft.SetFlag(fft.HalfSpectrum); cout << "halfspec "; } std::fill(inbuf.begin(),inbuf.end(),0); fft.fwd( outbuf , inbuf); BenchTimer timer; timer.reset(); for (int k=0;k<8;++k) { timer.start(); if (fwd) for(int i = 0; i < nits; i++) fft.fwd( outbuf , inbuf); else for(int i = 0; i < nits; i++) fft.inv(inbuf,outbuf); timer.stop(); } cout << nameof() << " "; double mflops = 5.*nfft*log2((double)nfft) / (1e6 * timer.value() / (double)nits ); if ( NumTraits::IsComplex ) { cout << "complex"; }else{ cout << "real "; mflops /= 2; } if (fwd) cout << " fwd"; else cout << " inv"; cout << " NFFT=" << nfft << " " << (double(1e-6*nfft*nits)/timer.value()) << " MS/s " << mflops << "MFLOPS\n"; } int main(int argc,char ** argv) { bench >(NFFT,true); bench >(NFFT,false); bench(NFFT,true); bench(NFFT,false); bench(NFFT,false,true); bench(NFFT,false,true,true); bench >(NFFT,true); bench >(NFFT,false); bench(NFFT,true); bench(NFFT,false); bench >(NFFT,true); bench >(NFFT,false); bench(NFFT,true); bench(NFFT,false); return 0; } piqp-octave/src/eigen/bench/benchCholesky.cpp0000644000175100001770000000673414107270226021110 0ustar runnerdocker// g++ -DNDEBUG -O3 -I.. 
benchCholesky.cpp -o benchCholesky && ./benchCholesky // options: // -DBENCH_GSL -lgsl /usr/lib/libcblas.so.3 // -DEIGEN_DONT_VECTORIZE // -msse2 // -DREPEAT=100 // -DTRIES=10 // -DSCALAR=double #include #include #include #include using namespace Eigen; #ifndef REPEAT #define REPEAT 10000 #endif #ifndef TRIES #define TRIES 10 #endif typedef float Scalar; template __attribute__ ((noinline)) void benchLLT(const MatrixType& m) { int rows = m.rows(); int cols = m.cols(); double cost = 0; for (int j=0; j SquareMatrixType; MatrixType a = MatrixType::Random(rows,cols); SquareMatrixType covMat = a * a.adjoint(); BenchTimer timerNoSqrt, timerSqrt; Scalar acc = 0; int r = internal::random(0,covMat.rows()-1); int c = internal::random(0,covMat.cols()-1); for (int t=0; t cholnosqrt(covMat); acc += cholnosqrt.matrixL().coeff(r,c); } timerNoSqrt.stop(); } for (int t=0; t chol(covMat); acc += chol.matrixL().coeff(r,c); } timerSqrt.stop(); } if (MatrixType::RowsAtCompileTime==Dynamic) std::cout << "dyn "; else std::cout << "fixed "; std::cout << covMat.rows() << " \t" << (timerNoSqrt.best()) / repeats << "s " << "(" << 1e-9 * cost*repeats/timerNoSqrt.best() << " GFLOPS)\t" << (timerSqrt.best()) / repeats << "s " << "(" << 1e-9 * cost*repeats/timerSqrt.best() << " GFLOPS)\n"; #ifdef BENCH_GSL if (MatrixType::RowsAtCompileTime==Dynamic) { timerSqrt.reset(); gsl_matrix* gslCovMat = gsl_matrix_alloc(covMat.rows(),covMat.cols()); gsl_matrix* gslCopy = gsl_matrix_alloc(covMat.rows(),covMat.cols()); eiToGsl(covMat, &gslCovMat); for (int t=0; t0; ++i) benchLLT(Matrix(dynsizes[i],dynsizes[i])); benchLLT(Matrix()); benchLLT(Matrix()); benchLLT(Matrix()); benchLLT(Matrix()); benchLLT(Matrix()); benchLLT(Matrix()); benchLLT(Matrix()); benchLLT(Matrix()); benchLLT(Matrix()); return 0; } piqp-octave/src/eigen/bench/quatmul.cpp0000644000175100001770000000211114107270226020000 0ustar runnerdocker#include #include #include #include using namespace Eigen; template EIGEN_DONT_INLINE void quatmul_default(const Quat& a, const Quat& b, Quat& c) { c = a * b; } template EIGEN_DONT_INLINE void quatmul_novec(const Quat& a, const Quat& b, Quat& c) { c = internal::quat_product<0, Quat, Quat, typename Quat::Scalar, Aligned>::run(a,b); } template void bench(const std::string& label) { int tries = 10; int rep = 1000000; BenchTimer t; Quat a(4, 1, 2, 3); Quat b(2, 3, 4, 5); Quat c; std::cout.precision(3); BENCH(t, tries, rep, quatmul_default(a,b,c)); std::cout << label << " default " << 1e3*t.best(CPU_TIMER) << "ms \t" << 1e-6*double(rep)/(t.best(CPU_TIMER)) << " M mul/s\n"; BENCH(t, tries, rep, quatmul_novec(a,b,c)); std::cout << label << " novec " << 1e3*t.best(CPU_TIMER) << "ms \t" << 1e-6*double(rep)/(t.best(CPU_TIMER)) << " M mul/s\n"; } int main() { bench("float "); bench("double"); return 0; } piqp-octave/src/eigen/bench/benchmark_suite0000755000175100001770000000227114107270226020704 0ustar runnerdocker#!/bin/bash CXX=${CXX-g++} # default value unless caller has defined CXX echo "Fixed size 3x3, column-major, -DNDEBUG" $CXX -O3 -I .. -DNDEBUG benchmark.cpp -o benchmark && time ./benchmark >/dev/null echo "Fixed size 3x3, column-major, with asserts" $CXX -O3 -I .. benchmark.cpp -o benchmark && time ./benchmark >/dev/null echo "Fixed size 3x3, row-major, -DNDEBUG" $CXX -O3 -I .. -DEIGEN_DEFAULT_TO_ROW_MAJOR -DNDEBUG benchmark.cpp -o benchmark && time ./benchmark >/dev/null echo "Fixed size 3x3, row-major, with asserts" $CXX -O3 -I .. 
-DEIGEN_DEFAULT_TO_ROW_MAJOR benchmark.cpp -o benchmark && time ./benchmark >/dev/null echo "Dynamic size 20x20, column-major, -DNDEBUG" $CXX -O3 -I .. -DNDEBUG benchmarkX.cpp -o benchmarkX && time ./benchmarkX >/dev/null echo "Dynamic size 20x20, column-major, with asserts" $CXX -O3 -I .. benchmarkX.cpp -o benchmarkX && time ./benchmarkX >/dev/null echo "Dynamic size 20x20, row-major, -DNDEBUG" $CXX -O3 -I .. -DEIGEN_DEFAULT_TO_ROW_MAJOR -DNDEBUG benchmarkX.cpp -o benchmarkX && time ./benchmarkX >/dev/null echo "Dynamic size 20x20, row-major, with asserts" $CXX -O3 -I .. -DEIGEN_DEFAULT_TO_ROW_MAJOR benchmarkX.cpp -o benchmarkX && time ./benchmarkX >/dev/null piqp-octave/src/eigen/bench/bench_reverse.cpp0000644000175100001770000000415714107270226021136 0ustar runnerdocker #include #include #include using namespace Eigen; #ifndef REPEAT #define REPEAT 100000 #endif #ifndef TRIES #define TRIES 20 #endif typedef double Scalar; template __attribute__ ((noinline)) void bench_reverse(const MatrixType& m) { int rows = m.rows(); int cols = m.cols(); int size = m.size(); int repeats = (REPEAT*1000)/size; MatrixType a = MatrixType::Random(rows,cols); MatrixType b = MatrixType::Random(rows,cols); BenchTimer timerB, timerH, timerV; Scalar acc = 0; int r = internal::random(0,rows-1); int c = internal::random(0,cols-1); for (int t=0; t0; ++i) { bench_reverse(Matrix(dynsizes[i],dynsizes[i])); bench_reverse(Matrix(dynsizes[i]*dynsizes[i])); } // bench_reverse(Matrix()); // bench_reverse(Matrix()); // bench_reverse(Matrix()); // bench_reverse(Matrix()); // bench_reverse(Matrix()); // bench_reverse(Matrix()); // bench_reverse(Matrix()); // bench_reverse(Matrix()); // bench_reverse(Matrix()); return 0; } piqp-octave/src/eigen/bench/product_threshold.cpp0000644000175100001770000000624014107270226022053 0ustar runnerdocker #include #include #include using namespace Eigen; using namespace std; #define END 9 template struct map_size { enum { ret = S }; }; template<> struct map_size<10> { enum { ret = 20 }; }; template<> struct map_size<11> { enum { ret = 50 }; }; template<> struct map_size<12> { enum { ret = 100 }; }; template<> struct map_size<13> { enum { ret = 300 }; }; template struct alt_prod { enum { ret = M==1 && N==1 ? InnerProduct : K==1 ? OuterProduct : M==1 ? GemvProduct : N==1 ? 
GemvProduct : GemmProduct }; }; void print_mode(int mode) { if(mode==InnerProduct) std::cout << "i"; if(mode==OuterProduct) std::cout << "o"; if(mode==CoeffBasedProductMode) std::cout << "c"; if(mode==LazyCoeffBasedProductMode) std::cout << "l"; if(mode==GemvProduct) std::cout << "v"; if(mode==GemmProduct) std::cout << "m"; } template EIGEN_DONT_INLINE void prod(const Lhs& a, const Rhs& b, Res& c) { c.noalias() += typename ProductReturnType::Type(a,b); } template EIGEN_DONT_INLINE void bench_prod() { typedef Matrix Lhs; Lhs a; a.setRandom(); typedef Matrix Rhs; Rhs b; b.setRandom(); typedef Matrix Res; Res c; c.setRandom(); BenchTimer t; double n = 2.*double(M)*double(N)*double(K); int rep = 100000./n; rep /= 2; if(rep<1) rep = 1; do { rep *= 2; t.reset(); BENCH(t,1,rep,prod(a,b,c)); } while(t.best()<0.1); t.reset(); BENCH(t,5,rep,prod(a,b,c)); print_mode(Mode); std::cout << int(1e-6*n*rep/t.best()) << "\t"; } template struct print_n; template struct loop_on_m; template struct loop_on_n; template struct loop_on_k { static void run() { std::cout << "K=" << K << "\t"; print_n::run(); std::cout << "\n"; loop_on_m::run(); std::cout << "\n\n"; loop_on_k::run(); } }; template struct loop_on_k { static void run(){} }; template struct loop_on_m { static void run() { std::cout << M << "f\t"; loop_on_n::run(); std::cout << "\n"; std::cout << M << "f\t"; loop_on_n::run(); std::cout << "\n"; loop_on_m::run(); } }; template struct loop_on_m { static void run(){} }; template struct loop_on_n { static void run() { bench_prod::ret : Mode>(); loop_on_n::run(); } }; template struct loop_on_n { static void run(){} }; template struct print_n { static void run() { std::cout << map_size::ret << "\t"; print_n::run(); } }; template<> struct print_n { static void run(){} }; int main() { loop_on_k<1,1,1>::run(); return 0; } piqp-octave/src/eigen/bench/geometry.cpp0000644000175100001770000000635314107270226020157 0ustar runnerdocker #include #include #include using namespace std; using namespace Eigen; #ifndef SCALAR #define SCALAR float #endif #ifndef SIZE #define SIZE 8 #endif typedef SCALAR Scalar; typedef NumTraits::Real RealScalar; typedef Matrix A; typedef Matrix B; typedef Matrix C; typedef Matrix M; template EIGEN_DONT_INLINE void transform(const Transformation& t, Data& data) { EIGEN_ASM_COMMENT("begin"); data = t * data; EIGEN_ASM_COMMENT("end"); } template EIGEN_DONT_INLINE void transform(const Quaternion& t, Data& data) { EIGEN_ASM_COMMENT("begin quat"); for(int i=0;i struct ToRotationMatrixWrapper { enum {Dim = T::Dim}; typedef typename T::Scalar Scalar; ToRotationMatrixWrapper(const T& o) : object(o) {} T object; }; template EIGEN_DONT_INLINE void transform(const ToRotationMatrixWrapper& t, Data& data) { EIGEN_ASM_COMMENT("begin quat via mat"); data = t.object.toRotationMatrix() * data; EIGEN_ASM_COMMENT("end quat via mat"); } template EIGEN_DONT_INLINE void transform(const Transform& t, Data& data) { data = (t * data.colwise().homogeneous()).template block(0,0); } template struct get_dim { enum { Dim = T::Dim }; }; template struct get_dim > { enum { Dim = R }; }; template struct bench_impl { static EIGEN_DONT_INLINE void run(const Transformation& t) { Matrix::Dim,N> data; data.setRandom(); bench_impl::run(t); BenchTimer timer; BENCH(timer,10,100000,transform(t,data)); cout.width(9); cout << timer.best() << " "; } }; template struct bench_impl { static EIGEN_DONT_INLINE void run(const Transformation&) {} }; template EIGEN_DONT_INLINE void bench(const std::string& msg, const Transformation& t) { cout << 
msg << " "; bench_impl::run(t); std::cout << "\n"; } int main(int argc, char ** argv) { Matrix mat34; mat34.setRandom(); Transform iso3(mat34); Transform aff3(mat34); Transform caff3(mat34); Transform proj3(mat34); Quaternion quat;quat.setIdentity(); ToRotationMatrixWrapper > quatmat(quat); Matrix mat33; mat33.setRandom(); cout.precision(4); std::cout << "N "; for(int i=0;i #include "BenchTimer.h" using namespace std; using namespace Eigen; #include #include #include #include #include #include #include #include #include template void initMatrix_random(MatrixType& mat) __attribute__((noinline)); template void initMatrix_random(MatrixType& mat) { mat.setRandom();// = MatrixType::random(mat.rows(), mat.cols()); } template void initMatrix_identity(MatrixType& mat) __attribute__((noinline)); template void initMatrix_identity(MatrixType& mat) { mat.setIdentity(); } #ifndef __INTEL_COMPILER #define DISABLE_SSE_EXCEPTIONS() { \ int aux; \ asm( \ "stmxcsr %[aux] \n\t" \ "orl $32832, %[aux] \n\t" \ "ldmxcsr %[aux] \n\t" \ : : [aux] "m" (aux)); \ } #else #define DISABLE_SSE_EXCEPTIONS() #endif #ifdef BENCH_GMM #include template void eiToGmm(const EigenMatrixType& src, GmmMatrixType& dst) { dst.resize(src.rows(),src.cols()); for (int j=0; j #include #include template void eiToGsl(const EigenMatrixType& src, gsl_matrix** dst) { for (int j=0; j #include template void eiToUblas(const EigenMatrixType& src, UblasMatrixType& dst) { dst.resize(src.rows(),src.cols()); for (int j=0; j void eiToUblasVec(const EigenType& src, UblasType& dst) { dst.resize(src.size()); for (int j=0; j #include "BenchTimer.h" #include #include #include #include #include using namespace Eigen; std::map > results; std::vector labels; std::vector sizes; template EIGEN_DONT_INLINE void compute_norm_equation(Solver &solver, const MatrixType &A) { if(A.rows()!=A.cols()) solver.compute(A.transpose()*A); else solver.compute(A); } template EIGEN_DONT_INLINE void compute(Solver &solver, const MatrixType &A) { solver.compute(A); } template void bench(int id, int rows, int size = Size) { typedef Matrix Mat; typedef Matrix MatDyn; typedef Matrix MatSquare; Mat A(rows,size); A.setRandom(); if(rows==size) A = A*A.adjoint(); BenchTimer t_llt, t_ldlt, t_lu, t_fplu, t_qr, t_cpqr, t_cod, t_fpqr, t_jsvd, t_bdcsvd; int svd_opt = ComputeThinU|ComputeThinV; int tries = 5; int rep = 1000/size; if(rep==0) rep = 1; // rep = rep*rep; LLT llt(size); LDLT ldlt(size); PartialPivLU lu(size); FullPivLU fplu(size,size); HouseholderQR qr(A.rows(),A.cols()); ColPivHouseholderQR cpqr(A.rows(),A.cols()); CompleteOrthogonalDecomposition cod(A.rows(),A.cols()); FullPivHouseholderQR fpqr(A.rows(),A.cols()); JacobiSVD jsvd(A.rows(),A.cols()); BDCSVD bdcsvd(A.rows(),A.cols()); BENCH(t_llt, tries, rep, compute_norm_equation(llt,A)); BENCH(t_ldlt, tries, rep, compute_norm_equation(ldlt,A)); BENCH(t_lu, tries, rep, compute_norm_equation(lu,A)); if(size<=1000) BENCH(t_fplu, tries, rep, compute_norm_equation(fplu,A)); BENCH(t_qr, tries, rep, compute(qr,A)); BENCH(t_cpqr, tries, rep, compute(cpqr,A)); BENCH(t_cod, tries, rep, compute(cod,A)); if(size*rows<=10000000) BENCH(t_fpqr, tries, rep, compute(fpqr,A)); if(size<500) // JacobiSVD is really too slow for too large matrices BENCH(t_jsvd, tries, rep, jsvd.compute(A,svd_opt)); // if(size*rows<=20000000) BENCH(t_bdcsvd, tries, rep, bdcsvd.compute(A,svd_opt)); results["LLT"][id] = t_llt.best(); results["LDLT"][id] = t_ldlt.best(); results["PartialPivLU"][id] = t_lu.best(); results["FullPivLU"][id] = t_fplu.best(); 
results["HouseholderQR"][id] = t_qr.best(); results["ColPivHouseholderQR"][id] = t_cpqr.best(); results["CompleteOrthogonalDecomposition"][id] = t_cod.best(); results["FullPivHouseholderQR"][id] = t_fpqr.best(); results["JacobiSVD"][id] = t_jsvd.best(); results["BDCSVD"][id] = t_bdcsvd.best(); } int main() { labels.push_back("LLT"); labels.push_back("LDLT"); labels.push_back("PartialPivLU"); labels.push_back("FullPivLU"); labels.push_back("HouseholderQR"); labels.push_back("ColPivHouseholderQR"); labels.push_back("CompleteOrthogonalDecomposition"); labels.push_back("FullPivHouseholderQR"); labels.push_back("JacobiSVD"); labels.push_back("BDCSVD"); for(int i=0; i(k,sizes[k](0),sizes[k](1)); } cout.width(32); cout << "solver/size"; cout << " "; for(int k=0; k=1e6) cout << "-"; else cout << r(k); cout << " "; } cout << endl; } // HTML output cout << "" << endl; cout << "" << endl; for(int k=0; k" << sizes[k](0) << "x" << sizes[k](1) << ""; cout << "" << endl; for(int i=0; i"; ArrayXf r = (results[labels[i]]*100000.f).floor()/100.f; for(int k=0; k=1e6) cout << ""; else { cout << ""; } } cout << "" << endl; } cout << "
    solver/size
    " << labels[i] << "-" << r(k); if(i>0) cout << " (x" << numext::round(10.f*results[labels[i]](k)/results["LLT"](k))/10.f << ")"; if(i<4 && sizes[k](0)!=sizes[k](1)) cout << " *"; cout << "
    " << endl; // cout << "LLT (ms) " << (results["LLT"]*1000.).format(fmt) << "\n"; // cout << "LDLT (%) " << (results["LDLT"]/results["LLT"]).format(fmt) << "\n"; // cout << "PartialPivLU (%) " << (results["PartialPivLU"]/results["LLT"]).format(fmt) << "\n"; // cout << "FullPivLU (%) " << (results["FullPivLU"]/results["LLT"]).format(fmt) << "\n"; // cout << "HouseholderQR (%) " << (results["HouseholderQR"]/results["LLT"]).format(fmt) << "\n"; // cout << "ColPivHouseholderQR (%) " << (results["ColPivHouseholderQR"]/results["LLT"]).format(fmt) << "\n"; // cout << "CompleteOrthogonalDecomposition (%) " << (results["CompleteOrthogonalDecomposition"]/results["LLT"]).format(fmt) << "\n"; // cout << "FullPivHouseholderQR (%) " << (results["FullPivHouseholderQR"]/results["LLT"]).format(fmt) << "\n"; // cout << "JacobiSVD (%) " << (results["JacobiSVD"]/results["LLT"]).format(fmt) << "\n"; // cout << "BDCSVD (%) " << (results["BDCSVD"]/results["LLT"]).format(fmt) << "\n"; } piqp-octave/src/eigen/bench/sparse_dense_product.cpp0000644000175100001770000001175514107270226022541 0ustar runnerdocker //g++ -O3 -g0 -DNDEBUG sparse_product.cpp -I.. -I/home/gael/Coding/LinearAlgebra/mtl4/ -DDENSITY=0.005 -DSIZE=10000 && ./a.out //g++ -O3 -g0 -DNDEBUG sparse_product.cpp -I.. -I/home/gael/Coding/LinearAlgebra/mtl4/ -DDENSITY=0.05 -DSIZE=2000 && ./a.out // -DNOGMM -DNOMTL -DCSPARSE // -I /home/gael/Coding/LinearAlgebra/CSparse/Include/ /home/gael/Coding/LinearAlgebra/CSparse/Lib/libcsparse.a #ifndef SIZE #define SIZE 650000 #endif #ifndef DENSITY #define DENSITY 0.01 #endif #ifndef REPEAT #define REPEAT 1 #endif #include "BenchSparseUtil.h" #ifndef MINDENSITY #define MINDENSITY 0.0004 #endif #ifndef NBTRIES #define NBTRIES 10 #endif #define BENCH(X) \ timer.reset(); \ for (int _j=0; _j=MINDENSITY; density*=0.5) { //fillMatrix(density, rows, cols, sm1); fillMatrix2(7, rows, cols, sm1); // dense matrices #ifdef DENSEMATRIX { std::cout << "Eigen Dense\t" << density*100 << "%\n"; DenseMatrix m1(rows,cols); eiToDense(sm1, m1); timer.reset(); timer.start(); for (int k=0; k m1(sm1); // std::cout << "Eigen dyn-sparse\t" << m1.nonZeros()/float(m1.rows()*m1.cols())*100 << "%\n"; // // BENCH(for (int k=0; k gmmV1(cols), gmmV2(cols); Map >(&gmmV1[0], cols) = v1; Map >(&gmmV2[0], cols) = v2; BENCH( asm("#myx"); gmm::mult(m1, gmmV1, gmmV2); asm("#myy"); ) std::cout << " a * v:\t" << timer.value() << endl; BENCH( gmm::mult(gmm::transposed(m1), gmmV1, gmmV2); ) std::cout << " a' * v:\t" << timer.value() << endl; } #endif #ifndef NOUBLAS { std::cout << "ublas sparse\t" << density*100 << "%\n"; UBlasSparse m1(rows,cols); eiToUblas(sm1, m1); boost::numeric::ublas::vector uv1, uv2; eiToUblasVec(v1,uv1); eiToUblasVec(v2,uv2); // std::vector gmmV1(cols), gmmV2(cols); // Map >(&gmmV1[0], cols) = v1; // Map >(&gmmV2[0], cols) = v2; BENCH( uv2 = boost::numeric::ublas::prod(m1, uv1); ) std::cout << " a * v:\t" << timer.value() << endl; // BENCH( boost::ublas::prod(gmm::transposed(m1), gmmV1, gmmV2); ) // std::cout << " a' * v:\t" << timer.value() << endl; } #endif // MTL4 #ifndef NOMTL { std::cout << "MTL4\t" << density*100 << "%\n"; MtlSparse m1(rows,cols); eiToMtl(sm1, m1); mtl::dense_vector mtlV1(cols, 1.0); mtl::dense_vector mtlV2(cols, 1.0); timer.reset(); timer.start(); for (int k=0; k void bench_printhelp() { cout<< " \nbenchsolver : performs a benchmark of all the solvers available in Eigen \n\n"; cout<< " MATRIX FOLDER : \n"; cout<< " The matrices for the benchmark should be collected in a folder specified with an environment 
variable EIGEN_MATRIXDIR \n"; cout<< " The matrices are stored using the matrix market coordinate format \n"; cout<< " The matrix and associated right-hand side (rhs) files are named respectively \n"; cout<< " as MatrixName.mtx and MatrixName_b.mtx. If the rhs does not exist, a random one is generated. \n"; cout<< " If a matrix is SPD, the matrix should be named as MatrixName_SPD.mtx \n"; cout<< " If a true solution exists, it should be named as MatrixName_x.mtx; \n" ; cout<< " it will be used to compute the norm of the error relative to the computed solutions\n\n"; cout<< " OPTIONS : \n"; cout<< " -h or --help \n print this help and return\n\n"; cout<< " -d matrixdir \n Use matrixdir as the matrix folder instead of the one specified in the environment variable EIGEN_MATRIXDIR\n\n"; cout<< " -o outputfile.xml \n Output the statistics to a xml file \n\n"; cout<< " --eps Sets the relative tolerance for iterative solvers (default 1e-08) \n\n"; cout<< " --maxits Sets the maximum number of iterations (default 1000) \n\n"; } int main(int argc, char ** args) { bool help = ( get_options(argc, args, "-h") || get_options(argc, args, "--help") ); if(help) { bench_printhelp(); return 0; } // Get the location of the test matrices string matrix_dir; if (!get_options(argc, args, "-d", &matrix_dir)) { if(getenv("EIGEN_MATRIXDIR") == NULL){ std::cerr << "Please, specify the location of the matrices with -d mat_folder or the environment variable EIGEN_MATRIXDIR \n"; std::cerr << " Run with --help to see the list of all the available options \n"; return -1; } matrix_dir = getenv("EIGEN_MATRIXDIR"); } std::ofstream statbuf; string statFile ; // Get the file to write the statistics bool statFileExists = get_options(argc, args, "-o", &statFile); if(statFileExists) { statbuf.open(statFile.c_str(), std::ios::out); if(statbuf.good()){ statFileExists = true; printStatheader(statbuf); statbuf.close(); } else std::cerr << "Unable to open the provided file for writing... \n"; } // Get the maximum number of iterations and the tolerance int maxiters = 1000; double tol = 1e-08; string inval; if (get_options(argc, args, "--eps", &inval)) tol = atof(inval.c_str()); if(get_options(argc, args, "--maxits", &inval)) maxiters = atoi(inval.c_str()); string current_dir; // Test the real-arithmetics matrices Browse_Matrices(matrix_dir, statFileExists, statFile,maxiters, tol); // Test the complex-arithmetics matrices Browse_Matrices >(matrix_dir, statFileExists, statFile, maxiters, tol); if(statFileExists) { statbuf.open(statFile.c_str(), std::ios::app); statbuf << "
    \n"; cout << "\n Output written in " << statFile << " ...\n"; statbuf.close(); } return 0; } piqp-octave/src/eigen/bench/spbench/spbenchsolver.h0000644000175100001770000004336714107270226022276 0ustar runnerdocker// This file is part of Eigen, a lightweight C++ template library // for linear algebra. // // Copyright (C) 2012 Désiré Nuentsa-Wakam // // This Source Code Form is subject to the terms of the Mozilla // Public License v. 2.0. If a copy of the MPL was not distributed // with this file, You can obtain one at http://mozilla.org/MPL/2.0/. #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "spbenchstyle.h" #ifdef EIGEN_METIS_SUPPORT #include #endif #ifdef EIGEN_CHOLMOD_SUPPORT #include #endif #ifdef EIGEN_UMFPACK_SUPPORT #include #endif #ifdef EIGEN_KLU_SUPPORT #include #endif #ifdef EIGEN_PARDISO_SUPPORT #include #endif #ifdef EIGEN_SUPERLU_SUPPORT #include #endif #ifdef EIGEN_PASTIX_SUPPORT #include #endif // CONSTANTS #define EIGEN_UMFPACK 10 #define EIGEN_KLU 11 #define EIGEN_SUPERLU 20 #define EIGEN_PASTIX 30 #define EIGEN_PARDISO 40 #define EIGEN_SPARSELU_COLAMD 50 #define EIGEN_SPARSELU_METIS 51 #define EIGEN_BICGSTAB 60 #define EIGEN_BICGSTAB_ILUT 61 #define EIGEN_GMRES 70 #define EIGEN_GMRES_ILUT 71 #define EIGEN_SIMPLICIAL_LDLT 80 #define EIGEN_CHOLMOD_LDLT 90 #define EIGEN_PASTIX_LDLT 100 #define EIGEN_PARDISO_LDLT 110 #define EIGEN_SIMPLICIAL_LLT 120 #define EIGEN_CHOLMOD_SUPERNODAL_LLT 130 #define EIGEN_CHOLMOD_SIMPLICIAL_LLT 140 #define EIGEN_PASTIX_LLT 150 #define EIGEN_PARDISO_LLT 160 #define EIGEN_CG 170 #define EIGEN_CG_PRECOND 180 using namespace Eigen; using namespace std; // Global variables for input parameters int MaximumIters; // Maximum number of iterations double RelErr; // Relative error of the computed solution double best_time_val; // Current best time overall solvers int best_time_id; // id of the best solver for the current system template inline typename NumTraits::Real test_precision() { return NumTraits::dummy_precision(); } template<> inline float test_precision() { return 1e-3f; } template<> inline double test_precision() { return 1e-6; } template<> inline float test_precision >() { return test_precision(); } template<> inline double test_precision >() { return test_precision(); } void printStatheader(std::ofstream& out) { // Print XML header // NOTE It would have been much easier to write these XML documents using external libraries like tinyXML or Xerces-C++. 
out << " \n"; out << " \n"; out << "\n]>"; out << "\n\n\n"; out << "\n \n" ; //root XML element // Print the xsl style section printBenchStyle(out); // List all available solvers out << " \n"; #ifdef EIGEN_UMFPACK_SUPPORT out <<" \n"; out << " LU \n"; out << " UMFPACK \n"; out << " \n"; #endif #ifdef EIGEN_KLU_SUPPORT out <<" \n"; out << " LU \n"; out << " KLU \n"; out << " \n"; #endif #ifdef EIGEN_SUPERLU_SUPPORT out <<" \n"; out << " LU \n"; out << " SUPERLU \n"; out << " \n"; #endif #ifdef EIGEN_CHOLMOD_SUPPORT out <<" \n"; out << " LLT SP \n"; out << " CHOLMOD \n"; out << " \n"; out <<" \n"; out << " LLT \n"; out << " CHOLMOD \n"; out << " \n"; out <<" \n"; out << " LDLT \n"; out << " CHOLMOD \n"; out << " \n"; #endif #ifdef EIGEN_PARDISO_SUPPORT out <<" \n"; out << " LU \n"; out << " PARDISO \n"; out << " \n"; out <<" \n"; out << " LLT \n"; out << " PARDISO \n"; out << " \n"; out <<" \n"; out << " LDLT \n"; out << " PARDISO \n"; out << " \n"; #endif #ifdef EIGEN_PASTIX_SUPPORT out <<" \n"; out << " LU \n"; out << " PASTIX \n"; out << " \n"; out <<" \n"; out << " LLT \n"; out << " PASTIX \n"; out << " \n"; out <<" \n"; out << " LDLT \n"; out << " PASTIX \n"; out << " \n"; #endif out <<" \n"; out << " BICGSTAB \n"; out << " EIGEN \n"; out << " \n"; out <<" \n"; out << " BICGSTAB_ILUT \n"; out << " EIGEN \n"; out << " \n"; out <<" \n"; out << " GMRES_ILUT \n"; out << " EIGEN \n"; out << " \n"; out <<" \n"; out << " LDLT \n"; out << " EIGEN \n"; out << " \n"; out <<" \n"; out << " LLT \n"; out << " EIGEN \n"; out << " \n"; out <<" \n"; out << " CG \n"; out << " EIGEN \n"; out << " \n"; out <<" \n"; out << " LU_COLAMD \n"; out << " EIGEN \n"; out << " \n"; #ifdef EIGEN_METIS_SUPPORT out <<" \n"; out << " LU_METIS \n"; out << " EIGEN \n"; out << " \n"; #endif out << " \n"; } template void call_solver(Solver &solver, const int solver_id, const typename Solver::MatrixType& A, const Matrix& b, const Matrix& refX,std::ofstream& statbuf) { double total_time; double compute_time; double solve_time; double rel_error; Matrix x; BenchTimer timer; timer.reset(); timer.start(); solver.compute(A); if (solver.info() != Success) { std::cerr << "Solver failed ... \n"; return; } timer.stop(); compute_time = timer.value(); statbuf << "