More work on Sparse Matrices

Hi All,

During this week I completed more functionality related to CSR matrices. I mainly implemented a generalized binary operation between two matrices, similar to the SciPy function csr_binop_csr_canonical. This method applies a given binary function, binop, to each pair of corresponding elements of the two input matrices, i.e. the (i, j)^{th} entry of the output matrix is equal to binop(a, b), where a and b are the (i, j)^{th} entries of the two input matrices respectively.
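
To make the merge concrete, here is a minimal sketch of the idea, using plain doubles instead of symbolic entries and illustrative names (the `CSR` struct and `csr_binop_csr` below are my simplification, not the actual CSymPy API). It assumes both inputs are in canonical form, i.e. the column indices within each row are sorted and duplicate-free.

    // Row-wise merge of two CSR matrices of the same shape: walk the two
    // sorted column lists of each row in parallel, applying binop to
    // matching entries and to (entry, 0) pairs where only one side has
    // a non-zero.
    #include <functional>
    #include <vector>

    struct CSR {
        unsigned rows, cols;
        std::vector<unsigned> p;   // row pointers, size rows + 1
        std::vector<unsigned> j;   // column indices of the non-zeros
        std::vector<double> x;     // non-zero values, row-major
    };

    CSR csr_binop_csr(const CSR &A, const CSR &B,
                      std::function<double(double, double)> binop)
    {
        CSR C{A.rows, A.cols, {0}, {}, {}};
        for (unsigned i = 0; i < A.rows; i++) {
            unsigned a = A.p[i], b = B.p[i];
            while (a < A.p[i + 1] && b < B.p[i + 1]) {
                if (A.j[a] == B.j[b]) {
                    C.j.push_back(A.j[a]);
                    C.x.push_back(binop(A.x[a++], B.x[b++]));
                } else if (A.j[a] < B.j[b]) {
                    C.j.push_back(A.j[a]);
                    C.x.push_back(binop(A.x[a++], 0.0));
                } else {
                    C.j.push_back(B.j[b]);
                    C.x.push_back(binop(0.0, B.x[b++]));
                }
            }
            for (; a < A.p[i + 1]; a++) {   // tail of row i in A only
                C.j.push_back(A.j[a]);
                C.x.push_back(binop(A.x[a], 0.0));
            }
            for (; b < B.p[i + 1]; b++) {   // tail of row i in B only
                C.j.push_back(B.j[b]);
                C.x.push_back(binop(0.0, B.x[b]));
            }
            C.p.push_back((unsigned)C.j.size());
        }
        // Note: binop may produce explicit zeros (e.g. subtracting equal
        // entries); a canonicalizing pass should still prune those.
        return C;
    }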

Also, I implemented get() and set() methods for the CSRMatrix class. It took me a while to get these completely right, as I was initially overlooking some of the possible cases.
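
For canonical matrices, get() is the easy half: scan (or binary-search, since the column indices are sorted) the row for the requested column and fall back to zero. A sketch, reusing the illustrative `CSR` struct from the sketch above:

    // Return the (i, j) entry of a canonical CSR matrix, 0 if absent.
    double csr_get(const CSR &M, unsigned i, unsigned j)
    {
        for (unsigned k = M.p[i]; k < M.p[i + 1]; k++) {
            if (M.j[k] == j)
                return M.x[k];
            if (M.j[k] > j)        // columns are sorted: j cannot appear later
                break;
        }
        return 0.0;                // implicit zero
    }

set() is where the cases multiply: overwriting an existing non-zero is trivial, but setting a previously zero entry means inserting into the index and value arrays and shifting every later row pointer, and setting an existing entry to zero should arguably remove it to keep the matrix canonical.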

I originally planned to start writing Python wrappers, but I couldn't get to it as I was a bit busy with a project at my university. I hope to get started on this in the upcoming week. Although GSoC is officially over, I will work on the project for two additional weeks, since I took a two-week break during the programme.

Sparse Matrices

Hi All,

I am writing a blog post for GSoC after three weeks. I couldn't work on the project for two weeks as I was participating in a competition, but now I am back and working again. My main focus is finishing the work on sparse matrices; then we can write some wrappers and do some benchmarks.

We decided to use SciPy's sparse matrix code, and I have ported most of the important functionality. I created a CSRMatrix class and a method to build a CSRMatrix from a matrix in COO form. I ported the methods for testing the canonical form and for sparse matrix multiplication. Also, I implemented three methods mainly to be used for extracting diagonal elements and for scaling.
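
The COO-to-CSR conversion itself is a neat three-pass routine: histogram the row indices, prefix-sum the counts into row pointers, then scatter. Here is a sketch with illustrative names, again using plain doubles and the `CSR` struct sketched in the previous post above:

    // Build a CSR matrix from COO triplets (I[n], J[n], V[n]).
    CSR from_coo(unsigned rows, unsigned cols,
                 const std::vector<unsigned> &I,
                 const std::vector<unsigned> &J,
                 const std::vector<double> &V)
    {
        unsigned nnz = V.size();
        CSR M{rows, cols, std::vector<unsigned>(rows + 1, 0),
              std::vector<unsigned>(nnz), std::vector<double>(nnz)};
        for (unsigned n = 0; n < nnz; n++)    // count entries per row
            M.p[I[n] + 1]++;
        for (unsigned i = 0; i < rows; i++)   // prefix sum -> row pointers
            M.p[i + 1] += M.p[i];
        std::vector<unsigned> next(M.p.begin(), M.p.end() - 1);
        for (unsigned n = 0; n < nnz; n++) {  // scatter into place
            unsigned k = next[I[n]]++;
            M.j[k] = J[n];
            M.x[k] = V[n];
        }
        // Canonicalization (sorting columns within each row and summing
        // duplicates) is a separate step, which is why a method for
        // testing the canonical form is useful.
        return M;
    }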

I am going to implement binary operations on CSR matrices during the coming week, and I also want to get started with writing wrappers so the matrix classes can be used from Python and SymPy.

[GSoC] Week 9: Matrix Inverse and Sparse Matrices

Hi All, sorry for a late blog post. I was quite busy during the last week, preparing for a competition.

During the last week, I mainly did two things: implementing matrix inverse and starting the implementation of sparse matrices.

Implementing Matrix Inverse

I implemented matrix inverse using two different methods: Gauss-Jordan elimination and fraction-free LU decomposition. I had previously implemented Gauss-Jordan elimination only for solving a system Ax = b where b is a column matrix. I had to enhance the algorithm so that it can solve several systems at once, i.e. b can be a collection of column matrices.

This was not a problem in the fraction-free LU method, because once L and U are computed they can be reused to solve for the right-hand sides e_{1}, e_{2}, ..., e_{n}, the columns of the identity matrix.
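
In both methods the underlying idea is the same: computing A^{-1} amounts to solving AX = I column by column,

    A x_{i} = e_{i},   i = 1, 2, ..., n

and once the factors are available, each column costs only one forward substitution L y = e_{i} followed by one backward substitution U x_{i} = y.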

Implementing Sparse Matrices

We decided to adapt the implementation of SciPy's sparse matrices. For the time being, I have implemented the CSR form of a sparse matrix. CSR stands for "Compressed Sparse Row". You can learn more about it in the following links.
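
To make the format concrete, here is a small example of my own (zero-based indices). For the matrix

    A = [5 0 0 0]
        [0 8 0 0]
        [0 0 3 0]
        [0 6 0 0]

the CSR form stores three arrays:

    values      = [5, 8, 3, 6]
    col_indices = [0, 1, 2, 1]
    row_ptr     = [0, 1, 2, 3, 4]

where the non-zeros of row i occupy positions row_ptr[i] through row_ptr[i + 1] - 1 of values and col_indices.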

Wiki Article

Netlib Article

You can find information about the SciPy implementation of CSR matrices here.

[GSoC] Week 8: Solving Ax = b and Determinant

Hi All,

Sorry, I couldn't write a blog post last week. During the past two weeks, I fixed some bugs in CSymPy and implemented the determinant of a matrix. Also, I implemented solving the system Ax = b using various decomposition methods.

Determinant of a Matrix

I implemented the determinant using two different methods: Bareiss's method and the Berkowitz algorithm.

Bareiss’s algorithm

Bareiss's algorithm [1] can be used to find the row echelon form or the determinant of a matrix. It is not a division-free algorithm, but it guarantees that all the divisions are exact, i.e. there is no remainder [2]. The algorithm is based on Sylvester's identity [3] and a transformation that yields the determinant after successive application. It can also be used to solve a system of equations. You can read more about the algorithm in [2].
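
For intuition, here is a minimal sketch of the Bareiss determinant over the integers (my illustration, without the pivoting a real implementation needs; the point of the method is that the integer division on each step is exact):

    #include <vector>

    // Fraction-free determinant of an n x n integer matrix (n >= 1).
    // Assumes a[k][k] != 0 at every step, i.e. no pivoting is required.
    long long bareiss_det(std::vector<std::vector<long long>> a)
    {
        unsigned n = a.size();
        long long prev = 1;                    // pivot of the previous step
        for (unsigned k = 0; k + 1 < n; k++) {
            for (unsigned i = k + 1; i < n; i++)
                for (unsigned j = k + 1; j < n; j++)
                    // Sylvester's identity guarantees this division is exact.
                    a[i][j] = (a[k][k] * a[i][j] - a[i][k] * a[k][j]) / prev;
            prev = a[k][k];
        }
        return a[n - 1][n - 1];                // the determinant of A
    }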

Berkowitz Algorithm

The Berkowitz algorithm can also be used to calculate the determinant. This algorithm has various other applications as well, like calculating the characteristic polynomial of a matrix, principal minors and eigenvalues. I am yet to implement the calculation of the characteristic polynomial and eigenvalues using this algorithm, but that won't be difficult; I hope to do it over the weekend.

Ax = b

I used the various matrix decompositions implemented earlier to solve the system Ax = b. Previously only the fraction-free LU decomposition was used, but now we can also solve linear systems using LU decomposition and LDL decomposition. QR and Cholesky decomposition can be enabled after figuring out a good way to do expression simplification in CSymPy.
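
Once a factorization A = LU is in hand, the solve itself is just two triangular substitutions. A sketch with plain doubles (CSymPy's versions of course operate on symbolic entries):

    #include <vector>

    typedef std::vector<std::vector<double>> Mat;

    // Solve Ax = b given A = LU with non-singular triangular factors.
    std::vector<double> lu_solve(const Mat &L, const Mat &U,
                                 const std::vector<double> &b)
    {
        unsigned n = b.size();
        std::vector<double> y(n), x(n);
        for (unsigned i = 0; i < n; i++) {     // forward: L y = b
            y[i] = b[i];
            for (unsigned j = 0; j < i; j++)
                y[i] -= L[i][j] * y[j];
            y[i] /= L[i][i];
        }
        for (unsigned i = n; i-- > 0; ) {      // backward: U x = y
            x[i] = y[i];
            for (unsigned j = i + 1; j < n; j++)
                x[i] -= U[i][j] * x[j];
            x[i] /= U[i][i];
        }
        return x;
    }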

I hope to work on sparse matrices in the upcoming weeks and to benchmark CSymPy's algorithms against GiNaC.

References

[1] Erwin H. Bareiss, Sylvester's Identity and Multistep Integer-Preserving Gaussian Elimination

[2] Wikipedia article, http://en.wikipedia.org/wiki/Bareiss_algorithm

[3] Wolfram Mathworld article, http://mathworld.wolfram.com/SylvestersDeterminantIdentity.html


[GSoC] Week 6: Cholesky and LDL Algorithms

This week I implemented Cholesky decomposition and LDL decomposition. In addition, I fixed two errors in CSymPy. I was also able to finish the work on LU decomposition and merge it to master, and I can now solve the system Ax = b using the fraction-free LU factorization.

Cholesky Decomposition

Cholesky decomposition can be applied to a Hermitian positive definite matrix. A Hermitian matrix is a matrix with complex entries that is equal to its own conjugate transpose [1]; hence a symmetric matrix with real entries is Hermitian, and can be decomposed using Cholesky decomposition if it is positive definite. A symmetric matrix A is positive definite if z^TAz is greater than zero for every non-zero column vector z [2]. If these conditions are satisfied, the Cholesky decomposition of a matrix A can be written as A = LL^* where L is a lower triangular matrix; this is equal to A = LL^T when L is real. The factorization can be used for fast solution of the system Ax = b. I am yet to use this decomposition for solving that system.
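
For the real symmetric positive definite case, the algorithm itself is short. Here is a sketch with doubles (my illustration; a symbolic version needs sqrt() on expressions, which is exactly where simplification becomes an issue):

    #include <cmath>
    #include <vector>

    // Cholesky factor L (lower triangular, row-major) with A = L L^T.
    std::vector<std::vector<double>>
    cholesky(const std::vector<std::vector<double>> &A)
    {
        unsigned n = A.size();
        std::vector<std::vector<double>> L(n, std::vector<double>(n, 0.0));
        for (unsigned j = 0; j < n; j++) {
            double d = A[j][j];
            for (unsigned k = 0; k < j; k++)
                d -= L[j][k] * L[j][k];
            L[j][j] = std::sqrt(d);   // d > 0 exactly when A is positive definite
            for (unsigned i = j + 1; i < n; i++) {
                double s = A[i][j];
                for (unsigned k = 0; k < j; k++)
                    s -= L[i][k] * L[j][k];
                L[i][j] = s / L[j][j];
            }
        }
        return L;
    }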

LDL Factorization

LDL decomposition is closely related to Cholesky decomposition. As the name implies, the LDL decomposition of a matrix A can be written as A = LDL^* where L is a lower triangular matrix and D is a diagonal matrix [3]. This decomposition exists for some matrices which don't have a Cholesky decomposition.
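
For the real symmetric case the factors can be written down directly (a standard derivation, not specific to my implementation). With L unit lower triangular,

    D_{j} = A_{jj} - \sum_{k<j} L_{jk}^2 D_{k}

    L_{ij} = ( A_{ij} - \sum_{k<j} L_{ik} L_{jk} D_{k} ) / D_{j},   for i > j

Unlike Cholesky, no square roots appear, which also makes LDL friendlier for symbolic entries.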

CSymPy printing error and simplification error

I also worked on a printing error in CSymPy and a simplification error in the Mul class, which is used to represent products in CSymPy. There is still some work to be done to fix the simplification error completely. Most importantly, we were able to introduce a fix which doesn't cause a considerable slowdown.

References

[1] Hermitian Matrix, Wikipedia Article: http://en.wikipedia.org/wiki/Hermitian_matrix

[2] Positive definite Matrix, Wikipedia Article: http://en.wikipedia.org/wiki/Positive-definite_matrix

[3] LDL Decomposition, Wikipedia Article: http://en.wikipedia.org/wiki/Cholesky_decomposition#LDL_decomposition

[GSoC] Week 5: Matrix Decompositions

Hi All,

Week 5 of GSoC is over and I have made some progress on matrix factorizations. I implemented LU factorization and QR factorization. I still have to implement Cholesky factorization; I hope to do it during the weekend.

LU factorization

LU factorization was introduced by Alan Turing and is very important for solving systems of equations. In LU decomposition we factorize a given square matrix into a product of two matrices, L and U, where L is a lower triangular matrix and U is an upper triangular matrix. The factorization can be a true factorization, i.e. A = LU, or it may be the process of finding two matrices L and U which can be used to solve the system Ax = b even though A = LU does not hold. I implemented three different algorithms for LU decomposition.

Fraction Free LU decomposition in [1]

The fraction-free version described in [1] results in two matrices, a lower triangular matrix L and an upper triangular matrix U, but this is not a true factorization, i.e. A != LU in this case. The algorithm is focused on a factorization that helps to solve a system Ax = b, so we are only interested in finding two matrices which can be used in forward/backward substitution to solve the system.

Normal and Fraction Free version in SymPy

I also implemented the normal and the fraction-free versions of the algorithm in SymPy, which give an exact factorization A = LU. The normal algorithm produces fractions, which is not good for numerical calculations unless you are using arbitrary precision arithmetic; the fraction-free version does not generate such fractions during the decomposition. These algorithms can be found in [2] and [3] respectively.
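
As a point of reference, here is a compact sketch of the normal (fraction-producing) variant, in the Doolittle form with a unit diagonal in L, using doubles and no pivoting (my illustration, not SymPy's actual code):

    #include <vector>

    typedef std::vector<std::vector<double>> Mat;

    // A = L U with L unit lower triangular and U upper triangular.
    void lu(const Mat &A, Mat &L, Mat &U)
    {
        unsigned n = A.size();
        L.assign(n, std::vector<double>(n, 0.0));
        U.assign(n, std::vector<double>(n, 0.0));
        for (unsigned k = 0; k < n; k++) {
            L[k][k] = 1.0;
            for (unsigned j = k; j < n; j++) {     // row k of U
                double s = A[k][j];
                for (unsigned m = 0; m < k; m++)
                    s -= L[k][m] * U[m][j];
                U[k][j] = s;
            }
            for (unsigned i = k + 1; i < n; i++) { // column k of L
                double s = A[i][k];
                for (unsigned m = 0; m < k; m++)
                    s -= L[i][m] * U[m][k];
                L[i][k] = s / U[k][k];  // the division that creates fractions
            }
        }
    }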

QR decomposition

In QR decomposition we find two matrices, an orthogonal matrix Q (http://en.wikipedia.org/wiki/Orthogonal_matrix) and an upper triangular matrix R, such that A = QR. Here A doesn't have to be a square matrix. There are several methods to calculate the QR factorization; SymPy uses the Gram-Schmidt process. The algorithm can be found in [4].
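
Here is a sketch of the classical Gram-Schmidt version with doubles (my illustration; it assumes a square A with linearly independent columns, stored column-wise):

    #include <cmath>
    #include <vector>

    typedef std::vector<std::vector<double>> Mat;  // A[j] is column j

    void qr_gram_schmidt(const Mat &A, Mat &Q, Mat &R)
    {
        unsigned n = A.size(), m = A[0].size();
        Q.assign(n, std::vector<double>(m, 0.0));
        R.assign(n, std::vector<double>(n, 0.0));
        for (unsigned j = 0; j < n; j++) {
            std::vector<double> v = A[j];
            for (unsigned i = 0; i < j; i++) {     // remove q_i components
                double dot = 0.0;
                for (unsigned k = 0; k < m; k++)
                    dot += Q[i][k] * A[j][k];
                R[i][j] = dot;
                for (unsigned k = 0; k < m; k++)
                    v[k] -= dot * Q[i][k];
            }
            double norm2 = 0.0;
            for (unsigned k = 0; k < m; k++)
                norm2 += v[k] * v[k];
            R[j][j] = std::sqrt(norm2);            // the problematic sqrt
            for (unsigned k = 0; k < m; k++)
                Q[j][k] = v[k] / R[j][j];
        }
    }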

Issues

QR decomposition is not a "symbolic friendly" algorithm, as it involves taking square roots. I searched thoroughly for an algorithm better suited to symbolic entries but couldn't find one.

References

[1] Algorithm 3, page 14, Nakos, G. C., Turner, P. R., Williams, R. M. (1997). Fraction-free algorithms for linear and polynomial equations. ACM SIGSAM Bulletin, 31(3), 11–19. doi:10.1145/271130.271133.

[GSoC] Week 4: Fraction Free Algorithms

Hi All,

As week 4 of GSoC came to an end, I was able to start implementing matrix decompositions, starting with LU decomposition.

Fraction Free Algorithms

Gaussian elimination is the procedure for reducing a given matrix to an echelon form; in Gauss-Jordan elimination, we reduce a given matrix to reduced row echelon form. I implemented quite a few algorithms for Gaussian elimination and Gauss-Jordan elimination. The conventional algorithm for Gaussian elimination is very straightforward and can be found in [1]. There are two problems with this algorithm. One is that we have to perform divisions, which is numerically problematic when the divisor is too small. Also, the implementation doesn't allow pivoting, so the algorithm fails whenever a zero pivot is encountered, even for non-singular matrices.

To avoid the problem with divisions, I implemented a set of algorithms whose intermediate matrix entries remain fraction-free. These algorithms are described in detail in [2] and [3]. One disadvantage of these fraction-free methods is that the resulting matrix entries tend to grow larger than in the naive algorithm described in [1]. This growth can be avoided by observing that certain entries are divisible by other entries in the matrix; this is described in detail in [3].
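
The core of these methods is the one-step fraction-free update (this is the standard formulation, as in [2]): writing a^{(k)} for the matrix after k elimination steps,

    a^{(k)}_{ij} = ( a^{(k-1)}_{kk} a^{(k-1)}_{ij} - a^{(k-1)}_{ik} a^{(k-1)}_{kj} ) / a^{(k-2)}_{k-1,k-1}

with a^{(-1)}_{00} = 1. The numerator alone is the division-free cross-multiplication rule; the key observation is that the division by the previous pivot is always exact over the integers, which keeps every intermediate entry equal to a minor of the original matrix instead of letting the entries grow exponentially.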

Pivoting is also a concern in these algorithms. Since we are mostly dealing with symbolic matrices, the first row below the current row which has a non-zero entry in the pivot column is chosen as the pivot row.

Also, I implemented a fraction-free version of LU decomposition.

Plan for upcoming week

I hope to implement some other matrix decomposition algorithms, like QR and Cholesky, in the coming week. Also, I wish to implement a Gaussian elimination algorithm that is better suited to matrices with symbolic entries. For the time being, I am studying it in [4].

References

[1] http://www.cs.rutgers.edu/~venugopa/parallel_summer2012/ge.html#algo
[2] Fraction-free algorithms for linear and polynomial equations (1997) by George Nakos, Peter Turner, Robert Williams
[3] A Simplified Fraction-Free Integer Gauss Elimination Algorithm, Peter R. Turner
[4] Modified Gauss Algorithm for Matrices with Symbolic Entries, Benno Fuchssteiner.

[GSoC] Week 2: Gaussian Elimination

Hi All, sorry for a late blog post. During the past week I worked on implementing Gaussian elimination and fixing some bugs. Progress was slow due to the time spent on the bugs and the busy schedule I had during the week; I hope to make up for it this week.

Row Reduction

Gaussian elimination is a method for solving matrix equations of the form Ax = b. It is done by performing a sequence of operations on the matrix entries. The method can also be used to find the determinant and the inverse of a matrix. It is named after the great mathematician Carl Friedrich Gauss.

The operations allowed on the rows of the matrix are called elementary row operations. Below are the three operations that are allowed.

1. Add a·(row j) to (row i) of the matrix.
2. Interchange (row i) and (row j).
3. Multiply (row i) by a constant c.

The simplest non-zero matrices are called matrix units and are denoted by e_{ij}: the matrix whose (i, j)^{th} entry is one and every other entry is zero. With this definition, the above three row operations are equivalent to left-multiplying the given matrix by the following matrices respectively. These matrices are called **elementary matrices**.

1. I + a e_{ij}
2. I + e_{ij} + e_{ji} - e_{ii} - e_{jj}
3. I + (c - 1) e_{ii}

Here I is the identity matrix of appropriate size.
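
As a quick sanity check with 2 x 2 matrices, operation 2 (interchanging the two rows) gives

    I + e_{12} + e_{21} - e_{11} - e_{22} = [0 1]
                                            [1 0]

and indeed

    [0 1] [a b]  =  [c d]
    [1 0] [c d]     [a b]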

Row reduction algorithm

I implemented the algorithm in SymPy that gives the reduced row echelon form of the given matrix. It is a very naive algorithm, not suited for symbolic matrices; I wish to implement an elimination algorithm that is better suited for symbolic matrices during the upcoming weeks.

Bugs in CSymPy

We found some bugs with `div` which mainly originated in the constructor for `Rational`. Currently, for performance reasons, there is an assertion in the constructor for `Rational` that the input should actually be a rational number, with denominator greater than one. If the denominator is one, the assertion fails and we get an error. We had to make sure that we call `Rational` only with actual rationals, not integers.

[GSoC] Week 1: Implemented Basic Matrix Classes

Hi All, as the first week of GSoC comes to an end, I have finished the basic structure of the Matrix class and implemented some of the functionality of the `DenseMatrix` class: matrix addition, matrix multiplication, scalar addition and scalar multiplication. I am yet to update the PR with these changes. I am currently working on Gaussian elimination and having a few problems with it, which I hope to solve as soon as possible. After sorting that out, I wish to benchmark my implementation against other libraries such as Linbox and GiNaC.

Matrix Class Structure

  1. `MatrixBase` Class
    class MatrixBase: public Basic {
    public:
        MatrixBase(unsigned row, unsigned col)
            : row_{row}, col_{col} {}

        // The methods below should be implemented by the derived classes.
        // If not applicable, raise an exception.

        // Get and set elements
        virtual RCP<const Basic> get(unsigned i) const = 0;
        virtual void set(unsigned i, RCP<const Basic> &e) = 0;

        virtual unsigned rank() const = 0;
        virtual RCP<const Basic> det() const = 0;
        virtual RCP<const MatrixBase> inv() const = 0;

        // These functions create a new instance of either DenseMatrix or
        // SparseMatrix and return a reference to the result
        virtual RCP<const MatrixBase> add_matrix(const MatrixBase &other) const = 0;
        virtual RCP<const MatrixBase> mul_matrix(const MatrixBase &other) const = 0;

    protected:
        // Stores the dimension of the Matrix
        unsigned row_;
        unsigned col_;
    };

The most important thing about the `MatrixBase` class is that it is derived from the CSymPy `Basic` class. This enables classes derived from `MatrixBase` to be used wherever a `Basic` object is used, which has a lot of advantages. The `DenseMatrix` class is derived from `MatrixBase`.

  2. `DenseMatrix` class
    class DenseMatrix: public MatrixBase {
    public:
        // Constructors
        DenseMatrix(unsigned row, unsigned col);
        DenseMatrix(unsigned row, unsigned col, std::vector<RCP<const Basic>> &l);

        // Virtual functions inherited from the Basic class
        virtual std::size_t __hash__() const;
        virtual bool __eq__(const Basic &o) const;
        virtual int compare(const Basic &o) const;

        // Should implement all the virtual methods from MatrixBase
        // and throw an exception if a method is not applicable.
        // add_matrix, mul_matrix will have to find the correct function
        // to call depending on the `other` argument.

        // Get and set elements
        virtual RCP<const Basic> get(unsigned i) const;
        virtual void set(unsigned i, RCP<const Basic> &e);

        virtual unsigned rank() const;
        virtual RCP<const Basic> det() const;
        virtual RCP<const MatrixBase> inv() const;

        // Matrix addition
        virtual RCP<const MatrixBase> add_matrix(const MatrixBase &other) const;
        friend RCP<const DenseMatrix> add_dense_dense(const DenseMatrix &A,
                const DenseMatrix &B);
        friend RCP<const DenseMatrix> add_dense_scalar(const DenseMatrix &A,
                RCP<const Basic> &k);
        
        // Matrix multiplication
        virtual RCP<const MatrixBase> mul_matrix(const MatrixBase &other) const;
        friend RCP<const DenseMatrix> mul_dense_dense(const DenseMatrix &A,
                const DenseMatrix &B);
        friend RCP<const DenseMatrix> mul_dense_scalar(const DenseMatrix &A,
                RCP<const Basic> &k);
        
        // Gaussian elimination
        friend RCP<const DenseMatrix> gaussian_elimination(const DenseMatrix &A);
                
    protected:
        // Matrix elements are stored in row-major order
        std::vector<RCP<const Basic>> m_;
    };

    Some of the functions are declared as friend functions so that they can access private and protected members of the `DenseMatrix` class. This makes implementing the algorithms a bit easier.

  3. Friend function implementations
    • add_dense_dense
      RCP<const DenseMatrix> add_dense_dense(const DenseMatrix &A,
              const DenseMatrix &B)
      {
          unsigned row = A.row_;
          unsigned col = A.col_;

          CSYMPY_ASSERT(row == B.row_ && col == B.col_)

          std::vector<RCP<const Basic>> sum(row*col);

          std::vector<RCP<const Basic>>::const_iterator ait = A.m_.begin();
          std::vector<RCP<const Basic>>::const_iterator bit = B.m_.begin();

          for(auto &it: sum) {
              it = add(*ait, *bit);
              ait++;
              bit++;
          }

          return rcp(new DenseMatrix(row, col, sum));
      }

    • mul_dense_dense
      RCP<const DenseMatrix> mul_dense_dense(const DenseMatrix &A,
              const DenseMatrix &B)
      {
          unsigned row = A.row_;
          unsigned col = A.col_;

          CSYMPY_ASSERT(col == B.row_)

          std::vector<RCP<const Basic>> prod(row*B.col_);

          for (unsigned r = 0; r<row; r++) {
              for (unsigned c = 0; c<B.col_; c++) {
                  prod[r*B.col_ + c] = zero; // Integer Zero
                  for (unsigned k = 0; k<col; k++)
                      prod[r*B.col_ + c] = add(prod[r*B.col_ + c],
                              mul(A.m_[r*col + k], B.m_[k*B.col_ + c]));
              }
          }

          return rcp(new DenseMatrix(row, B.col_, prod));
      }

Plan for this week

I wish to finish implementing Gaussian elimination and do some benchmarks.


[GSoC] Linear Algebra Module for CSymPy

Hi All,

This summer, as my GSoC project, I will be working on a linear algebra module for CSymPy, which we hope can be used from SymPy once it is complete. I am going to focus mainly on matrices and the associated algorithms. This is my second GSoC project; I did my first project last summer with SymPy, and I am really excited to work with the SymPy community again.

CSymPy is a fast core written in C++ for SymPy. It was started by Ondrej and tries to address performance-critical applications where Python is rather slow. Linear algebra and related algorithms are performance critical, as we have to deal with large matrices, so writing them in C++ will make them faster, and these fast algorithms can then be used from SymPy via Python wrappers. You can find out more about CSymPy by visiting the project page on GitHub.

Apart from focusing on performance, I would like to pay special attention to symbolic matrices and their manipulation. My aim is to make CSymPy the fastest library available for symbolic mathematics. There are a few challenges we need to overcome, especially avoiding intermediate expression swell.

I created a PR with the basic structure for the matrices. Please have a look; I would love to hear what you have to say.