| Class | Description |
|---|---|
| | A dense Matrix of ComplexNumbers, implemented as an ArrayMatrix |
| | A number of the form _a + bi_ where _i_ is the imaginary unit. |
| | Implements the basic ScalarOperations on ComplexNumbers |
| | A dense Vector of ComplexNumbers, implemented as an ArrayVector |
| | A dense matrix of JavaScript |
| | A dense Vector of |
| | A Regressor model which uses an ordinary least squares model with regularization to predict a continuous target. The optimal set of parameters is computed with gradient descent. |
| | A Classifier model which uses logistic regression to predict a discrete target. The optimal set of parameters is computed with gradient descent. |
| | Provides methods for constructing Matrices of a given type |
| | A dense matrix of JavaScript |
| | Implements the basic ScalarOperations on |
| | A dense Vector of |
| | A wrapper for static methods representing the elementary row operations |
| | A Matrix implemented as a sparse set of JS |
| | A Vector implemented as a sparse set of JS |
| | A Classifier model which uses logistic regression to predict a discrete target. The optimal set of parameters is computed with gradient descent. |
| | Provides methods for constructing Vectors of a given type |

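
The complex-number classes above implement arithmetic on numbers of the form _a + bi_. As a purely illustrative sketch (not the library's actual API), the core operations look like this in TypeScript:

```typescript
// Minimal sketch of complex arithmetic for numbers of the form a + bi.
// Illustrative only; the library's ComplexNumber class has its own API.
interface Complex {
  re: number;
  im: number;
}

function add(x: Complex, y: Complex): Complex {
  return { re: x.re + y.re, im: x.im + y.im };
}

// (a + bi)(c + di) = (ac - bd) + (ad + bc)i
function multiply(x: Complex, y: Complex): Complex {
  return {
    re: x.re * y.re - x.im * y.im,
    im: x.re * y.im + x.im * y.re,
  };
}

const i: Complex = { re: 0, im: 1 };
const iSquared = multiply(i, i); // i^2 = -1, i.e. { re: -1, im: 0 }
```

The multiplication rule is what makes _i_ the imaginary unit: squaring it yields -1.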
| Abstract Class | Description |
|---|---|
| | Implements Matrix with a 2-dimensional array of values. |
| | Implements Vector with an array of values. |
| | A class which encapsulates the basic arithmetic operations for an arbitrary scalar type. |
| | Implements Matrix with a map of indices to nonzero values. |
| | Implements Vector as a map of indices to nonzero values. |

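
The sparse implementations above store only nonzero entries in a map from index to value. A minimal sketch of this storage idea (hypothetical names, not the library's implementation):

```typescript
// Sketch of sparse storage: a vector held as a map from index to nonzero
// value. Absent indices are implicitly zero.
class SparseVectorSketch {
  private readonly data: Map<number, number>;

  constructor(
    public readonly dimension: number,
    entries: [number, number][] = []
  ) {
    // Never store explicit zeros; sparsity depends on omitting them.
    this.data = new Map(entries.filter(([, value]) => value !== 0));
  }

  getEntry(index: number): number {
    return this.data.get(index) ?? 0;
  }

  // Only the stored (nonzero) entries of this vector need to be visited,
  // so the cost scales with the number of nonzeros, not the dimension.
  innerProduct(other: SparseVectorSketch): number {
    let sum = 0;
    for (const [index, value] of this.data) {
      sum += value * other.getEntry(index);
    }
    return sum;
  }
}
```

For a 1000-dimensional vector with two nonzero entries, an inner product touches only those two entries rather than all 1000.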
| Enumeration | Description |
|---|---|
| | Types of solution to a linear system. |

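
A linear system can have exactly one solution, infinitely many, or none. As a sketch of how such an enumeration might be used (the names here are hypothetical, not necessarily the library's), a 2x2 system can be classified directly:

```typescript
// Sketch: classify the 2x2 system
//   a*x + b*y = e
//   c*x + d*y = f
// Hypothetical enum member names; the library's enum may differ.
enum SolutionType {
  UNIQUE = 'Unique',
  UNDERDETERMINED = 'Underdetermined',
  OVERDETERMINED = 'Overdetermined',
}

function classify2x2(
  a: number, b: number, e: number,
  c: number, d: number, f: number
): SolutionType {
  const det = a * d - b * c;
  // A nonzero determinant guarantees exactly one solution.
  if (det !== 0) return SolutionType.UNIQUE;
  // The rows of A are linearly dependent; the system is consistent
  // (infinitely many solutions) only if the augmented rows are
  // proportional as well.
  const consistent = a * f - c * e === 0 && b * f - d * e === 0;
  return consistent ? SolutionType.UNDERDETERMINED : SolutionType.OVERDETERMINED;
}
```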
| Function | Description |
|---|---|
| | Builds a matrix that transforms a vector to a vector of backward differences |
| | Uses the serial version of the Cholesky algorithm to calculate the Cholesky decomposition of a matrix |
| | Uses the QR algorithm to compute the eigenvalues of a matrix |
| calculateGeneralLeastSquares(dataPoints, functionTemplate, numberOfTerms) | Calculates a regression model for an arbitrary function. |
| | Calculates a linear regression model for the provided |
| | Uses the Doolittle algorithm to calculate the LU Decomposition of a matrix A. |
| | Uses the Gram-Schmidt process to calculate the QR decomposition of the matrix A. |
| | Uses the Power Method to calculate the Singular Value Decomposition of a matrix |
| | Returns the vector |
| | Returns the matrix |
| | Builds a matrix that transforms a vector to a vector of central differences |
| | Returns the product of the given array of matrices. |
| | Calculates the 1-Norm of a matrix |
| | Calculates the correlation coefficient r of two vectors |
| | Calculates the correlation matrix of a matrix |
| | Calculates the covariance of two vectors |
| | Calculates the covariance matrix of a matrix |
| | Calculates the cross product (vector product) of two vectors. This is defined only for vectors with three dimensions. |
| | Uses finite differences to build a vector containing approximate values of the derivative of |
| | Uses expansion of minors to calculate the determinant of a matrix. Throws an error if the input is not square. |
| | Creates a new matrix with the specified entries on the diagonal. See MatrixBuilder.diagonal() |
| | Computes the dot/inner/scalar product of two vectors. See Vector.innerProduct(). |
| | Uses the QR algorithm to compute the eigenvalues and eigenvectors of a matrix |
| | Calculates the Euclidean Norm (or 2-Norm) of a vector |
| | Implements the Padé approximant to compute the exponential of a matrix |
| | Creates a new identity matrix of size |
| | Builds a matrix that transforms a vector to a vector of forward differences |
| | Calculates the Frobenius Norm of a matrix |
| | Creates a Gaussian kernel for use in a SupportVectorMachineClassifier. The Gaussian kernel converts a data Matrix into a similarity matrix |
| | Given a matrix |
| | Learns an optimal set of parameters |
| | Computes the Hadamard (element-wise) product of two vectors. |
| | Computes the Hadamard (element-wise) product of two matrices. |
| | Uses Gauss-Jordan elimination with pivoting to calculate the inverse of a matrix. |
| | Tests if a matrix is Hermitian. |
| | Tests if a matrix is an identity matrix |
| | Tests if a matrix is lower-triangular. |
| | Tests if a matrix is orthogonal |
| | Tests if a matrix is orthonormal |
| | Tests if a matrix is square. |
| | Tests if a matrix is symmetric. |
| | Tests if a matrix is upper-triangular. |
| | Computes the Kronecker product (generalized outer product) of two matrices. |
| | A linear kernel for use in a SupportVectorMachineClassifier. The linear kernel converts a data Matrix into a matrix which has been prepended with a column of all ones, representing the constant term in a linear model, or the bias term in an SVM. |
| | Builds a vector of |
| | Creates a new Matrix of numbers. See MatrixBuilder.fromArray() |
| | Calculates the mean of the values in the vector |
| | Calculates the mean vector of the matrix |
| | Returns a vector with the same direction as the input |
| | Creates a new vector of all 1s. See VectorBuilder.ones() |
| | Creates a new matrix of all 1s. See MatrixBuilder.ones() |
| | Conducts a principal component analysis of a matrix |
| | Calculates the P-Norm of a vector |
| | Computes _A^n_ recursively. |
| | Returns an easy-to-read string representing a |
| | Returns an easy-to-read string representing the contents of a Vector |
| | Returns an easy-to-read string representing the contents of a Matrix |
| | Creates a Kernel for use in a SupportVectorMachineClassifier. The RBF kernel converts a data Matrix into a similarity matrix |
| | Calculates the rank of a matrix |
| | Reduces the number of dimensions of a data matrix |
| | Uses Gauss-Jordan elimination with pivoting to convert a matrix to Reduced Row-Echelon Form (RREF) |
| | Uses Gauss-Jordan elimination with pivoting to convert a matrix to Row-Echelon Form (REF) |
| | Calculates the Infinity-Norm of a matrix |
| | Solves the matrix equation _Ax=b_ for the vector _x_ using the default implementation. See solveByGaussianElimination() |
| | Uses Gauss-Jordan elimination with pivoting and backward substitution to solve the linear equation _Ax=b_ |
| | Gives an approximate solution to an overdetermined linear system. |
| | Calculates the standard deviation of a vector |
| | Calculates the standard deviation of each column of the matrix |
| | Returns the vector |
| | Returns the matrix |
| | Calculates the Sum Norm (or 1-Norm) of a vector |
| | Calculates the Supremum Norm (or Infinity-Norm) of a vector |
| | Calculates the scalar triple product of three vectors. This is defined only for vectors with three dimensions. |
| | Calculates the variance of a vector |
| | Calculates the variance of each column of the matrix |
| | Creates a new Vector of numbers. See VectorBuilder.fromArray() |
| | Creates a new vector of all 0s. See VectorBuilder.zeros() |
| | Creates a new matrix of all 0s. See MatrixBuilder.zeros() |

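
Several of the functions above solve _Ax=b_ by Gauss-Jordan elimination with pivoting. A self-contained sketch of that algorithm on plain arrays (not the library's Matrix type, and the function name here is invented for illustration):

```typescript
// Sketch: solve Ax = b by Gauss-Jordan elimination with partial pivoting.
// Uses plain number[][] instead of the library's Matrix type.
function solveByGaussianEliminationSketch(A: number[][], b: number[]): number[] {
  const n = b.length;
  // Build the augmented matrix [A | b].
  const aug = A.map((row, i) => [...row, b[i]]);

  for (let col = 0; col < n; col++) {
    // Partial pivoting: bring the row with the largest entry in this
    // column into the pivot position, for numerical stability.
    let pivotRow = col;
    for (let row = col + 1; row < n; row++) {
      if (Math.abs(aug[row][col]) > Math.abs(aug[pivotRow][col])) pivotRow = row;
    }
    [aug[col], aug[pivotRow]] = [aug[pivotRow], aug[col]];

    const pivot = aug[col][col];
    if (pivot === 0) throw new Error('System is singular or ill-posed');

    // Normalize the pivot row, then eliminate the column everywhere else.
    for (let j = col; j <= n; j++) aug[col][j] /= pivot;
    for (let row = 0; row < n; row++) {
      if (row === col) continue;
      const factor = aug[row][col];
      for (let j = col; j <= n; j++) aug[row][j] -= factor * aug[col][j];
    }
  }
  // The augmented column now holds the solution x.
  return aug.map(row => row[n]);
}
```

Because Gauss-Jordan reduces all the way to RREF, the solution can be read directly from the augmented column with no separate back-substitution pass.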
| Interface | Description |
|---|---|
| | The result of a Cholesky Decomposition |
| | A machine learning model with a continuous numeric target |
| | The output of a cost function |
| | An eigenvector and its corresponding eigenvalue |
| | The result of a least squares approximation. |
| | An abstract linear transformation between vectors of type |
| | The result of an LU Decomposition |
| | A generalized Matrix - one of the core data types |
| | A type representing the lack of solution to a linear system. |
| | The result of a principal component analysis. |
| | The result of a QR decomposition. |
| | A machine learning model with a continuous numeric target |
| | The result of a row operation ( |
| | The result of a Singular Value Decomposition |
| | A particular solution to a linear system with infinitely many solutions. |
| | The unique solution to a linear system. |
| | A generalized Vector - one of the core data types |

| Type Alias | Description |
|---|---|
| | A function that takes a vector of inputs and produces an output. This must always be a pure function that is linear in its coefficients. |
| | A higher-order function which is used to generate an |
| | A function that evaluates the cost of a set of parameters |
| | Specifies how dimension reduction ought to be done. |
| | The parameters for gradientDescent() |
| | A function which takes a Matrix of data (and optionally another …). Generally intended for use with a SupportVectorMachineClassifier. |
| | A function which, given an initial value of |
| | The set of hyperparameters for a LinearRegressor |
| | A general type representing any type of solution to a linear system. |
| | The set of hyperparameters for a LogisticRegressionClassifier |
| | The data stored in a Matrix represented as a 2-D array |
| | A function that generates a matrix entry based on an existing entry |
| | A tuple representing the shape of a Matrix. The first entry is the number of rows, and the second entry is the number of columns. |
| | A function that calculates a norm for a vector. |
| | A function which expresses the similarity of two Vectors as a number between 0 (very dissimilar) and 1 (identical). |
| | A function that solves a linear system _Ax=b_ |
| | The data stored in a Matrix represented as a map |
| | The data stored in a Vector represented as a map |
| | The set of hyperparameters for a SupportVectorMachineClassifier |
| | The data stored in a Vector represented as a map |
| | A function that generates a vector entry based on its index |
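
Several of these aliases are function types: one generates a vector entry from its index, another maps a matrix entry to a new value. A sketch of how such aliases look and are consumed (the names here are invented for illustration):

```typescript
// Sketch of function-typed aliases (hypothetical names, plain arrays):
// a vector entry generated from its index, and a matrix entry mapped
// from an existing entry and its position.
type VectorIndexFn = (index: number) => number;
type MatrixEntryFn = (entry: number, rowIndex: number, colIndex: number) => number;

// Build a dense vector by evaluating the generator at each index.
function buildVector(dimension: number, fn: VectorIndexFn): number[] {
  return Array.from({ length: dimension }, (_, i) => fn(i));
}

// Produce a new matrix by applying the mapping to every entry.
function mapMatrix(data: number[][], fn: MatrixEntryFn): number[][] {
  return data.map((row, i) => row.map((entry, j) => fn(entry, i, j)));
}

const squares = buildVector(4, i => i * i); // [0, 1, 4, 9]
const doubled = mapMatrix([[1, 2], [3, 4]], entry => 2 * entry); // [[2, 4], [6, 8]]
```

Typing the callbacks this way lets builders accept any pure generator function while keeping entry and index types checked at compile time.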