sparse – Symbolic Sparse Matrices¶
In the tutorial section, you can find a sparse tutorial.
The sparse submodule is not loaded when we import Theano: you must import theano.sparse to enable it.
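A minimal usage sketch (not part of the original documentation), assuming the standard csc_matrix and dense_from_sparse helpers from theano.sparse:

import numpy as np
import scipy.sparse as sp
import theano
import theano.sparse                                  # must be imported explicitly

x = theano.sparse.csc_matrix('x', dtype='float64')    # symbolic sparse input
y = theano.sparse.dense_from_sparse(x)                # symbolic dense output
f = theano.function([x], y)
print(f(sp.csc_matrix(np.eye(3))))                    # evaluate with a SciPy matrix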
The sparse module provides the same functionality as the tensor module. The difference lies under the covers because sparse matrices do not store data in a contiguous array. Note that there are no GPU implementations for sparse matrices in Theano. The sparse module has been used in:
- NLP: Dense linear transformations of sparse vectors.
- Audio: Filterbank in the Fourier domain.
Compressed Sparse Format¶
This section explains how information is stored for the two SciPy sparse formats supported by Theano. SciPy supports more formats; see the SciPy documentation for details about them.
Theano supports two compressed sparse formats, csc and csr, based on columns and rows respectively. They share the same attributes: data, indices, indptr and shape.
- The data attribute is a one-dimensional ndarray which contains all the non-zero elements of the sparse matrix.
- The indices and indptr attributes are used to store the position of the data in the sparse matrix.
- The shape attribute is exactly the same as the shape attribute of a dense (i.e. generic) matrix. It can be explicitly specified at the creation of a sparse matrix if it cannot be inferred from the first three attributes.
CSC Matrix¶
In the Compressed Sparse Column format, indices stores the row index of each non-zero element, and indptr tells where each column starts in the data and indices attributes. indptr can be thought of as giving the slice that must be applied to the other attributes in order to get each column of the matrix. In other words, slice(indptr[i], indptr[i+1]) corresponds to the slice needed to find the i-th column of the matrix in the data and indices fields.
The following example builds a matrix and returns its columns one at a time. For the i-th column it prints the row indices of the stored elements and, in a second array, their corresponding values.
>>> import numpy as np
>>> import scipy.sparse as sp
>>> data = np.asarray([7, 8, 9])
>>> indices = np.asarray([0, 1, 2])
>>> indptr = np.asarray([0, 2, 3, 3])
>>> m = sp.csc_matrix((data, indices, indptr), shape=(3, 3))
>>> m.toarray()
array([[7, 0, 0],
[8, 0, 0],
[0, 9, 0]])
>>> i = 0
>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
(array([0, 1], dtype=int32), array([7, 8]))
>>> i = 1
>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
(array([2], dtype=int32), array([9]))
>>> i = 2
>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
(array([], dtype=int32), array([], dtype=int64))
CSR Matrix¶
In the Compressed Sparse Row format, indices stores the column index of each non-zero element, and indptr tells where each row starts in the data and indices attributes. indptr can be thought of as giving the slice that must be applied to the other attributes in order to get each row of the matrix. In other words, slice(indptr[i], indptr[i+1]) corresponds to the slice needed to find the i-th row of the matrix in the data and indices fields.
The following example builds a matrix and returns its rows one at a time. For the i-th row it prints the column indices of the stored elements and, in a second array, their corresponding values.
>>> import numpy as np
>>> import scipy.sparse as sp
>>> data = np.asarray([7, 8, 9])
>>> indices = np.asarray([0, 1, 2])
>>> indptr = np.asarray([0, 2, 3, 3])
>>> m = sp.csr_matrix((data, indices, indptr), shape=(3, 3))
>>> m.toarray()
array([[7, 8, 0],
[0, 0, 9],
[0, 0, 0]])
>>> i = 0
>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
(array([0, 1], dtype=int32), array([7, 8]))
>>> i = 1
>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
(array([2], dtype=int32), array([9]))
>>> i = 2
>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
(array([], dtype=int32), array([], dtype=int64))
List of Implemented Operations¶
- Moving from and to sparse (see the sketch after this list)
  - dense_from_sparse. Both grads are implemented. Structured by default.
  - csr_from_dense, csc_from_dense. The grad implemented is structured.
  - Theano SparseVariable objects have a method toarray() that is the same as dense_from_sparse.
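A hedged sketch of these conversions, assuming only the csr_from_dense and dense_from_sparse helpers listed above:

import numpy as np
import theano
import theano.sparse as sparse
import theano.tensor as tt

d = tt.dmatrix('d')
s = sparse.csr_from_dense(d)           # dense -> sparse (CSR format)
back = sparse.dense_from_sparse(s)     # sparse -> dense again

f = theano.function([d], [s, back])
sp_out, dense_out = f(np.array([[1., 0.], [0., 2.]]))
print(sp_out.nnz, dense_out)           # 2 stored values; dense round trip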
- Construction of Sparses and their Properties (see the sketch after this list)
  - CSM and CSC, CSR to construct a matrix. The grad implemented is regular.
  - csm_properties to get the properties of a sparse matrix. The grad implemented is regular.
  - csm_indices(x), csm_indptr(x), csm_data(x) and csm_shape(x) or x.shape.
  - sp_ones_like. The grad implemented is regular.
  - sp_zeros_like. The grad implemented is regular.
  - square_diagonal. The grad implemented is regular.
  - construct_sparse_from_list. The grad implemented is regular.
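A hedged sketch of building a CSR matrix symbolically from its internal representation and reading the pieces back, assuming the CSR and csm_properties helpers listed above:

import numpy as np
import theano
import theano.sparse as sparse
import theano.tensor as tt

data = tt.dvector('data')
indices = tt.ivector('indices')
indptr = tt.ivector('indptr')
shape = tt.ivector('shape')

m = sparse.CSR(data, indices, indptr, shape)   # build from internal parts
d, i, p, s = sparse.csm_properties(m)          # read them back

f = theano.function([data, indices, indptr, shape], [d, i, p, s])
out = f(np.array([7., 8., 9.]),
        np.array([0, 1, 2], dtype='int32'),
        np.array([0, 2, 3, 3], dtype='int32'),
        np.array([3, 3], dtype='int32'))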
- Cast (see the sketch after this list)
  - cast with bcast, wcast, icast, lcast, fcast, dcast, ccast, and zcast. The grad implemented is regular.
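A hedged sketch of casting the dtype of a sparse variable with the generic cast helper listed above:

import numpy as np
import scipy.sparse as sp
import theano
import theano.sparse as sparse

x = sparse.csc_matrix('x', dtype='float64')
y = sparse.cast(x, 'float32')                  # equivalent to fcast(x)

f = theano.function([x], y)
print(f(sp.csc_matrix(np.eye(2))).dtype)       # float32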
- Transpose
  - transpose. The grad implemented is regular.
- Basic Arithmetic (see the sketch after this list)
  - neg. The grad implemented is regular.
  - eq.
  - neq.
  - gt.
  - ge.
  - lt.
  - le.
  - add. The grad implemented is regular.
  - sub. The grad implemented is regular.
  - mul. The grad implemented is regular.
  - col_scale to multiply by a vector along the columns. The grad implemented is structured.
  - row_scale to multiply by a vector along the rows. The grad implemented is structured.
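A hedged sketch of elementwise arithmetic and column scaling, assuming the add and col_scale helpers listed above:

import numpy as np
import scipy.sparse as sp
import theano
import theano.sparse as sparse
import theano.tensor as tt

x = sparse.csc_matrix('x', dtype='float64')
y = sparse.csc_matrix('y', dtype='float64')
s = tt.dvector('s')

total = sparse.add(x, y)            # sparse + sparse -> sparse
scaled = sparse.col_scale(x, s)     # multiply column j of x by s[j]

f = theano.function([x, y, s],
                    [sparse.dense_from_sparse(total),
                     sparse.dense_from_sparse(scaled)])
a = sp.csc_matrix(np.eye(3))
print(f(a, a, np.array([1., 2., 3.])))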
- Monoid (element-wise operations with only one sparse input). They all have a structured grad. See the sketch after this list.
  - structured_sigmoid
  - structured_exp
  - structured_log
  - structured_pow
  - structured_minimum
  - structured_maximum
  - structured_add
  - sin
  - arcsin
  - tan
  - arctan
  - sinh
  - arcsinh
  - tanh
  - arctanh
  - rad2deg
  - deg2rad
  - rint
  - ceil
  - floor
  - trunc
  - sgn
  - log1p
  - expm1
  - sqr
  - sqrt
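A hedged sketch showing that structured elementwise ops only touch the stored (non-zero) entries, so the result stays sparse:

import numpy as np
import scipy.sparse as sp
import theano
import theano.sparse as sparse

x = sparse.csc_matrix('x', dtype='float64')
y = sparse.structured_exp(x)        # exp applied to the stored values only

f = theano.function([x], sparse.dense_from_sparse(y))
m = sp.csc_matrix(np.array([[0., 1.], [2., 0.]]))
print(f(m))                         # implicit zeros stay 0, not exp(0) == 1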
- Dot Product (see the sketch after this list)
  - dot.
    - One of the inputs must be sparse, the other sparse or dense.
    - The grad implemented is regular.
    - No C code for perform and no C code for grad.
    - Returns a dense for perform and a dense for grad.
  - structured_dot.
    - The first input is sparse, the second can be sparse or dense.
    - The grad implemented is structured.
    - C code for perform and grad.
    - It returns a sparse output if both inputs are sparse, and a dense one if one of the inputs is dense.
    - Returns a sparse grad for sparse inputs and a dense grad for dense inputs.
  - true_dot.
    - The first input is sparse, the second can be sparse or dense.
    - The grad implemented is regular.
    - No C code for perform and no C code for grad.
    - Returns a sparse output.
    - The gradient returns a sparse for sparse inputs and by default a dense for dense inputs. The parameter grad_preserves_dense can be set to False to return a sparse grad for dense inputs.
  - sampling_dot.
    - Both inputs must be dense.
    - The grad implemented is structured for p.
    - Sample of the dot and sample of the gradient.
    - C code for perform but not for grad.
    - Returns sparse for perform and grad.
  - usmm.
    - You shouldn't insert this op yourself! There is an optimization that transforms a dot to Usmm when possible.
    - This op is the equivalent of gemm for sparse dot.
    - There is no grad implemented for this op.
    - One of the inputs must be sparse, the other sparse or dense.
    - Returns a dense from perform.
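A hedged sketch contrasting dot and structured_dot on a sparse-by-dense product (structured_dot expects the sparse operand first):

import numpy as np
import scipy.sparse as sp
import theano
import theano.sparse as sparse
import theano.tensor as tt

x = sparse.csr_matrix('x', dtype='float64')    # sparse operand
w = tt.dmatrix('w')                            # dense operand

y = sparse.structured_dot(x, w)    # dense result, structured grad w.r.t. x
z = sparse.dot(x, w)               # dense result, regular grad

f = theano.function([x, w], [y, z])
xv = sp.csr_matrix(np.array([[1., 0.], [0., 2.]]))
y_val, z_val = f(xv, np.ones((2, 3)))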
- Slice Operations (see the sketch after this list)
  - sparse_variable[N, N], returns a tensor scalar. There is no grad implemented for this operation.
  - sparse_variable[M:N, O:P], returns a sparse matrix. There is no grad implemented for this operation.
  - Sparse variables don't support [M, N:O] and [M:N, O], as we don't support sparse vectors and returning a sparse matrix would break the numpy interface. Use [M:M+1, N:O] and [M:N, O:O+1] instead.
  - diag. The grad implemented is regular.
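A hedged sketch of the supported indexing patterns on a sparse variable:

import numpy as np
import scipy.sparse as sp
import theano
import theano.sparse as sparse

x = sparse.csc_matrix('x', dtype='float64')
element = x[1, 2]        # two integers -> tensor scalar
block = x[0:2, 1:3]      # two slices -> sparse matrix
row = x[1:2, 0:4]        # use M:M+1 instead of a bare row index

f = theano.function([x], [element,
                          sparse.dense_from_sparse(block),
                          sparse.dense_from_sparse(row)])
m = sp.csc_matrix(np.arange(12.).reshape(3, 4))
print(f(m))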
- Probability. There is no grad implemented for these operations.
  - Poisson and poisson
  - Binomial and csc_fbinomial, csc_dbinomial, csr_fbinomial, csr_dbinomial
  - Multinomial and multinomial
- Internal Representation. They all have a regular grad implemented. See the sketch after this list.
  - ensure_sorted_indices.
  - remove0.
  - clean to resort indices and remove zeros.
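A hedged sketch of clean, which combines remove0 and ensure_sorted_indices; the explicit stored zero below is assumed to count towards nnz in SciPy:

import numpy as np
import scipy.sparse as sp
import theano
import theano.sparse as sparse

x = sparse.csr_matrix('x', dtype='float64')
y = sparse.clean(x)        # sort indices and drop explicitly stored zeros

f = theano.function([x], y)

# A CSR matrix that stores an explicit zero at position (0, 1).
m = sp.csr_matrix((np.array([0., 3.]),
                   np.array([1, 0], dtype='int32'),
                   np.array([0, 1, 2], dtype='int32')), shape=(2, 2))
print(m.nnz, f(m).nnz)     # stored values before and after clean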
- To help testing
  - tests.sparse.test_basic.sparse_random_inputs()
sparse – Sparse Op¶
Classes for handling sparse matrices.
To read about different sparse formats, see http://www-users.cs.umn.edu/~saad/software/SPARSKIT/paper.ps
TODO: Automatic methods for determining best sparse format?
class theano.sparse.basic.AddSD[source]¶
grad(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x, y)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.AddSS[source]¶
grad(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x, y)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.AddSSData[source]¶
grad(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x, y)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
theano.sparse.basic.CSC = <theano.sparse.basic.CSM object>[source]¶ Construct a CSC matrix from the internal representation.
Parameters: - data – One dimensional tensor representing the data of the sparse matrix to construct.
- indices – One dimensional tensor of integers representing the indices of the sparse matrix to construct.
- indptr – One dimensional tensor of integers representing the index pointer for the sparse matrix to construct.
- shape – One dimensional tensor of integers representing the shape of the sparse matrix to construct.
Returns: A sparse matrix having the properties specified by the inputs.
Return type: sparse matrix
Notes
The grad method returns a dense vector, so it provides a regular grad.
class theano.sparse.basic.CSM(format, kmap=None)[source]¶ Indexing to specify what part of the data parameter should be used to construct the sparse matrix.
grad(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(data, indices, indptr, shape)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.CSMGrad(kmap=None)[source]¶
make_node(x_data, x_indices, x_indptr, x_shape, g_data, g_indices, g_indptr, g_shape)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.CSMProperties(kmap=None)[source]¶
grad(inputs, g)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(csm)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, out)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
theano.sparse.basic.CSR = <theano.sparse.basic.CSM object>[source]¶ Construct a CSR matrix from the internal representation.
Parameters: - data – One dimensional tensor representing the data of the sparse matrix to construct.
- indices – One dimensional tensor of integers representing the indices of the sparse matrix to construct.
- indptr – One dimensional tensor of integers representing the index pointer for the sparse matrix to construct.
- shape – One dimensional tensor of integers representing the shape of the sparse matrix to construct.
Returns: A sparse matrix having the properties specified by the inputs.
Return type: sparse matrix
Notes
The grad method returns a dense vector, so it provides a regular grad.
class theano.sparse.basic.Cast(out_type)[source]¶
grad(inputs, outputs_gradients)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.ColScaleCSC[source]¶
grad(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x, s)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.ConstructSparseFromList[source]¶
R_op(inputs, eval_points)[source]¶ Construct a graph for the R-operator.
This method is primarily used by Rop
Suppose the op outputs
[ f_1(inputs), …, f_n(inputs) ]
Parameters: - inputs (a Variable or list of Variables) –
- eval_points – A Variable or list of Variables with the same length as inputs. Each element of eval_points specifies the value of the corresponding input at the point where the R op is to be evaluated.
Returns: rval[i] should be Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points).
Return type: list of n elements
grad(inputs, grads)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x, values, ilist)[source]¶
Parameters: - x – A dense matrix that specifies the output shape.
- values – A dense matrix with the values to use for output.
- ilist – A dense vector with the same length as the number of rows of values. It specifies where in the output to put the corresponding rows.
This creates a sparse matrix with the same shape as x; its values are the rows of values moved according to ilist. Pseudo-code:
output = csc_matrix.zeros_like(x, dtype=values.dtype)
for in_idx, out_idx in enumerate(ilist):
    output[out_idx] = values[in_idx]
perform(node, inp, out_)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.DenseFromSparse(structured=True)[source]¶
grad(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.Diag[source]¶
grad(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.Dot[source]¶
grad(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x, y)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, out)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.EnsureSortedIndices(inplace)[source]¶
grad(inputs, output_grad)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.GetItem2Lists[source]¶
grad(inputs, g_outputs)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x, ind1, ind2)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inp, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.GetItem2ListsGrad[source]¶
make_node(x, ind1, ind2, gz)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inp, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.GetItem2d[source]¶
make_node(x, index)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.GetItemList[source]¶
grad(inputs, g_outputs)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x, index)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inp, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.GetItemListGrad[source]¶
make_node(x, index, gz)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inp, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.GetItemScalar[source]¶
make_node(x, index)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.HStack(format=None, dtype=None)[source]¶
grad(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(*mat)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, block, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.MulSD[source]¶
grad(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x, y)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.MulSS[source]¶
grad(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x, y)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.MulSV[source]¶
grad(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
make_node(x, y)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
perform(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could’ve been allocated by another Op’s perform method. A Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
class theano.sparse.basic.Neg[source]¶
grad(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
-
make_node
(x)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
Remove0
(inplace=False)[source]¶ -
grad
(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
-
make_node
(x)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
RowScaleCSC
[source]¶ -
grad
(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
-
make_node
(x, s)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
SamplingDot
[source]¶ -
grad
(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
-
make_node
(x, y, p)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
SpSum
(axis=None, sparse_grad=True)[source]¶ -
grad
(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
-
make_node
(x)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
SparseFromDense
(format)[source]¶ -
grad
(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
-
make_node
(x)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
SquareDiagonal
[source]¶ -
grad
(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
-
make_node
(diag)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
StructuredAddSV
[source]¶ -
grad
(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
-
make_node
(x, y)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
StructuredDot
[source]¶ -
grad
(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
-
make_node
(a, b)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
StructuredDotGradCSC
[source]¶ -
c_code
(node, name, inputs, outputs, sub)[source]¶ Return the C implementation of an Op.
Returns C code that does the computation associated to this Op, given names for the inputs and outputs.
Parameters: - node (Apply instance) – The node for which we are compiling the current c_code. The same Op may be used in more than one node.
- name (str) – A name that is automatically assigned and guaranteed to be unique.
- inputs (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding python variable that can be accessed by prepending “py_” to the name in the list.
- outputs (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding python variable that can be accessed by prepending “py_” to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.
- sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as ‘fail’). WRITEME
-
c_code_cache_version
()[source]¶ Return a tuple of integers indicating the version of this Op.
An empty tuple indicates an ‘unversioned’ Op that will not be cached between processes.
The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.
See also
c_code_cache_version_apply
-
make_node
(a_indices, a_indptr, b, g_ab)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
StructuredDotGradCSR
[source]¶ -
c_code
(node, name, inputs, outputs, sub)[source]¶ Return the C implementation of an Op.
Returns C code that does the computation associated to this Op, given names for the inputs and outputs.
Parameters: - node (Apply instance) – The node for which we are compiling the current c_code. The same Op may be used in more than one node.
- name (str) – A name that is automatically assigned and guaranteed to be unique.
- inputs (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding python variable that can be accessed by prepending “py_” to the name in the list.
- outputs (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding python variable that can be accessed by prepending “py_” to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.
- sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as ‘fail’). WRITEME
-
c_code_cache_version
()[source]¶ Return a tuple of integers indicating the version of this Op.
An empty tuple indicates an ‘unversioned’ Op that will not be cached between processes.
The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.
See also
c_code_cache_version_apply
-
make_node
(a_indices, a_indptr, b, g_ab)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
Transpose
[source]¶ -
grad
(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
-
make_node
(x)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
TrueDot
(grad_preserves_dense=True)[source]¶ -
grad
(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
-
make_node
(x, y)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inp, out_)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
Usmm
[source]¶ -
make_node
(alpha, x, y, z)[source]¶ Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns: node – The constructed Apply node. Return type: Apply
-
perform
(node, inputs, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
class
theano.sparse.basic.
VStack
(format=None, dtype=None)[source]¶ -
grad
(inputs, gout)[source]¶ Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Parameters: - inputs (list of Variable) – The input variables.
- output_grads (list of Variable) – The gradients of the output variables.
Returns: grads – The gradients with respect to each Variable in inputs.
Return type: list of Variable
-
perform
(node, block, outputs)[source]¶ Calculate the function on the inputs and put the variables in the output storage.
Parameters: - node (Apply) – The symbolic Apply node that represents this computation.
- inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
- output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
- params (tuple) – A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
-
-
theano.sparse.basic.
add
(x, y)[source]¶ Add two matrices, at least one of which is sparse.
This method will provide the right op according to the inputs.
Parameters: - x – A matrix variable.
- y – A matrix variable.
Returns: x + y
Return type: A sparse matrix
Notes
At least one of x and y must be a sparse matrix.
The grad will be structured only when one of the variables is a dense matrix.
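As a minimal sketch (matrix values and variable names are illustrative), adding two symbolic csc matrices could look like this:
>>> import numpy as np
>>> import scipy.sparse as sp
>>> import theano
>>> from theano import sparse
>>> x = sparse.csc_matrix(name='x', dtype='float64')
>>> y = sparse.csc_matrix(name='y', dtype='float64')
>>> f = theano.function([x, y], sparse.add(x, y))
>>> a = sp.csc_matrix(np.asarray([[1, 0], [0, 2]], dtype='float64'))
>>> b = sp.csc_matrix(np.asarray([[0, 3], [0, 0]], dtype='float64'))
>>> f(a, b).toarray()  # expected: [[1., 3.], [0., 2.]]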
-
theano.sparse.basic.
add_s_s_data
= <theano.sparse.basic.AddSSData object>[source]¶ Add two sparse matrices assuming they have the same sparsity pattern.
Parameters: - x – Sparse matrix.
- y – Sparse matrix.
Returns: The sum of the two sparse matrices element wise.
Return type: A sparse matrix
Notes
x and y are assumed to have the same sparsity pattern.
The grad implemented is structured.
-
theano.sparse.basic.
as_sparse
(x, name=None)[source]¶ Wrapper around the SparseVariable constructor that builds a Variable holding a sparse matrix with the same dtype and format as x.
Parameters: x – A sparse matrix. Returns: SparseVariable version of x. Return type: object
-
theano.sparse.basic.
as_sparse_or_tensor_variable
(x, name=None)[source]¶ Same as as_sparse_variable but if we can’t make a sparse variable, we try to make a tensor variable.
Parameters: x – A sparse matrix. Returns: Return type: SparseVariable or TensorVariable version of x
-
theano.sparse.basic.
as_sparse_variable
(x, name=None)[source]¶ Wrapper around the SparseVariable constructor that builds a Variable holding a sparse matrix with the same dtype and format as x.
Parameters: x – A sparse matrix. Returns: SparseVariable version of x. Return type: object
-
theano.sparse.basic.
cast
(variable, dtype)[source]¶ Cast sparse variable to the desired dtype.
Parameters: - variable – Sparse matrix.
- dtype – The dtype wanted.
Returns: The same as variable but with the given dtype. Return type: A sparse matrix
Notes
The grad implemented is regular, i.e. not structured.
-
theano.sparse.basic.
clean
(x)[source]¶ Remove explicit zeros from a sparse matrix, and re-sort indices.
CSR column indices are not necessarily sorted. Likewise for CSC row indices. Use clean when sorted indices are required (e.g. when passing data to other libraries) and to ensure there are no zeros in the data.
Parameters: x – A sparse matrix. Returns: The same as x with indices sorted and zeros removed. Return type: A sparse matrix Notes
The grad implemented is regular, i.e. not structured.
-
theano.sparse.basic.
col_scale
(x, s)[source]¶ Scale each column of a sparse matrix by the corresponding element of a dense vector.
Parameters: - x – A sparse matrix.
- s – A dense vector with length equal to the number of columns of x.
Returns: A sparse matrix in the same format as x in which each column has been multiplied by the corresponding element of s.
Return type: A sparse matrix
Notes
The grad implemented is structured.
-
theano.sparse.basic.
construct_sparse_from_list
= <theano.sparse.basic.ConstructSparseFromList object>[source]¶ Constructs a sparse matrix out of a list of 2-D matrix rows.
Notes
The grad implemented is regular, i.e. not structured.
-
theano.sparse.basic.
csc_from_dense
= <theano.sparse.basic.SparseFromDense object>[source]¶ Convert a dense matrix to a sparse csc matrix.
Parameters: x – A dense matrix. Returns: The same as x in a sparse csc matrix format. Return type: sparse matrix
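A minimal sketch (names and values are illustrative) converting a dense matrix to csc format and back with dense_from_sparse, documented further below:
>>> import numpy as np
>>> import theano
>>> from theano import sparse, tensor
>>> d = tensor.matrix('d', dtype='float64')
>>> s = sparse.csc_from_dense(d)
>>> f = theano.function([d], [s, sparse.dense_from_sparse(s)])
>>> s_val, d_val = f(np.asarray([[0, 1], [2, 0]], dtype='float64'))
>>> s_val.nnz  # expected: 2 (the two non-zero entries)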
-
theano.sparse.basic.
csm_grad
[source]¶ alias of
theano.sparse.basic.CSMGrad
-
theano.sparse.basic.
csm_properties
= <theano.sparse.basic.CSMProperties object>[source]¶ Extract all of the .data, .indices, .indptr and .shape fields.
For specific field, csm_data, csm_indices, csm_indptr and csm_shape are provided.
Parameters: csm – Sparse matrix in CSR or CSC format.
Returns: (data, indices, indptr, shape), the properties of csm.
Notes
The grad implemented is regular, i.e. not structured. The infer_shape method is not available for this op.
-
theano.sparse.basic.
csr_from_dense
= <theano.sparse.basic.SparseFromDense object>[source]¶ Convert a dense matrix to a sparse csr matrix.
Parameters: x – A dense matrix. Returns: The same as x in a sparse csr matrix format. Return type: sparse matrix
-
theano.sparse.basic.
dense_from_sparse
= <theano.sparse.basic.DenseFromSparse object>[source]¶ Convert a sparse matrix to a dense one.
Parameters: x – A sparse matrix. Returns: A dense matrix, the same as x. Return type: theano.tensor.matrix Notes
The grad implementation can be controlled through the constructor via the structured parameter. True will provide a structured grad while False will provide a regular grad. By default, the grad is structured.
-
theano.sparse.basic.
diag
= <theano.sparse.basic.Diag object>[source]¶ Extract the diagonal of a square sparse matrix as a dense vector.
Parameters: x – A square sparse matrix in csc format. Returns: A dense vector representing the diagonal elements. Return type: TensorVariable Notes
The grad implemented is regular, i.e. not structured, since the output is a dense vector.
-
theano.sparse.basic.
dot
(x, y)[source]¶ Operation for efficiently calculating the dot product when one or all operands are sparse. Supported formats are CSC and CSR. The output of the operation is dense.
Parameters: - x – Sparse or dense matrix variable.
- y – Sparse or dense matrix variable.
Returns: The dot product of x and y, in a dense format.
Notes
The grad implemented is regular, i.e. not structured.
At least one of x or y must be a sparse matrix.
When the operation has the form dot(csr_matrix, dense), the gradient of this operation can be computed in place by UsmmCscDense, which leads to significant speed-ups.
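A minimal sketch (shapes and values are illustrative) of a sparse-dense dot product:
>>> import numpy as np
>>> import scipy.sparse as sp
>>> import theano
>>> from theano import sparse, tensor
>>> x = sparse.csr_matrix(name='x', dtype='float64')
>>> w = tensor.matrix('w', dtype='float64')
>>> f = theano.function([x, w], sparse.dot(x, w))
>>> xv = sp.csr_matrix(np.eye(3, dtype='float64'))
>>> f(xv, np.arange(9, dtype='float64').reshape(3, 3))  # dense 3x3 result equal to w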
-
theano.sparse.basic.
ensure_sorted_indices
= <theano.sparse.basic.EnsureSortedIndices object>[source]¶ Re-sort indices of a sparse matrix.
CSR column indices are not necessarily sorted. Likewise for CSC row indices. Use ensure_sorted_indices when sorted indices are required (e.g. when passing data to other libraries).
Parameters: x – A sparse matrix. Returns: The same as x with indices sorted. Return type: sparse matrix Notes
The grad implemented is regular, i.e. not structured.
-
theano.sparse.basic.
eq
(x, y)[source]¶ Parameters: - x – A matrix variable.
- y – A matrix variable.
Returns: x == y
Return type: matrix variable
Notes
At least one of x and y must be a sparse matrix.
-
theano.sparse.basic.
ge
(x, y)[source]¶ Parameters: - x – A matrix variable.
- y – A matrix variable.
Returns: x >= y
Return type: matrix variable
Notes
At least one of x and y must be a sparse matrix.
-
theano.sparse.basic.
get_item_2d
= <theano.sparse.basic.GetItem2d object>[source]¶ Implement a subtensor of a sparse variable, returning a sparse matrix.
If you want to take only one element of a sparse matrix, see GetItemScalar, which returns a tensor scalar.
Parameters: - x – Sparse matrix.
- index – Tuple of slice objects.
Returns: The corresponding slice in x.
Return type: sparse matrix
Notes
Subtensor selection always returns a matrix, so indexing with [a:b, c:d] is forced. If one index is a scalar, for instance, x[a:b, c] or x[a, b:c], an error will be raised. Use instead x[a:b, c:c+1] or x[a:a+1, b:c].
The above indexing methods are not supported because the return value would be a sparse matrix rather than a sparse vector, which would deviate from the NumPy indexing rules. This decision was made largely to preserve consistency between NumPy and Theano. It may be revised when sparse vectors are supported.
The grad is not implemented for this op.
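A minimal sketch (values are illustrative) of slicing a symbolic sparse variable with two slices, which is expected to use this op under the hood:
>>> import numpy as np
>>> import scipy.sparse as sp
>>> import theano
>>> from theano import sparse
>>> x = sparse.csr_matrix(name='x', dtype='float64')
>>> f = theano.function([x], x[0:2, 1:3])
>>> m = sp.csr_matrix(np.arange(16, dtype='float64').reshape(4, 4))
>>> f(m).toarray()  # expected: [[1., 2.], [5., 6.]]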
-
theano.sparse.basic.
get_item_2lists
= <theano.sparse.basic.GetItem2Lists object>[source]¶ Select elements of a sparse matrix, returning them in a vector.
Parameters: - x – Sparse matrix.
- index – List of two lists, the first indicating the row of each element and the second indicating its column.
Returns: The corresponding elements in x.
Return type: theano.tensor.vector
-
theano.sparse.basic.
get_item_list
= <theano.sparse.basic.GetItemList object>[source]¶ Select rows of a sparse matrix, returning them as a new sparse matrix.
Parameters: - x – Sparse matrix.
- index – List of rows.
Returns: The corresponding rows in x.
Return type: sparse matrix
-
theano.sparse.basic.
get_item_scalar
= <theano.sparse.basic.GetItemScalar object>[source]¶ Implement a subtensor of a sparse variable that takes two scalars as index and returns a scalar.
If you want to take a slice of a sparse matrix see GetItem2d that returns a sparse matrix.
Parameters: - x – Sparse matrix.
- index – Tuple of scalars.
Returns: The corresponding item in x.
Return type: TheanoVariable
Notes
The grad is not implemented for this op.
-
theano.sparse.basic.
gt
(x, y)[source]¶ Parameters: - x – A matrix variable.
- y – A matrix variable.
Returns: x > y
Return type: matrix variable
Notes
At least one of x and y must be a sparse matrix.
-
theano.sparse.basic.
hstack
(blocks, format=None, dtype=None)[source]¶ Stack sparse matrices horizontally (column wise).
This wraps the hstack method from scipy.
Parameters: - blocks – List of sparse arrays of compatible shape.
- format – String representing the output format. Default is csc.
- dtype – Output dtype.
Returns: The concatenation of the sparse arrays, column-wise.
Return type: array
Notes
The number of rows of the sparse matrices must agree.
The grad implemented is regular, i.e. not structured.
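A minimal sketch (values are illustrative) stacking two csc matrices horizontally:
>>> import numpy as np
>>> import scipy.sparse as sp
>>> import theano
>>> from theano import sparse
>>> x = sparse.csc_matrix(name='x', dtype='float64')
>>> y = sparse.csc_matrix(name='y', dtype='float64')
>>> f = theano.function([x, y], sparse.hstack([x, y], format='csc', dtype='float64'))
>>> a = sp.csc_matrix(np.eye(2, dtype='float64'))
>>> f(a, a).toarray()  # expected: [[1., 0., 1., 0.], [0., 1., 0., 1.]]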
-
theano.sparse.basic.
le
(x, y)[source]¶ Parameters: - x – A matrix variable.
- y – A matrix variable.
Returns: x <= y
Return type: matrix variable
Notes
At least one of x and y must be a sparse matrix.
-
theano.sparse.basic.
lt
(x, y)[source]¶ Parameters: - x – A matrix variable.
- y – A matrix variable.
Returns: x < y
Return type: matrix variable
Notes
At least one of x and y must be a sparse matrix.
-
theano.sparse.basic.
mul
(x, y)[source]¶ Multiply two matrices elementwise, at least one of which is sparse.
This method will provide the right op according to the inputs.
Parameters: - x – A matrix variable.
- y – A matrix variable.
Returns: x * y
Return type: A sparse matrix
Notes
At least one of x and y must be a sparse matrix. The grad is regular, i.e. not structured.
-
theano.sparse.basic.
mul_s_v
= <theano.sparse.basic.MulSV object>[source]¶ Elementwise multiplication of a sparse matrix by a broadcasted dense vector.
Parameters: - x – Sparse matrix to multiply.
- y – Tensor broadcastable vector.
Returns: The product x * y element wise.
Return type: A sparse matrix
Notes
The grad implemented is regular, i.e. not structured.
-
theano.sparse.basic.
neg
= <theano.sparse.basic.Neg object>[source]¶ Return the negation of the sparse matrix.
Parameters: x – Sparse matrix. Returns: -x. Return type: sparse matrix Notes
The grad is regular, i.e. not structured.
-
theano.sparse.basic.
neq
(x, y)[source]¶ Parameters: - x – A matrix variable.
- y – A matrix variable.
Returns: x != y
Return type: matrix variable
Notes
At least one of x and y must be a sparse matrix.
-
theano.sparse.basic.
remove0
= <theano.sparse.basic.Remove0 object>[source]¶ Remove explicit zeros from a sparse matrix.
Parameters: x – Sparse matrix. Returns: Exactly x but with a data attribute free of explicit zeros. Return type: sparse matrix Notes
The grad implemented is regular, i.e. not structured.
-
theano.sparse.basic.
row_scale
(x, s)[source]¶ Scale each row of a sparse matrix by the corresponding element of a dense vector.
Parameters: - x – A sparse matrix.
- s – A dense vector with length equal to the number of rows of x.
Returns: A sparse matrix in the same format as x in which each row has been multiplied by the corresponding element of s.
Return type: A sparse matrix
Notes
The grad implemented is structured.
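A minimal sketch (values are illustrative) scaling each row of a sparse matrix by a dense vector:
>>> import numpy as np
>>> import scipy.sparse as sp
>>> import theano
>>> from theano import sparse, tensor
>>> x = sparse.csc_matrix(name='x', dtype='float64')
>>> s = tensor.vector('s', dtype='float64')
>>> f = theano.function([x, s], sparse.row_scale(x, s))
>>> m = sp.csc_matrix(np.asarray([[1, 0], [0, 2]], dtype='float64'))
>>> f(m, np.asarray([10., 100.])).toarray()  # expected: [[10., 0.], [0., 200.]]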
-
theano.sparse.basic.
sampling_dot
= <theano.sparse.basic.SamplingDot object>[source]¶ Operation for calculating the dot product dot(x, y.T) = z when you only want to calculate a subset of z.
It is equivalent to p o (x . y.T) where o is the elementwise product, x and y are the operands of the dot product, and p is a matrix that contains 1 where the corresponding element of z should be calculated and 0 where it should not. Note that SamplingDot has a different interface than dot because SamplingDot requires x to be an m x k matrix and y an n x k matrix, instead of the usual k x n matrix.
Notes
It will work even if the pattern is not binary valued, but if the pattern does not have a high sparsity proportion it will be slower than a more optimized dot followed by a normal elemwise multiplication.
The grad implemented is regular, i.e. not structured.
Parameters: - x – Tensor matrix.
- y – Tensor matrix.
- p – Sparse matrix in csr format.
Returns: A matrix containing the dot product of x by y.T, computed only at the positions where p is non-zero.
Return type: sparse matrix
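A minimal sketch (shapes are illustrative: x is 2x3, y is 4x3, p is 2x4) of sampling the dot product at the non-zero positions of p:
>>> import numpy as np
>>> import scipy.sparse as sp
>>> import theano
>>> from theano import sparse, tensor
>>> x = tensor.matrix('x', dtype='float64')
>>> y = tensor.matrix('y', dtype='float64')
>>> p = sparse.csr_matrix(name='p', dtype='float64')
>>> f = theano.function([x, y, p], sparse.sampling_dot(x, y, p))
>>> xv = np.ones((2, 3))
>>> yv = np.ones((4, 3))
>>> pv = sp.csr_matrix(np.asarray([[1, 0, 0, 1], [0, 1, 0, 0]], dtype='float64'))
>>> f(xv, yv, pv)  # dot(xv, yv.T) evaluated only where pv is non-zero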
-
theano.sparse.basic.
sp_ones_like
(x)[source]¶ Construct a sparse matrix of ones with the same sparsity pattern.
Parameters: x – Sparse matrix to take the sparsity pattern from. Returns: The same as x with its data replaced by ones. Return type: A sparse matrix
-
theano.sparse.basic.
sp_sum
(x, axis=None, sparse_grad=False)[source]¶ Calculate the sum of a sparse matrix along the specified axis.
It performs a reduction along the specified axis; when axis is None, the sum is computed over all axes.
Parameters: - x – Sparse matrix.
- axis – Axis along which the sum is applied. Integer or None.
- sparse_grad (bool) – True to have a structured grad.
Returns: The sum of x in a dense format.
Return type: object
Notes
The grad implementation is controlled with the sparse_grad parameter. True will provide a structured grad and False will provide a regular grad. For both choices, the grad returns a sparse matrix having the same format as x.
This op does not return a sparse matrix, but a dense tensor matrix.
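A minimal sketch (values are illustrative) summing a sparse matrix along axis 0:
>>> import numpy as np
>>> import scipy.sparse as sp
>>> import theano
>>> from theano import sparse
>>> x = sparse.csc_matrix(name='x', dtype='float64')
>>> f = theano.function([x], sparse.sp_sum(x, axis=0))
>>> m = sp.csc_matrix(np.asarray([[1, 0], [0, 2]], dtype='float64'))
>>> f(m)  # expected: a dense vector [1., 2.] of column sums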
-
theano.sparse.basic.
sp_zeros_like
(x)[source]¶ Construct a sparse matrix of zeros.
Parameters: x – Sparse matrix to take the shape from. Returns: The same as x with zero entries for all elements. Return type: A sparse matrix
-
theano.sparse.basic.
sparse_formats
= ['csc', 'csr'][source]¶ Types of sparse matrices to use for testing.
-
theano.sparse.basic.
square_diagonal
= <theano.sparse.basic.SquareDiagonal object>[source]¶ Return a square sparse (csc) matrix whose diagonal is given by the dense vector argument.
Parameters: x – Dense vector for the diagonal. Returns: A sparse matrix having x as diagonal. Return type: sparse matrix Notes
The grad implemented is regular, i.e. not structured.
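A minimal sketch (values are illustrative) building a sparse diagonal matrix from a dense vector:
>>> import numpy as np
>>> import theano
>>> from theano import sparse, tensor
>>> d = tensor.vector('d', dtype='float64')
>>> f = theano.function([d], sparse.square_diagonal(d))
>>> f(np.asarray([1., 2., 3.])).toarray()  # expected: 3x3 matrix with [1., 2., 3.] on the diagonal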
-
theano.sparse.basic.
structured_add_s_v
= <theano.sparse.basic.StructuredAddSV object>[source]¶ Structured addition of a sparse matrix and a dense vector. The elements of the vector are only added to the corresponding non-zero elements of the sparse matrix. Therefore, this operation outputs another sparse matrix.
Parameters: - x – Sparse matrix.
- y – Tensor type vector.
Returns: A sparse matrix containing the addition of the vector to the data of the sparse matrix.
Return type: A sparse matrix
Notes
The grad implemented is structured since the op is structured.
-
theano.sparse.basic.
structured_dot
(x, y)[source]¶ Structured Dot is like dot, except that only the gradients with respect to the non-zero elements of the sparse matrix a are calculated and propagated.
The output is presumed to be a dense matrix, and is represented by a TensorType instance.
Parameters: - a – A sparse matrix.
- b – A sparse or dense matrix.
Returns: The dot product of a and b.
Return type: A dense matrix (TensorType instance)
Notes
The grad implemented is structured.
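A minimal sketch (values are illustrative) of a structured dot between a sparse and a dense matrix; gradients would flow only through the non-zero entries of a:
>>> import numpy as np
>>> import scipy.sparse as sp
>>> import theano
>>> from theano import sparse, tensor
>>> a = sparse.csc_matrix(name='a', dtype='float64')
>>> b = tensor.matrix('b', dtype='float64')
>>> f = theano.function([a, b], sparse.structured_dot(a, b))
>>> av = sp.csc_matrix(np.asarray([[1, 0], [0, 2]], dtype='float64'))
>>> f(av, np.ones((2, 2)))  # dense 2x2 result: [[1., 1.], [2., 2.]]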
-
theano.sparse.basic.
sub
(x, y)[source]¶ Subtract two matrices, at least one of which is sparse.
This method will provide the right op according to the inputs.
Parameters: - x – A matrix variable.
- y – A matrix variable.
Returns: x - y
Return type: A sparse matrix
Notes
At least one of x and y must be a sparse matrix.
The grad will be structured only when one of the variables is a dense matrix.
-
theano.sparse.basic.
transpose
= <theano.sparse.basic.Transpose object>[source]¶ Return the transpose of the sparse matrix.
Parameters: x – Sparse matrix. Returns: x transposed. Return type: sparse matrix Notes
The returned matrix will not be in the same format: a csc matrix will be changed into a csr matrix, and a csr matrix into a csc matrix.
The grad is regular, i.e. not structured.
-
theano.sparse.basic.
true_dot
(x, y, grad_preserves_dense=True)[source]¶ Operation for efficiently calculating the dot product when one or all operands are sparse. Supported formats are CSC and CSR. The output of the operation is sparse.
Parameters: - x – Sparse matrix.
- y – Sparse matrix or 2d tensor variable.
- grad_preserves_dense (bool) – If True (default), makes the grad of dense inputs dense. Otherwise the grad is always sparse.
Returns: The dot product of x and y, in a sparse format.
Notes
The grad implemented is regular, i.e. not structured.
-
theano.sparse.basic.
usmm
= <theano.sparse.basic.Usmm object>[source]¶ Performs the expression alpha * dot(x, y) + z.
Parameters: - x – Matrix variable.
- y – Matrix variable.
- z – Dense matrix.
- alpha – A tensor scalar.
Returns: The dense matrix resulting from alpha * dot(x, y) + z. Return type: A dense matrix
Notes
The grad is not implemented for this op. At least one of x or y must be a sparse matrix.
-
theano.sparse.basic.
vstack
(blocks, format=None, dtype=None)[source]¶ Stack sparse matrices vertically (row wise).
This wraps the vstack method from scipy.
Parameters: - blocks – List of sparse arrays of compatible shape.
- format – String representing the output format. Default is csc.
- dtype – Output dtype.
Returns: The concatenation of the sparse arrays, row-wise.
Return type: array
Notes
The number of columns of the sparse matrices must agree.
The grad implemented is regular, i.e. not structured.