tensor.extra_ops – Tensor Extra Ops

class theano.tensor.extra_ops.Bartlett[source]
make_node(M)[source]

Create an Apply node for the inputs, in that order.

perform(node, inputs, out_)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could have been allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.
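The storage contract described in the notes above can be sketched outside Theano with plain Python lists and NumPy. perform_sketch below is a hypothetical stand-in for an Op's perform method, not Theano API:

```python
import numpy as np

def perform_sketch(inputs, output_storage):
    # Hypothetical stand-in for an Op's perform(): compute the result
    # and place it in the single output cell.  output_storage is a
    # list of one-element lists; we mutate the cells, never the list
    # lengths.
    x, = inputs
    out = output_storage[0]
    result = np.asarray(x) * 2.0
    buf = out[0]
    if (buf is not None and buf.shape == result.shape
            and buf.dtype == result.dtype):
        # A pre-filled cell of the right type and shape may be reused ...
        buf[...] = result
    else:
        # ... or discarded in favor of freshly allocated memory.
        out[0] = result

storage = [[None]]
perform_sketch([np.array([1.0, 2.0, 3.0])], storage)
print(storage[0][0])  # [2. 4. 6.]
```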

class theano.tensor.extra_ops.CpuContiguous[source]

Check whether the input is C-contiguous; if it is, do nothing, otherwise return a contiguous array.

c_code(node, name, inames, onames, sub)[source]

Required: return the C implementation of an Op.

Returns C code that does the computation associated with this Op, given names for the inputs and outputs.

Parameters
  • node (Apply instance) – The node for which we are compiling the current c_code. The same Op may be used in more than one node.

  • name (str) – A name that is automatically assigned and guaranteed to be unique.

  • inputs (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding python variable that can be accessed by prepending “py_” to the name in the list.

  • outputs (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding python variable that can be accessed by prepending “py_” to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.

  • sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as ‘fail’).

Raises

MethodNotDefined – The subclass does not override this method.

c_code_cache_version()[source]

Return a tuple of integers indicating the version of this Op.

An empty tuple indicates an ‘unversioned’ Op that will not be cached between processes.

The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.

See also

c_code_cache_version_apply()

make_node(x)[source]

Create an Apply node for the inputs, in that order.

perform(node, inputs, output_storage)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could have been allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.

class theano.tensor.extra_ops.CumOp(axis=None, mode='add')[source]
c_code(node, name, inames, onames, sub)[source]

Required: return the C implementation of an Op.

Returns C code that does the computation associated with this Op, given names for the inputs and outputs.

Parameters
  • node (Apply instance) – The node for which we are compiling the current c_code. The same Op may be used in more than one node.

  • name (str) – A name that is automatically assigned and guaranteed to be unique.

  • inputs (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding python variable that can be accessed by prepending “py_” to the name in the list.

  • outputs (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding python variable that can be accessed by prepending “py_” to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.

  • sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as ‘fail’).

Raises

MethodNotDefined – The subclass does not override this method.

c_code_cache_version()[source]

Return a tuple of integers indicating the version of this Op.

An empty tuple indicates an ‘unversioned’ Op that will not be cached between processes.

The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.

See also

c_code_cache_version_apply()

make_node(x)[source]

Create an Apply node for the inputs, in that order.

perform(node, inputs, output_storage, params)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could have been allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.

class theano.tensor.extra_ops.CumprodOp(*args, **kwargs)[source]
class theano.tensor.extra_ops.CumsumOp(*args, **kwargs)[source]
class theano.tensor.extra_ops.DiffOp(n=1, axis=- 1)[source]
make_node(x)[source]

Create an Apply node for the inputs, in that order.

perform(node, inputs, output_storage)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could have been allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.

class theano.tensor.extra_ops.FillDiagonal[source]
grad(inp, cost_grad)[source]

Notes

The gradient is currently implemented for matrices only.

make_node(a, val)[source]

Create an Apply node for the inputs, in that order.

perform(node, inputs, output_storage)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could have been allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.

class theano.tensor.extra_ops.FillDiagonalOffset[source]
grad(inp, cost_grad)[source]

Notes

The gradient is currently implemented for matrices only.

make_node(a, val, offset)[source]

Create an Apply node for the inputs, in that order.

perform(node, inputs, output_storage)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could have been allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.

class theano.tensor.extra_ops.RavelMultiIndex(mode='raise', order='C')[source]
make_node(*inp)[source]

Create an Apply node for the inputs, in that order.

perform(node, inp, out)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could have been allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.

class theano.tensor.extra_ops.RepeatOp(axis=None)[source]
make_node(x, repeats)[source]

Create an Apply node for the inputs, in that order.

perform(node, inputs, output_storage)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could have been allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.

class theano.tensor.extra_ops.SearchsortedOp(side='left')[source]

Wrapper of numpy.searchsorted.

For full documentation, see searchsorted().

See also

searchsorted

numpy-like function to use the SearchsortedOp

c_code(node, name, inames, onames, sub)[source]

Required: return the C implementation of an Op.

Returns C code that does the computation associated with this Op, given names for the inputs and outputs.

Parameters
  • node (Apply instance) – The node for which we are compiling the current c_code. The same Op may be used in more than one node.

  • name (str) – A name that is automatically assigned and guaranteed to be unique.

  • inputs (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding python variable that can be accessed by prepending “py_” to the name in the list.

  • outputs (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding python variable that can be accessed by prepending “py_” to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.

  • sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as ‘fail’).

Raises

MethodNotDefined – The subclass does not override this method.

c_code_cache_version()[source]

Return a tuple of integers indicating the version of this Op.

An empty tuple indicates an ‘unversioned’ Op that will not be cached between processes.

The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.

See also

c_code_cache_version_apply()

c_init_code_struct(node, name, sub)[source]

Optional: return a code string, specific to this Apply node, to be inserted in the struct initialization code.

Parameters
  • node (an Apply instance in the graph being compiled) –

  • name (str) – A unique name to distinguish variables from those of other nodes.

  • sub – A dictionary of values to substitute in the code. Most notably it contains a ‘fail’ entry that you should place in your code after setting a python exception to indicate an error.

Raises

MethodNotDefined – The subclass does not override this method.

c_support_code_struct(node, name)[source]

Optional: return utility code for use by an Op that will be inserted at struct scope and can be specialized for the support of a particular Apply node.

Parameters
  • node (an Apply instance in the graph being compiled) –

  • name (str) – A unique name to distinguish your variables from those of other nodes.

Raises

MethodNotDefined – Subclass does not implement this method.

make_node(x, v, sorter=None)[source]

Create an Apply node for the inputs, in that order.

perform(node, inputs, output_storage, params)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could have been allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.

class theano.tensor.extra_ops.Unique(return_index=False, return_inverse=False, return_counts=False, axis=None)[source]

Wraps numpy.unique. This op is not implemented on the GPU.

Examples

>>> import numpy as np
>>> import theano
>>> x = theano.tensor.vector()
>>> f = theano.function([x], Unique(True, True, False)(x))
>>> f([1, 2., 3, 4, 3, 2, 1.])
[array([ 1.,  2.,  3.,  4.]), array([0, 1, 2, 3]), array([0, 1, 2, 3, 2, 1, 0])]
>>> y = theano.tensor.matrix()
>>> g = theano.function([y], Unique(True, True, False)(y))
>>> g([[1, 1, 1.0], (2, 3, 3.0)])
[array([ 1.,  2.,  3.]), array([0, 3, 4]), array([0, 0, 0, 1, 2, 2])]
make_node(x)[source]

Create an Apply node for the inputs, in that order.

perform(node, inputs, output_storage)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could have been allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.

class theano.tensor.extra_ops.UnravelIndex(order='C')[source]
make_node(indices, dims)[source]

Create an Apply node for the inputs, in that order.

perform(node, inp, out)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could have been allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.

theano.tensor.extra_ops.bartlett(M)[source]

Returns the Bartlett spectral window in the time domain. The Bartlett window is very similar to a triangular window, except that the end points are at zero. It is often used in signal processing for tapering a signal, without generating too much ripple in the frequency domain.

New in version 0.6.

Parameters

M (integer scalar) – Number of points in the output window. If zero or less, an empty vector is returned.

Returns

The triangular window, with the maximum value normalized to one (the value one appears only if the number of samples is odd), with the first and last samples equal to zero.

Return type

vector of doubles
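Since this function mirrors NumPy's window function, its numeric output can be previewed directly with np.bartlett (a sketch; the symbolic op returns a Theano vector instead of an ndarray):

```python
import numpy as np

# The Bartlett window is triangular with zero end points.
w = np.bartlett(5)
print(w)  # [0.  0.5 1.  0.5 0. ]
```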

theano.tensor.extra_ops.bincount(x, weights=None, minlength=None, assert_nonneg=False)[source]

Count the number of occurrences of each value in an array of ints.

The number of bins (of size 1) is one larger than the largest value in x. If minlength is specified, there will be at least this number of bins in the output array (though it will be longer if necessary, depending on the contents of x). Each bin gives the number of occurrences of its index value in x. If weights is specified the input array is weighted by it, i.e. if a value n is found at position i, out[n] += weight[i] instead of out[n] += 1.

Parameters
  • x (1 dimension, nonnegative ints) –

  • weights (array of the same shape as x with corresponding weights.) – Optional.

  • minlength (A minimum number of bins for the output array.) – Optional.

  • assert_nonneg (bool) – A flag that inserts an assert op to check that every value in x is nonnegative. Optional.

New in version 0.6.
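The counting and weighting semantics described above match NumPy's, so they can be previewed numerically (a sketch using np.bincount directly):

```python
import numpy as np

x = np.array([1, 1, 2, 5])
print(np.bincount(x))               # [0 2 1 0 0 1]
print(np.bincount(x, minlength=8))  # padded to at least 8 bins
# With weights, out[n] accumulates weight[i] instead of 1:
print(np.bincount([0, 1, 1], weights=[0.5, 0.25, 0.25]))  # [0.5 0.5]
```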

theano.tensor.extra_ops.broadcast_shape(*arrays, **kwargs)[source]

Compute the shape resulting from broadcasting arrays.

Parameters
  • *arrays (Tuple[TensorVariable] or Tuple[Tuple[Variable]]) – A tuple of tensors, or a tuple of shapes (as tuples), for which the broadcast shape is computed.

  • arrays_are_shapes (bool, optional) – Indicates whether or not the arrays contain shape tuples. If you use this approach, make sure that the broadcastable dimensions are given as (scalar) constants with the value 1.
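A numeric sketch of the resulting shapes, using NumPy's broadcasting machinery in place of the symbolic computation:

```python
import numpy as np

# Broadcasting aligns shapes from the right; size-1 axes stretch.
a = np.ones((5, 1))
b = np.ones((1, 3))
print(np.broadcast(a, b).shape)  # (5, 3)
```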

theano.tensor.extra_ops.compress(condition, x, axis=None)[source]

Return selected slices of an array along given axis.

It returns the input tensor, but with selected slices along a given axis retained. If no axis is provided, the tensor is flattened. Corresponds to numpy.compress.

New in version 0.7.

Parameters
  • x – Input data, tensor variable.

  • condition – 1 dimensional array of non-zero and zero values corresponding to indices of slices along a selected axis.

Returns

x with selected slices.

Return type

object
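A sketch of the slice selection with np.compress, which this function corresponds to:

```python
import numpy as np

x = np.arange(6).reshape(3, 2)
print(np.compress([1, 0, 1], x, axis=0))  # keeps rows 0 and 2
print(np.compress([1, 0, 1, 1], x))       # flattened when axis is None
```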

theano.tensor.extra_ops.cumprod(x, axis=None)[source]

Return the cumulative product of the elements along a given axis.

Wrapping of numpy.cumprod.

Parameters
  • x – Input tensor variable.

  • axis – The axis along which the cumulative product is computed. The default (None) is to compute the cumprod over the flattened array.

New in version 0.7.
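A numeric preview of the wrapped NumPy behavior:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])
print(np.cumprod(x))          # flattened: [ 1  2  6 24]
print(np.cumprod(x, axis=0))  # per column: [[1 2] [3 8]]
```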

theano.tensor.extra_ops.cumsum(x, axis=None)[source]

Return the cumulative sum of the elements along a given axis.

Wrapping of numpy.cumsum.

Parameters
  • x – Input tensor variable.

  • axis – The axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.

New in version 0.7.
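The wrapped NumPy function can be previewed directly:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])
print(np.cumsum(x))          # flattened: [ 1  3  6 10]
print(np.cumsum(x, axis=1))  # per row: [[1 3] [3 7]]
```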

theano.tensor.extra_ops.diff(x, n=1, axis=- 1)[source]

Calculate the n-th order discrete difference along given axis.

The first order difference is given by out[i] = a[i + 1] - a[i] along the given axis; higher order differences are calculated by using diff recursively. Wrapping of numpy.diff.

Parameters
  • x – Input tensor variable.

  • n – The number of times values are differenced, default is 1.

  • axis – The axis along which the difference is taken, default is the last axis.

New in version 0.6.
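A short numeric sketch of first and higher order differences via np.diff:

```python
import numpy as np

x = np.array([1, 4, 9, 16])
print(np.diff(x))       # first differences: [3 5 7]
print(np.diff(x, n=2))  # diff applied recursively: [2 2]
```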

theano.tensor.extra_ops.fill_diagonal(a, val)[source]

Returns a copy of an array with all elements of the main diagonal set to a specified scalar value.

New in version 0.6.

Parameters
  • a – Rectangular array of at least two dimensions.

  • val – Scalar value to fill the diagonal whose type must be compatible with that of array ‘a’ (i.e. ‘val’ cannot be viewed as an upcast of ‘a’).

Returns

  • array – An array identical to ‘a’ except that its main diagonal is filled with scalar ‘val’. (For an array ‘a’ with a.ndim >= 2, the main diagonal is the list of locations a[i, i, …, i], i.e. with all indices identical.)

  • Rectangular matrices are supported, as are tensors with more than two dimensions if all of their dimensions are equal.
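NumPy's fill_diagonal works in place, whereas this function returns a modified copy; copying first emulates the copy semantics (a numeric sketch):

```python
import numpy as np

a = np.zeros((3, 4))
out = a.copy()
np.fill_diagonal(out, 5.0)  # in-place on the copy
print(out)                  # (0,0), (1,1), (2,2) set to 5.0
print(a.sum())              # 0.0 -- the original is untouched
```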

theano.tensor.extra_ops.fill_diagonal_offset(a, val, offset)[source]

Returns a copy of an array with all elements of a chosen offset diagonal set to a specified scalar value.

Parameters
  • a – Rectangular array of two dimensions.

  • val – Scalar value to fill the diagonal whose type must be compatible with that of array ‘a’ (i.e. ‘val’ cannot be viewed as an upcast of ‘a’).

  • offset – Integer scalar. Offset of the diagonal from the main diagonal; can be positive or negative.

Returns

An array identical to ‘a’ except that its offset diagonal is filled with scalar ‘val’. The output is unwrapped.

Return type

array
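NumPy has no direct offset-diagonal fill, but the behavior can be sketched with explicit index arithmetic; fill_diagonal_offset_sketch is a hypothetical helper, not part of Theano:

```python
import numpy as np

def fill_diagonal_offset_sketch(a, val, offset):
    # Hypothetical helper: return a copy of the 2-D array `a` with the
    # diagonal shifted by `offset` filled with `val`.
    out = a.copy()
    m, n = out.shape
    if offset >= 0:
        k = min(m, n - offset)   # length of the offset diagonal
        rows = np.arange(k)
        cols = rows + offset
    else:
        k = min(m + offset, n)
        cols = np.arange(k)
        rows = cols - offset
    out[rows, cols] = val
    return out

a = np.zeros((3, 4))
print(fill_diagonal_offset_sketch(a, 1.0, 1))   # fills (0,1), (1,2), (2,3)
print(fill_diagonal_offset_sketch(a, 1.0, -1))  # fills (1,0), (2,1)
```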

theano.tensor.extra_ops.ravel_multi_index(multi_index, dims, mode='raise', order='C')[source]

Converts a tuple of index arrays into an array of flat indices, applying boundary modes to the multi-index.

Parameters
  • multi_index (tuple of Theano or NumPy arrays) – A tuple of integer arrays, one array for each dimension.

  • dims (tuple of ints) – The shape of array into which the indices from multi_index apply.

  • mode ({'raise', 'wrap', 'clip'}, optional) – Specifies how out-of-bounds indices are handled. Can specify either one mode or a tuple of modes, one mode per index.

      • ‘raise’ – raise an error (default)

      • ‘wrap’ – wrap around

      • ‘clip’ – clip to the range

    In ‘clip’ mode, a negative index which would normally wrap will clip to 0 instead.

  • order ({'C', 'F'}, optional) – Determines whether the multi-index should be viewed as indexing in row-major (C-style) or column-major (Fortran-style) order.

Returns

raveled_indices – An array of indices into the flattened version of an array of dimensions dims.

Return type

Theano array

See also

unravel_index()
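A numeric sketch via np.ravel_multi_index, which this function mirrors:

```python
import numpy as np

# Flatten 2-D coordinates (0,2) and (1,0) into a shape-(2, 3) array.
multi_index = (np.array([0, 1]), np.array([2, 0]))
print(np.ravel_multi_index(multi_index, (2, 3)))             # [2 3]
print(np.ravel_multi_index(multi_index, (2, 3), order='F'))  # [4 1]
```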

theano.tensor.extra_ops.repeat(x, repeats, axis=None)[source]

Repeat elements of an array.

It returns an array with the same shape as x, except along the given axis. The axis parameter specifies along which axis to repeat values. By default, the flattened input array is used, and a flat output array is returned.

The number of repetitions for each element is given by repeats, which is broadcast to fit the length of the given axis.

Parameters
  • x – Input data, tensor variable.

  • repeats – int, scalar or tensor variable

  • axis (int, optional) –

See also

tensor.tile()
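The broadcasting of repeats can be previewed with np.repeat (a numeric sketch of the symbolic behavior):

```python
import numpy as np

print(np.repeat(np.array([1, 2, 3]), 2))          # [1 1 2 2 3 3]
print(np.repeat(np.array([1, 2, 3]), [1, 0, 2]))  # per-element counts: [1 3 3]
x = np.array([[1, 2], [3, 4]])
print(np.repeat(x, 2, axis=0))                    # each row doubled
```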

theano.tensor.extra_ops.searchsorted(x, v, side='left', sorter=None)[source]

Find indices where elements should be inserted to maintain order.

Wrapping of numpy.searchsorted. Find the indices into a sorted array x such that, if the corresponding elements in v were inserted before the indices, the order of x would be preserved.

Parameters
  • x (1-D tensor (array-like)) – Input array. If sorter is None, then it must be sorted in ascending order, otherwise sorter must be an array of indices which sorts it.

  • v (tensor (array-like)) – Contains the values to be inserted into x.

  • side ({'left', 'right'}, optional.) – If ‘left’ (default), the index of the first suitable location found is given. If ‘right’, return the last such index. If there is no suitable index, return either 0 or N (where N is the length of x).

  • sorter (1-D tensor of integers (array-like), optional) – Contains indices that sort array x into ascending order. They are typically the result of argsort.

Returns

indices – Array of insertion points with the same shape as v.

Return type

tensor of integers (int64)

Notes

  • Binary search is used to find the required insertion points.

  • This Op currently works only on the CPU.

Examples

>>> from theano import tensor
>>> x = tensor.dvector()
>>> idx = x.searchsorted(3)
>>> idx.eval({x: [1,2,3,4,5]})
array(2)
>>> tensor.extra_ops.searchsorted([1,2,3,4,5], 3).eval()
array(2)
>>> tensor.extra_ops.searchsorted([1,2,3,4,5], 3, side='right').eval()
array(3)
>>> tensor.extra_ops.searchsorted([1,2,3,4,5], [-10, 10, 2, 3]).eval()
array([0, 5, 1, 2])

New in version 0.9.

theano.tensor.extra_ops.squeeze(x, axis=None)[source]

Remove broadcastable dimensions from the shape of an array.

It returns the input array, but with the broadcastable dimensions removed. This is always x itself or a view into x.

New in version 0.6.

Parameters
  • x – Input data, tensor variable.

  • axis (None or int or tuple of ints, optional) – Selects a subset of the single-dimensional entries in the shape. If an axis is selected with shape entry greater than one, an error is raised.

Returns

x without its broadcastable dimensions.

Return type

object
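A rough numeric analogue with np.squeeze (note the symbolic version removes broadcastable dimensions rather than inspecting runtime shapes):

```python
import numpy as np

x = np.ones((1, 3, 1))
print(np.squeeze(x).shape)          # all size-1 axes removed: (3,)
print(np.squeeze(x, axis=0).shape)  # only the selected axis: (3, 1)
```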

theano.tensor.extra_ops.to_one_hot(y, nb_class, dtype=None)[source]

Return a matrix where each row corresponds to the one-hot encoding of the corresponding element of y.

Parameters
  • y – A vector of integer values between 0 and nb_class - 1.

  • nb_class (int) – The number of classes in y.

  • dtype (data-type) – The dtype of the returned matrix. Defaults to floatX.

Returns

A matrix of shape (y.shape[0], nb_class), where row i is the one-hot encoding of the corresponding value y[i].

Return type

object
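A numeric sketch of the expected output; indexing into an identity matrix is one simple one-hot recipe (not necessarily how the op is implemented):

```python
import numpy as np

y = np.array([0, 2, 1])
nb_class = 3
# Row i of the result is row y[i] of the identity matrix.
one_hot = np.eye(nb_class, dtype='float32')[y]
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```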

theano.tensor.extra_ops.unravel_index(indices, dims, order='C')[source]

Converts a flat index or array of flat indices into a tuple of coordinate arrays.

Parameters
  • indices (Theano or NumPy array) – An integer array whose elements are indices into the flattened version of an array of dimensions dims.

  • dims (tuple of ints) – The shape of the array to use for unraveling indices.

  • order ({'C', 'F'}, optional) – Determines whether the indices should be viewed as indexing in row-major (C-style) or column-major (Fortran-style) order.

Returns

unraveled_coords – Each array in the tuple has the same shape as the indices array.

Return type

tuple of ndarray
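A numeric sketch via np.unravel_index, the inverse of ravel_multi_index:

```python
import numpy as np

# Flat indices 2 and 3 in a shape-(2, 3) array map back to
# coordinates (0, 2) and (1, 0).
coords = np.unravel_index(np.array([2, 3]), (2, 3))
print(coords)  # (array([0, 1]), array([2, 0]))
```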