The sparse submodule is not loaded when we import PyTensor. You must
import pytensor.sparse to enable it.
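For example:

>>> import pytensor
>>> import pytensor.sparse  # after this, the sparse ops are available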
The sparse module provides the same functionality as the tensor
module. The difference lies under the covers because sparse matrices
do not store data in a contiguous array. The sparse module has
been used in:
NLP: Dense linear transformations of sparse vectors.
This section tries to explain how information is stored for the two
sparse formats of SciPy supported by PyTensor.
PyTensor supports two compressed sparse formats, csc and csr,
based on columns and rows respectively. Both have the same
attributes: data, indices, indptr and shape.
The data attribute is a one-dimensional ndarray which
contains all the non-zero elements of the sparse matrix.
The indices and indptr attributes are used to store the
position of the data in the sparse matrix.
The shape attribute is exactly the same as the shape
attribute of a dense (i.e. generic) matrix. It can be explicitly
specified at the creation of a sparse matrix if it cannot be
inferred from the first three attributes.
In the Compressed Sparse Column format, indices holds the row
indexes of the non-zero elements within each column, and indptr tells
where each column starts in the data and in the indices
attributes. indptr can be thought of as giving the slice which
must be applied to the other two attributes in order to get each column of
the matrix. In other words, slice(indptr[i], indptr[i+1])
corresponds to the slice needed to find the i-th column of the matrix
in the data and indices fields.
The following example builds a matrix and retrieves its columns. It
prints the i-th column as a list of row indices in the column together
with their corresponding values in a second list.
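The sketch below uses SciPy directly (PyTensor sparse variables hold
SciPy sparse matrices); the values are made up for illustration:

>>> import numpy as np
>>> import scipy.sparse as sp
>>> data = np.asarray([7, 8, 9])
>>> indices = np.asarray([0, 1, 2])
>>> indptr = np.asarray([0, 2, 3, 3])
>>> m = sp.csc_matrix((data, indices, indptr), shape=(3, 3))
>>> m.toarray()
array([[7, 0, 0],
       [8, 0, 0],
       [0, 9, 0]])
>>> for i in range(3):
...     sl = slice(m.indptr[i], m.indptr[i + 1])
...     print(i, m.indices[sl], m.data[sl])  # row indices, then values
0 [0 1] [7 8]
1 [2] [9]
2 [] []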
In the Compressed Sparse Row format, indices holds the column
indexes of the non-zero elements within each row, and indptr tells where each
row starts in the data and in the indices
attributes. indptr can be thought of as giving the slice which
must be applied to the other two attributes in order to get each row of the
matrix. In other words, slice(indptr[i], indptr[i+1]) corresponds
to the slice needed to find the i-th row of the matrix in the data
and indices fields.
The following example builds a matrix and retrieves its rows. It prints
the i-th row as a list of column indices in the row together with their
corresponding values in a second list.
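Reusing the same three vectors but interpreting them in CSR format,
indptr now delimits rows and indices holds column positions:

>>> import numpy as np
>>> import scipy.sparse as sp
>>> data = np.asarray([7, 8, 9])
>>> indices = np.asarray([0, 1, 2])
>>> indptr = np.asarray([0, 2, 3, 3])
>>> m = sp.csr_matrix((data, indices, indptr), shape=(3, 3))
>>> m.toarray()
array([[7, 8, 0],
       [0, 0, 9],
       [0, 0, 0]])
>>> for i in range(3):
...     sl = slice(m.indptr[i], m.indptr[i + 1])
...     print(i, m.indices[sl], m.data[sl])  # column indices, then values
0 [0 1] [7 8]
1 [2] [9]
2 [] []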
construct_sparse_from_list.
The grad implemented is regular.
Cast
cast with bcast, wcast, icast, lcast,
fcast, dcast, ccast, and zcast.
The grad implemented is regular.
Transpose
transpose.
The grad implemented is regular.
Basic Arithmetic
neg.
The grad implemented is regular.
eq.
neq.
gt.
ge.
lt.
le.
add.
The grad implemented is regular.
sub.
The grad implemented is regular.
mul.
The grad implemented is regular.
col_scale to multiply by a vector along the columns.
The grad implemented is structured.
row_scale to multiply by a vector along the rows.
The grad implemented is structured.
Monoid (Element-wise operation with only one sparse input).
They all have a structured grad.
structured_sigmoid
structured_exp
structured_log
structured_pow
structured_minimum
structured_maximum
structured_add
sin
arcsin
tan
arctan
sinh
arcsinh
tanh
arctanh
rad2deg
deg2rad
rint
ceil
floor
trunc
sign
log1p
expm1
sqr
sqrt
Dot Product
dot.
One of the inputs must be sparse, the other sparse or dense.
The grad implemented is regular.
No C code for perform and no C code for grad.
Returns a dense output from perform and a dense grad.
structured_dot.
The first input is sparse, the second can be sparse or dense.
The grad implemented is structured.
C code for perform and grad.
It returns a sparse output if both inputs are sparse, and a
dense one if one of the inputs is dense.
Returns a sparse grad for sparse inputs and a dense grad for
dense inputs.
true_dot.
The first input is sparse, the second can be sparse or dense.
The grad implemented is regular.
No C code for perform and no C code for grad.
Returns a sparse output.
The gradient is sparse for sparse inputs and, by
default, dense for dense inputs. The parameter
grad_preserves_dense can be set to False to return a
sparse grad for dense inputs.
sampling_dot.
The first two inputs must be dense; the third, p, is a sparse
matrix giving the sampling pattern.
The grad implemented is structured for p: both the dot product
and its gradient are computed only at the non-zero positions of p.
C code for perform but not for grad.
Returns a sparse output for both perform and grad.
usmm.
You shouldn’t insert this Op yourself!
A rewrite transforms a dot into Usmm when possible.
This Op is the equivalent of gemm for sparse dot.
There is no grad implemented for this Op.
One of the inputs must be sparse, the other sparse or dense.
Returns a dense output from perform.
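A minimal usage sketch, assuming the Theano-era symbolic constructor
csc_matrix and the dot/structured_dot functions are exposed unchanged
by pytensor.sparse:

import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.tensor as pt
import pytensor.sparse as sparse

# Symbolic sparse (CSC) and dense inputs.
x = sparse.csc_matrix(name="x", dtype="float64")
y = pt.matrix("y")

z = sparse.dot(x, y)             # regular grad; dense output
w = sparse.structured_dot(x, y)  # structured grad; dense output since y is dense

f = pytensor.function([x, y], [z, w])
zv, wv = f(sp.csc_matrix(np.eye(3)), np.arange(9.0).reshape(3, 3))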
Slice Operations
sparse_variable[N, N], returns a tensor scalar.
There is no grad implemented for this operation.
sparse_variable[M:N, O:P], returns a sparse matrix
There is no grad implemented for this operation.
Sparse variables don’t support [M, N:O] and [M:N, O] as we don’t
support sparse vectors and returning a sparse matrix would break
the numpy interface. Use [M:M+1, N:O] and [M:N, O:O+1] instead.
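A short sketch of the supported and unsupported indexing patterns (the
variable x is hypothetical, built with the constructor assumed above):

import pytensor.sparse as sparse

x = sparse.csc_matrix(name="x", dtype="float64")

s = x[0, 0]      # tensor scalar; no grad implemented
m = x[1:3, 0:2]  # sparse matrix; no grad implemented
# x[1:3, 0] and x[0, 1:3] raise an error; take width-one
# slices instead to keep a (sparse) matrix result:
c = x[1:3, 0:1]  # single-column sparse matrix
r = x[0:1, 1:3]  # single-row sparse matrix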
Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that
input computed based on the symbolic gradients with respect to each
output. If the output is not differentiable with respect to an input,
then this method should return an instance of type NullType for that
input.
Using the reverse-mode AD characterization given in [1]_, for a
\(C = f(A, B)\) representing the function implemented by the Op
and its two arguments \(A\) and \(B\), given by the
Variables in inputs, the values returned by Op.grad represent
the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and
\(\bar{B}\), for some scalar output term \(S_O\) of \(C\), in

\[\operatorname{Tr}\left(\bar{C}^\top \mathrm{d}C\right) = \operatorname{Tr}\left(\bar{A}^\top \mathrm{d}A\right) + \operatorname{Tr}\left(\bar{B}^\top \mathrm{d}B\right)\]
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
node – The symbolic Apply node that represents this computation.
inputs – Immutable sequence of non-symbolic/numeric inputs. These
are the values of each Variable in node.inputs.
output_storage – List of mutable single-element lists (do not change the length of
these lists). Each sub-list corresponds to the value of one
Variable in node.outputs. The primary purpose of this method
is to set the values of these sub-lists.
Notes
The output_storage list might contain data. If an element of
output_storage is not None, it has to be of the right type, for
instance, for a TensorVariable, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype.
Its shape and stride pattern can be arbitrary. It is not
guaranteed that such pre-set values were produced by a previous call to
this Op.perform(); they could’ve been allocated by another
Op’s perform method.
An Op is free to reuse output_storage as it sees fit, or to
discard it and allocate new memory.
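As a minimal sketch of the grad and perform contracts described above
(a toy Op that doubles its input; the class and its name are
illustrative, not part of pytensor.sparse):

import pytensor.tensor as pt
from pytensor.graph.basic import Apply
from pytensor.graph.op import Op

class DoubleOp(Op):
    """Toy Op computing 2 * x."""

    def make_node(self, x):
        x = pt.as_tensor_variable(x)
        return Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        (x,) = inputs
        # Write the result into the single-element storage list;
        # pre-set contents, if any, may simply be overwritten.
        output_storage[0][0] = 2 * x

    def grad(self, inputs, output_grads):
        # With C = 2A, the trace identity above gives A_bar = 2 * C_bar.
        (g,) = output_grads
        return [2 * g]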
CSM creates a matrix from data, indices, and indptr vectors; its gradient
is the gradient of the data vector only. Two complexities arise when
calculating this gradient:
1. The gradient may be sparser than the input matrix defined by (data,
indices, indptr). In this case, the data vector of the gradient will have
fewer elements than the data vector of the input, because sparse formats
remove zeros. Since we only return the gradient of the data vector,
the relevant zeros need to be added back.
2. The elements in the sparse dimension are not guaranteed to be sorted.
Therefore, the input data vector may have a different order than the
gradient data vector.
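A sketch of the round trip between a sparse variable and its attribute
vectors, assuming the Theano-era helpers csm_properties and CSM are
exposed unchanged by pytensor.sparse:

import pytensor
import pytensor.sparse as sparse

x = sparse.csc_matrix(name="x", dtype="float64")

# Split a sparse variable into its four attributes...
data, indices, indptr, shape = sparse.csm_properties(x)
# ...and rebuild it with CSM, here scaling the stored values.
# Only `data` carries a gradient, per the notes above.
y = sparse.CSM("csc")(data * 2, indices, indptr, shape)

f = pytensor.function([x], y)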
The grad implemented is regular, i.e. not structured.
The infer_shape method is not available for this Op.
We won’t implement infer_shape for this Op for now. Doing so
would require implementing a GetNNZ Op, which would keep a
dependence on this Op’s input, so it would not help remove
computations from the graph. To remove computations we would
need an infer_sparse_pattern feature, which is trickier than
infer_shape: for example, how do we handle an Op that creates
some zero values, so that there is a dependence on the values
themselves? We could write an infer_shape for the last output,
which is the shape, but I doubt it would get used.
We don’t return a view of the shape, we create a new ndarray from the shape
tuple.
eval_points – A Variable or list of Variables with the same length as inputs.
Each element of eval_points specifies the value of the corresponding
input at the point where the R-operator is to be evaluated.
Return type:
rval[i] should be Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points).
The grad implementation can be controlled through the constructor via the
structured parameter. True will provide a structured grad while False
will provide a regular grad. By default, the grad is structured.
CSR column indices are not necessarily sorted. Likewise
for CSC row indices. Use ensure_sorted_indices when sorted
indices are required (e.g. when passing data to other
libraries).
Notes
The grad implemented is regular, i.e. not structured.
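For example, assuming ensure_sorted_indices is exposed by
pytensor.sparse as it was in Theano:

import pytensor
import pytensor.sparse as sparse

x = sparse.csr_matrix(name="x", dtype="float64")
y = sparse.ensure_sorted_indices(x)  # same values, indices sorted
f = pytensor.function([x], y)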
Implements a subtensor of a sparse variable, returning a sparse matrix.
If you want to take only one element of a sparse matrix, see
GetItemScalar, which returns a tensor scalar.
Notes
Subtensor selection always returns a matrix, so indexing with [a:b, c:d]
is forced. If one index is a scalar, for instance x[a:b, c] or x[a, b:c],
an error will be raised. Use x[a:b, c:c+1] or x[a:a+1, b:c] instead.
These indexing patterns are not supported because the return value
would be a sparse matrix rather than a sparse vector, which would
deviate from NumPy’s indexing rules. This decision was made largely
to preserve consistency between NumPy and PyTensor. It may be revised
when sparse vectors are supported.
Remove explicit zeros from a sparse matrix, and re-sort indices.
CSR column indices are not necessarily sorted. Likewise
for CSC row indices. Use clean when sorted
indices are required (e.g. when passing data to other
libraries) and to ensure there are no zeros in the data.
Parameters:
x – A sparse matrix.
Returns:
The same as x with indices sorted and zeros
removed.
Return type:
A sparse matrix
Notes
The grad implemented is regular, i.e. not structured.
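A short usage sketch (the symbolic constructor csc_matrix is assumed,
as above):

import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse

x = sparse.csc_matrix(name="x", dtype="float64")
f = pytensor.function([x], sparse.clean(x))

m = sp.csc_matrix(np.array([[0.0, 1.0], [0.0, 2.0]]))
m.data[0] = 0.0  # introduce an explicit zero
out = f(m)       # explicit zero removed, indices sorted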