Parameters: x - symbolic Tensor (or compatible)
Return type: same as x
Returns: element-wise sigmoid: s(x) = \frac{1}{1 + e^{-x}}
Example:
import theano.tensor as T

x, b = T.dvectors('x', 'b')
W = T.dmatrix('W')
y = T.nnet.sigmoid(T.dot(W, x) + b)
Note
The underlying code will return an exact 0 or 1 if an element of x is too small or too big.
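As an illustration of this note, here is a minimal sketch (not part of the original example; variable names and test values are illustrative only) that compiles the sigmoid graph and shows the saturation behaviour:

import numpy as np
import theano
import theano.tensor as T

x = T.dvector('x')
f = theano.function([x], T.nnet.sigmoid(x))

print(f([-1.0, 0.0, 1.0]))   # ordinary inputs stay strictly between 0 and 1
print(f([-1e6, 1e6]))        # extreme inputs saturate to exactly 0.0 and 1.0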
Parameters: x - symbolic Tensor (or compatible)
Return type: same as x
Returns: elementwise softplus: softplus(x) = \log(1 + e^{x})
Note
The underlying code will return an exact 0 if an element of x is too small.
x, b = T.dvectors('x', 'b')
W = T.dmatrix('W')
y = T.nnet.softplus(T.dot(W, x) + b)
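A minimal sketch (assumed variable names and values, not from the original docs) that evaluates a softplus graph and illustrates the exact-zero behaviour from the note:

import theano
import theano.tensor as T

x = T.dvector('x')
f = theano.function([x], T.nnet.softplus(x))

print(f([0.0, 1.0, 2.0]))   # log(1 + e**x): approximately 0.693, 1.313, 2.127
print(f([-1e6]))            # x too small: returns exactly 0.0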
Parameters: x - symbolic 2D Tensor (or compatible)
Return type: same as x
Returns: a symbolic 2D tensor whose ij-th element is \frac{e^{x_{ij}}}{\sum_k e^{x_{ik}}}
The softmax function will, when applied to a matrix, compute the softmax values row-wise.
x, b = T.dvectors('x', 'b')
W = T.dmatrix('W')
y = T.nnet.softmax(T.dot(W, x) + b)
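A minimal sketch (assumed names and values) of the row-wise behaviour: applied to a 2D tensor, softmax normalises each row independently into a probability distribution that sums to 1:

import numpy as np
import theano
import theano.tensor as T

m = T.dmatrix('m')
f = theano.function([m], T.nnet.softmax(m))

out = f(np.array([[1.0, 2.0, 3.0],
                  [1.0, 1.0, 1.0]]))
print(out)              # each row is a probability distribution
print(out.sum(axis=1))  # [1., 1.]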
Parameters: output - symbolic Tensor (or compatible) of estimated probabilities; target - symbolic Tensor (or compatible) of target values
Return type: same as target
Returns: a symbolic tensor, where the following is applied elementwise: crossentropy(t, o) = -(t \cdot \log(o) + (1 - t) \cdot \log(1 - o))
The following block implements a simple auto-associator with a sigmoid nonlinearity and a reconstruction error which corresponds to the binary cross-entropy (note that this assumes that x will contain values between 0 and 1):
x, b, c = T.dvectors('x', 'b', 'c')
W = T.dmatrix('W')
V = T.dmatrix('V')
h = T.nnet.sigmoid(T.dot(W, x) + b)          # hidden representation
x_recons = T.nnet.sigmoid(T.dot(V, h) + c)   # reconstruction of the input
recon_cost = T.nnet.binary_crossentropy(x_recons, x).mean()
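A minimal numerical check (assumed names and values, not part of the original example) that binary_crossentropy applies the elementwise formula given above:

import numpy as np
import theano
import theano.tensor as T

o, t = T.dvectors('o', 't')
f = theano.function([o, t], T.nnet.binary_crossentropy(o, t))

output = np.array([0.9, 0.2, 0.5])
target = np.array([1.0, 0.0, 1.0])
print(f(output, target))
print(-(target * np.log(output) + (1 - target) * np.log(1 - output)))  # same values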
Return the cross-entropy between an approximating distribution and a true distribution. The cross-entropy between two probability distributions measures the average number of bits needed to identify an event from a set of possibilities, if a coding scheme is used based on a given probability distribution q rather than the "true" distribution p. Mathematically, this function computes H(p, q) = -\sum_x p(x) \log(q(x)), where p = true_dist and q = coding_dist.
Parameters: coding_dist - symbolic 2D tensor (or compatible) where each row is a distribution; true_dist - symbolic 2D tensor (or compatible) of true distributions, or a symbolic vector of integer class indices (1-of-N representation)
Return type: tensor of rank one less than coding_dist
Note
An application of the scenario where true_dist has a 1-of-N representation is classification with softmax outputs. If coding_dist is the output of the softmax and true_dist is a vector of correct class labels, then the function computes y_i = - \log(coding_dist[i, one_of_n[i]]), i.e. the negative log-probability of the correct class, which is typically the training criterion in classification settings.
y = T.nnet.softmax(T.dot(W, x) + b)
cost = T.nnet.categorical_crossentropy(y, o)
# o is either the above-mentioned 1-of-N vector of class indices or a 2D tensor of true distributions
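A minimal sketch (assumed names and values) of the 1-of-N case from the note: with a matrix of softmax outputs and a vector of integer labels, categorical_crossentropy returns the negative log-probability assigned to the correct class of each row:

import numpy as np
import theano
import theano.tensor as T

coding = T.dmatrix('coding')   # one probability distribution per row
labels = T.lvector('labels')   # one integer class index per row (1-of-N)
f = theano.function([coding, labels],
                    T.nnet.categorical_crossentropy(coding, labels))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
y = np.array([0, 1])
print(f(probs, y))                       # [-log(0.7), -log(0.8)]
print(-np.log(probs[np.arange(2), y]))   # same values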