Basic losses used in keratorch

MSE loss

import torch
import matplotlib.pyplot as plt

# Plot the MSE loss for errors in [-3, 3) against a zero target
e = torch.arange(-3, 3, 0.1)
plt.plot(e, [mse_loss(e_, torch.zeros_like(e_)) for e_ in e]);
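The body of keratorch's `mse_loss` is not shown on this page; a minimal sketch, assuming it computes the standard mean squared error, would look like:

```python
import torch

def mse_loss(input, target):
    # Mean squared error: average of the squared differences
    # (a sketch of the assumed behaviour, not keratorch's actual code)
    return torch.mean((input - target) ** 2)
```

An error of 2 against a zero target gives a loss of 4, matching the parabola in the plot above.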

MAE Loss

# Plot the MAE loss for errors in [-3, 3) against a zero target
e = torch.arange(-3, 3, 0.1)
plt.plot(e, [mae(e_, torch.zeros_like(e_)) for e_ in e]);
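As with `mse_loss`, the implementation of `mae` is not shown here; a minimal sketch, assuming it is the standard mean absolute error:

```python
import torch

def mae(input, target):
    # Mean absolute error: average of |input - target|
    # (a sketch of the assumed behaviour, not keratorch's actual code)
    return torch.mean(torch.abs(input - target))
```

Unlike MSE, the loss grows linearly with the error, producing the V-shaped plot above.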

Binary Crossentropy Loss

# Plot BCE loss against the predicted probability for a zero target
e = torch.arange(-3, 3, 0.1)
print('Red dot shows the loss when probability is zero and the true target is zero.')
plt.plot(torch.sigmoid(e), [bce(torch.sigmoid(e_), torch.zeros_like(e_)) for e_ in e])
plt.scatter(0, 0, c='r')
plt.xlabel('calculated probability')
plt.ylabel('loss')
plt.show()
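The `bce` call above matches the standard binary cross-entropy formula; a hedged sketch (the name comes from the plot code, the body is an assumption):

```python
import torch

def bce(input, target):
    # Binary cross-entropy: -[t*log(p) + (1-t)*log(1-p)]
    # (a sketch of the assumed behaviour, not keratorch's actual code)
    eps = 1e-7  # clamp probabilities to avoid log(0)
    p = input.clamp(eps, 1 - eps)
    return -(target * torch.log(p) + (1 - target) * torch.log(1 - p)).mean()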

Cross Entropy Loss

Cross entropy loss for multi class problems.

Cross Entropy for Softmax

ce4softmax[source]

ce4softmax(input, target)