Zhu Lin's webpage
Adjust the parameters and visualize the optimization process in real time.
Formula: ŷ = sigmoid(XW + b)
where sigmoid(z) = 1 / (1 + e^(-z))
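The sigmoid above can be sketched as a small Python helper; the piecewise form is an implementation detail added here to avoid overflow for large negative inputs, not something the page specifies.

```python
import math

def sigmoid(z):
    # sigmoid(z) = 1 / (1 + e^(-z))
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    # For very negative z, exp(-z) overflows; use the
    # algebraically equivalent form e^z / (1 + e^z) instead.
    ez = math.exp(z)
    return ez / (1.0 + ez)
```

As expected, sigmoid(0) = 0.5, and the output approaches 1 for large positive z and 0 for large negative z.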
Loss = - (1 / n) Σ [ y * log(ŷ) + (1 - y) * log(1 - ŷ) ]
where n is the number of data points.
Note: the negative sign converts maximizing the log-likelihood into minimizing a loss.
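A minimal sketch of the loss computation, assuming labels y ∈ {0, 1} and predictions ŷ ∈ (0, 1); the small eps clamp is an added safeguard against log(0), not part of the formula itself.

```python
import math

def cross_entropy(y, y_hat, eps=1e-12):
    # Mean binary cross-entropy:
    # -(1/n) * sum(y*log(ŷ) + (1-y)*log(1-ŷ))
    n = len(y)
    total = 0.0
    for yi, pi in zip(y, y_hat):
        pi = min(max(pi, eps), 1.0 - eps)  # guard log(0)
        total += yi * math.log(pi) + (1.0 - yi) * math.log(1.0 - pi)
    return -total / n
```

Confident predictions on correct labels drive the loss toward 0, while confident mistakes make it large.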
Update W: W = W - α * (1 / n) Σ (ŷ - y) * X
Update b: b = b - α * (1 / n) Σ (ŷ - y)
where α is the learning rate.
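The update rules above can be combined into a training loop. This is a sketch for one-dimensional inputs (scalar W), with the learning rate and epoch count chosen arbitrarily for illustration; the demo itself may use different settings.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, alpha=0.5, epochs=1000):
    # X: list of scalar features, y: list of 0/1 labels.
    W, b = 0.0, 0.0
    n = len(X)
    for _ in range(epochs):
        preds = [sigmoid(W * x + b) for x in X]
        # Gradients of the mean cross-entropy loss:
        # dL/dW = (1/n) * sum((ŷ - y) * x),  dL/db = (1/n) * sum(ŷ - y)
        grad_w = sum((p - yi) * x for p, yi, x in zip(preds, y, X)) / n
        grad_b = sum(p - yi for p, yi in zip(preds, y)) / n
        W -= alpha * grad_w
        b -= alpha * grad_b
    return W, b
```

On a small linearly separable dataset, the learned W and b push predictions on the positive side above 0.5 and those on the negative side below it.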
[Interactive demo panels: Trained Parameters (Weight W, Bias b), Cross-Entropy Loss curve, Weight Vector (W, b) trajectory]