Problems/50_lasso_regression_gradient_descent/learn.html
+1 −1 (1 addition, 1 deletion)
@@ -20,7 +20,7 @@ <h3>2. Make Predictions at each step using the formula \[ \hat{y}_i = \sum_{j=1}
 <h3>3. Find the residuals (Difference between the actual y values and the predicted ones) </h3>
 <h3>4. Update the weights and bias using the formula </p>
-<p> First, find the gradient with respect to weights $w$ using the formula \[ \frac{\partial J}{\partial w_j} = \frac{1}{n} \sum_{i=1}^nX_{ij}(y_i - \hat{y}_i) + \alpha \cdot sign(w_j) \] </h3>
+<p> First, find the gradient with respect to weights $w$ using the formula \[ \frac{\partial J}{\partial w_j} = \frac{1}{n} \sum_{i=1}^nX_{ij}(\hat{y}_i - y_i) + \alpha \cdot sign(w_j) \] </h3>
 <p> Then, we need to find the gradient with respect to the bias $b$. Since the bias term $b$ does not have a regularization component (since Lasso regularization is applied only to the weights $w_j$), the gradient with respect to $b$ is just the partial derivative of the Mean Squared Error (MSE) loss function with respect to $b$ \[ \frac{\partial J(w,b)}{\partial b} = \frac{1}{n} \sum_{i = 1}^{n}(\hat{y}_i-y_i)\]</p>
 <p> Next, we update the weights and bias using the formula \[ w_j = w_j - \eta \cdot \frac{\partial J}{\partial w_j} \] \[ b = b - \eta \cdot \frac{\partial J}{\partial b} \] Where eta is the learning rate defined in the beginning of the function</p>
 <h3> 5. Repeat steps 2-4 until the weights converge. This is determined by evaulating the L1 Norm of the weight gradients </h3>
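The diff corrects the sign inside the weight gradient: with predictions $\hat{y}_i$ and targets $y_i$, the gradient of the MSE-plus-L1 objective is $\frac{1}{n}\sum_i X_{ij}(\hat{y}_i - y_i) + \alpha \cdot \mathrm{sign}(w_j)$. Below is a minimal sketch of the full loop the page describes (predict, compute residuals, update weights and bias, stop on the L1 norm of the weight gradient). It assumes NumPy, a feature matrix `X` of shape `(n, d)`, targets `y`, and illustrative parameter names and defaults; it is not the repository's reference solution.

```python
import numpy as np

def lasso_gradient_descent(X, y, alpha=0.1, eta=0.01, max_iter=1000, tol=1e-4):
    """Sketch of Lasso regression fit by (sub)gradient descent.

    Parameter names and defaults are assumptions for illustration.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0

    for _ in range(max_iter):
        # Step 2: predictions  y_hat_i = sum_j X_ij * w_j + b
        y_hat = X @ w + b

        # Step 3: residuals (predicted minus actual, matching the corrected sign)
        residuals = y_hat - y

        # Step 4: gradients; the L1 penalty adds alpha * sign(w_j) to the
        # weight gradient only -- the bias is not regularized.
        grad_w = (X.T @ residuals) / n + alpha * np.sign(w)
        grad_b = residuals.mean()

        w -= eta * grad_w
        b -= eta * grad_b

        # Step 5: stop once the L1 norm of the weight gradient is small enough
        if np.linalg.norm(grad_w, ord=1) < tol:
            break

    return w, b
```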