Introduced by Tibshirani (2005), the fused lasso encourages sparsity of the coefficients and also sparsity of their differences.
(article)
[Screenshot of the formula]
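The screenshot itself did not survive; what follows is a reconstruction of the fused lasso criterion as it appears in Tibshirani et al. (2005), where $s_1$ and $s_2$ are the two sparsity budgets:

```latex
\hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{n} \Big( y_i - \sum_{j=1}^{p} x_{ij}\beta_j \Big)^2
\quad \text{subject to} \quad
\sum_{j=1}^{p} |\beta_j| \le s_1
\quad \text{and} \quad
\sum_{j=2}^{p} |\beta_j - \beta_{j-1}| \le s_2 .
```

In the equivalent Lagrangian (penalized) form, the two constraints become two penalty terms $\lambda_1 \sum_j |\beta_j| + \lambda_2 \sum_j |\beta_j - \beta_{j-1}|$ added to the squared-error loss.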
How can I imitate this with the cvxpy package?
There are example codes for the Lasso and Ridge penalties separately. I get a general idea of how the codes work, but for some functions I don't understand how I'm supposed to decide which one to use. For example, let's compare the Lasso and Ridge penalty codes.
# Lasso
import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plt

def loss_fn(X, Y, beta):
    return cp.norm2(cp.matmul(X, beta) - Y)**2

def regularizer(beta):
    return cp.norm1(beta)

def objective_fn(X, Y, beta, lambd):
    return loss_fn(X, Y, beta) + lambd * regularizer(beta)

def mse(X, Y, beta):
    return (1.0 / X.shape[0]) * loss_fn(X, Y, beta).value
# Ridge
import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plt

def loss_fn(X, Y, beta):
    return cp.pnorm(cp.matmul(X, beta) - Y, p=2)**2

def regularizer(beta):
    return cp.pnorm(beta, p=2)**2

def objective_fn(X, Y, beta, lambd):
    return loss_fn(X, Y, beta) + lambd * regularizer(beta)

def mse(X, Y, beta):
    return (1.0 / X.shape[0]) * loss_fn(X, Y, beta).value
For both of them we create a loss function, but they are created with different commands. How should I decide which one to use?