The sgs R package fits sparse-group SLOPE (SGS) and group SLOPE (gSLOPE) models. The package has implementations for linear and logistic regression, both of which are demonstrated here. It also uses strong screening rules to reduce computation time, described in detail in F. Feser, M. Evangelou (2024) “Strong Screening Rules for Group-based SLOPE Models”. Screening rules are applied by default here; their impact is demonstrated in the Screening section at the end.
Sparse-group SLOPE (SGS) is a penalised regression approach that performs bi-level selection with FDR control under orthogonal designs. SGS is described in detail in F. Feser, M. Evangelou (2023) “Sparse-group SLOPE: adaptive bi-level selection with FDR-control”.
For this example, a \(400 \times 500\) input matrix is used with a simple grouping structure, sampled from a multivariate Gaussian distribution with no correlation.
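The data-generating code is not shown here; below is a minimal sketch consistent with the objects used later (data$X, data$y, data$true_beta, data$true_var_id, data$true_grp_id, and groups), assuming 100 groups of 5 variables with 10 active groups containing 28 active variables (figures implied by the selection metrics further down). The seed and coefficient magnitudes are illustrative.
n = 400
p = 500
groups = rep(1:100, each = 5)  # simple grouping structure: 100 groups of 5
set.seed(3)  # illustrative seed
X = matrix(rnorm(n * p), nrow = n, ncol = p)  # Gaussian design, no correlation
true_grp_id = sort(sample(1:100, 10))  # active groups
true_var_id = sort(sample(which(groups %in% true_grp_id), 28))  # active variables
true_beta = numeric(p)
true_beta[true_var_id] = rnorm(28, sd = 2)  # illustrative signal strengths
y = X %*% true_beta + rnorm(n)  # linear response with Gaussian noise
data = list(X = X, y = y, true_beta = true_beta,
    true_var_id = true_var_id, true_grp_id = true_grp_id)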
We now fit an SGS model to the data using linear regression. The SGS model has many hyperparameters which can be tuned. Of particular importance is the \(\lambda\) parameter, which defines the level of sparsity in the model. First, we select it manually; afterwards, we use cross-validation to tune it. The other parameters are left at their default values, although they can easily be changed.
model = fit_sgs(X = data$X, y = data$y, groups = groups, type = "linear",
lambda = 0.5, alpha = 0.95, vFDR = 0.1, gFDR = 0.1, standardise = "l2",
intercept = TRUE, verbose = FALSE, screen = TRUE)
Note: we have fit an intercept and applied \(\ell_2\) standardisation. This is the recommended usage when applying SGS with linear regression. The lambda values can also be calculated automatically, starting at the null model and continuing along a path whose length and end point are set by path_length and min_frac:
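For example (a sketch only; the argument values shown are illustrative):
model_path = fit_sgs(X = data$X, y = data$y, groups = groups, type = "linear",
    path_length = 20, min_frac = 0.05, alpha = 0.95, vFDR = 0.1, gFDR = 0.1,
    standardise = "l2", intercept = TRUE, verbose = FALSE, screen = TRUE)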
The package provides several useful outputs after fitting a model. The fitted coefficient values are shown below (note the intercept). We can also recover the indices of the non-zero variables and groups, which are indexed from the first variable, not the intercept.
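A sketch of the accessors involved (selected_var and selected_grp match those used on the CV fit later; beta and group_effects are assumed names for the coefficient and group effect vectors):
model$beta[model$beta != 0]  # non-zero fitted values
model$group_effects[model$group_effects != 0]  # non-zero group effects
model$selected_var  # indices of the non-zero variables
model$selected_grp  # indices of the non-zero groups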
## [1] -1.6734720 -1.4893849 3.1678789 1.6350236 5.0369209 -1.4619740
## [7] 1.2779414 -0.9364065 4.9809334 -0.6548784 -1.4470782 -2.2343330
## [13] 1.6188817 -1.6665858 -0.7298015
## [1] 3.879978 5.295647 1.461974 1.277941 5.068190 1.588364 2.234333 2.435343
## [1] 97 99 100 133 136 170 217 231 234 260 263 334 391 393 394
## [1] 30 39 46 56 59 64 76 85
Defining a function that lets us calculate various metrics (including the FDR and sensitivity):
fdr_sensitivity = function(fitted_ids, true_ids, num_coef) {
    # calculates FDR, FPR, sensitivity, and F1 score
    num_true = length(intersect(fitted_ids, true_ids))
    num_false = length(fitted_ids) - num_true
    num_missed = length(true_ids) - num_true
    num_true_negatives = num_coef - length(true_ids)
    out = list()
    out$fdr = num_false/(num_true + num_false)
    if (is.nan(out$fdr)) {
        out$fdr = 0  # no selections made
    }
    out$sensitivity = num_true/length(true_ids)
    if (length(true_ids) == 0) {
        out$sensitivity = 1  # nothing to detect
    }
    out$fpr = num_false/num_true_negatives
    out$f1 = (2 * num_true)/(2 * num_true + num_false + num_missed)
    if (is.nan(out$f1)) {
        out$f1 = 1  # empty selection and empty truth
    }
    return(out)
}
Calculating the relevant metrics for the selected variables gives
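(A sketch of the call, assuming the fit stores the selected indices as model$selected_var, matching the CV accessor used later.)
fdr_sensitivity(fitted_ids = model$selected_var, true_ids = data$true_var_id,
    num_coef = 500)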
## $fdr
## [1] 0
##
## $sensitivity
## [1] 0.5357143
##
## $fpr
## [1] 0
##
## $f1
## [1] 0.6976744
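and similarly for the selected groups (again a sketch, via model$selected_grp):
fdr_sensitivity(fitted_ids = model$selected_grp, true_ids = data$true_grp_id,
    num_coef = 100)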
## $fdr
## [1] 0
##
## $sensitivity
## [1] 0.8
##
## $fpr
## [1] 0
##
## $f1
## [1] 0.8888889
The model is currently too sparse, as our choice of \(\lambda\) is too high. We can instead use cross-validation.
Cross-validation is used to fit SGS models along a \(\lambda\) path of length \(20\). The first value, \(\lambda_\text{max}\), is chosen to give the null model and the path is terminated at \(\lambda_\text{min} = \delta \lambda_\text{max}\), where \(\delta\) is some value between \(0\) and \(1\) (given by min_frac in the function). The 1se rule (as in the glmnet package) is used to choose the optimal model.
cv_model = fit_sgs_cv(X = data$X, y = data$y, groups = groups,
type = "linear", path_length = 20, nfolds = 10, alpha = 0.95,
vFDR = 0.1, gFDR = 0.1, min_frac = 0.05, standardise = "l2",
intercept = TRUE, verbose = TRUE, screen = TRUE)
## [1] "Fold 1/10 done. Error: 3593.46913744139"
## [1] "Fold 2/10 done. Error: 2399.84476769036"
## [1] "Fold 3/10 done. Error: 3250.66880473304"
## [1] "Fold 4/10 done. Error: 3269.74553706449"
## [1] "Fold 5/10 done. Error: 3191.93471414357"
## [1] "Fold 6/10 done. Error: 2779.0603182328"
## [1] "Fold 7/10 done. Error: 3236.41489481627"
## [1] "Fold 8/10 done. Error: 4483.21248839556"
## [1] "Fold 9/10 done. Error: 3454.59020729246"
## [1] "Fold 10/10 done. Error: 3716.7940975706"
The verbose output shows the error for each fold as fitting progresses. A more succinct summary is available by printing the fitted object:
##
## regression type: linear
##
## lambda error num.nonzero
## [1,] 1.8885960 9524.4889 0
## [2,] 1.6131093 9277.9900 1
## [3,] 1.3778075 8544.5941 2
## [4,] 1.1768288 7690.9591 3
## [5,] 1.0051665 6921.4515 4
## [6,] 0.8585444 5766.8107 14
## [7,] 0.7333098 4444.6505 15
## [8,] 0.6263430 3418.5571 15
## [9,] 0.5349793 2663.9599 15
## [10,] 0.4569427 2091.6795 16
## [11,] 0.3902891 1626.3300 22
## [12,] 0.3333582 1232.3194 22
## [13,] 0.2847318 931.3590 22
## [14,] 0.2431984 710.5062 22
## [15,] 0.2077234 547.4188 24
## [16,] 0.1774231 422.2216 24
## [17,] 0.1515426 326.3720 26
## [18,] 0.1294373 252.6180 26
## [19,] 0.1105565 198.4320 27
## [20,] 0.0944298 158.7516 27
The best model is found to be the one at the end of the path:
## [1] 20
Checking the metrics again, we see how CV has generated a model with the correct amount of sparsity that gives FDR levels below the specified values.
fdr_sensitivity(fitted_ids = cv_model$fit$selected_var, true_ids = data$true_var_id,
num_coef = 500)
## $fdr
## [1] 0.03703704
##
## $sensitivity
## [1] 0.9285714
##
## $fpr
## [1] 0.002118644
##
## $f1
## [1] 0.9454545
fdr_sensitivity(fitted_ids = cv_model$fit$selected_grp, true_ids = data$true_grp_id,
num_coef = 100)
## $fdr
## [1] 0.09090909
##
## $sensitivity
## [1] 1
##
## $fpr
## [1] 0.01111111
##
## $f1
## [1] 0.952381
As mentioned, the package can also be used to fit SGS to a binary response. First, we generate some binary data, reusing the same input matrix, \(X\), and true \(\beta\) as before. We split the data into training and test sets to assess the model's classification performance.
sigmoid = function(x) {
1/(1 + exp(-x))
}
y = ifelse(sigmoid(data$X %*% data$true_beta + rnorm(400)) >
0.5, 1, 0)
train_y = y[1:350]
test_y = y[351:400]
train_X = data$X[1:350, ]
test_X = data$X[351:400, ]
We can again apply CV.
cv_model = fit_sgs_cv(X = train_X, y = train_y, groups = groups,
type = "logistic", path_length = 20, nfolds = 10, alpha = 0.95,
vFDR = 0.1, gFDR = 0.1, min_frac = 0.05, standardise = "l2",
intercept = FALSE, verbose = TRUE, screen = TRUE)
## [1] "Fold 1/10 done. Error: 0.241428571428571"
## [1] "Fold 2/10 done. Error: 0.187142857142857"
## [1] "Fold 3/10 done. Error: 0.252857142857143"
## [1] "Fold 4/10 done. Error: 0.194285714285714"
## [1] "Fold 5/10 done. Error: 0.164285714285714"
## [1] "Fold 6/10 done. Error: 0.204285714285714"
## [1] "Fold 7/10 done. Error: 0.194285714285714"
## [1] "Fold 8/10 done. Error: 0.111428571428571"
## [1] "Fold 9/10 done. Error: 0.21"
## [1] "Fold 10/10 done. Error: 0.221428571428571"
and again use the predict function. For logistic regression, it returns both the predicted class probabilities (response) and the predicted classes (class). We can use these to check the prediction accuracy, given here as \(86\%\).
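(A sketch of this step; the exact predict signature is an assumption, with the test design matrix passed as x.)
predictions = predict(cv_model, x = test_X)
predictions$response[1:5]  # predicted class probabilities
predictions$class[1:5]  # predicted classes
mean(predictions$class == test_y)  # classification accuracy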
## [1] 0.48078832 0.50438355 0.09077354 0.11937906 0.35681848
## [1] 0 1 0 0 0
## [1] 0.86
Group SLOPE (gSLOPE) applies adaptive group penalisation to control the group FDR under orthogonal designs. gSLOPE is described in detail in Brzyski, D., Gossmann, A., Su, W., Bogdan, M. (2019) “Group SLOPE – Adaptive Selection of Groups of Predictors”. gSLOPE is implemented in the sgs package with the same features as SGS. Here, we briefly demonstrate how to fit a gSLOPE model.
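A minimal call (a sketch; the fixed lambda value is illustrative and the remaining arguments mirror the SGS fits above):
model_gslope = fit_gslope(X = data$X, y = data$y, groups = groups, type = "linear",
    lambda = 0.5, gFDR = 0.1, standardise = "l2", intercept = TRUE,
    verbose = FALSE, screen = TRUE)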
Screening rules allow the input dimensionality to be reduced before fitting. The strong screening rules for gSLOPE and SGS are described in detail in F. Feser, M. Evangelou (2024) “Strong Screening Rules for Group-based SLOPE Models”. Here, we demonstrate the effectiveness of screening rules by looking at the speed improvement they provide. For SGS:
screen_time = system.time(model_screen <- fit_sgs(X = data$X,
y = data$y, groups = groups, type = "linear", path_length = 100,
alpha = 0.95, vFDR = 0.1, gFDR = 0.1, standardise = "l2",
intercept = TRUE, verbose = FALSE, screen = TRUE))
no_screen_time = system.time(model_no_screen <- fit_sgs(X = data$X,
y = data$y, groups = groups, type = "linear", path_length = 100,
alpha = 0.95, vFDR = 0.1, gFDR = 0.1, standardise = "l2",
intercept = TRUE, verbose = FALSE, screen = FALSE))
screen_time
## user system elapsed
## 6.26 0.04 6.35
no_screen_time
## user system elapsed
## 7.34 0.03 7.39
and for gSLOPE:
screen_time = system.time(model_screen <- fit_gslope(X = data$X,
y = data$y, groups = groups, type = "linear", path_length = 100,
gFDR = 0.1, standardise = "l2", intercept = TRUE, verbose = FALSE,
screen = TRUE))
no_screen_time = system.time(model_no_screen <- fit_gslope(X = data$X,
y = data$y, groups = groups, type = "linear", path_length = 100,
gFDR = 0.1, standardise = "l2", intercept = TRUE, verbose = FALSE,
screen = FALSE))
screen_time
## user system elapsed
## 2.96 0.02 2.97
no_screen_time
## user system elapsed
## 7.31 0.06 7.39