Train a linear model with L2 regularization.

cuda_ml_ridge(x, ...)

# S3 method for default
cuda_ml_ridge(x, ...)

# S3 method for data.frame
cuda_ml_ridge(
  x,
  y,
  alpha = 1,
  fit_intercept = TRUE,
  normalize_input = FALSE,
  ...
)

# S3 method for matrix
cuda_ml_ridge(
  x,
  y,
  alpha = 1,
  fit_intercept = TRUE,
  normalize_input = FALSE,
  ...
)

# S3 method for formula
cuda_ml_ridge(
  formula,
  data,
  alpha = 1,
  fit_intercept = TRUE,
  normalize_input = FALSE,
  ...
)

# S3 method for recipe
cuda_ml_ridge(
  x,
  data,
  alpha = 1,
  fit_intercept = TRUE,
  normalize_input = FALSE,
  ...
)

Arguments

x

Depending on the context:

* A __data frame__ of predictors.
* A __matrix__ of predictors.
* A __recipe__ specifying a set of preprocessing steps created from [recipes::recipe()].
* A __formula__ specifying the predictors and the outcome.
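
For illustration only, a minimal sketch of the four interfaces (assuming the cuda.ml and recipes packages are installed and a CUDA-enabled GPU is available; `mtcars` is used purely as an example dataset):

library(cuda.ml)

# data frame or matrix of predictors, with the response passed as `y`
model_df  <- cuda_ml_ridge(x = mtcars[names(mtcars) != "mpg"], y = mtcars$mpg)
model_mat <- cuda_ml_ridge(x = as.matrix(mtcars[names(mtcars) != "mpg"]), y = mtcars$mpg)

# formula interface: outcome on the left-hand side, predictors on the right
model_fml <- cuda_ml_ridge(formula = mpg ~ ., data = mtcars)

# recipe interface: the recipe is passed as `x`, the training data as `data`
library(recipes)
model_rec <- cuda_ml_ridge(x = recipe(mpg ~ ., data = mtcars), data = mtcars)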

...

Optional arguments; currently unused.

y

A numeric vector (for regression) or factor (for classification) of desired responses.

alpha

Multiplier of the L2 penalty term (i.e., the result would become an Ordinary Least Squares model if alpha were set to 0). Default: 1.
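
As a sanity check, a sketch (not a definitive test) comparing an unpenalized fit against stats::lm(); this assumes alpha = 0 is accepted and that predictions are returned in the `.pred` column, as in the Examples below:

library(cuda.ml)

# alpha = 0 removes the L2 penalty, so the fit should agree with ordinary
# least squares up to numerical tolerance.
# (assumption: alpha = 0 is accepted; the docs state this yields an OLS fit)
ols_like <- cuda_ml_ridge(formula = mpg ~ ., data = mtcars, alpha = 0)
lm_fit <- lm(mpg ~ ., data = mtcars)

all.equal(
  predict(ols_like, mtcars[names(mtcars) != "mpg"])$.pred,
  unname(predict(lm_fit, mtcars)),
  tolerance = 1e-3
)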

fit_intercept

If TRUE, then the model tries to correct for the global mean of the response variable. If FALSE, then the model expects data to be centered. Default: TRUE.

normalize_input

Ignored when fit_intercept is FALSE. If TRUE, then the predictors will be normalized to have an L2 norm of 1. Default: FALSE.
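
A sketch contrasting the two settings (assuming a working cuda.ml installation; `mtcars` again serves as example data):

library(cuda.ml)

# Default: the intercept absorbs the global mean of the response, and the
# predictors are optionally rescaled to unit L2 norm.
m1 <- cuda_ml_ridge(
  x = mtcars[names(mtcars) != "mpg"], y = mtcars$mpg,
  fit_intercept = TRUE, normalize_input = TRUE
)

# With fit_intercept = FALSE the model assumes centered data and
# normalize_input is ignored, so center the columns beforehand.
x_centered <- scale(as.matrix(mtcars[names(mtcars) != "mpg"]), scale = FALSE)
y_centered <- mtcars$mpg - mean(mtcars$mpg)
m2 <- cuda_ml_ridge(x = x_centered, y = y_centered, fit_intercept = FALSE)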

formula

A formula specifying the outcome terms on the left-hand side, and the predictor terms on the right-hand side.

data

When a __recipe__ or __formula__ is used, data is specified as a __data frame__ containing the predictors and (if applicable) the outcome.
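
For the recipe interface, a sketch assuming (as is typical for recipe-based modeling functions) that the recipe's preprocessing steps are applied to `data` before fitting; step_normalize() is used only as an illustrative step:

library(cuda.ml)
library(recipes)

# Assumption: the recipe is trained and applied internally before fitting.
rec <- recipe(mpg ~ ., data = mtcars)
rec <- step_normalize(rec, all_numeric_predictors())

model <- cuda_ml_ridge(x = rec, data = mtcars)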

Value

A ridge regressor that can be used with the 'predict' S3 generic to make predictions on new data points.

Examples

library(cuda.ml)

model <- cuda_ml_ridge(formula = mpg ~ ., data = mtcars, alpha = 1e-3)
predictions <- predict(model, mtcars[names(mtcars) != "mpg"])

# predictions will be comparable to those from a `glmnet` model with `lambda`
# set to 2e-3 and `alpha` set to 0
# (in `glmnet`, `lambda` is the weight of the penalty term, and `alpha` is
# the elastic mixing parameter between L1 and L2 penalties).

library(glmnet)

glmnet_model <- glmnet(
  x = as.matrix(mtcars[names(mtcars) != "mpg"]),
  y = mtcars$mpg,
  alpha = 0,
  lambda = 2e-3,
  nlambda = 1,
  standardize = FALSE
)

glm_predictions <- predict(
  glmnet_model,
  as.matrix(mtcars[names(mtcars) != "mpg"]),
  s = 0
)

print(
  all.equal(
    as.numeric(glm_predictions),
    predictions$.pred,
    tolerance = 1e-3
  )
)
#> [1] "Mean relative difference: 1"