Calls saveRDS on an lgb.Booster object, making it serializable before the call if it isn't already.
This function is deprecated: it emits a warning when called and will be removed in a future release.
saveRDS.lgb.Booster(
  object,
  file,
  ascii = FALSE,
  version = NULL,
  compress = TRUE,
  refhook = NULL,
  raw = TRUE
)
Argument | Description
---|---
object | R object to serialize.
file | a connection or the name of the file where the R object is saved to or read from.
ascii | a logical. If TRUE or NA, an ASCII representation is written; otherwise (default), a binary one is used. See the comments in the help for save.
version | the workspace format version to use.
compress | a logical specifying whether saving to a named file is to use "gzip" compression, or one of "gzip", "bzip2" or "xz" to indicate the type of compression to be used.
refhook | a hook function for handling reference objects.
raw | whether to save the model in a raw variable or not; recommended to leave it at the default (TRUE).
Returns NULL invisibly.
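Because this function is deprecated, the replacement is plain saveRDS()/readRDS(). The sketch below shows that pattern; it assumes `model` is a fitted lgb.Booster (as produced in the example below) and a lightgbm version recent enough that Booster objects are serializable by default.

```r
# Sketch only: assumes `model` is a fitted lgb.Booster (see the example below)
# and a lightgbm version whose Boosters survive saveRDS()/readRDS() directly.
model_file <- tempfile(fileext = ".rds")

# Recommended replacement for saveRDS.lgb.Booster():
saveRDS(model, model_file)

# Later, restore the model with plain readRDS():
restored <- readRDS(model_file)

# `restored` can then be used like the original model, e.g.:
# preds <- predict(restored, agaricus.test$data)
```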
# \donttest{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
data(agaricus.test, package = "lightgbm")
test <- agaricus.test
dtest <- lgb.Dataset.create.valid(dtrain, test$data, label = test$label)
params <- list(
    objective = "regression"
    , metric = "l2"
    , min_data = 1L
    , learning_rate = 1.0
)
valids <- list(test = dtest)
model <- lgb.train(
    params = params
    , data = dtrain
    , nrounds = 10L
    , valids = valids
    , early_stopping_rounds = 5L
)
#> [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000916 seconds.
#> You can set `force_row_wise=true` to remove the overhead.
#> And if memory is not enough, you can set `force_col_wise=true`.
#> [LightGBM] [Info] Total Bins 232
#> [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
#> [LightGBM] [Info] Start training from score 0.482113
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [1] "[1]: test's l2:6.44165e-17"
#> [1] "Will train until there is no improvement in 5 rounds."
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [1] "[2]: test's l2:1.97215e-31"
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [1] "[3]: test's l2:0"
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
#> [1] "[4]: test's l2:0"
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
#> [1] "[5]: test's l2:0"
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
#> [1] "[6]: test's l2:0"
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
#> [1] "[7]: test's l2:0"
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
#> [1] "[8]: test's l2:0"
#> [1] "Early stopping, best iteration is: [3]: test's l2:0"
model_file <- tempfile(fileext = ".rds")
saveRDS.lgb.Booster(model, model_file)
#> Warning: 'saveRDS.lgb.Booster' is deprecated and will be removed in a future release. Use saveRDS() instead.
# }