Cross validation logic used by LightGBM
Usage

lgb.cv(
  params = list(),
  data,
  nrounds = 100L,
  nfold = 3L,
  label = NULL,
  weight = NULL,
  obj = NULL,
  eval = NULL,
  verbose = 1L,
  record = TRUE,
  eval_freq = 1L,
  showsd = TRUE,
  stratified = TRUE,
  folds = NULL,
  init_model = NULL,
  colnames = NULL,
  categorical_feature = NULL,
  early_stopping_rounds = NULL,
  callbacks = list(),
  reset_data = FALSE,
  serializable = TRUE,
  eval_train_metric = FALSE
)
Arguments

params | a list of parameters. See the "Parameters" section of the documentation for a list of parameters and valid values. |
---|---|
data | an lgb.Dataset object, used for training. Some functions, such as lgb.cv(), may allow you to pass other types of data like matrix and then separately supply label as a keyword argument. |
nrounds | number of training rounds |
nfold | the original dataset is randomly partitioned into nfold equal-size subsamples |
label | vector of labels, used if data is not an lgb.Dataset |
weight | vector of observation weights. If not NULL, will be set on the dataset |
obj | objective function, can be a character string or a custom objective function. Examples include "regression", "regression_l1", "huber", "binary", "lambdarank", "multiclass" |
eval | evaluation function(s). This can be a character vector, a function, or a list with a mixture of strings and functions. A character vector should contain names of valid evaluation metrics (see the "metric" section of the documentation). A function should accept the arguments preds and dtrain and return a named list with elements name (a string used for printing and storing results), value (a single number giving the value of the metric for the given predictions and true values), and higher_better (a boolean indicating whether higher values indicate a better fit, e.g. FALSE for metrics like MAE or RMSE). A list may mix both forms. See the sketch after this table for a custom evaluation function. |
verbose | verbosity for output; if <= 0 and valids has been provided, this also disables the printing of evaluation results during training |
record | boolean, TRUE will record iteration messages to booster$record_evals |
eval_freq | evaluation output frequency, only effective when verbose > 0 and valids has been provided |
showsd | boolean, whether to show the standard deviation of the cross-validation metric. Defaults to TRUE; setting it to FALSE can lead to a slight speedup by avoiding unnecessary computation. |
stratified | a boolean indicating whether sampling of folds should be stratified by the values of the outcome labels |
folds | a list of pre-defined CV folds; each element must be a vector with the indices of that test fold's observations. When folds is supplied, the nfold and stratified parameters are ignored (see the sketch after this table). |
init_model | path of a model file or an lgb.Booster object; training will continue from this model |
colnames | feature names; if not NULL, will be used to overwrite the names in the dataset |
categorical_feature | categorical features. This can either be a character vector of feature names or an integer vector with the indices of the features (e.g. c(1L, 10L) to say "the first and tenth columns"). |
early_stopping_rounds | int. Activates early stopping. When this parameter is non-null, training will stop if the evaluation of any metric on any validation set fails to improve for early_stopping_rounds consecutive boosting rounds. If training stops early, the returned model will have attribute best_iter set to the iteration number of the best iteration. |
callbacks | list of callback functions that are applied at each iteration |
reset_data | boolean; setting it to TRUE (not the default value) will transform the booster model into a predictor model, which frees up memory and the original datasets |
serializable | whether to make the resulting objects serializable through functions such as save or saveRDS |
eval_train_metric | boolean, whether to also compute the cross-validation metrics on the training data. Defaults to FALSE; setting it to TRUE will increase run time. |
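The following is a minimal, illustrative sketch (not part of the package documentation) of two of the argument forms referenced in the table above: a custom evaluation function passed to eval, and a list of pre-defined test-fold indices passed to folds. The names mae_eval, manual_folds, and cv_custom are hypothetical, and get_field() is assumed here to be the accessor for a Dataset's label field.

library(lightgbm)

data(agaricus.train, package = "lightgbm")
dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)

# custom evaluation function: accepts (preds, dtrain) and returns
# a named list with elements name, value, and higher_better
mae_eval <- function(preds, dtrain) {
  labels <- get_field(dtrain, "label")
  list(
    name = "custom_mae"
    , value = mean(abs(preds - labels))
    , higher_better = FALSE  # lower MAE indicates a better fit
  )
}

# pre-defined folds: each element holds the indices of one test fold;
# when 'folds' is supplied, 'nfold' and 'stratified' are ignored
n_obs <- length(agaricus.train$label)
shuffled <- sample(seq_len(n_obs))
manual_folds <- split(shuffled, cut(seq_along(shuffled), breaks = 3L, labels = FALSE))

cv_custom <- lgb.cv(
  params = list(objective = "regression", metric = "l2", min_data = 1L)
  , data = dtrain
  , nrounds = 5L
  , folds = manual_folds
  , eval = mae_eval
  , verbose = -1L
)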
Value

a trained model lgb.CVBooster.
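A brief sketch of inspecting the returned lgb.CVBooster (illustrative only; the object name cvm is hypothetical, and the layout of record_evals shown in the comments is an assumption based on record = TRUE, the default):

library(lightgbm)

data(agaricus.train, package = "lightgbm")
dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)

cvm <- lgb.cv(
  params = list(objective = "regression", metric = "l2", min_data = 1L)
  , data = dtrain
  , nrounds = 5L
  , nfold = 3L
  , verbose = -1L
)

# per-iteration means of the metric across folds, stored because record = TRUE
unlist(cvm$record_evals[["valid"]][["l2"]][["eval"]])
# per-iteration standard deviations across folds (computed when showsd = TRUE)
unlist(cvm$record_evals[["valid"]][["l2"]][["eval_err"]])
# one fitted booster per fold
length(cvm$boosters)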
"early stopping" refers to stopping the training process if the model's performance on a given validation set does not improve for several consecutive iterations.
If multiple arguments are given to eval
, their order will be preserved. If you enable
early stopping by setting early_stopping_rounds
in params
, by default all
metrics will be considered for early stopping.
If you want to only consider the first metric for early stopping, pass
first_metric_only = TRUE
in params
. Note that if you also specify metric
in params
, that metric will be considered the "first" one. If you omit metric
,
a default metric will be used based on your choice for the parameter obj
(keyword argument)
or objective
(passed into params
).
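A minimal sketch of the behaviour described above (the parameter values and the names params and cv_fit are illustrative, not defaults): two metrics are tracked, but with first_metric_only = TRUE only the first one, "l2", decides when training stops.

library(lightgbm)

data(agaricus.train, package = "lightgbm")
dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)

params <- list(
  objective = "regression"
  , metric = c("l2", "l1")      # "l2" is the "first" metric
  , first_metric_only = TRUE    # only "l2" is used for early stopping
  , min_data = 1L
  , learning_rate = 1.0
)
cv_fit <- lgb.cv(
  params = params
  , data = dtrain
  , nrounds = 50L
  , nfold = 3L
  , early_stopping_rounds = 5L  # stop after 5 rounds without improvement
  , verbose = -1L
)
cv_fit$best_iter  # iteration selected by early stopping on the first metric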
Examples

# \donttest{
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
params <- list(
  objective = "regression"
  , metric = "l2"
  , min_data = 1L
  , learning_rate = 1.0
)
model <- lgb.cv(
  params = params
  , data = dtrain
  , nrounds = 5L
  , nfold = 3L
)
#> [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000667 seconds.
#> You can set `force_row_wise=true` to remove the overhead.
#> And if memory is not enough, you can set `force_col_wise=true`.
#> [LightGBM] [Info] Total Bins 232
#> [LightGBM] [Info] Number of data points in the train set: 4342, number of used features: 116
#> [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000656 seconds.
#> You can set `force_row_wise=true` to remove the overhead.
#> And if memory is not enough, you can set `force_col_wise=true`.
#> [LightGBM] [Info] Total Bins 232
#> [LightGBM] [Info] Number of data points in the train set: 4342, number of used features: 116
#> [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000610 seconds.
#> You can set `force_row_wise=true` to remove the overhead.
#> And if memory is not enough, you can set `force_col_wise=true`.
#> [LightGBM] [Info] Total Bins 232
#> [LightGBM] [Info] Number of data points in the train set: 4342, number of used features: 116
#> [LightGBM] [Info] Start training from score 0.474436
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Info] Start training from score 0.490557
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Info] Start training from score 0.481345
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [1] "[1]: valid's l2:0.000307078+0.000434274"
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [1] "[2]: valid's l2:0.000307078+0.000434274"
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [1] "[3]: valid's l2:0.000307078+0.000434274"
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [1] "[4]: valid's l2:0.000307078+0.000434274"
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [1] "[5]: valid's l2:0.000307078+0.000434274"
# }