Given a lgb.Booster, return evaluation results for a particular metric on a particular dataset.

lgb.get.eval.result(booster, data_name, eval_name, iters = NULL,
  is_err = FALSE)

Arguments

booster

Object of class lgb.Booster

data_name

Name of the dataset to return evaluation results for.

eval_name

Name of the evaluation metric to return results for.

iters

An integer vector of iterations you want to get evaluation results for. If NULL (the default), evaluation results for all iterations will be returned.

is_err

If TRUE, return the evaluation error instead of the evaluation result.

Value

A numeric vector of evaluation results, one element per requested iteration.

Examples

# train a regression model
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
data(agaricus.test, package = "lightgbm")
test <- agaricus.test
dtest <- lgb.Dataset.create.valid(dtrain, test$data, label = test$label)
params <- list(objective = "regression", metric = "l2")
valids <- list(test = dtest)
model <- lgb.train(
  params = params
  , data = dtrain
  , nrounds = 10L
  , valids = valids
  , min_data = 1L
  , learning_rate = 1.0
  , early_stopping_rounds = 5L
)
#> [1]: test's l2:6.44165e-17
#> [2]: test's l2:6.44165e-17
#> [3]: test's l2:6.44165e-17
#> [4]: test's l2:6.44165e-17
#> [5]: test's l2:6.44165e-17
#> [6]: test's l2:6.44165e-17
# Examine valid data_name values
print(setdiff(names(model$record_evals), "start_iter"))
#> [1] "test"
# Examine valid eval_name values for dataset "test"
print(names(model$record_evals[["test"]]))
#> [1] "l2"
# Get L2 values for "test" dataset
lgb.get.eval.result(model, "test", "l2")
#> [1] 6.441652e-17 6.441652e-17 6.441652e-17 6.441652e-17 6.441652e-17
#> [6] 6.441652e-17
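
The iters argument can restrict the returned values to specific iterations. A minimal sketch continuing the example above (output omitted; the iteration numbers chosen here are purely illustrative):

# Get L2 values for only the first and third iterations
lgb.get.eval.result(model, "test", "l2", iters = c(1L, 3L))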