lightgbm.LGBMClassifier

class lightgbm.LGBMClassifier(boosting_type='gbdt', num_leaves=31, max_depth=-1, learning_rate=0.1, n_estimators=100, subsample_for_bin=200000, objective=None, class_weight=None, min_split_gain=0.0, min_child_weight=0.001, min_child_samples=20, subsample=1.0, subsample_freq=0, colsample_bytree=1.0, reg_alpha=0.0, reg_lambda=0.0, random_state=None, n_jobs=None, importance_type='split', **kwargs)[source]

Bases: ClassifierMixin, LGBMModel

LightGBM classifier.

__init__(boosting_type='gbdt', num_leaves=31, max_depth=-1, learning_rate=0.1, n_estimators=100, subsample_for_bin=200000, objective=None, class_weight=None, min_split_gain=0.0, min_child_weight=0.001, min_child_samples=20, subsample=1.0, subsample_freq=0, colsample_bytree=1.0, reg_alpha=0.0, reg_lambda=0.0, random_state=None, n_jobs=None, importance_type='split', **kwargs)

Construct a gradient boosting model.

Parameters:
  • boosting_type (str, optional (default='gbdt')) – ‘gbdt’, traditional Gradient Boosting Decision Tree. ‘dart’, Dropouts meet Multiple Additive Regression Trees. ‘rf’, Random Forest.

  • num_leaves (int, optional (default=31)) – Maximum tree leaves for base learners.

  • max_depth (int, optional (default=-1)) – Maximum tree depth for base learners, <=0 means no limit.

  • learning_rate (float, optional (default=0.1)) – Boosting learning rate. You can use the callbacks parameter of the fit method to shrink/adapt the learning rate during training using the reset_parameter callback. Note that this will ignore the learning_rate argument in training.

  • n_estimators (int, optional (default=100)) – Number of boosted trees to fit.

  • subsample_for_bin (int, optional (default=200000)) – Number of samples for constructing bins.

  • objective (str, callable or None, optional (default=None)) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below). Default: ‘regression’ for LGBMRegressor, ‘binary’ or ‘multiclass’ for LGBMClassifier, ‘lambdarank’ for LGBMRanker.

  • class_weight (dict, 'balanced' or None, optional (default=None)) – Weights associated with classes in the form {class_label: weight}. Use this parameter only for multi-class classification task; for binary classification task you may use is_unbalance or scale_pos_weight parameters. Note that the usage of all these parameters will result in poor estimates of the individual class probabilities. You may want to consider performing probability calibration (https://scikit-learn.org/stable/modules/calibration.html) of your model. The ‘balanced’ mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). If None, all classes are supposed to have weight one. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.

  • min_split_gain (float, optional (default=0.)) – Minimum loss reduction required to make a further partition on a leaf node of the tree.

  • min_child_weight (float, optional (default=1e-3)) – Minimum sum of instance weight (Hessian) needed in a child (leaf).

  • min_child_samples (int, optional (default=20)) – Minimum number of data points needed in a child (leaf).

  • subsample (float, optional (default=1.)) – Subsample ratio of the training instances.

  • subsample_freq (int, optional (default=0)) – Frequency of subsampling; <= 0 means subsampling is disabled.

  • colsample_bytree (float, optional (default=1.)) – Subsample ratio of columns when constructing each tree.

  • reg_alpha (float, optional (default=0.)) – L1 regularization term on weights.

  • reg_lambda (float, optional (default=0.)) – L2 regularization term on weights.

  • random_state (int, RandomState object or None, optional (default=None)) – Random number seed. If int, this number is used to seed the C++ code. If RandomState or Generator object (numpy), a random integer is picked based on its state to seed the C++ code. If None, default seeds in C++ code are used.

  • n_jobs (int or None, optional (default=None)) –

    Number of parallel threads to use for training (can be changed at prediction time by passing it as an extra keyword argument).

    For better performance, it is recommended to set this to the number of physical cores in the CPU.

    Negative integers are interpreted as following joblib’s formula (n_cpus + 1 + n_jobs), just like scikit-learn (so e.g. -1 means using all threads). A value of zero corresponds to the default number of threads configured for OpenMP in the system. A value of None (the default) corresponds to using the number of physical cores in the system (its correct detection requires either the joblib or the psutil library to be installed).

    Changed in version 4.0.0.

  • importance_type (str, optional (default='split')) – The type of feature importance to be filled into feature_importances_. If ‘split’, result contains numbers of times the feature is used in a model. If ‘gain’, result contains total gains of splits which use the feature.

  • **kwargs

    Other parameters for the model. Check http://lightgbm.readthedocs.io/en/latest/Parameters.html for more parameters.

    Warning

    **kwargs is not supported in scikit-learn; using it may cause unexpected issues.
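
A minimal usage sketch for orientation (the synthetic data, split, and parameter values below are illustrative assumptions, not recommendations):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from lightgbm import LGBMClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Construct with a few commonly tuned parameters and fit
    clf = LGBMClassifier(n_estimators=200, learning_rate=0.05, num_leaves=31,
                         random_state=42)
    clf.fit(X_train, y_train)
    print(clf.score(X_test, y_test))  # mean accuracy on the held-out split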

Note

A custom objective function can be provided for the objective parameter. In this case, it should have one of the following signatures: objective(y_true, y_pred) -> grad, hess; objective(y_true, y_pred, weight) -> grad, hess; or objective(y_true, y_pred, weight, group) -> grad, hess:

y_true : numpy 1-D array of shape = [n_samples]

The target values.

y_pred : numpy 1-D array of shape = [n_samples] or numpy 2-D array of shape = [n_samples, n_classes] (for multi-class task)

The predicted values. Predicted values are returned before any transformation, e.g. they are raw margin instead of probability of positive class for binary task.

weight : numpy 1-D array of shape = [n_samples]

The weight of samples. Weights should be non-negative.

group : numpy 1-D array

Group/query data. Only used in the learning-to-rank task. sum(group) = n_samples. For example, if you have a 100-document dataset with group = [10, 20, 40, 10, 10, 10], that means that you have 6 groups, where the first 10 records are in the first group, records 11-30 are in the second group, records 31-70 are in the third group, etc.

grad : numpy 1-D array of shape = [n_samples] or numpy 2-D array of shape = [n_samples, n_classes] (for multi-class task)

The value of the first order derivative (gradient) of the loss with respect to the elements of y_pred for each sample point.

hess : numpy 1-D array of shape = [n_samples] or numpy 2-D array of shape = [n_samples, n_classes] (for multi-class task)

The value of the second order derivative (Hessian) of the loss with respect to the elements of y_pred for each sample point.

For multi-class task, y_pred is a numpy 2-D array of shape = [n_samples, n_classes], and grad and hess should be returned in the same format.
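
For concreteness, a sketch of a custom objective for the binary task matching the signature above; it reproduces the built-in ‘binary’ logloss objective and is for illustration only:

    import numpy as np
    from lightgbm import LGBMClassifier

    def binary_logloss_objective(y_true, y_pred):
        # y_pred is the raw margin (see the note above), so apply the sigmoid here
        prob = 1.0 / (1.0 + np.exp(-y_pred))
        grad = prob - y_true          # first-order derivative w.r.t. the margin
        hess = prob * (1.0 - prob)    # second-order derivative w.r.t. the margin
        return grad, hess

    clf = LGBMClassifier(objective=binary_logloss_objective)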

Methods

__init__([boosting_type, num_leaves, ...])

Construct a gradient boosting model.

fit(X, y[, sample_weight, init_score, ...])

Build a gradient boosting model from the training set (X, y).

get_metadata_routing()

Get metadata routing of this object.

get_params([deep])

Get parameters for this estimator.

predict(X[, raw_score, start_iteration, ...])

Return the predicted value for each sample.

predict_proba(X[, raw_score, ...])

Return the predicted probability for each class for each sample.

score(X, y[, sample_weight])

Return the mean accuracy on the given test data and labels.

set_fit_request(*[, callbacks, ...])

Request metadata passed to the fit method.

set_params(**params)

Set the parameters of this estimator.

set_predict_proba_request(*[, ...])

Request metadata passed to the predict_proba method.

set_predict_request(*[, num_iteration, ...])

Request metadata passed to the predict method.

set_score_request(*[, sample_weight])

Request metadata passed to the score method.

Attributes

best_iteration_

The best iteration of fitted model if early_stopping() callback has been specified.

best_score_

The best score of fitted model.

booster_

The underlying Booster of this model.

classes_

The class label array.

evals_result_

The evaluation results if validation sets have been specified.

feature_importances_

The feature importances (the higher, the more important).

feature_name_

The names of features.

n_classes_

The number of classes.

n_estimators_

True number of boosting iterations performed.

n_features_

The number of features of fitted model.

n_features_in_

The number of features of fitted model.

n_iter_

True number of boosting iterations performed.

objective_

The concrete objective used while fitting this model.

property best_iteration_

The best iteration of fitted model if early_stopping() callback has been specified.

Type:

int

property best_score_

The best score of fitted model.

Type:

dict

property booster_

The underlying Booster of this model.

Type:

Booster

property classes_

The class label array.

Type:

array of shape = [n_classes]

property evals_result_

The evaluation results if validation sets have been specified.

Type:

dict

property feature_importances_

The feature importances (the higher, the more important).

Note

importance_type attribute is passed to the function to configure the type of importance values to be extracted.

Type:

array of shape = [n_features]

property feature_name_

The names of features.

Type:

list of shape = [n_features]
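
A short sketch of reading the fitted attributes listed above (the dataset choice and importance_type value are illustrative assumptions):

    from sklearn.datasets import load_breast_cancer
    from lightgbm import LGBMClassifier

    X, y = load_breast_cancer(return_X_y=True)
    clf = LGBMClassifier(importance_type='gain').fit(X, y)

    print(clf.classes_)              # class label array, shape = [n_classes]
    print(clf.n_features_in_)        # number of features seen during fit
    print(clf.feature_importances_)  # total split gains, since importance_type='gain'
    print(clf.feature_name_[:3])     # auto-generated names like 'Column_0' for numpy input
    booster = clf.booster_           # the underlying lightgbm.Booster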

fit(X, y, sample_weight=None, init_score=None, eval_set=None, eval_names=None, eval_sample_weight=None, eval_class_weight=None, eval_init_score=None, eval_metric=None, feature_name='auto', categorical_feature='auto', callbacks=None, init_model=None)[source]

Build a gradient boosting model from the training set (X, y).

Parameters:
  • X (numpy array, pandas DataFrame, H2O DataTable's Frame, scipy.sparse, list of lists of int or float of shape = [n_samples, n_features]) – Input feature matrix.

  • y (numpy array, pandas DataFrame, pandas Series, list of int or float of shape = [n_samples]) – The target values (class labels in classification, real numbers in regression).

  • sample_weight (numpy array, pandas Series, list of int or float of shape = [n_samples] or None, optional (default=None)) – Weights of training data. Weights should be non-negative.

  • init_score (numpy array, pandas DataFrame, pandas Series, list of int or float of shape = [n_samples] or shape = [n_samples * n_classes] (for multi-class task) or shape = [n_samples, n_classes] (for multi-class task) or None, optional (default=None)) – Init score of training data.

  • eval_set (list or None, optional (default=None)) – A list of (X, y) tuple pairs to use as validation sets.

  • eval_names (list of str, or None, optional (default=None)) – Names of eval_set.

  • eval_sample_weight (list of array (same types as sample_weight supports), or None, optional (default=None)) – Weights of eval data. Weights should be non-negative.

  • eval_class_weight (list or None, optional (default=None)) – Class weights of eval data.

  • eval_init_score (list of array (same types as init_score supports), or None, optional (default=None)) – Init score of eval data.

  • eval_metric (str, callable, list or None, optional (default=None)) – If str, it should be a built-in evaluation metric to use. If callable, it should be a custom evaluation metric, see note below for more details. If list, it can be a list of built-in metrics, a list of custom evaluation metrics, or a mix of both. In either case, the metric from the model parameters will be evaluated and used as well. Default: ‘l2’ for LGBMRegressor, ‘logloss’ for LGBMClassifier, ‘ndcg’ for LGBMRanker.

  • feature_name (list of str, or 'auto', optional (default='auto')) – Feature names. If ‘auto’ and data is pandas DataFrame, data columns names are used.

  • categorical_feature (list of str or int, or 'auto', optional (default='auto')) – Categorical features. If list of int, interpreted as indices. If list of str, interpreted as feature names (need to specify feature_name as well). If ‘auto’ and data is pandas DataFrame, pandas unordered categorical columns are used. All values in categorical features will be cast to int32 and thus should be less than int32 max value (2147483647). Large values could be memory consuming. Consider using consecutive integers starting from zero. All negative values in categorical features will be treated as missing values. The output cannot be monotonically constrained with respect to a categorical feature. Floating point numbers in categorical features will be rounded towards 0.

  • callbacks (list of callable, or None, optional (default=None)) – List of callback functions that are applied at each iteration. See Callbacks in Python API for more information.

  • init_model (str, pathlib.Path, Booster, LGBMModel or None, optional (default=None)) – Filename of LightGBM model, Booster instance or LGBMModel instance used to continue training.

Returns:

self – Returns self.

Return type:

LGBMClassifier

Note

A custom eval function is expected to be a callable with one of the following signatures: func(y_true, y_pred), func(y_true, y_pred, weight) or func(y_true, y_pred, weight, group), returning (eval_name, eval_result, is_higher_better) or a list of (eval_name, eval_result, is_higher_better):

y_true : numpy 1-D array of shape = [n_samples]

The target values.

y_pred : numpy 1-D array of shape = [n_samples] or numpy 2-D array of shape = [n_samples, n_classes] (for multi-class task)

The predicted values. In case of custom objective, predicted values are returned before any transformation, e.g. they are raw margin instead of probability of positive class for binary task in this case.

weight : numpy 1-D array of shape = [n_samples]

The weight of samples. Weights should be non-negative.

group : numpy 1-D array

Group/query data. Only used in the learning-to-rank task. sum(group) = n_samples. For example, if you have a 100-document dataset with group = [10, 20, 40, 10, 10, 10], that means that you have 6 groups, where the first 10 records are in the first group, records 11-30 are in the second group, records 31-70 are in the third group, etc.

eval_name : str

The name of evaluation function (without whitespace).

eval_result : float

The eval result.

is_higher_better : bool

Whether a higher eval result is better, e.g. AUC is is_higher_better.
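
Putting the pieces together, a hedged sketch of fit with a validation set, a custom eval function of the form described above, and early stopping (the data and stopping_rounds value are illustrative assumptions):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from lightgbm import LGBMClassifier, early_stopping

    def error_rate(y_true, y_pred):
        # With the built-in 'binary' objective (no custom objective), y_pred
        # here holds transformed values, i.e. probabilities of the positive class
        labels = (y_pred >= 0.5).astype(int)
        return 'error_rate', np.mean(labels != y_true), False  # lower is better

    X, y = make_classification(n_samples=1000, random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

    clf = LGBMClassifier(n_estimators=500)
    clf.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], eval_metric=error_rate,
            callbacks=[early_stopping(stopping_rounds=20)])
    print(clf.best_iteration_, clf.best_score_)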

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:

routing – A MetadataRequest encapsulating routing information.

Return type:

MetadataRequest

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params – Parameter names mapped to their values.

Return type:

dict

property n_classes_

The number of classes.

Type:

int

property n_estimators_

True number of boosting iterations performed.

This might be less than parameter n_estimators if early stopping was enabled or if boosting stopped early due to limits on complexity like min_gain_to_split.

New in version 4.0.0.

Type:

int

property n_features_

The number of features of fitted model.

Type:

int

property n_features_in_

The number of features of fitted model.

Type:

int

property n_iter_

True number of boosting iterations performed.

This might be less than parameter n_estimators if early stopping was enabled or if boosting stopped early due to limits on complexity like min_gain_to_split.

New in version 4.0.0.

Type:

int

property objective_

The concrete objective used while fitting this model.

Type:

str or callable

predict(X, raw_score=False, start_iteration=0, num_iteration=None, pred_leaf=False, pred_contrib=False, validate_features=False, **kwargs)[source]

Return the predicted value for each sample.

Parameters:
  • X (numpy array, pandas DataFrame, H2O DataTable's Frame, scipy.sparse, list of lists of int or float of shape = [n_samples, n_features]) – Input feature matrix.

  • raw_score (bool, optional (default=False)) – Whether to predict raw scores.

  • start_iteration (int, optional (default=0)) – Start index of the iteration to predict. If <= 0, starts from the first iteration.

  • num_iteration (int or None, optional (default=None)) – Total number of iterations used in the prediction. If None, if the best iteration exists and start_iteration <= 0, the best iteration is used; otherwise, all iterations from start_iteration are used (no limits). If <= 0, all iterations from start_iteration are used (no limits).

  • pred_leaf (bool, optional (default=False)) – Whether to predict leaf index.

  • pred_contrib (bool, optional (default=False)) –

    Whether to predict feature contributions.

    Note

    If you want to get more explanations for your model’s predictions using SHAP values, like SHAP interaction values, you can install the shap package (https://github.com/slundberg/shap). Note that unlike the shap package, with pred_contrib we return a matrix with an extra column, where the last column is the expected value.

  • validate_features (bool, optional (default=False)) – If True, ensure that the features used to predict match the ones used to train. Used only if data is pandas DataFrame.

  • **kwargs – Other parameters for the prediction.

Returns:

  • predicted_result (array-like of shape = [n_samples] or shape = [n_samples, n_classes]) – The predicted values.

  • X_leaves (array-like of shape = [n_samples, n_trees] or shape = [n_samples, n_trees * n_classes]) – If pred_leaf=True, the predicted leaf of every tree for each sample.

  • X_SHAP_values (array-like of shape = [n_samples, n_features + 1] or shape = [n_samples, (n_features + 1) * n_classes] or list with n_classes length of such objects) – If pred_contrib=True, the feature contributions for each sample.
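
A sketch relating the raw_score, pred_leaf and pred_contrib flags to the return shapes above, for a binary task (the data and model are illustrative assumptions):

    from sklearn.datasets import make_classification
    from lightgbm import LGBMClassifier

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    clf = LGBMClassifier().fit(X, y)

    labels = clf.predict(X)                       # shape = [n_samples]
    margins = clf.predict(X, raw_score=True)      # raw margins, before the sigmoid
    leaves = clf.predict(X, pred_leaf=True)       # shape = [n_samples, n_trees]
    contribs = clf.predict(X, pred_contrib=True)  # shape = [n_samples, n_features + 1];
                                                  # the last column is the expected value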

predict_proba(X, raw_score=False, start_iteration=0, num_iteration=None, pred_leaf=False, pred_contrib=False, validate_features=False, **kwargs)[source]

Return the predicted probability for each class for each sample.

Parameters:
  • X (numpy array, pandas DataFrame, H2O DataTable's Frame, scipy.sparse, list of lists of int or float of shape = [n_samples, n_features]) – Input feature matrix.

  • raw_score (bool, optional (default=False)) – Whether to predict raw scores.

  • start_iteration (int, optional (default=0)) – Start index of the iteration to predict. If <= 0, starts from the first iteration.

  • num_iteration (int or None, optional (default=None)) – Total number of iterations used in the prediction. If None, if the best iteration exists and start_iteration <= 0, the best iteration is used; otherwise, all iterations from start_iteration are used (no limits). If <= 0, all iterations from start_iteration are used (no limits).

  • pred_leaf (bool, optional (default=False)) – Whether to predict leaf index.

  • pred_contrib (bool, optional (default=False)) –

    Whether to predict feature contributions.

    Note

    If you want to get more explanations for your model’s predictions using SHAP values, like SHAP interaction values, you can install the shap package (https://github.com/slundberg/shap). Note that unlike the shap package, with pred_contrib we return a matrix with an extra column, where the last column is the expected value.

  • validate_features (bool, optional (default=False)) – If True, ensure that the features used to predict match the ones used to train. Used only if data is pandas DataFrame.

  • **kwargs – Other parameters for the prediction.

Returns:

  • predicted_probability (array-like of shape = [n_samples] or shape = [n_samples, n_classes]) – The predicted values.

  • X_leaves (array-like of shape = [n_samples, n_trees] or shape = [n_samples, n_trees * n_classes]) – If pred_leaf=True, the predicted leaf of every tree for each sample.

  • X_SHAP_values (array-like of shape = [n_samples, n_features + 1] or shape = [n_samples, (n_features + 1) * n_classes] or list with n_classes length of such objects) – If pred_contrib=True, the feature contributions for each sample.
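
A sketch of predict_proba on a binary task (illustrative data; each row forms a valid probability distribution):

    import numpy as np
    from sklearn.datasets import make_classification
    from lightgbm import LGBMClassifier

    X, y = make_classification(n_samples=500, random_state=0)
    clf = LGBMClassifier().fit(X, y)

    proba = clf.predict_proba(X)                 # shape = [n_samples, n_classes]
    assert np.allclose(proba.sum(axis=1), 1.0)   # each row sums to 1
    labels = clf.classes_[proba.argmax(axis=1)]  # matches clf.predict(X)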

score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Test samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns:

score – Mean accuracy of self.predict(X) w.r.t. y.

Return type:

float

set_fit_request(*, callbacks='$UNCHANGED$', categorical_feature='$UNCHANGED$', eval_class_weight='$UNCHANGED$', eval_init_score='$UNCHANGED$', eval_metric='$UNCHANGED$', eval_names='$UNCHANGED$', eval_sample_weight='$UNCHANGED$', eval_set='$UNCHANGED$', feature_name='$UNCHANGED$', init_model='$UNCHANGED$', init_score='$UNCHANGED$', sample_weight='$UNCHANGED$')

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • callbacks (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for callbacks parameter in fit.

  • categorical_feature (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for categorical_feature parameter in fit.

  • eval_class_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_class_weight parameter in fit.

  • eval_init_score (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_init_score parameter in fit.

  • eval_metric (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_metric parameter in fit.

  • eval_names (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_names parameter in fit.

  • eval_sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_sample_weight parameter in fit.

  • eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_set parameter in fit.

  • feature_name (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for feature_name parameter in fit.

  • init_model (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for init_model parameter in fit.

  • init_score (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for init_score parameter in fit.

  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in fit.

Returns:

self – The updated object.

Return type:

object
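
A hedged sketch of routing sample_weight to fit through a meta-estimator; this assumes scikit-learn >= 1.4, where cross_validate accepts routed metadata via its params argument:

    import numpy as np
    import sklearn
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_validate
    from lightgbm import LGBMClassifier

    sklearn.set_config(enable_metadata_routing=True)

    X, y = make_classification(n_samples=500, random_state=0)
    w = np.random.default_rng(0).uniform(0.5, 1.5, size=len(y))

    # Request that sample_weight, when provided, be routed to fit()
    clf = LGBMClassifier().set_fit_request(sample_weight=True)
    results = cross_validate(clf, X, y, params={"sample_weight": w})
    print(results["test_score"])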

set_params(**params)

Set the parameters of this estimator.

Parameters:

**params – Parameter names with their new values.

Returns:

self – Returns self.

Return type:

object

set_predict_proba_request(*, num_iteration='$UNCHANGED$', pred_contrib='$UNCHANGED$', pred_leaf='$UNCHANGED$', raw_score='$UNCHANGED$', start_iteration='$UNCHANGED$', validate_features='$UNCHANGED$')

Request metadata passed to the predict_proba method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict_proba if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict_proba.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • num_iteration (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for num_iteration parameter in predict_proba.

  • pred_contrib (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for pred_contrib parameter in predict_proba.

  • pred_leaf (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for pred_leaf parameter in predict_proba.

  • raw_score (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for raw_score parameter in predict_proba.

  • start_iteration (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for start_iteration parameter in predict_proba.

  • validate_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validate_features parameter in predict_proba.

Returns:

self – The updated object.

Return type:

object

set_predict_request(*, num_iteration='$UNCHANGED$', pred_contrib='$UNCHANGED$', pred_leaf='$UNCHANGED$', raw_score='$UNCHANGED$', start_iteration='$UNCHANGED$', validate_features='$UNCHANGED$')

Request metadata passed to the predict method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • num_iteration (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for num_iteration parameter in predict.

  • pred_contrib (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for pred_contrib parameter in predict.

  • pred_leaf (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for pred_leaf parameter in predict.

  • raw_score (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for raw_score parameter in predict.

  • start_iteration (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for start_iteration parameter in predict.

  • validate_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validate_features parameter in predict.

Returns:

self – The updated object.

Return type:

object

set_score_request(*, sample_weight='$UNCHANGED$')

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

Returns:

self – The updated object.

Return type:

object