lightgbm.DaskLGBMRegressor
- class lightgbm.DaskLGBMRegressor(boosting_type='gbdt', num_leaves=31, max_depth=-1, learning_rate=0.1, n_estimators=100, subsample_for_bin=200000, objective=None, class_weight=None, min_split_gain=0.0, min_child_weight=0.001, min_child_samples=20, subsample=1.0, subsample_freq=0, colsample_bytree=1.0, reg_alpha=0.0, reg_lambda=0.0, random_state=None, n_jobs=-1, silent='warn', importance_type='split', client=None, **kwargs)[source]
Bases: lightgbm.sklearn.LGBMRegressor, lightgbm.dask._DaskLGBMModel
Distributed version of lightgbm.LGBMRegressor.
- __init__(boosting_type='gbdt', num_leaves=31, max_depth=-1, learning_rate=0.1, n_estimators=100, subsample_for_bin=200000, objective=None, class_weight=None, min_split_gain=0.0, min_child_weight=0.001, min_child_samples=20, subsample=1.0, subsample_freq=0, colsample_bytree=1.0, reg_alpha=0.0, reg_lambda=0.0, random_state=None, n_jobs=-1, silent='warn', importance_type='split', client=None, **kwargs)[source]
Construct a gradient boosting model.
- Parameters
boosting_type (str, optional (default='gbdt')) – ‘gbdt’, traditional Gradient Boosting Decision Tree. ‘dart’, Dropouts meet Multiple Additive Regression Trees. ‘goss’, Gradient-based One-Side Sampling. ‘rf’, Random Forest.
num_leaves (int, optional (default=31)) – Maximum tree leaves for base learners.
max_depth (int, optional (default=-1)) – Maximum tree depth for base learners, <=0 means no limit.
learning_rate (float, optional (default=0.1)) – Boosting learning rate. You can use the callbacks parameter of the fit method to shrink/adapt the learning rate during training via the reset_parameter callback. Note that this will ignore the learning_rate argument in training.
n_estimators (int, optional (default=100)) – Number of boosted trees to fit.
subsample_for_bin (int, optional (default=200000)) – Number of samples for constructing bins.
objective (str, callable or None, optional (default=None)) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below). Default: ‘regression’ for LGBMRegressor, ‘binary’ or ‘multiclass’ for LGBMClassifier, ‘lambdarank’ for LGBMRanker.
class_weight (dict, 'balanced' or None, optional (default=None)) – Weights associated with classes in the form {class_label: weight}. Use this parameter only for multi-class classification task; for binary classification task you may use is_unbalance or scale_pos_weight parameters. Note that the usage of all these parameters will result in poor estimates of the individual class probabilities. You may want to consider performing probability calibration (https://scikit-learn.org/stable/modules/calibration.html) of your model. The 'balanced' mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). If None, all classes are supposed to have weight one. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.
min_split_gain (float, optional (default=0.)) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
min_child_weight (float, optional (default=1e-3)) – Minimum sum of instance weight (hessian) needed in a child (leaf).
min_child_samples (int, optional (default=20)) – Minimum number of data needed in a child (leaf).
subsample (float, optional (default=1.)) – Subsample ratio of the training instance.
subsample_freq (int, optional (default=0)) – Frequency of subsampling; <=0 means subsampling is disabled.
colsample_bytree (float, optional (default=1.)) – Subsample ratio of columns when constructing each tree.
reg_alpha (float, optional (default=0.)) – L1 regularization term on weights.
reg_lambda (float, optional (default=0.)) – L2 regularization term on weights.
random_state (int, RandomState object or None, optional (default=None)) – Random number seed. If int, this number is used to seed the C++ code. If RandomState object (numpy), a random integer is picked based on its state to seed the C++ code. If None, default seeds in C++ code are used.
n_jobs (int, optional (default=-1)) – Number of parallel threads.
silent (bool, optional (default=True)) – Whether to print messages while running boosting.
importance_type (str, optional (default='split')) – The type of feature importance to be filled into feature_importances_. If 'split', result contains numbers of times the feature is used in a model. If 'gain', result contains total gains of splits which use the feature.
client (dask.distributed.Client or None, optional (default=None)) – Dask client. If None, distributed.default_client() will be used at runtime. The Dask client used by this class will not be saved if the model object is pickled.
**kwargs – Other parameters for the model. Check http://lightgbm.readthedocs.io/en/latest/Parameters.html for more parameters.
Warning
**kwargs is not supported in sklearn, it may cause unexpected issues.
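For orientation, a minimal usage sketch (not part of the original reference; the cluster size, chunking, and synthetic data are illustrative assumptions):

    import dask.array as da
    from dask.distributed import Client, LocalCluster
    from lightgbm import DaskLGBMRegressor

    # Illustrative local cluster; any running dask.distributed.Client works.
    cluster = LocalCluster(n_workers=2, threads_per_worker=1)
    client = Client(cluster)

    # Synthetic regression data as Dask arrays; chunk sizes are arbitrary.
    X = da.random.random((10_000, 20), chunks=(1_000, 20))
    y = X.sum(axis=1) + da.random.random(10_000, chunks=1_000)

    model = DaskLGBMRegressor(n_estimators=100, learning_rate=0.1, client=client)
    model.fit(X, y)

    preds = model.predict(X)        # returns a lazy Dask Array
    print(preds[:5].compute())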
Methods
__init__([boosting_type, num_leaves, ...]) – Construct a gradient boosting model.
fit(X, y[, sample_weight, init_score, ...]) – Build a gradient boosting model from the training set (X, y).
get_params([deep]) – Get parameters for this estimator.
predict(X, **kwargs) – Return the predicted value for each sample.
set_params(**params) – Set the parameters of this estimator.
to_local() – Create regular version of lightgbm.LGBMRegressor from the distributed version.
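As a hedged aside (continuing the sketch above), to_local() yields a plain single-machine estimator, useful for pickling or for prediction without a cluster:

    local_model = model.to_local()   # plain lightgbm.LGBMRegressor
    # local_model can now be pickled or used on in-memory numpy/pandas data.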
Attributes
best_iteration_ – The best iteration of fitted model if early_stopping() callback has been specified.
best_score_ – The best score of fitted model.
booster_ – The underlying Booster of this model.
client_ – Dask client.
evals_result_ – The evaluation results if validation sets have been specified.
feature_importances_ – The feature importances (the higher, the more important).
feature_name_ – The names of features.
n_features_ – The number of features of fitted model.
n_features_in_ – The number of features of fitted model.
objective_ – The concrete objective used while fitting this model.
- property best_iteration_
The best iteration of fitted model if early_stopping() callback has been specified.
- Type
int or None
- property best_score_
The best score of fitted model.
- Type
dict
- property client_
Dask client.
This property can be passed in the constructor or updated with model.set_params(client=client).
- Type
dask.distributed.Client
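For illustration, a short sketch of both ways of setting the client (the client object here is a placeholder for any running dask.distributed.Client):

    from dask.distributed import Client
    from lightgbm import DaskLGBMRegressor

    client = Client()                         # placeholder: any running client
    model = DaskLGBMRegressor(client=client)  # set at construction, or ...
    model.set_params(client=client)           # ... updated later via set_params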
- property evals_result_
The evaluation results if validation sets have been specified.
- Type
dict or None
- property feature_importances_
The feature importances (the higher, the more important).
Note
importance_type attribute is passed to the function to configure the type of importance values to be extracted.
- Type
array of shape = [n_features]
- property feature_name_
The names of features.
- Type
array of shape = [n_features]
- fit(X, y, sample_weight=None, init_score=None, eval_set=None, eval_names=None, eval_sample_weight=None, eval_init_score=None, eval_metric=None, early_stopping_rounds=None, **kwargs)[source]
Build a gradient boosting model from the training set (X, y).
- Parameters
X (Dask Array or Dask DataFrame of shape = [n_samples, n_features]) – Input feature matrix.
y (Dask Array, Dask DataFrame or Dask Series of shape = [n_samples]) – The target values (class labels in classification, real numbers in regression).
sample_weight (Dask Array or Dask Series of shape = [n_samples] or None, optional (default=None)) – Weights of training data.
init_score (Dask Array or Dask Series of shape = [n_samples] or None, optional (default=None)) – Init score of training data.
eval_set (list or None, optional (default=None)) – A list of (X, y) tuple pairs to use as validation sets.
eval_names (list of str, or None, optional (default=None)) – Names of eval_set.
eval_sample_weight (list of Dask Array or Dask Series, or None, optional (default=None)) – Weights of eval data.
eval_init_score (list of Dask Array or Dask Series, or None, optional (default=None)) – Init score of eval data.
eval_metric (str, callable, list or None, optional (default=None)) – If str, it should be a built-in evaluation metric to use. If callable, it should be a custom evaluation metric, see note below for more details. If list, it can be a list of built-in metrics, a list of custom evaluation metrics, or a mix of both. In either case, the metric from the model parameters will be evaluated and used as well. Default: 'l2' for LGBMRegressor, 'logloss' for LGBMClassifier, 'ndcg' for LGBMRanker.
verbose (bool or int, optional (default=True)) – Requires at least one evaluation data. If True, the eval metric on the eval set is printed at each boosting stage. If int, the eval metric on the eval set is printed at every verbose boosting stage. The last boosting stage or the boosting stage found by using early_stopping_rounds is also printed.
Example
With verbose = 4 and at least one item in eval_set, an evaluation metric is printed every 4 (instead of 1) boosting stages.
feature_name (list of str, or 'auto', optional (default='auto')) – Feature names. If 'auto' and data is pandas DataFrame, data column names are used.
categorical_feature (list of str or int, or 'auto', optional (default='auto')) – Categorical features. If list of int, interpreted as indices. If list of str, interpreted as feature names (need to specify feature_name as well). If 'auto' and data is pandas DataFrame, pandas unordered categorical columns are used. All values in categorical features should be less than int32 max value (2147483647). Large values could be memory consuming. Consider using consecutive integers starting from zero. All negative values in categorical features will be treated as missing values. The output cannot be monotonically constrained with respect to a categorical feature.
**kwargs – Other parameters passed through to LGBMRegressor.fit().
Note
Custom eval function expects a callable with following signatures: func(y_true, y_pred), func(y_true, y_pred, weight) or func(y_true, y_pred, weight, group) and returns (eval_name, eval_result, is_higher_better) or list of (eval_name, eval_result, is_higher_better):
- y_true : array-like of shape = [n_samples]
The target values.
- y_pred : array-like of shape = [n_samples] or shape = [n_samples * n_classes] (for multi-class task)
The predicted values. In case of custom objective, predicted values are returned before any transformation, e.g. they are raw margin instead of probability of positive class for binary task in this case.
- weight : array-like of shape = [n_samples]
The weight of samples.
- group : array-like
Group/query data. Only used in the learning-to-rank task. sum(group) = n_samples. For example, if you have a 100-document dataset with group = [10, 20, 40, 10, 10, 10], that means that you have 6 groups, where the first 10 records are in the first group, records 11-30 are in the second group, records 31-70 are in the third group, etc.
- eval_name : str
The name of the evaluation function (without whitespace).
- eval_result : float
The eval result.
- is_higher_better : bool
Is eval result higher better, e.g. AUC is is_higher_better.
For multi-class task, y_pred is grouped by class_id first, then by row_id. If you want to get the i-th row of y_pred in the j-th class, the access way is y_pred[j * num_data + i].
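A minimal sketch of such a callable for the regression case (the metric, names, and validation split below are illustrative assumptions, not part of the API):

    import numpy as np

    def rmse(y_true, y_pred):
        # Matches the func(y_true, y_pred) signature above and returns
        # (eval_name, eval_result, is_higher_better).
        result = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
        return 'rmse', result, False  # lower RMSE is better

    # Passed to fit() via eval_metric, together with a validation set:
    # model.fit(X, y, eval_set=[(X_valid, y_valid)], eval_metric=rmse)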
- get_params(deep=True)
Get parameters for this estimator.
- Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- property n_features_
The number of features of fitted model.
- Type
int
- property n_features_in_
The number of features of fitted model.
- Type
int
- property objective_
The concrete objective used while fitting this model.
- Type
str or callable
- predict(X, **kwargs)[source]
Return the predicted value for each sample.
- Parameters
X (Dask Array or Dask DataFrame of shape = [n_samples, n_features]) – Input feature matrix.
raw_score (bool, optional (default=False)) – Whether to predict raw scores.
start_iteration (int, optional (default=0)) – Start index of the iteration to predict. If <= 0, starts from the first iteration.
num_iteration (int or None, optional (default=None)) – Total number of iterations used in the prediction. If None, if the best iteration exists and start_iteration <= 0, the best iteration is used; otherwise, all iterations from start_iteration are used (no limits). If <= 0, all iterations from start_iteration are used (no limits).
pred_leaf (bool, optional (default=False)) – Whether to predict leaf index.
pred_contrib (bool, optional (default=False)) – Whether to predict feature contributions.
Note
If you want to get more explanations for your model’s predictions using SHAP values, like SHAP interaction values, you can install the shap package (https://github.com/slundberg/shap). Note that unlike the shap package, with pred_contrib we return a matrix with an extra column, where the last column is the expected value.
**kwargs – Other parameters for the prediction.
- Returns
predicted_result (Dask Array of shape = [n_samples]) – The predicted values.
X_leaves (Dask Array of shape = [n_samples, n_trees]) – If pred_leaf=True, the predicted leaf of every tree for each sample.
X_SHAP_values (Dask Array of shape = [n_samples, n_features + 1]) – If pred_contrib=True, the feature contributions for each sample.
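To make the return shapes concrete, a hedged sketch continuing the fitted model and Dask array X from the constructor example above:

    preds = model.predict(X)                        # shape (n_samples,)
    leaves = model.predict(X, pred_leaf=True)       # shape (n_samples, n_trees)
    contribs = model.predict(X, pred_contrib=True)  # shape (n_samples, n_features + 1);
                                                    # last column is the expected value
    print(contribs.compute()[:2])                   # Dask Arrays are lazy until computed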
- set_params(**params)
Set the parameters of this estimator.
- Parameters
**params – Parameter names with their new values.
- Returns
self – Returns self.
- Return type
object