lightgbm.Dataset
- class lightgbm.Dataset(data, label=None, reference=None, weight=None, group=None, init_score=None, feature_name='auto', categorical_feature='auto', params=None, free_raw_data=True, position=None)[source]
Bases: object
Dataset in LightGBM.
LightGBM does not train on raw data. It discretizes continuous features into histogram bins, tries to combine categorical features, and automatically handles missing and infinite values.
This class handles that preprocessing, and holds that alternative representation of the input data.
- __init__(data, label=None, reference=None, weight=None, group=None, init_score=None, feature_name='auto', categorical_feature='auto', params=None, free_raw_data=True, position=None)[source]
Initialize Dataset.
- Parameters:
data (str, pathlib.Path, numpy array, pandas DataFrame, H2O DataTable's Frame, scipy.sparse, Sequence, list of Sequence, list of numpy array or pyarrow Table) – Data source of Dataset. If str or pathlib.Path, it represents the path to a text file (CSV, TSV, or LibSVM) or a LightGBM Dataset binary file.
label (list, numpy 1-D array, pandas Series / one-column DataFrame, pyarrow Array, pyarrow ChunkedArray or None, optional (default=None)) – Label of the data.
reference (Dataset or None, optional (default=None)) – If this is Dataset for validation, training data should be used as reference.
weight (list, numpy 1-D array, pandas Series, pyarrow Array, pyarrow ChunkedArray or None, optional (default=None)) – Weight for each instance. Weights should be non-negative.
group (list, numpy 1-D array, pandas Series, pyarrow Array, pyarrow ChunkedArray or None, optional (default=None)) – Group/query data. Only used in the learning-to-rank task. sum(group) = n_samples. For example, if you have a 100-document dataset with group = [10, 20, 40, 10, 10, 10], that means that you have 6 groups, where the first 10 records are in the first group, records 11-30 are in the second group, records 31-70 are in the third group, etc.
init_score (list, list of lists (for multi-class task), numpy array, pandas Series, pandas DataFrame (for multi-class task), pyarrow Array, pyarrow ChunkedArray, pyarrow Table (for multi-class task) or None, optional (default=None)) – Init score for Dataset.
feature_name (list of str, or 'auto', optional (default="auto")) – Feature names. If ‘auto’ and data is pandas DataFrame or pyarrow Table, data columns names are used.
categorical_feature (list of str or int, or 'auto', optional (default="auto")) – Categorical features. If list of int, interpreted as indices. If list of str, interpreted as feature names (need to specify feature_name as well). If 'auto' and data is pandas DataFrame, pandas unordered categorical columns are used. All values in categorical features will be cast to int32 and thus should be less than int32 max value (2147483647). Large values could be memory consuming. Consider using consecutive integers starting from zero. All negative values in categorical features will be treated as missing values. The output cannot be monotonically constrained with respect to a categorical feature. Floating point numbers in categorical features will be rounded towards 0.
params (dict or None, optional (default=None)) – Other parameters for Dataset.
free_raw_data (bool, optional (default=True)) – If True, raw data is freed after constructing inner Dataset.
position (numpy 1-D array, pandas Series or None, optional (default=None)) – Position of items used in unbiased learning-to-rank task.
Methods
__init__(data[, label, reference, weight, ...]) – Initialize Dataset.
add_features_from(other) – Add features from other Dataset to the current Dataset.
construct() – Lazy init.
create_valid(data[, label, weight, group, ...]) – Create validation data align with current Dataset.
feature_num_bin(feature) – Get the number of bins for a feature.
get_data() – Get the raw data of the Dataset.
get_feature_name() – Get the names of columns (features) in the Dataset.
get_field(field_name) – Get property from the Dataset.
get_group() – Get the group of the Dataset.
get_init_score() – Get the initial score of the Dataset.
get_label() – Get the label of the Dataset.
get_params() – Get the used parameters in the Dataset.
get_position() – Get the position of the Dataset.
get_ref_chain([ref_limit]) – Get a chain of Dataset objects.
get_weight() – Get the weight of the Dataset.
num_data() – Get the number of rows in the Dataset.
num_feature() – Get the number of columns (features) in the Dataset.
save_binary(filename) – Save Dataset to a binary file.
set_categorical_feature(categorical_feature) – Set categorical features.
set_feature_name(feature_name) – Set feature name.
set_field(field_name, data) – Set property into the Dataset.
set_group(group) – Set group size of Dataset (used for ranking).
set_init_score(init_score) – Set init score of Booster to start from.
set_label(label) – Set label of Dataset.
set_position(position) – Set position of Dataset (used for ranking).
set_reference(reference) – Set reference Dataset.
set_weight(weight) – Set weight of each instance.
subset(used_indices[, params]) – Get subset of current Dataset.
- add_features_from(other)[source]
Add features from other Dataset to the current Dataset.
Both Datasets must be constructed before calling this method.
- create_valid(data, label=None, weight=None, group=None, init_score=None, params=None, position=None)[source]
Create validation data align with current Dataset.
- Parameters:
data (str, pathlib.Path, numpy array, pandas DataFrame, H2O DataTable's Frame, scipy.sparse, Sequence, list of Sequence or list of numpy array) – Data source of Dataset. If str or pathlib.Path, it represents the path to a text file (CSV, TSV, or LibSVM) or a LightGBM Dataset binary file.
label (list, numpy 1-D array, pandas Series / one-column DataFrame, pyarrow Array, pyarrow ChunkedArray or None, optional (default=None)) – Label of the data.
weight (list, numpy 1-D array, pandas Series, pyarrow Array, pyarrow ChunkedArray or None, optional (default=None)) – Weight for each instance. Weights should be non-negative.
group (list, numpy 1-D array, pandas Series, pyarrow Array, pyarrow ChunkedArray or None, optional (default=None)) – Group/query data. Only used in the learning-to-rank task. sum(group) = n_samples. For example, if you have a 100-document dataset with group = [10, 20, 40, 10, 10, 10], that means that you have 6 groups, where the first 10 records are in the first group, records 11-30 are in the second group, records 31-70 are in the third group, etc.
init_score (list, list of lists (for multi-class task), numpy array, pandas Series, pandas DataFrame (for multi-class task), pyarrow Array, pyarrow ChunkedArray, pyarrow Table (for multi-class task) or None, optional (default=None)) – Init score for Dataset.
params (dict or None, optional (default=None)) – Other parameters for validation Dataset.
position (numpy 1-D array, pandas Series or None, optional (default=None)) – Position of items used in unbiased learning-to-rank task.
- Returns:
valid – Validation Dataset with reference to self.
- Return type:
Dataset
- feature_num_bin(feature)[source]
Get the number of bins for a feature.
New in version 4.0.0.
- Parameters:
feature (int or str) – Index or name of the feature.
- Returns:
number_of_bins – The number of constructed bins for the feature in the Dataset.
- Return type:
int
- get_feature_name()[source]
Get the names of columns (features) in the Dataset.
- Returns:
feature_names – The names of columns (features) in the Dataset.
- Return type:
list of str
- get_field(field_name)[source]
Get property from the Dataset.
Can only be run on a constructed Dataset.
Unlike get_group(), get_init_score(), get_label(), get_position(), and get_weight(), this method ignores any raw data passed into lgb.Dataset() on the Python side, and will only read data from the constructed C++ Dataset object.
- Parameters:
field_name (str) – The field name of the information.
- Returns:
info – A numpy array with information from the Dataset.
- Return type:
numpy array or None
- get_group()[source]
Get the group of the Dataset.
- Returns:
group – Group/query data. Only used in the learning-to-rank task. sum(group) = n_samples. For example, if you have a 100-document dataset with group = [10, 20, 40, 10, 10, 10], that means that you have 6 groups, where the first 10 records are in the first group, records 11-30 are in the second group, records 31-70 are in the third group, etc. For a constructed Dataset, this will only return None or a numpy array.
- Return type:
list, numpy 1-D array, pandas Series or None
- get_init_score()[source]
Get the initial score of the Dataset.
- Returns:
init_score – Init score of Booster. For a constructed Dataset, this will only return None or a numpy array.
- Return type:
list, list of lists (for multi-class task), numpy array, pandas Series, pandas DataFrame (for multi-class task), or None
- get_label()[source]
Get the label of the Dataset.
- Returns:
label – The label information from the Dataset. For a constructed Dataset, this will only return a numpy array.
- Return type:
list, numpy 1-D array, pandas Series / one-column DataFrame or None
- get_params()[source]
Get the used parameters in the Dataset.
- Returns:
params – The used parameters in this Dataset object.
- Return type:
dict
- get_position()[source]
Get the position of the Dataset.
- Returns:
position – Position of items used in unbiased learning-to-rank task. For a constructed Dataset, this will only return None or a numpy array.
- Return type:
numpy 1-D array, pandas Series or None
- get_ref_chain(ref_limit=100)[source]
Get a chain of Dataset objects.
Starts with this Dataset r, then goes to r.reference (if it exists), then to r.reference.reference, etc. until we hit ref_limit or a reference loop.
- Parameters:
ref_limit (int, optional (default=100)) – The limit number of references.
- Returns:
ref_chain – Chain of references of the Datasets.
- Return type:
set of Dataset
- get_weight()[source]
Get the weight of the Dataset.
- Returns:
weight – Weight for each data point from the Dataset. Weights should be non-negative. For a constructed Dataset, this will only return None or a numpy array.
- Return type:
list, numpy 1-D array, pandas Series or None
- num_data()[source]
Get the number of rows in the Dataset.
- Returns:
number_of_rows – The number of rows in the Dataset.
- Return type:
int
- num_feature()[source]
Get the number of columns (features) in the Dataset.
- Returns:
number_of_columns – The number of columns (features) in the Dataset.
- Return type:
int
- save_binary(filename)[source]
Save Dataset to a binary file.
Note
Please note that init_score is not saved in binary file. If you need it, please set it again after loading Dataset.
- Parameters:
filename (str or pathlib.Path) – Name of the output file.
- Returns:
self – Returns self.
- Return type:
Dataset
- set_categorical_feature(categorical_feature)[source]
Set categorical features.
- Parameters:
categorical_feature (list of str or int, or 'auto') – Names or indices of categorical features.
- Returns:
self – Dataset with set categorical features.
- Return type:
Dataset
- set_feature_name(feature_name)[source]
Set feature name.
- Parameters:
feature_name (list of str) – Feature names.
- Returns:
self – Dataset with set feature name.
- Return type:
Dataset
- set_field(field_name, data)[source]
Set property into the Dataset.
- Parameters:
field_name (str) – The field name of the information.
data (list, list of lists (for multi-class task), numpy array, pandas Series, pandas DataFrame (for multi-class task), pyarrow Array, pyarrow ChunkedArray or None) – The data to be set.
- Returns:
self – Dataset with set property.
- Return type:
Dataset
- set_group(group)[source]
Set group size of Dataset (used for ranking).
- Parameters:
group (list, numpy 1-D array, pandas Series, pyarrow Array, pyarrow ChunkedArray or None) – Group/query data. Only used in the learning-to-rank task. sum(group) = n_samples. For example, if you have a 100-document dataset with group = [10, 20, 40, 10, 10, 10], that means that you have 6 groups, where the first 10 records are in the first group, records 11-30 are in the second group, records 31-70 are in the third group, etc.
- Returns:
self – Dataset with set group.
- Return type:
Dataset
- set_init_score(init_score)[source]
Set init score of Booster to start from.
- Parameters:
init_score (list, list of lists (for multi-class task), numpy array, pandas Series, pandas DataFrame (for multi-class task), pyarrow Array, pyarrow ChunkedArray, pyarrow Table (for multi-class task) or None) – Init score for Booster.
- Returns:
self – Dataset with set init score.
- Return type:
Dataset
- set_label(label)[source]
Set label of Dataset.
- Parameters:
label (list, numpy 1-D array, pandas Series / one-column DataFrame, pyarrow Array, pyarrow ChunkedArray or None) – The label information to be set into Dataset.
- Returns:
self – Dataset with set label.
- Return type:
Dataset
- set_position(position)[source]
Set position of Dataset (used for ranking).
- Parameters:
position (numpy 1-D array, pandas Series or None, optional (default=None)) – Position of items used in unbiased learning-to-rank task.
- Returns:
self – Dataset with set position.
- Return type:
Dataset
- set_weight(weight)[source]
Set weight of each instance.
- Parameters:
weight (list, numpy 1-D array, pandas Series, pyarrow Array, pyarrow ChunkedArray or None) – Weight to be set for each data point. Weights should be non-negative.
- Returns:
self – Dataset with set weight.
- Return type:
Dataset
- subset(used_indices, params=None)[source]
Get subset of current Dataset.
- Parameters:
used_indices (list of int) – Indices used to create the subset.
params (dict or None, optional (default=None)) – These parameters will be passed to Dataset constructor.
- Returns:
subset – Subset of the current Dataset.
- Return type:
Dataset