Parallel Learning Guide¶
This is a guide to parallel learning in LightGBM.
Follow the Quick Start to learn how to use LightGBM first.
LightGBM can also be used in a distributed fashion through the following external libraries:
- The Dask API of LightGBM (formerly a separate package) allows you to build ML workflows on Dask distributed data structures.
- MMLSpark integrates LightGBM into the Apache Spark ecosystem. Its example demonstrates how easily the power of Spark can be utilized with LightGBM.
- Kubeflow Fairing supports using LightGBM in a Kubernetes cluster. Its examples help you get started with LightGBM in a hybrid cloud environment. You can also use the Kubeflow XGBoost Operator to train a LightGBM model; please check its example for how to do this.
Choose Appropriate Parallel Algorithm¶
LightGBM currently provides 3 parallel learning algorithms.
| Parallel Algorithm | How to Use |
|---|---|
| Data parallel | tree_learner=data |
| Feature parallel | tree_learner=feature |
| Voting parallel | tree_learner=voting |
These algorithms are suited for different scenarios, as listed in the following table:
|   | #data is small | #data is large |
|---|---|---|
| #feature is small | Feature Parallel | Data Parallel |
| #feature is large | Feature Parallel | Voting Parallel |
More details about these parallel algorithms can be found in optimization in parallel learning.
Build Parallel Version¶
The default build version supports parallel learning based on sockets.
If you need to build a parallel version with MPI support, please refer to the Installation Guide.
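As a rough sketch (the Installation Guide is the authoritative reference), building an MPI-enabled version on Linux typically looks like the following, assuming CMake and an MPI implementation such as Open MPI are already installed:

git clone --recursive https://github.com/microsoft/LightGBM
cd LightGBM
mkdir build
cd build
# USE_MPI=ON switches the parallel learning backend from sockets to MPI
cmake -DUSE_MPI=ON ..
make -j4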
Preparation¶
Socket Version¶
Collect the IP addresses of all machines that will run parallel learning and allocate one TCP port (assume 12345 here) on all of them,
then change firewall rules to allow incoming traffic on this port (12345); a sample command is shown after the file listing below. Write these IPs and ports into one file (assume mlist.txt
here), like the following:
machine1_ip 12345
machine2_ip 12345
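Firewall configuration differs by system; as one hedged example, on a Linux machine that uses ufw, the port could be opened with:

# allow incoming TCP connections on the port used for parallel learning
sudo ufw allow 12345/tcp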
Run Parallel Learning¶
Socket Version¶
1. Edit the following parameters in the config file (a complete example config file is sketched after these steps):

   - tree_learner=your_parallel_algorithm, edit your_parallel_algorithm (e.g. feature/data) here.
   - num_machines=your_num_machines, edit your_num_machines (e.g. 4) here.
   - machine_list_file=mlist.txt, mlist.txt is created in the Preparation section.
   - local_listen_port=12345, 12345 is allocated in the Preparation section.

2. Copy the data file, executable file, config file and mlist.txt to all machines.

3. Run the following command on all machines; change your_config_file to your real config file.

   For Windows: lightgbm.exe config=your_config_file

   For Linux: ./lightgbm config=your_config_file
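Putting these parameters together, a minimal config file for the socket version might look like the sketch below. The task, objective and data file name (train.txt), as well as the choice of the data parallel algorithm on 4 machines, are assumptions for illustration only:

# hypothetical training task and data
task = train
objective = binary
data = train.txt
# parallel learning settings (socket version)
tree_learner = data
num_machines = 4
machine_list_file = mlist.txt
local_listen_port = 12345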
MPI Version¶
1. Edit the following parameters in the config file:

   - tree_learner=your_parallel_algorithm, edit your_parallel_algorithm (e.g. feature/data) here.
   - num_machines=your_num_machines, edit your_num_machines (e.g. 4) here.

2. Copy the data file, executable file, config file and mlist.txt to all machines.

   Note: MPI needs to be run in the same path on all machines.

3. Run the following command on one machine (no need to run it on all machines); change your_config_file to your real config file.

   For Windows: mpiexec.exe /machinefile mlist.txt lightgbm.exe config=your_config_file

   For Linux: mpiexec --machinefile mlist.txt ./lightgbm config=your_config_file
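Note that in the MPI version, mlist.txt is consumed by mpiexec as a machinefile rather than by LightGBM itself, so it typically lists one host name or IP address per line without a port, for example (host names are placeholders):

machine1_ip
machine2_ip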