Forecasting with XGBoost, LightGBM and other Gradient Boosting models¶
Gradient boosting models have gained popularity in the machine learning community due to their ability to achieve excellent results in a wide range of use cases, including both regression and classification. Although these models have traditionally been less common in forecasting, recent research has shown that they can be highly effective in this domain. Some of the key advantages of using gradient boosting models for forecasting include:
- The ease with which exogenous variables, in addition to autoregressive variables, can be incorporated into the model.
- The ability to capture non-linear relationships between variables.
- High scalability, which enables the models to handle large volumes of data.
There are several implementations of gradient boosting in Python, four of the most widely used being XGBoost, LightGBM, scikit-learn's HistGradientBoostingRegressor, and CatBoost. All of these libraries follow the scikit-learn API, which makes them compatible with skforecast (see the LightGBM sketch at the end of this document).
Note
All of the gradient boosting libraries mentioned above - XGBoost, LightGBM, HistGradientBoostingRegressor, and CatBoost - can handle categorical features natively, but each requires a specific encoding:

- For XGBoost and LightGBM models, categorical features need to be encoded as integers and then cast to type 'category'. This second step enables the models to automatically identify which features should be treated as categorical.
- For HistGradientBoostingRegressor, categorical features should also be encoded as integers, and the names of the categorical columns must be passed in the `categorical_features` argument when initializing the regressor.
- The native handling of categorical features in CatBoost models is not currently supported by skforecast. To use CatBoost models, it is necessary to convert categorical features into numerical values using techniques such as one-hot encoding or label encoding.
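As an illustration of the two-step encoding for XGBoost and LightGBM, here is a minimal sketch. The DataFrame `df` and the column 'weekday' are hypothetical, and XGBoost additionally requires `enable_categorical=True` when creating the regressor:

# Hypothetical example: encoding a categorical feature for XGBoost / LightGBM
# ==============================================================================
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

df = pd.DataFrame({'weekday': ['mon', 'tue', 'wed', 'mon']})

# Step 1: encode the categories as integers
df['weekday'] = OrdinalEncoder().fit_transform(df[['weekday']]).ravel().astype(int)

# Step 2: cast to dtype 'category' so the model identifies it as categorical
df['weekday'] = df['weekday'].astype('category')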
Libraries¶
# Libraries
# ==============================================================================
import pandas as pd
import matplotlib.pyplot as plt
from xgboost import XGBRegressor
from skforecast.ForecasterAutoreg import ForecasterAutoreg
Data¶
# Download data
# ==============================================================================
url = ('https://raw.githubusercontent.com/JoaquinAmatRodrigo/skforecast/master/data/h2o_exog.csv')
data = pd.read_csv(url, sep=',', header=0, names=['date', 'y', 'exog_1', 'exog_2'])
# Data preprocessing
# ==============================================================================
data['date'] = pd.to_datetime(data['date'], format='%Y/%m/%d')
data = data.set_index('date')
data = data.asfreq('MS')
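Setting the frequency with `asfreq` inserts `NaN` rows for any missing periods, so it is worth confirming the series is complete before training (a quick, illustrative check):

# Check that setting the frequency did not introduce missing values
# ==============================================================================
print(data.isnull().any())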
# Split train-test
# ==============================================================================
steps = 36
data_train = data.iloc[:-steps, :]
data_test = data.iloc[-steps:, :]
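A quick plot of the split (an illustrative sketch using the matplotlib import above) helps confirm that the test set covers the final 36 months:

# Plot train-test split
# ==============================================================================
fig, ax = plt.subplots(figsize=(9, 4))
data_train['y'].plot(ax=ax, label='train')
data_test['y'].plot(ax=ax, label='test')
ax.legend()
plt.show()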
Create and train forecaster¶
# Create and fit forecaster
# ==============================================================================
forecaster = ForecasterAutoreg(
                 regressor = XGBRegressor(random_state=123),
                 lags      = 8
             )
forecaster.fit(y=data_train['y'], exog=data_train[['exog_1', 'exog_2']])
forecaster
=================
ForecasterAutoreg
=================
Regressor: XGBRegressor(base_score=None, booster=None, callbacks=None, colsample_bylevel=None, colsample_bynode=None, colsample_bytree=None, early_stopping_rounds=None, enable_categorical=False, eval_metric=None, feature_types=None, gamma=None, gpu_id=None, grow_policy=None, importance_type=None, interaction_constraints=None, learning_rate=None, max_bin=None, max_cat_threshold=None, max_cat_to_onehot=None, max_delta_step=None, max_depth=None, max_leaves=None, min_child_weight=None, missing=nan, monotone_constraints=None, n_estimators=100, n_jobs=None, num_parallel_tree=None, predictor=None, random_state=123, ...)
Lags: [1 2 3 4 5 6 7 8]
Transformer for y: None
Transformer for exog: None
Window size: 8
Weight function included: False
Exogenous included: True
Type of exogenous variable: <class 'pandas.core.frame.DataFrame'>
Exogenous variables names: ['exog_1', 'exog_2']
Training range: [Timestamp('1992-04-01 00:00:00'), Timestamp('2005-06-01 00:00:00')]
Training index type: DatetimeIndex
Training index frequency: MS
Regressor parameters: {'objective': 'reg:squarederror', 'base_score': None, 'booster': None, 'callbacks': None, 'colsample_bylevel': None, 'colsample_bynode': None, 'colsample_bytree': None, 'early_stopping_rounds': None, 'enable_categorical': False, 'eval_metric': None, 'feature_types': None, 'gamma': None, 'gpu_id': None, 'grow_policy': None, 'importance_type': None, 'interaction_constraints': None, 'learning_rate': None, 'max_bin': None, 'max_cat_threshold': None, 'max_cat_to_onehot': None, 'max_delta_step': None, 'max_depth': None, 'max_leaves': None, 'min_child_weight': None, 'missing': nan, 'monotone_constraints': None, 'n_estimators': 100, 'n_jobs': None, 'num_parallel_tree': None, 'predictor': None, 'random_state': 123, 'reg_alpha': None, 'reg_lambda': None, 'sampling_method': None, 'scale_pos_weight': None, 'subsample': None, 'tree_method': None, 'validate_parameters': None, 'verbosity': None}
Creation date: 2023-04-08 19:11:30
Last fit date: 2023-04-08 19:11:30
Skforecast version: 0.7.0
Python version: 3.10.0
Forecaster id: None
Prediction¶
# Predict
# ==============================================================================
forecaster.predict(steps=10, exog=data_test[['exog_1', 'exog_2']])
2005-07-01    0.882285
2005-08-01    0.971786
2005-09-01    1.106107
2005-10-01    1.064638
2005-11-01    1.094615
2005-12-01    1.139401
2006-01-01    0.948508
2006-02-01    0.784839
2006-03-01    0.774227
2006-04-01    0.789593
Freq: MS, Name: pred, dtype: float64
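Since the test set spans 36 months, the forecast accuracy over the full horizon can be checked against data_test. This is an illustrative sketch; it assumes scikit-learn's mean_squared_error, which is not imported in the Libraries cell above:

# Test error
# ==============================================================================
from sklearn.metrics import mean_squared_error

# Predict the full test horizon so predictions and data_test align
predictions = forecaster.predict(steps=36, exog=data_test[['exog_1', 'exog_2']])
error_mse = mean_squared_error(y_true=data_test['y'], y_pred=predictions)
print(f"Test error (MSE): {error_mse}")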
Feature importance¶
# Predictors importance
# ==============================================================================
forecaster.get_feature_importance()
|   | feature | importance |
|---|---------|------------|
| 0 | lag_1   | 0.286422   |
| 1 | lag_2   | 0.125064   |
| 2 | lag_3   | 0.001548   |
| 3 | lag_4   | 0.027828   |
| 4 | lag_5   | 0.075020   |
| 5 | lag_6   | 0.011337   |
| 6 | lag_7   | 0.058954   |
| 7 | lag_8   | 0.045198   |
| 8 | exog_1  | 0.075610   |
| 9 | exog_2  | 0.293018   |
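Because all the libraries listed at the start follow the scikit-learn API, swapping the regressor is a one-line change. A minimal sketch using LightGBM's LGBMRegressor (it assumes the lightgbm package is installed; it is not imported in the Libraries cell above):

# Same workflow with a LightGBM regressor
# ==============================================================================
from lightgbm import LGBMRegressor

forecaster_lgbm = ForecasterAutoreg(
                      regressor = LGBMRegressor(random_state=123),
                      lags      = 8
                  )
forecaster_lgbm.fit(y=data_train['y'], exog=data_train[['exog_1', 'exog_2']])
forecaster_lgbm.predict(steps=10, exog=data_test[['exog_1', 'exog_2']])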