Understanding the forecaster parameters¶
Understanding what can be done when initializing a forecaster with skforecast can have a significant impact on the accuracy and effectiveness of the model. This guide highlights key considerations to keep in mind when initializing a forecaster and how these functionalities can be used to create more powerful and accurate forecasting models in Python.
We will explore the arguments that can be included in a ForecasterAutoreg, but this can be extrapolated to any of the skforecast forecasters.
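As a minimal sketch (assuming the pre-0.14 import path skforecast.ForecasterAutoreg and the argument names used by recent 0.x releases), a forecaster can be initialized with its main arguments as follows:

```python
from sklearn.ensemble import RandomForestRegressor
from skforecast.ForecasterAutoreg import ForecasterAutoreg

# Only `regressor` and `lags` are mandatory; the remaining arguments
# default to None and are covered in the sections below.
forecaster = ForecasterAutoreg(
    regressor        = RandomForestRegressor(random_state=123),
    lags             = 5,
    transformer_y    = None,
    transformer_exog = None,
    weight_func      = None,
    fit_kwargs       = None,
    forecaster_id    = None
)
forecaster
```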
Tip
To be able to create and train a forecaster, at least regressor and lags must be specified.
General parameters¶
Regressor¶
Skforecast is a Python library that facilitates using scikit-learn regressors as multi-step forecasters and also works with any regressor compatible with the scikit-learn API. Therefore, any of these regressors can be used to create a forecaster:
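For illustration (assuming scikit-learn and lightgbm are installed), each of these regressors could be used interchangeably:

```python
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor
from lightgbm import LGBMRegressor
from skforecast.ForecasterAutoreg import ForecasterAutoreg

# Any regressor that follows the scikit-learn API can act as the internal model.
forecaster_ridge = ForecasterAutoreg(regressor=Ridge(), lags=5)
forecaster_gbm   = ForecasterAutoreg(regressor=GradientBoostingRegressor(), lags=5)
forecaster_lgbm  = ForecasterAutoreg(regressor=LGBMRegressor(random_state=123), lags=5)
```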
Lags¶
To apply machine learning models to forecasting problems, the time series needs to be transformed into a matrix where each value is associated with a specific time window (known as lags) that precedes it. In the context of time series, a lag with respect to a time step t is defined as the value of the series at a previous time step. For instance, lag 1 represents the value at time step t-1, while lag m represents the value at time step t-m.
This transformation is essential for machine learning models to capture the dependencies and patterns that exist between past and future values in a time series. By using lags as input features, machine learning models can learn from the past and make predictions about future values. The number of lags used as input features in the matrix is an important hyperparameter that needs to be carefully tuned to obtain the best performance of the model.
Time series transformation into a matrix of 5 lags and a vector with the value of the series that follows each row of the matrix.
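As an illustrative sketch (the specific lag values below are arbitrary), lags can be passed as an integer, in which case all lags up to that value are used, or as a list or array of specific lags:

```python
import numpy as np
from sklearn.linear_model import Ridge
from skforecast.ForecasterAutoreg import ForecasterAutoreg

# lags as an integer: use lags 1 to 5.
forecaster = ForecasterAutoreg(regressor=Ridge(), lags=5)

# lags as a list or array: use only the specified lags.
forecaster = ForecasterAutoreg(regressor=Ridge(), lags=[1, 7, 14])
forecaster = ForecasterAutoreg(regressor=Ridge(), lags=np.arange(1, 8))
```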
Transformers¶
Skforecast has two arguments in all the forecasters that allow more detailed control over input data transformations. This feature is particularly useful as many machine learning models require specific data pre-processing transformations. For example, linear models may benefit from features being scaled, or categorical features being transformed into numerical values.
Both arguments expect an instance of a transformer (preprocessor) compatible with the scikit-learn preprocessing API, implementing the methods fit, transform, fit_transform, and inverse_transform.
More information: Scikit-learn transformers and pipelines.
Example
In this example, a scikit-learn StandardScaler preprocessor is used for both the time series and the exogenous variables.
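A possible sketch of this setup, assuming the transformer_y and transformer_exog argument names of ForecasterAutoreg:

```python
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from skforecast.ForecasterAutoreg import ForecasterAutoreg

# Scale the target series and the exogenous variables before fitting the regressor.
forecaster = ForecasterAutoreg(
    regressor        = Ridge(),
    lags             = 5,
    transformer_y    = StandardScaler(),
    transformer_exog = StandardScaler()
)
```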
Weighted time series forecasting¶
The presence of unreliable or unrepresentative values in the data history poses a significant challenge, as it hinders model learning. However, most forecasting algorithms require complete time series data, making it impossible to remove these observations. An alternative solution is to reduce the weight of the affected observations during model training. Skforecast facilitates the control of data weights with the weight_func argument.
More information: Weighted time series forecasting.
Example
The following example shows how part of the time series can be excluded from model training by assigning it a weight of zero with a custom_weights function that depends on the index.
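A minimal sketch of such a weighting function, assuming a series with a daily DatetimeIndex and an arbitrary, purely illustrative unreliable window (2020-06-01 to 2020-10-31):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from skforecast.ForecasterAutoreg import ForecasterAutoreg

def custom_weights(index):
    """
    Return a weight of 0 for observations inside the unreliable period
    and 1 for the rest. `index` is the index of the training series.
    """
    weights = np.where(
        (index >= '2020-06-01') & (index <= '2020-10-31'),
        0,
        1
    )
    return weights

forecaster = ForecasterAutoreg(
    regressor   = RandomForestRegressor(random_state=123),
    lags        = 5,
    weight_func = custom_weights
)
```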
Inclusion of kwargs in the regressor fit method¶
Some regressors accept additional configuration arguments in their fit method. The forecaster parameter fit_kwargs allows these arguments to be set when the forecaster is declared.
Danger
To add weights to the forecaster, it must be done through the weight_func argument and not through fit_kwargs.
Example
The following example demonstrates the inclusion of categorical features in an LGBM regressor. This must be done through the LGBMRegressor fit method (see Fit parameters in the lightgbm documentation).
More information: Categorical features.
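A possible sketch, assuming the exogenous variables form a DataFrame with a hypothetical categorical column named 'weather':

```python
from lightgbm import LGBMRegressor
from skforecast.ForecasterAutoreg import ForecasterAutoreg

# `categorical_feature` is forwarded to LGBMRegressor.fit() at training time.
# 'weather' stands in for a categorical column of the exogenous DataFrame.
forecaster = ForecasterAutoreg(
    regressor  = LGBMRegressor(random_state=123),
    lags       = 5,
    fit_kwargs = {'categorical_feature': ['weather']}
)
```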
Forecaster ID¶
Name used as an identifier of the forecaster. It may be used, for example, to identify the time series being modeled.
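For example (the identifier string below is arbitrary):

```python
from sklearn.linear_model import Ridge
from skforecast.ForecasterAutoreg import ForecasterAutoreg

forecaster = ForecasterAutoreg(
    regressor     = Ridge(),
    lags          = 5,
    forecaster_id = 'sales_store_1'
)

# The identifier is stored with the forecaster and shown in its summary.
forecaster.forecaster_id
```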
Direct multi-step parameters¶
For the forecasters that follow a direct multi-step strategy (ForecasterAutoregDirect and ForecasterAutoregMultiVariate), there are two additional parameters beyond those mentioned above.
Steps¶
Direct multi-step forecasting consists of training a different model for each step of the forecast horizon. For example, to predict the next 5 values of a time series, 5 different models are trained, one for each step. As a result, the predictions are independent of each other.
The number of models to be trained is specified by the steps parameter.
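A minimal sketch, assuming the skforecast.ForecasterAutoregDirect import path; five models are trained, one per step of the horizon:

```python
from sklearn.linear_model import Ridge
from skforecast.ForecasterAutoregDirect import ForecasterAutoregDirect

# One Ridge model is trained for each of the 5 steps of the forecast horizon.
forecaster = ForecasterAutoregDirect(
    regressor = Ridge(),
    lags      = 5,
    steps     = 5
)
```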
Number of jobs¶
The n_jobs parameter allows multi-process parallelization to train the regressors for all steps simultaneously.
The benefits of parallelization depend on several factors, including the regressor used, the number of fits to be performed, and the volume of data involved. When the n_jobs parameter is set to 'auto', the level of parallelization is automatically selected based on heuristic rules that aim to choose the best option for each scenario.
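A possible sketch, again assuming ForecasterAutoregDirect; n_jobs also accepts an integer to set a fixed number of parallel processes:

```python
from sklearn.linear_model import Ridge
from skforecast.ForecasterAutoregDirect import ForecasterAutoregDirect

# 'auto' lets skforecast pick the level of parallelization based on its
# internal heuristics; an integer sets a fixed number of processes.
forecaster = ForecasterAutoregDirect(
    regressor = Ridge(),
    lags      = 5,
    steps     = 10,
    n_jobs    = 'auto'
)
```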