Parallelization in skforecast

Parallelization rules
The n_jobs argument enables parallelization of specific functionalities in the skforecast library to improve speed. Parallelization is applied at two key levels: during forecaster fitting and during backtesting, which also underlies hyperparameter search. When the n_jobs argument is set to its default value, 'auto', the library dynamically determines the number of jobs to use according to the following rules:
Regressor

- If the regressor is a LGBMRegressor with internal n_jobs != 1, then n_jobs = 1 in both forecaster fitting and backtesting (as illustrated below). This is because lightgbm is highly optimized for gradient boosting and parallelizes operations at a very fine-grained level, so additional parallelization is unnecessary and potentially harmful due to resource contention.
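For example, the two LGBMRegressor configurations below (both reused in the benchmarks later in this document) lead the 'auto' heuristic to opposite decisions. This is an illustrative sketch, not skforecast's internal code:

# LightGBM's internal parallelism vs skforecast's fold-level parallelism
# ==============================================================================
from lightgbm import LGBMRegressor
from skforecast.recursive import ForecasterRecursive

# Internal n_jobs=1: under n_jobs='auto', skforecast is free to parallelize
# backtesting folds across CPU cores.
forecaster_fold_parallel = ForecasterRecursive(
    regressor = LGBMRegressor(n_jobs=1, verbose=-1),
    lags      = 50
)

# Internal n_jobs=-1: lightgbm already uses all cores, so under n_jobs='auto'
# skforecast falls back to n_jobs=1 to avoid resource contention.
forecaster_lgbm_parallel = ForecasterRecursive(
    regressor = LGBMRegressor(n_jobs=-1, verbose=-1),
    lags      = 50
)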
Forecaster Fitting

- If the forecaster is either ForecasterDirect or ForecasterDirectMultiVariate and the underlying regressor is a linear regressor, then n_jobs is set to 1 (see the example after this list).
- Otherwise, n_jobs is set to cpu_count() - 1, in line with the number of available CPU cores.
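Direct forecasters fit one model per step, so fitting itself can be parallelized. As a hedged illustration, assuming (as in recent skforecast versions) that the constructor exposes an n_jobs argument, the fitting-level heuristic can be overridden explicitly:

# Overriding the fitting-level heuristic in a direct forecaster
# ==============================================================================
from sklearn.linear_model import Ridge
from skforecast.direct import ForecasterDirect

# With a linear regressor, n_jobs='auto' would fit the per-step models
# sequentially (n_jobs=1); an explicit integer overrides the heuristic.
forecaster = ForecasterDirect(
    regressor = Ridge(),
    steps     = 10,
    lags      = 10,
    n_jobs    = 1
)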
Backtesting

- If refit is an integer, then n_jobs = 1. This is because parallelization doesn't work with intermittent refit.
- If the forecaster is ForecasterRecursive and the underlying regressor is linear, n_jobs is set to 1.
- If the forecaster is ForecasterRecursive and the underlying regressor is not a linear regressor, n_jobs is set to cpu_count() - 1.
- If the forecaster is ForecasterDirect or ForecasterDirectMultiVariate and refit = True, n_jobs is set to cpu_count() - 1.
- If the forecaster is ForecasterDirect or ForecasterDirectMultiVariate and refit = False, n_jobs is set to 1.
- If the forecaster is ForecasterRecursiveMultiSeries, n_jobs is set to cpu_count() - 1.
- If the forecaster is ForecasterSarimax or ForecasterEquivalentDate, then n_jobs = 1.

These rules are summarized in the sketch after this list.
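The following is a simplified decision function written for illustration only; the actual logic lives in select_n_jobs_backtesting and may differ in detail (for example, in how linear regressors are detected):

# Sketch of the 'auto' heuristic for backtesting (illustrative only)
# ==============================================================================
from joblib import cpu_count

def select_n_jobs_sketch(forecaster_name: str, regressor_name: str, refit) -> int:
    # Intermittent refit (refit given as an integer) cannot be parallelized.
    if isinstance(refit, int) and not isinstance(refit, bool):
        return 1
    # A LGBMRegressor with internal n_jobs != 1 is handled upstream: n_jobs = 1.
    linear = regressor_name in ('LinearRegression', 'Ridge', 'Lasso', 'ElasticNet')
    if forecaster_name == 'ForecasterRecursive':
        return 1 if linear else cpu_count() - 1
    if forecaster_name in ('ForecasterDirect', 'ForecasterDirectMultiVariate'):
        return cpu_count() - 1 if refit else 1
    if forecaster_name == 'ForecasterRecursiveMultiSeries':
        return cpu_count() - 1
    if forecaster_name in ('ForecasterSarimax', 'ForecasterEquivalentDate'):
        return 1
    return 1

print(select_n_jobs_sketch('ForecasterRecursive', 'Ridge', refit=False))  # 1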
⚠ Warning

The automatic selection of the parallelization level relies on heuristics and is therefore not guaranteed to be optimal. In addition, keep in mind that many regressors already parallelize their fitting procedures internally, so introducing additional parallelization may not improve overall performance. For a more detailed look at parallelization, visit select_n_jobs_backtesting and select_n_jobs_fit_forecaster.
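Regardless of these heuristics, n_jobs can always be set explicitly, as the benchmarks below do. A minimal, self-contained example:

# Setting n_jobs explicitly in backtesting
# ==============================================================================
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
from skforecast.recursive import ForecasterRecursive
from skforecast.model_selection import TimeSeriesFold, backtesting_forecaster

y = pd.Series(np.random.default_rng(seed=123).random(500), name="y")
forecaster = ForecasterRecursive(regressor=Ridge(), lags=10)
cv = TimeSeriesFold(steps=10, initial_train_size=250, refit=False)

# n_jobs accepts 'auto' (apply the rules above) or an integer:
# 1 runs sequentially, -1 uses all available cores.
metric, predictions = backtesting_forecaster(
    forecaster = forecaster,
    y          = y,
    cv         = cv,
    metric     = 'mean_squared_error',
    n_jobs     = 1
)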
Libraries and data
# Libraries
# ==============================================================================
import platform
import psutil
import skforecast
import pandas as pd
import numpy as np
import scipy
import sklearn
import time
import warnings
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.preprocessing import StandardScaler
from lightgbm import LGBMRegressor
from skforecast.recursive import ForecasterRecursive
from skforecast.direct import ForecasterDirect
from skforecast.recursive import ForecasterRecursiveMultiSeries
from skforecast.direct import ForecasterDirectMultiVariate
from skforecast.model_selection import TimeSeriesFold
from skforecast.model_selection import backtesting_forecaster
from skforecast.model_selection import grid_search_forecaster
from skforecast.model_selection import grid_search_forecaster_multiseries
from skforecast.model_selection import backtesting_forecaster_multiseries
# Versions
# ==============================================================================
print(f"Python version : {platform.python_version()}")
print(f"scikit-learn version: {sklearn.__version__}")
print(f"skforecast version : {skforecast.__version__}")
print(f"pandas version : {pd.__version__}")
print(f"numpy version : {np.__version__}")
print(f"scipy version : {scipy.__version__}")
print("")
# System information
# ==============================================================================
print(f"Processor type: {platform.processor()}")
print(f"Platform type: {platform.platform()}")
print(f"Operating system: {platform.system()}")
print(f"Operating system release: {platform.release()}")
print(f"Operating system version: {platform.version()}")
print(f"Number of physical cores: {psutil.cpu_count(logical=False)}")
print(f"Number of logical cores: {psutil.cpu_count(logical=True)}")
Python version : 3.12.4
scikit-learn version: 1.5.2
skforecast version : 0.14.0
pandas version : 2.2.3
numpy version : 2.0.2
scipy version : 1.14.1

Processor type: Intel64 Family 6 Model 140 Stepping 1, GenuineIntel
Platform type: Windows-11-10.0.26100-SP0
Operating system: Windows
Operating system release: 11
Operating system version: 10.0.26100
Number of physical cores: 4
Number of logical cores: 8
# Data
# ==============================================================================
n = 5_000
rng = np.random.default_rng(seed=123)
y = pd.Series(rng.random(size=n), name="y")
exog = pd.DataFrame(rng.random(size=(n, 10)))
exog.columns = [f"exog_{i}" for i in range(exog.shape[1])]
multi_series = pd.DataFrame(rng.random(size=(n, 10)))
multi_series.columns = [f"series_{i + 1}" for i in range(multi_series.shape[1])]
y_train = y[:-int(n / 2)]
display(y.head())
display(exog.head())
display(multi_series.head())
0    0.682352
1    0.053821
2    0.220360
3    0.184372
4    0.175906
Name: y, dtype: float64
|   | exog_0 | exog_1 | exog_2 | exog_3 | exog_4 | exog_5 | exog_6 | exog_7 | exog_8 | exog_9 |
|---|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| 0 | 0.593121 | 0.353471 | 0.336277 | 0.399734 | 0.915459 | 0.822278 | 0.480418 | 0.929802 | 0.950948 | 0.863556 |
| 1 | 0.764104 | 0.638191 | 0.956624 | 0.178105 | 0.434077 | 0.137480 | 0.837667 | 0.768947 | 0.244235 | 0.815336 |
| 2 | 0.475312 | 0.312415 | 0.353596 | 0.272162 | 0.772064 | 0.110216 | 0.596551 | 0.688549 | 0.651380 | 0.191837 |
| 3 | 0.039253 | 0.962713 | 0.189194 | 0.910629 | 0.169796 | 0.697751 | 0.830913 | 0.484824 | 0.634634 | 0.862865 |
| 4 | 0.872447 | 0.861421 | 0.394829 | 0.877763 | 0.286779 | 0.131008 | 0.450185 | 0.898167 | 0.590147 | 0.045838 |
|   | series_1 | series_2 | series_3 | series_4 | series_5 | series_6 | series_7 | series_8 | series_9 | series_10 |
|---|----------|----------|----------|----------|----------|----------|----------|----------|----------|-----------|
| 0 | 0.967448 | 0.580646 | 0.643348 | 0.461737 | 0.450859 | 0.894496 | 0.037967 | 0.097698 | 0.094356 | 0.893528 |
| 1 | 0.207450 | 0.194904 | 0.377063 | 0.975065 | 0.351034 | 0.812253 | 0.265956 | 0.262733 | 0.784995 | 0.674256 |
| 2 | 0.520431 | 0.985069 | 0.039559 | 0.541797 | 0.612761 | 0.640336 | 0.823467 | 0.768387 | 0.561777 | 0.600835 |
| 3 | 0.866694 | 0.165510 | 0.819767 | 0.691179 | 0.717778 | 0.392694 | 0.094067 | 0.271990 | 0.467866 | 0.041054 |
| 4 | 0.406310 | 0.657688 | 0.630730 | 0.694424 | 0.943934 | 0.888538 | 0.470363 | 0.518283 | 0.719674 | 0.010789 |
Benchmark ForecasterRecursive
warnings.filterwarnings("ignore")
print("-------------------")
print("ForecasterRecursive")
print("-------------------")
steps = 100
lags = 50
regressors = [
Ridge(random_state=77, alpha=0.1),
LGBMRegressor(random_state=77, n_jobs=1, n_estimators=50, max_depth=5, verbose=-1),
LGBMRegressor(random_state=77, n_jobs=-1, n_estimators=50, max_depth=5, verbose=-1),
HistGradientBoostingRegressor(random_state=77, max_iter=50, max_depth=5,),
]
param_grids = [
{'alpha': [0.1, 0.1, 0.1]},
{'n_estimators': [50, 50], 'max_depth': [5, 5]},
{'n_estimators': [50, 50], 'max_depth': [5, 5]},
{'max_iter': [50, 50], 'max_depth': [5, 5]}
]
lags_grid = [50, 50, 50]
elapsed_times = []
for regressor, param_grid in zip(regressors, param_grids):
print("")
print(regressor, param_grid)
print("")
forecaster = ForecasterRecursive(
regressor = regressor,
lags = lags,
transformer_exog = StandardScaler()
)
print("Profiling fit")
start = time.time()
forecaster.fit(y=y, exog=exog)
end = time.time()
elapsed_times.append(end - start)
print("Profiling create_train_X_y")
start = time.time()
_ = forecaster.create_train_X_y(y=y, exog=exog)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting refit parallel")
start = time.time()
cv = TimeSeriesFold(
steps = steps,
initial_train_size = len(y_train),
refit = True,
fixed_train_size = False,
)
metric, backtest_predictions = backtesting_forecaster(
forecaster = forecaster,
y = y,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = -1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting refit no parallel")
start = time.time()
metric, backtest_predictions = backtesting_forecaster(
forecaster = forecaster,
y = y,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = 1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting no refit parallel")
start = time.time()
cv = TimeSeriesFold(
steps = steps,
initial_train_size = len(y_train),
refit = False,
)
metric, backtest_predictions = backtesting_forecaster(
forecaster = forecaster,
y = y,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = -1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting no refit no parallel")
start = time.time()
metric, backtest_predictions = backtesting_forecaster(
forecaster = forecaster,
y = y,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = 1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling GridSearch no refit parallel")
start = time.time()
cv = TimeSeriesFold(
steps = steps,
initial_train_size = len(y_train),
refit = False,
)
results_grid = grid_search_forecaster(
forecaster = forecaster,
y = y,
exog = exog,
cv = cv,
param_grid = param_grid,
lags_grid = lags_grid,
metric = 'mean_squared_error',
return_best = False,
verbose = False,
show_progress = False,
n_jobs = -1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling GridSearch no refit no parallel")
start = time.time()
results_grid = grid_search_forecaster(
forecaster = forecaster,
y = y,
exog = exog,
cv = cv,
param_grid = param_grid,
lags_grid = lags_grid,
metric = 'mean_squared_error',
return_best = False,
verbose = False,
show_progress = False,
n_jobs = 1
)
end = time.time()
elapsed_times.append(end - start)
methods = [
"fit",
"create_train_X_y",
"backtest_refit_parallel",
"backtest_refit_noparallel",
"backtest_no_refit_parallel",
"backtest_no_refit_noparallel",
"gridSearch_no_refit_parallel",
"gridSearch_no_refit_noparallel"
]
results = pd.DataFrame({
"regressor": np.repeat(np.array([str(regressor) for regressor in regressors]), len(methods)),
"method": np.tile(methods, len(regressors)),
"elapsed_time": elapsed_times
})
results["regressor"] = results["regressor"].str.replace("\n ", " ")
results['parallel'] = results.method.str.contains("_parallel")
results['method'] = results.method.str.replace("_parallel", "")
results['method'] = results.method.str.replace("_noparallel", "")
results = results.sort_values(by=["regressor", "method", "parallel"])
results_pivot = results.pivot_table(
index=["regressor", "method"],
columns="parallel",
values="elapsed_time"
).reset_index()
results_pivot.columns.name = None
results_pivot["pct_improvement"] = (results_pivot[False] - results_pivot[True]) / results_pivot[False] * 100
display(results_pivot)
fig, ax = plt.subplots(figsize=(10, 5))
bars = sns.barplot(data=results_pivot.dropna(), x="method", y="pct_improvement", hue="regressor", ax=ax)
for container in bars.containers:
ax.bar_label(container, fmt='%.1f', padding=3, fontsize=8)
ax.set_title("Parallel vs Sequential (ForecasterRecursive)")
ax.set_ylabel("Percent improvement")
ax.set_xlabel("Method")
ax.legend(fontsize=8, loc='lower left', bbox_to_anchor=(0, -0.31), ncols=1);
-------------------
ForecasterRecursive
-------------------

Ridge(alpha=0.1, random_state=77) {'alpha': [0.1, 0.1, 0.1]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel

LGBMRegressor(max_depth=5, n_estimators=50, n_jobs=1, random_state=77, verbose=-1) {'n_estimators': [50, 50], 'max_depth': [5, 5]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel

LGBMRegressor(max_depth=5, n_estimators=50, n_jobs=-1, random_state=77, verbose=-1) {'n_estimators': [50, 50], 'max_depth': [5, 5]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel

HistGradientBoostingRegressor(max_depth=5, max_iter=50, random_state=77) {'max_iter': [50, 50], 'max_depth': [5, 5]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel
|    | regressor | method | False | True | pct_improvement |
|----|-----------|--------|-------|------|-----------------|
| 0  | HistGradientBoostingRegressor(max_depth=5, max... | backtest_no_refit | 2.097107 | 0.848587 | 59.535365 |
| 1  | HistGradientBoostingRegressor(max_depth=5, max... | backtest_refit | 6.490748 | 2.794112 | 56.952390 |
| 2  | HistGradientBoostingRegressor(max_depth=5, max... | create_train_X_y | 0.005000 | NaN | NaN |
| 3  | HistGradientBoostingRegressor(max_depth=5, max... | fit | 0.527734 | NaN | NaN |
| 4  | HistGradientBoostingRegressor(max_depth=5, max... | gridSearch_no_refit | 8.432383 | 3.252936 | 61.423293 |
| 5  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_no_refit | 2.136809 | 0.972203 | 54.502080 |
| 6  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_refit | 4.557765 | 3.956567 | 13.190632 |
| 7  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | create_train_X_y | 0.005999 | NaN | NaN |
| 8  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | fit | 0.114982 | NaN | NaN |
| 9  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | gridSearch_no_refit | 8.766446 | 4.257316 | 51.436232 |
| 10 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_no_refit | 1.095819 | 0.515112 | 52.992953 |
| 11 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_refit | 4.761011 | 2.660067 | 44.128113 |
| 12 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | create_train_X_y | 0.005002 | NaN | NaN |
| 13 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | fit | 0.239274 | NaN | NaN |
| 14 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | gridSearch_no_refit | 4.372416 | 2.121355 | 51.483235 |
| 15 | Ridge(alpha=0.1, random_state=77) | backtest_no_refit | 0.242297 | 0.119852 | 50.535045 |
| 16 | Ridge(alpha=0.1, random_state=77) | backtest_refit | 0.596156 | 8.014275 | -1244.324804 |
| 17 | Ridge(alpha=0.1, random_state=77) | create_train_X_y | 0.007996 | NaN | NaN |
| 18 | Ridge(alpha=0.1, random_state=77) | fit | 0.050004 | NaN | NaN |
| 19 | Ridge(alpha=0.1, random_state=77) | gridSearch_no_refit | 0.681082 | 0.359283 | 47.248207 |
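Note that for Ridge, backtesting with refit is dramatically slower in parallel (about -1244%), consistent with the rule that sets n_jobs = 1 for linear regressors in ForecasterRecursive: the models are so cheap to fit that parallelization overhead most likely dominates. The start/end timing pattern repeated in the cell above can also be factored into a small helper; a minimal sketch:

# Timing helper (illustrative refactor of the pattern used above)
# ==============================================================================
import time
from contextlib import contextmanager

@contextmanager
def timer(store: list):
    # Append the elapsed wall-clock time of the enclosed block to `store`.
    start = time.perf_counter()
    try:
        yield
    finally:
        store.append(time.perf_counter() - start)

elapsed_times = []
with timer(elapsed_times):
    sum(range(10**6))  # stand-in for, e.g., forecaster.fit(y=y, exog=exog)
print(f"elapsed: {elapsed_times[-1]:.4f} seconds")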
Benchmark ForecasterDirect
print("----------------")
print("ForecasterDirect")
print("----------------")
steps = 10
lags = 10
regressors = [
Ridge(random_state=77, alpha=0.1),
LGBMRegressor(random_state=77, n_jobs=1, n_estimators=50, max_depth=5, verbose=-1),
LGBMRegressor(random_state=77, n_jobs=-1, n_estimators=50, max_depth=5, verbose=-1),
HistGradientBoostingRegressor(random_state=77, max_iter=50, max_depth=5,),
]
param_grids = [
{'alpha': [0.1, 0.1, 0.1]},
{'n_estimators': [50, 50], 'max_depth': [5, 5]},
{'n_estimators': [50, 50], 'max_depth': [5, 5]},
{'max_iter': [50, 50], 'max_depth': [5, 5]}
]
lags_grid = [50, 50, 50]
elapsed_times = []
for regressor, param_grid in zip(regressors, param_grids):
print("")
print(regressor, param_grid)
print("")
forecaster = ForecasterDirect(
regressor = regressor,
steps = steps,
lags = lags,
transformer_exog = StandardScaler()
)
print("Profiling fit")
start = time.time()
forecaster.fit(y=y, exog=exog)
end = time.time()
elapsed_times.append(end - start)
print("Profiling create_train_X_y")
start = time.time()
_ = forecaster.create_train_X_y(y=y, exog=exog)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting refit parallel")
start = time.time()
cv = TimeSeriesFold(
steps = steps,
initial_train_size = int(len(y) * 0.9),
refit = True,
fixed_train_size = False,
)
metric, backtest_predictions = backtesting_forecaster(
forecaster = forecaster,
y = y,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = -1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting refit no parallel")
start = time.time()
metric, backtest_predictions = backtesting_forecaster(
forecaster = forecaster,
y = y,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = 1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting no refit parallel")
start = time.time()
cv = TimeSeriesFold(
steps = steps,
initial_train_size = int(len(y) * 0.9),
refit = False,
)
metric, backtest_predictions = backtesting_forecaster(
forecaster = forecaster,
y = y,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = -1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting no refit no parallel")
start = time.time()
metric, backtest_predictions = backtesting_forecaster(
forecaster = forecaster,
y = y,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = 1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling GridSearch no refit parallel")
start = time.time()
results_grid = grid_search_forecaster(
forecaster = forecaster,
y = y,
exog = exog,
cv = cv,
param_grid = param_grid,
lags_grid = lags_grid,
metric = 'mean_squared_error',
return_best = False,
verbose = False,
show_progress = False,
n_jobs = -1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling GridSearch no refit no parallel")
start = time.time()
results_grid = grid_search_forecaster(
forecaster = forecaster,
y = y,
exog = exog,
cv = cv,
param_grid = param_grid,
lags_grid = lags_grid,
metric = 'mean_squared_error',
return_best = False,
verbose = False,
show_progress = False,
n_jobs = 1
)
end = time.time()
elapsed_times.append(end - start)
methods = [
"fit",
"create_train_X_y",
"backtest_refit_parallel",
"backtest_refit_noparallel",
"backtest_no_refit_parallel",
"backtest_no_refit_noparallel",
"gridSearch_no_refit_parallel",
"gridSearch_no_refit_noparallel"
]
results = pd.DataFrame({
"regressor": np.repeat(np.array([str(regressor) for regressor in regressors]), len(methods)),
"method": np.tile(methods, len(regressors)),
"elapsed_time": elapsed_times
})
results["regressor"] = results["regressor"].str.replace("\n ", " ")
results['parallel'] = results.method.str.contains("_parallel")
results['method'] = results.method.str.replace("_parallel", "")
results['method'] = results.method.str.replace("_noparallel", "")
results = results.sort_values(by=["regressor", "method", "parallel"])
results_pivot = results.pivot_table(
index=["regressor", "method"],
columns="parallel",
values="elapsed_time"
).reset_index()
results_pivot.columns.name = None
results_pivot["pct_improvement"] = (results_pivot[False] - results_pivot[True]) / results_pivot[False] * 100
display(results_pivot)
fig, ax = plt.subplots(figsize=(10, 5))
bars = sns.barplot(data=results_pivot.dropna(), x="method", y="pct_improvement", hue="regressor", ax=ax)
for container in bars.containers:
ax.bar_label(container, fmt='%.1f', padding=3, fontsize=8)
ax.set_title("Parallel vs Sequential (ForecasterDirect)")
ax.set_ylabel("Percent improvement")
ax.set_xlabel("Method")
ax.legend(fontsize=8, loc='lower left', bbox_to_anchor=(0, -0.31), ncols=1);
----------------
ForecasterDirect
----------------

Ridge(alpha=0.1, random_state=77) {'alpha': [0.1, 0.1, 0.1]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel

LGBMRegressor(max_depth=5, n_estimators=50, n_jobs=1, random_state=77, verbose=-1) {'n_estimators': [50, 50], 'max_depth': [5, 5]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel

LGBMRegressor(max_depth=5, n_estimators=50, n_jobs=-1, random_state=77, verbose=-1) {'n_estimators': [50, 50], 'max_depth': [5, 5]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel

HistGradientBoostingRegressor(max_depth=5, max_iter=50, random_state=77) {'max_iter': [50, 50], 'max_depth': [5, 5]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel
|    | regressor | method | False | True | pct_improvement |
|----|-----------|--------|-------|------|-----------------|
| 0  | HistGradientBoostingRegressor(max_depth=5, max... | backtest_no_refit | 2.728197 | 5.023978 | -84.150110 |
| 1  | HistGradientBoostingRegressor(max_depth=5, max... | backtest_refit | 71.989915 | 28.746552 | 60.068641 |
| 2  | HistGradientBoostingRegressor(max_depth=5, max... | create_train_X_y | 0.009108 | NaN | NaN |
| 3  | HistGradientBoostingRegressor(max_depth=5, max... | fit | 1.012895 | NaN | NaN |
| 4  | HistGradientBoostingRegressor(max_depth=5, max... | gridSearch_no_refit | 8.722155 | 27.459687 | -214.826857 |
| 5  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_no_refit | 1.175862 | 1.919050 | -63.203676 |
| 6  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_refit | 25.116236 | 28.374661 | -12.973379 |
| 7  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | create_train_X_y | 0.010000 | NaN | NaN |
| 8  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | fit | 0.386074 | NaN | NaN |
| 9  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | gridSearch_no_refit | 6.524236 | 11.774790 | -80.477689 |
| 10 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_no_refit | 1.028363 | 1.998308 | -94.319227 |
| 11 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_refit | 31.491574 | 12.525722 | 60.225165 |
| 12 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | create_train_X_y | 0.006005 | NaN | NaN |
| 13 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | fit | 0.723918 | NaN | NaN |
| 14 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | gridSearch_no_refit | 6.923338 | 11.685491 | -68.784073 |
| 15 | Ridge(alpha=0.1, random_state=77) | backtest_no_refit | 0.158813 | 0.132391 | 16.637442 |
| 16 | Ridge(alpha=0.1, random_state=77) | backtest_refit | 1.816731 | 0.620145 | 65.864817 |
| 17 | Ridge(alpha=0.1, random_state=77) | create_train_X_y | 0.006006 | NaN | NaN |
| 18 | Ridge(alpha=0.1, random_state=77) | fit | 0.062544 | NaN | NaN |
| 19 | Ridge(alpha=0.1, random_state=77) | gridSearch_no_refit | 0.548561 | 0.618236 | -12.701411 |
Benchmark ForecasterRecursiveMultiSeries
print("------------------------------")
print("ForecasterRecursiveMultiSeries")
print("------------------------------")
steps = 100
lags = 50
regressors = [
Ridge(random_state=77, alpha=0.1),
LGBMRegressor(random_state=77, n_jobs=1, n_estimators=50, max_depth=5, verbose=-1),
LGBMRegressor(random_state=77, n_jobs=-1, n_estimators=50, max_depth=5, verbose=-1),
HistGradientBoostingRegressor(random_state=77, max_iter=50, max_depth=5,),
]
param_grids = [
{'alpha': [0.1, 0.1, 0.1]},
{'n_estimators': [50, 50], 'max_depth': [5, 5]},
{'n_estimators': [50, 50], 'max_depth': [5, 5]},
{'max_iter': [50, 50], 'max_depth': [5, 5]}
]
lags_grid = [50, 50, 50]
elapsed_times = []
for regressor, param_grid in zip(regressors, param_grids):
print("")
print(regressor, param_grid)
print("")
forecaster = ForecasterRecursiveMultiSeries(
regressor = regressor,
lags = lags,
transformer_exog = StandardScaler()
)
print("Profiling fit")
start = time.time()
forecaster.fit(series=multi_series, exog=exog)
end = time.time()
elapsed_times.append(end - start)
print("Profiling create_train_X_y")
start = time.time()
_ = forecaster.create_train_X_y(series=multi_series, exog=exog)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting refit parallel")
start = time.time()
cv = TimeSeriesFold(
initial_train_size = len(y_train),
refit = True,
fixed_train_size = False,
steps = steps,
)
metric, backtest_predictions = backtesting_forecaster_multiseries(
forecaster = forecaster,
series = multi_series,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = -1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting refit and no parallel")
start = time.time()
metric, backtest_predictions = backtesting_forecaster_multiseries(
forecaster = forecaster,
series = multi_series,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = 1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting no refit parallel")
start = time.time()
cv = TimeSeriesFold(
initial_train_size = len(y_train),
refit = False,
steps = steps,
)
metric, backtest_predictions = backtesting_forecaster_multiseries(
forecaster = forecaster,
series = multi_series,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = -1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting no refit no parallel")
start = time.time()
metric, backtest_predictions = backtesting_forecaster_multiseries(
forecaster = forecaster,
series = multi_series,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = 1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling GridSearch no refit parallel")
start = time.time()
results_grid = grid_search_forecaster_multiseries(
forecaster = forecaster,
series = multi_series,
exog = exog,
cv = cv,
param_grid = param_grid,
lags_grid = lags_grid,
metric = 'mean_squared_error',
return_best = False,
verbose = False,
show_progress = False,
n_jobs = -1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling GridSearch no refit no parallel")
start = time.time()
cv = TimeSeriesFold(
initial_train_size = len(y_train),
refit = False,
steps = steps,
)
results_grid = grid_search_forecaster_multiseries(
forecaster = forecaster,
series = multi_series,
exog = exog,
cv = cv,
param_grid = param_grid,
lags_grid = lags_grid,
metric = 'mean_squared_error',
return_best = False,
verbose = False,
show_progress = False,
n_jobs = 1
)
end = time.time()
elapsed_times.append(end - start)
methods = [
"fit",
"create_train_X_y",
"backtest_refit_parallel",
"backtest_refit_noparallel",
"backtest_no_refit_parallel",
"backtest_no_refit_noparallel",
"gridSearch_no_refit_parallel",
"gridSearch_no_refit_noparallel"
]
results = pd.DataFrame({
"regressor": np.repeat(np.array([str(regressor) for regressor in regressors]), len(methods)),
"method": np.tile(methods, len(regressors)),
"elapsed_time": elapsed_times
})
results["regressor"] = results["regressor"].str.replace("\n ", " ")
results['parallel'] = results.method.str.contains("_parallel")
results['method'] = results.method.str.replace("_parallel", "")
results['method'] = results.method.str.replace("_noparallel", "")
results = results.sort_values(by=["regressor", "method", "parallel"])
results_pivot = results.pivot_table(
index=["regressor", "method"],
columns="parallel",
values="elapsed_time"
).reset_index()
results_pivot.columns.name = None
results_pivot["pct_improvement"] = (results_pivot[False] - results_pivot[True]) / results_pivot[False] * 100
display(results_pivot)
fig, ax = plt.subplots(figsize=(10, 5))
bars = sns.barplot(data=results_pivot.dropna(), x="method", y="pct_improvement", hue="regressor", ax=ax)
for container in bars.containers:
ax.bar_label(container, fmt='%.1f', padding=3, fontsize=8)
ax.set_title("Parallel vs Sequential (ForecasterRecursiveMultiSeries)")
ax.set_ylabel("Percent improvement")
ax.set_xlabel("Method")
ax.legend(fontsize=8, loc='lower left', bbox_to_anchor=(0, -0.31), ncols=1);
------------------------------
ForecasterRecursiveMultiSeries
------------------------------

Ridge(alpha=0.1, random_state=77) {'alpha': [0.1, 0.1, 0.1]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel

LGBMRegressor(max_depth=5, n_estimators=50, n_jobs=1, random_state=77, verbose=-1) {'n_estimators': [50, 50], 'max_depth': [5, 5]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel

LGBMRegressor(max_depth=5, n_estimators=50, n_jobs=-1, random_state=77, verbose=-1) {'n_estimators': [50, 50], 'max_depth': [5, 5]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel

HistGradientBoostingRegressor(max_depth=5, max_iter=50, random_state=77) {'max_iter': [50, 50], 'max_depth': [5, 5]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel
|    | regressor | method | False | True | pct_improvement |
|----|-----------|--------|-------|------|-----------------|
| 0  | HistGradientBoostingRegressor(max_depth=5, max... | backtest_no_refit | 1.106152 | 0.664156 | 39.957987 |
| 1  | HistGradientBoostingRegressor(max_depth=5, max... | backtest_refit | 8.177982 | 4.231594 | 48.256264 |
| 2  | HistGradientBoostingRegressor(max_depth=5, max... | create_train_X_y | 0.073507 | NaN | NaN |
| 3  | HistGradientBoostingRegressor(max_depth=5, max... | fit | 0.396801 | NaN | NaN |
| 4  | HistGradientBoostingRegressor(max_depth=5, max... | gridSearch_no_refit | 4.788807 | 2.866729 | 40.136880 |
| 5  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_no_refit | 2.918234 | 1.537057 | 47.329215 |
| 6  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_refit | 15.102185 | 11.231904 | 25.627290 |
| 7  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | create_train_X_y | 0.093560 | NaN | NaN |
| 8  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | fit | 0.421425 | NaN | NaN |
| 9  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | gridSearch_no_refit | 12.388050 | 6.388853 | 48.427290 |
| 10 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_no_refit | 1.801216 | 1.084726 | 39.778153 |
| 11 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_refit | 19.794949 | 9.823516 | 50.373624 |
| 12 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | create_train_X_y | 0.053997 | NaN | NaN |
| 13 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | fit | 1.159806 | NaN | NaN |
| 14 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | gridSearch_no_refit | 5.983032 | 4.391494 | 26.600865 |
| 15 | Ridge(alpha=0.1, random_state=77) | backtest_no_refit | 0.476766 | 3.180120 | -567.018602 |
| 16 | Ridge(alpha=0.1, random_state=77) | backtest_refit | 2.896543 | 1.427100 | 50.730913 |
| 17 | Ridge(alpha=0.1, random_state=77) | create_train_X_y | 0.085904 | NaN | NaN |
| 18 | Ridge(alpha=0.1, random_state=77) | fit | 0.187410 | NaN | NaN |
| 19 | Ridge(alpha=0.1, random_state=77) | gridSearch_no_refit | 1.479561 | 1.118693 | 24.390221 |
Benchmark ForecasterDirectMultiVariate
print("----------------------------")
print("ForecasterDirectMultiVariate")
print("----------------------------")
steps = 5
lags = 10
regressors = [
Ridge(random_state=77, alpha=0.1),
LGBMRegressor(random_state=77, n_jobs=1, n_estimators=50, max_depth=5, verbose=-1),
LGBMRegressor(random_state=77, n_jobs=-1, n_estimators=50, max_depth=5, verbose=-1),
HistGradientBoostingRegressor(random_state=77, max_iter=50, max_depth=5,),
]
param_grids = [
{'alpha': [0.1, 0.1, 0.1]},
{'n_estimators': [50, 50], 'max_depth': [5, 5]},
{'n_estimators': [50, 50], 'max_depth': [5, 5]},
{'max_iter': [50, 50], 'max_depth': [5, 5]}
]
lags_grid = [50, 50, 50]
elapsed_times = []
for regressor, param_grid in zip(regressors, param_grids):
print("")
print(regressor, param_grid)
print("")
forecaster = ForecasterDirectMultiVariate(
regressor = regressor,
lags = lags,
steps = steps,
level = "series_1",
transformer_exog = StandardScaler()
)
print("Profiling fit")
start = time.time()
forecaster.fit(series=multi_series, exog=exog)
end = time.time()
elapsed_times.append(end - start)
print("Profiling create_train_X_y")
start = time.time()
_ = forecaster.create_train_X_y(series=multi_series, exog=exog)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting refit parallel")
start = time.time()
cv = TimeSeriesFold(
steps = steps,
initial_train_size = int(len(y) * 0.9),
refit = True,
fixed_train_size = False,
)
metric, backtest_predictions = backtesting_forecaster_multiseries(
forecaster = forecaster,
series = multi_series,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = -1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting refit no parallel")
start = time.time()
metric, backtest_predictions = backtesting_forecaster_multiseries(
forecaster = forecaster,
series = multi_series,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = 1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting no refit parallel")
start = time.time()
cv = TimeSeriesFold(
steps = steps,
initial_train_size = int(len(y) * 0.9),
refit = False,
)
metric, backtest_predictions = backtesting_forecaster_multiseries(
forecaster = forecaster,
series = multi_series,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = -1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling backtesting no refit no parallel")
start = time.time()
metric, backtest_predictions = backtesting_forecaster_multiseries(
forecaster = forecaster,
series = multi_series,
exog = exog,
cv = cv,
metric = 'mean_squared_error',
interval = None,
n_boot = 500,
random_state = 123,
verbose = False,
show_progress = False,
n_jobs = 1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling GridSearch no refit parallel")
start = time.time()
results_grid = grid_search_forecaster_multiseries(
forecaster = forecaster,
series = multi_series,
exog = exog,
cv = cv,
param_grid = param_grid,
lags_grid = lags_grid,
metric = 'mean_squared_error',
return_best = False,
verbose = False,
show_progress = False,
n_jobs = -1
)
end = time.time()
elapsed_times.append(end - start)
print("Profiling GridSearch no refit no parallel")
start = time.time()
results_grid = grid_search_forecaster_multiseries(
forecaster = forecaster,
series = multi_series,
exog = exog,
cv = cv,
param_grid = param_grid,
lags_grid = lags_grid,
metric = 'mean_squared_error',
return_best = False,
verbose = False,
show_progress = False,
n_jobs = 1
)
end = time.time()
elapsed_times.append(end - start)
methods = [
"fit",
"create_train_X_y",
"backtest_refit_parallel",
"backtest_refit_noparallel",
"backtest_no_refit_parallel",
"backtest_no_refit_noparallel",
"gridSearch_no_refit_parallel",
"gridSearch_no_refit_noparallel"
]
results = pd.DataFrame({
"regressor": np.repeat(np.array([str(regressor) for regressor in regressors]), len(methods)),
"method": np.tile(methods, len(regressors)),
"elapsed_time": elapsed_times
})
results["regressor"] = results["regressor"].str.replace("\n ", " ")
results['parallel'] = results.method.str.contains("_parallel")
results['method'] = results.method.str.replace("_parallel", "")
results['method'] = results.method.str.replace("_noparallel", "")
results = results.sort_values(by=["regressor", "method", "parallel"])
results_pivot = results.pivot_table(index=["regressor", "method"], columns="parallel", values="elapsed_time").reset_index()
results_pivot.columns.name = None
results_pivot["pct_improvement"] = (results_pivot[False] - results_pivot[True]) / results_pivot[False] * 100
display(results_pivot)
fig, ax = plt.subplots(figsize=(10, 5))
bars = sns.barplot(data=results_pivot.dropna(), x="method", y="pct_improvement", hue="regressor", ax=ax)
for container in bars.containers:
ax.bar_label(container, fmt='%.1f', padding=3, fontsize=8)
ax.set_title("Parallel vs Sequential (ForecasterDirectMultiVariate)")
ax.set_ylabel("Percent improvement")
ax.set_xlabel("Method")
ax.legend(fontsize=8, loc='lower left', bbox_to_anchor=(0, -0.31), ncols=1);
----------------------------
ForecasterDirectMultiVariate
----------------------------

Ridge(alpha=0.1, random_state=77) {'alpha': [0.1, 0.1, 0.1]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel

LGBMRegressor(max_depth=5, n_estimators=50, n_jobs=1, random_state=77, verbose=-1) {'n_estimators': [50, 50], 'max_depth': [5, 5]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel

LGBMRegressor(max_depth=5, n_estimators=50, n_jobs=-1, random_state=77, verbose=-1) {'n_estimators': [50, 50], 'max_depth': [5, 5]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel

HistGradientBoostingRegressor(max_depth=5, max_iter=50, random_state=77) {'max_iter': [50, 50], 'max_depth': [5, 5]}

Profiling fit
Profiling create_train_X_y
Profiling backtesting refit parallel
Profiling backtesting refit no parallel
Profiling backtesting no refit parallel
Profiling backtesting no refit no parallel
Profiling GridSearch no refit parallel
Profiling GridSearch no refit no parallel
|    | regressor | method | False | True | pct_improvement |
|----|-----------|--------|-------|------|-----------------|
| 0  | HistGradientBoostingRegressor(max_depth=5, max... | backtest_no_refit | 2.853674 | 9.233681 | -223.571717 |
| 1  | HistGradientBoostingRegressor(max_depth=5, max... | backtest_refit | 129.822874 | 92.496529 | 28.751748 |
| 2  | HistGradientBoostingRegressor(max_depth=5, max... | create_train_X_y | 0.018000 | NaN | NaN |
| 3  | HistGradientBoostingRegressor(max_depth=5, max... | fit | 1.330104 | NaN | NaN |
| 4  | HistGradientBoostingRegressor(max_depth=5, max... | gridSearch_no_refit | 24.548890 | 77.863022 | -217.175326 |
| 5  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_no_refit | 2.891752 | 3.798469 | -31.355314 |
| 6  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_refit | 111.030921 | 86.056100 | 22.493573 |
| 7  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | create_train_X_y | 0.029996 | NaN | NaN |
| 8  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | fit | 0.717866 | NaN | NaN |
| 9  | LGBMRegressor(max_depth=5, n_estimators=50, n_... | gridSearch_no_refit | 23.100491 | 25.343611 | -9.710274 |
| 10 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_no_refit | 2.516266 | 3.437793 | -36.622786 |
| 11 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | backtest_refit | 186.235264 | 65.961237 | 64.581768 |
| 12 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | create_train_X_y | 0.013000 | NaN | NaN |
| 13 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | fit | 1.724441 | NaN | NaN |
| 14 | LGBMRegressor(max_depth=5, n_estimators=50, n_... | gridSearch_no_refit | 34.885274 | 39.199495 | -12.366885 |
| 15 | Ridge(alpha=0.1, random_state=77) | backtest_no_refit | 0.715881 | 0.862378 | -20.463955 |
| 16 | Ridge(alpha=0.1, random_state=77) | backtest_refit | 7.449928 | 3.115467 | 58.181243 |
| 17 | Ridge(alpha=0.1, random_state=77) | create_train_X_y | 0.015999 | NaN | NaN |
| 18 | Ridge(alpha=0.1, random_state=77) | fit | 0.114108 | NaN | NaN |
| 19 | Ridge(alpha=0.1, random_state=77) | gridSearch_no_refit | 2.783162 | 3.158578 | -13.488834 |