model_selection
backtesting_forecaster(forecaster, y, steps, metric, initial_train_size=None, fixed_train_size=True, gap=0, allow_incomplete_fold=True, exog=None, refit=False, interval=None, n_boot=500, random_state=123, in_sample_residuals=True, verbose=False, show_progress=True)
Backtesting of forecaster model.

If `refit` is False, the model is trained only once using the first
`initial_train_size` observations. If `refit` is True, the model is trained in
each iteration, increasing the training set. A copy of the original forecaster
is created so it is not modified during the process.

Parameters:
Name | Type | Description | Default |
---|---|---|---|
forecaster | ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect | Forecaster model. | required |
y | pandas Series | Training time series. | required |
steps | int | Number of steps to predict. | required |
metric | str, Callable, list | Metric used to quantify the goodness of fit of the model. If string: {'mean_squared_error', 'mean_absolute_error', 'mean_absolute_percentage_error', 'mean_squared_log_error'}. If Callable: function with arguments y_true, y_pred that returns a float. If list: list containing multiple strings and/or Callables. | required |
initial_train_size | int | Number of samples in the initial train split. If `None` and `forecaster` is already trained, no initial train is done and all data is used to evaluate the model; however, the first `len(forecaster.last_window)` observations are needed to create the initial predictors, so no predictions are calculated for them. `None` is only allowed when `refit` is `False` and `forecaster` is already trained. | None |
fixed_train_size | bool | If True, train size doesn't increase but moves by `steps` in each iteration. | True |
gap | int | Number of samples to be excluded after the end of each training set and before the test set. | 0 |
allow_incomplete_fold | bool | Last fold is allowed to have a smaller number of samples than the `test_size`. If `False`, the last fold is excluded. | True |
exog | pandas Series, pandas DataFrame | Exogenous variable/s included as predictor/s. Must have the same number of observations as `y` and be aligned so that y[i] is regressed on exog[i]. | None |
refit | bool | Whether to re-fit the forecaster in each iteration. | False |
interval | list | Confidence of the prediction interval estimated. Sequence of percentiles to compute, which must be between 0 and 100 inclusive. For example, a 95% interval should be `interval = [2.5, 97.5]`. If `None`, no intervals are estimated. Only available for forecasters of type ForecasterAutoreg and ForecasterAutoregCustom. | None |
n_boot | int | Number of bootstrapping iterations used to estimate prediction intervals. | 500 |
random_state | int | Sets a seed for the random generator, so that boot intervals are always deterministic. | 123 |
in_sample_residuals | bool | If `True`, residuals from the training data are used as a proxy of prediction error to create prediction intervals. If `False`, out_sample_residuals are used if they are already stored inside the forecaster. | True |
verbose | bool | Print number of folds and index of training and validation sets used for backtesting. | False |
show_progress | bool | Whether to show a progress bar. | True |
Returns:

Type | Description |
---|---|
float, list | Value(s) of the metric(s). |
pandas DataFrame | Predictions and, if `interval` is not `None`, their estimated intervals (columns `pred`, `lower_bound`, `upper_bound`). |
Source code in skforecast/model_selection/model_selection.py
def backtesting_forecaster(
forecaster,
y: pd.Series,
steps: int,
metric: Union[str, Callable, list],
initial_train_size: Optional[int]=None,
fixed_train_size: bool=True,
gap: int=0,
allow_incomplete_fold: bool=True,
exog: Optional[Union[pd.Series, pd.DataFrame]]=None,
refit: bool=False,
interval: Optional[list]=None,
n_boot: int=500,
random_state: int=123,
in_sample_residuals: bool=True,
verbose: bool=False,
show_progress: bool=True
) -> Tuple[Union[float, list], pd.DataFrame]:
"""
Backtesting of forecaster model.
If `refit` is False, the model is trained only once using the `initial_train_size`
first observations. If `refit` is True, the model is trained in each iteration
increasing the training set. A copy of the original forecaster is created so
it is not modified during the process.
Parameters
----------
forecaster : ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect
Forecaster model.
y : pandas Series
Training time series.
steps : int
Number of steps to predict.
metric : str, Callable, list
Metric used to quantify the goodness of fit of the model.
If string:
{'mean_squared_error', 'mean_absolute_error',
'mean_absolute_percentage_error', 'mean_squared_log_error'}
If Callable:
Function with arguments y_true, y_pred that returns a float.
If list:
List containing multiple strings and/or Callables.
initial_train_size : int, default `None`
Number of samples in the initial train split. If `None` and `forecaster` is already
trained, no initial train is done and all data is used to evaluate the model. However,
the first `len(forecaster.last_window)` observations are needed to create the
initial predictors, so no predictions are calculated for them. This is useful
to backtest the model on the same data used to train it.
`None` is only allowed when `refit` is `False` and `forecaster` is already trained.
fixed_train_size : bool, default `True`
If True, train size doesn't increase but moves by `steps` in each iteration.
gap : int, default `0`
Number of samples to be excluded after the end of each training set and
before the test set.
allow_incomplete_fold : bool, default `True`
Last fold is allowed to have a smaller number of samples than the
`test_size`. If `False`, the last fold is excluded.
exog : pandas Series, pandas DataFrame, default `None`
Exogenous variable/s included as predictor/s. Must have the same
number of observations as `y` and should be aligned so that y[i] is
regressed on exog[i].
refit : bool, default `False`
Whether to re-fit the forecaster in each iteration.
interval : list, default `None`
Confidence of the prediction interval estimated. Sequence of percentiles
to compute, which must be between 0 and 100 inclusive. For example,
interval of 95% should be as `interval = [2.5, 97.5]`. If `None`, no
intervals are estimated. Only available for forecaster of type
ForecasterAutoreg and ForecasterAutoregCustom.
n_boot : int, default `500`
Number of bootstrapping iterations used to estimate prediction
intervals.
random_state : int, default `123`
Sets a seed to the random generator, so that boot intervals are always
deterministic.
in_sample_residuals : bool, default `True`
If `True`, residuals from the training data are used as proxy of
prediction error to create prediction intervals. If `False`, out_sample_residuals
are used if they are already stored inside the forecaster.
verbose : bool, default `False`
Print number of folds and index of training and validation sets used
for backtesting.
show_progress : bool, default `True`
Whether to show a progress bar.
Returns
-------
metrics_value : float, list
Value(s) of the metric(s).
backtest_predictions : pandas DataFrame
Value of predictions and their estimated interval if `interval` is not `None`.
column pred = predictions.
column lower_bound = lower bound of the interval.
column upper_bound = upper bound of the interval.
"""
if type(forecaster).__name__ not in ['ForecasterAutoreg',
'ForecasterAutoregCustom',
'ForecasterAutoregDirect']:
raise TypeError(
("`forecaster` must be of type `ForecasterAutoreg`, `ForecasterAutoregCustom` "
"or `ForecasterAutoregDirect`, for all other types of forecasters "
"use the functions available in the other `model_selection` modules.")
)
check_backtesting_input(
forecaster = forecaster,
steps = steps,
metric = metric,
y = y,
initial_train_size = initial_train_size,
fixed_train_size = fixed_train_size,
gap = gap,
allow_incomplete_fold = allow_incomplete_fold,
refit = refit,
interval = interval,
n_boot = n_boot,
random_state = random_state,
in_sample_residuals = in_sample_residuals,
verbose = verbose,
show_progress = show_progress
)
if type(forecaster).__name__ == 'ForecasterAutoregDirect' and \
forecaster.steps < steps + gap:
raise ValueError(
("When using a ForecasterAutoregDirect, the combination of steps "
f"+ gap ({steps+gap}) cannot be greater than the `steps` parameter "
f"declared when the forecaster is initialized ({forecaster.steps}).")
)
if refit:
metrics_values, backtest_predictions = _backtesting_forecaster_refit(
forecaster = forecaster,
y = y,
steps = steps,
metric = metric,
initial_train_size = initial_train_size,
fixed_train_size = fixed_train_size,
gap = gap,
allow_incomplete_fold = allow_incomplete_fold,
exog = exog,
interval = interval,
n_boot = n_boot,
random_state = random_state,
in_sample_residuals = in_sample_residuals,
verbose = verbose,
show_progress = show_progress
)
else:
metrics_values, backtest_predictions = _backtesting_forecaster_no_refit(
forecaster = forecaster,
y = y,
steps = steps,
metric = metric,
initial_train_size = initial_train_size,
gap = gap,
allow_incomplete_fold = allow_incomplete_fold,
exog = exog,
interval = interval,
n_boot = n_boot,
random_state = random_state,
in_sample_residuals = in_sample_residuals,
verbose = verbose,
show_progress = show_progress
)
return metrics_values, backtest_predictions
grid_search_forecaster(forecaster, y, param_grid, steps, metric, initial_train_size, fixed_train_size=True, gap=0, allow_incomplete_fold=True, exog=None, lags_grid=None, refit=False, return_best=True, verbose=True)
Exhaustive search over specified parameter values for a Forecaster object.
Validation is done using time series backtesting.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
forecaster | ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect | Forecaster model. | required |
y | pandas Series | Training time series values. | required |
param_grid | dict | Dictionary with parameters names (`str`) as keys and lists of parameter settings to try as values. | required |
steps | int | Number of steps to predict. | required |
metric | str, Callable, list | Metric used to quantify the goodness of fit of the model. If string: {'mean_squared_error', 'mean_absolute_error', 'mean_absolute_percentage_error', 'mean_squared_log_error'}. If Callable: function with arguments y_true, y_pred that returns a float. If list: list containing multiple strings and/or Callables. | required |
initial_train_size | int | Number of samples in the initial train split. | required |
fixed_train_size | bool | If True, train size doesn't increase but moves by `steps` in each iteration. | True |
gap | int | Number of samples to be excluded after the end of each training set and before the test set. | 0 |
allow_incomplete_fold | bool | Last fold is allowed to have a smaller number of samples than the `test_size`. If `False`, the last fold is excluded. | True |
exog | pandas Series, pandas DataFrame | Exogenous variable/s included as predictor/s. Must have the same number of observations as `y` and be aligned so that y[i] is regressed on exog[i]. | None |
lags_grid | list of int, lists, numpy ndarray or range | Lists of `lags` to try. Only used if forecaster is an instance of `ForecasterAutoreg` or `ForecasterAutoregDirect`. | None |
refit | bool | Whether to re-fit the forecaster in each iteration of backtesting. | False |
return_best | bool | Refit the `forecaster` using the best found parameters on the whole data. | True |
verbose | bool | Print number of folds used for cv or backtesting. | True |
Returns:

Type | Description |
---|---|
pandas DataFrame | Results for each combination of parameters: column `lags` = lags configuration, column `params` = parameters configuration, column `metric` = metric value estimated for the combination, plus n additional columns with param = value. |
Source code in skforecast/model_selection/model_selection.py
def grid_search_forecaster(
forecaster,
y: pd.Series,
param_grid: dict,
steps: int,
metric: Union[str, Callable, list],
initial_train_size: int,
fixed_train_size: bool=True,
gap: int=0,
allow_incomplete_fold: bool=True,
exog: Optional[Union[pd.Series, pd.DataFrame]]=None,
lags_grid: Optional[list]=None,
refit: bool=False,
return_best: bool=True,
verbose: bool=True
) -> pd.DataFrame:
"""
Exhaustive search over specified parameter values for a Forecaster object.
Validation is done using time series backtesting.
Parameters
----------
forecaster : ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect
Forecaster model.
y : pandas Series
Training time series values.
param_grid : dict
Dictionary with parameters names (`str`) as keys and lists of parameter
settings to try as values.
steps : int
Number of steps to predict.
metric : str, Callable, list
Metric used to quantify the goodness of fit of the model.
If string:
{'mean_squared_error', 'mean_absolute_error',
'mean_absolute_percentage_error', 'mean_squared_log_error'}
If Callable:
Function with arguments y_true, y_pred that returns a float.
If list:
List containing multiple strings and/or Callables.
initial_train_size : int
Number of samples in the initial train split.
fixed_train_size : bool, default `True`
If True, train size doesn't increase but moves by `steps` in each iteration.
gap : int, default `0`
Number of samples to be excluded after the end of each training set and
before the test set.
allow_incomplete_fold : bool, default `True`
Last fold is allowed to have a smaller number of samples than the
`test_size`. If `False`, the last fold is excluded.
exog : pandas Series, pandas DataFrame, default `None`
Exogenous variable/s included as predictor/s. Must have the same
number of observations as `y` and should be aligned so that y[i] is
regressed on exog[i].
lags_grid : list of int, lists, numpy ndarray or range, default `None`
Lists of `lags` to try. Only used if forecaster is an instance of
`ForecasterAutoreg` or `ForecasterAutoregDirect`.
refit : bool, default `False`
Whether to re-fit the forecaster in each iteration of backtesting.
return_best : bool, default `True`
Refit the `forecaster` using the best found parameters on the whole data.
verbose : bool, default `True`
Print number of folds used for cv or backtesting.
Returns
-------
results : pandas DataFrame
Results for each combination of parameters.
column lags = lags configuration.
column params = parameters configuration.
column metric = metric value estimated for the combination of parameters.
additional n columns with param = value.
"""
param_grid = list(ParameterGrid(param_grid))
results = _evaluate_grid_hyperparameters(
forecaster = forecaster,
y = y,
param_grid = param_grid,
steps = steps,
metric = metric,
initial_train_size = initial_train_size,
fixed_train_size = fixed_train_size,
gap = gap,
allow_incomplete_fold = allow_incomplete_fold,
exog = exog,
lags_grid = lags_grid,
refit = refit,
return_best = return_best,
verbose = verbose
)
return results
random_search_forecaster(forecaster, y, param_distributions, steps, metric, initial_train_size, fixed_train_size=True, gap=0, allow_incomplete_fold=True, exog=None, lags_grid=None, refit=False, n_iter=10, random_state=123, return_best=True, verbose=True)
Random search over specified parameter values or distributions for a Forecaster object.
Validation is done using time series backtesting.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
forecaster | ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect | Forecaster model. | required |
y | pandas Series | Training time series. | required |
param_distributions | dict | Dictionary with parameters names (`str`) as keys and distributions or lists of parameters to try. | required |
steps | int | Number of steps to predict. | required |
metric | str, Callable, list | Metric used to quantify the goodness of fit of the model. If string: {'mean_squared_error', 'mean_absolute_error', 'mean_absolute_percentage_error', 'mean_squared_log_error'}. If Callable: function with arguments y_true, y_pred that returns a float. If list: list containing multiple strings and/or Callables. | required |
initial_train_size | int | Number of samples in the initial train split. | required |
fixed_train_size | bool | If True, train size doesn't increase but moves by `steps` in each iteration. | True |
gap | int | Number of samples to be excluded after the end of each training set and before the test set. | 0 |
allow_incomplete_fold | bool | Last fold is allowed to have a smaller number of samples than the `test_size`. If `False`, the last fold is excluded. | True |
exog | pandas Series, pandas DataFrame | Exogenous variable/s included as predictor/s. Must have the same number of observations as `y` and be aligned so that y[i] is regressed on exog[i]. | None |
lags_grid | list of int, lists, numpy ndarray or range | Lists of `lags` to try. Only used if forecaster is an instance of `ForecasterAutoreg` or `ForecasterAutoregDirect`. | None |
refit | bool | Whether to re-fit the forecaster in each iteration of backtesting. | False |
n_iter | int | Number of parameter settings that are sampled per lags configuration. n_iter trades off runtime vs quality of the solution. | 10 |
random_state | int | Sets a seed for the random sampling for reproducible output. | 123 |
return_best | bool | Refit the `forecaster` using the best found parameters on the whole data. | True |
verbose | bool | Print number of folds used for cv or backtesting. | True |
Returns:

Type | Description |
---|---|
pandas DataFrame | Results for each combination of parameters: column `lags` = lags configuration, column `params` = parameters configuration, column `metric` = metric value estimated for the combination, plus n additional columns with param = value. |
Source code in skforecast/model_selection/model_selection.py
def random_search_forecaster(
forecaster,
y: pd.Series,
param_distributions: dict,
steps: int,
metric: Union[str, Callable, list],
initial_train_size: int,
fixed_train_size: bool=True,
gap: int=0,
allow_incomplete_fold: bool=True,
exog: Optional[Union[pd.Series, pd.DataFrame]]=None,
lags_grid: Optional[list]=None,
refit: bool=False,
n_iter: int=10,
random_state: int=123,
return_best: bool=True,
verbose: bool=True
) -> pd.DataFrame:
"""
Random search over specified parameter values or distributions for a Forecaster object.
Validation is done using time series backtesting.
Parameters
----------
forecaster : ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect
Forecaster model.
y : pandas Series
Training time series.
param_distributions : dict
Dictionary with parameters names (`str`) as keys and
distributions or lists of parameters to try.
steps : int
Number of steps to predict.
metric : str, Callable, list
Metric used to quantify the goodness of fit of the model.
If string:
{'mean_squared_error', 'mean_absolute_error',
'mean_absolute_percentage_error', 'mean_squared_log_error'}
If Callable:
Function with arguments y_true, y_pred that returns a float.
If list:
List containing multiple strings and/or Callables.
initial_train_size : int
Number of samples in the initial train split.
fixed_train_size : bool, default `True`
If True, train size doesn't increase but moves by `steps` in each iteration.
gap : int, default `0`
Number of samples to be excluded after the end of each training set and
before the test set.
allow_incomplete_fold : bool, default `True`
Last fold is allowed to have a smaller number of samples than the
`test_size`. If `False`, the last fold is excluded.
exog : pandas Series, pandas DataFrame, default `None`
Exogenous variable/s included as predictor/s. Must have the same
number of observations as `y` and should be aligned so that y[i] is
regressed on exog[i].
lags_grid : list of int, lists, numpy ndarray or range, default `None`
Lists of `lags` to try. Only used if forecaster is an instance of
`ForecasterAutoreg` or `ForecasterAutoregDirect`.
refit : bool, default `False`
Whether to re-fit the forecaster in each iteration of backtesting.
n_iter : int, default `10`
Number of parameter settings that are sampled per lags configuration.
n_iter trades off runtime vs quality of the solution.
random_state : int, default `123`
Sets a seed to the random sampling for reproducible output.
return_best : bool, default `True`
Refit the `forecaster` using the best found parameters on the whole data.
verbose : bool, default `True`
Print number of folds used for cv or backtesting.
Returns
-------
results : pandas DataFrame
Results for each combination of parameters.
column lags = lags configuration.
column params = parameters configuration.
column metric = metric value estimated for the combination of parameters.
additional n columns with param = value.
"""
param_grid = list(ParameterSampler(param_distributions, n_iter=n_iter, random_state=random_state))
results = _evaluate_grid_hyperparameters(
forecaster = forecaster,
y = y,
param_grid = param_grid,
steps = steps,
metric = metric,
initial_train_size = initial_train_size,
fixed_train_size = fixed_train_size,
gap = gap,
allow_incomplete_fold = allow_incomplete_fold,
exog = exog,
lags_grid = lags_grid,
refit = refit,
return_best = return_best,
verbose = verbose
)
return results
bayesian_search_forecaster(forecaster, y, search_space, steps, metric, initial_train_size, fixed_train_size=True, gap=0, allow_incomplete_fold=True, exog=None, lags_grid=None, refit=False, n_trials=10, random_state=123, return_best=True, verbose=True, engine='optuna', kwargs_create_study={}, kwargs_study_optimize={}, kwargs_gp_minimize='deprecated')
Bayesian optimization for a Forecaster object using time series backtesting and
the optuna library.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
forecaster | ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect | Forecaster model. | required |
y | pandas Series | Training time series. | required |
search_space | Callable (optuna), dict (skopt) | If optuna engine: Callable with argument `trial` which returns a dictionary with parameters names (`str`) as keys and Trial suggestions from optuna (trial.suggest_float, trial.suggest_int, trial.suggest_categorical) as values. If skopt engine: dictionary with parameters names (`str`) as keys and Space objects from skopt (Real, Integer, Categorical) as values. **Deprecated in version 0.7.0** | required |
steps | int | Number of steps to predict. | required |
metric | str, Callable, list | Metric used to quantify the goodness of fit of the model. If string: {'mean_squared_error', 'mean_absolute_error', 'mean_absolute_percentage_error', 'mean_squared_log_error'}. If Callable: function with arguments y_true, y_pred that returns a float. If list: list containing multiple strings and/or Callables. | required |
initial_train_size | int | Number of samples in the initial train split. | required |
fixed_train_size | bool | If True, train size doesn't increase but moves by `steps` in each iteration. | True |
gap | int | Number of samples to be excluded after the end of each training set and before the test set. | 0 |
allow_incomplete_fold | bool | Last fold is allowed to have a smaller number of samples than the `test_size`. If `False`, the last fold is excluded. | True |
exog | pandas Series, pandas DataFrame | Exogenous variable/s included as predictor/s. Must have the same number of observations as `y` and be aligned so that y[i] is regressed on exog[i]. | None |
lags_grid | list of int, lists, numpy ndarray or range | Lists of `lags` to try. Only used if forecaster is an instance of `ForecasterAutoreg` or `ForecasterAutoregDirect`. | None |
refit | bool | Whether to re-fit the forecaster in each iteration of backtesting. | False |
n_trials | int | Number of parameter settings that are sampled in each lag configuration. When using engine "skopt", the minimum value is 10. | 10 |
random_state | int | Sets a seed for the sampling for reproducible output. | 123 |
return_best | bool | Refit the `forecaster` using the best found parameters on the whole data. | True |
verbose | bool | Print number of folds used for cv or backtesting. | True |
engine | str | If 'optuna': Bayesian optimization runs through the optuna library. If 'skopt': Bayesian optimization runs through the skopt library. **Deprecated in version 0.7.0** | 'optuna' |
kwargs_create_study | dict | Only applies to engine='optuna'. Keyword arguments (key, value mappings) to pass to optuna.create_study. | {} |
kwargs_study_optimize | dict | Only applies to engine='optuna'. Other keyword arguments (key, value mappings) to pass to study.optimize(). | {} |
kwargs_gp_minimize | dict | Only applies to engine='skopt'. Other keyword arguments (key, value mappings) to pass to skopt.gp_minimize(). **Deprecated in version 0.7.0** | 'deprecated' |
Returns:

Type | Description |
---|---|
pandas DataFrame | Results for each combination of parameters: column `lags` = lags configuration, column `params` = parameters configuration, column `metric` = metric value estimated for the combination, plus n additional columns with param = value. |
object | The best optimization result: a FrozenTrial object (optuna engine) or an OptimizeResult object (skopt engine, deprecated in version 0.7.0). |
Source code in skforecast/model_selection/model_selection.py
def bayesian_search_forecaster(
forecaster,
y: pd.Series,
search_space: Union[Callable, dict],
steps: int,
metric: Union[str, Callable, list],
initial_train_size: int,
fixed_train_size: bool=True,
gap: int=0,
allow_incomplete_fold: bool=True,
exog: Optional[Union[pd.Series, pd.DataFrame]]=None,
lags_grid: Optional[list]=None,
refit: bool=False,
n_trials: int=10,
random_state: int=123,
return_best: bool=True,
verbose: bool=True,
engine: str='optuna',
kwargs_create_study: dict={},
kwargs_study_optimize: dict={},
kwargs_gp_minimize: Any='deprecated'
) -> Tuple[pd.DataFrame, object]:
"""
Bayesian optimization for a Forecaster object using time series backtesting
and the optuna library.
Parameters
----------
forecaster : ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect
Forecaster model.
y : pandas Series
Training time series.
search_space : Callable (optuna), dict (skopt)
If optuna engine: Callable
Function with argument `trial` which returns a dictionary with parameters names
(`str`) as keys and Trial object from optuna (trial.suggest_float,
trial.suggest_int, trial.suggest_categorical) as values.
If skopt engine: dict
Dictionary with parameters names (`str`) as keys and Space object from skopt
(Real, Integer, Categorical) as values.
**Deprecated in version 0.7.0**
steps : int
Number of steps to predict.
metric : str, Callable, list
Metric used to quantify the goodness of fit of the model.
If string:
{'mean_squared_error', 'mean_absolute_error',
'mean_absolute_percentage_error', 'mean_squared_log_error'}
If Callable:
Function with arguments y_true, y_pred that returns a float.
If list:
List containing multiple strings and/or Callables.
initial_train_size : int
Number of samples in the initial train split.
fixed_train_size : bool, default `True`
If True, train size doesn't increase but moves by `steps` in each iteration.
gap : int, default `0`
Number of samples to be excluded after the end of each training set and
before the test set.
allow_incomplete_fold : bool, default `True`
Last fold is allowed to have a smaller number of samples than the
`test_size`. If `False`, the last fold is excluded.
exog : pandas Series, pandas DataFrame, default `None`
Exogenous variable/s included as predictor/s. Must have the same
number of observations as `y` and should be aligned so that y[i] is
regressed on exog[i].
lags_grid : list of int, lists, numpy ndarray or range, default `None`
Lists of `lags` to try. Only used if forecaster is an instance of
`ForecasterAutoreg` or `ForecasterAutoregDirect`.
refit : bool, default `False`
Whether to re-fit the forecaster in each iteration of backtesting.
n_trials : int, default `10`
Number of parameter settings that are sampled in each lag configuration.
When using engine "skopt", the minimum value is 10.
random_state : int, default `123`
Sets a seed to the sampling for reproducible output.
return_best : bool, default `True`
Refit the `forecaster` using the best found parameters on the whole data.
verbose : bool, default `True`
Print number of folds used for cv or backtesting.
engine : str, default `'optuna'`
If 'optuna':
Bayesian optimization runs through the optuna library.
If 'skopt':
Bayesian optimization runs through the skopt library.
**Deprecated in version 0.7.0**
kwargs_create_study : dict, default `{'direction':'minimize', 'sampler':TPESampler(seed=123)}`
Only applies to engine='optuna'.
Keyword arguments (key, value mappings) to pass to optuna.create_study.
kwargs_study_optimize : dict, default `{}`
Only applies to engine='optuna'.
Other keyword arguments (key, value mappings) to pass to study.optimize().
kwargs_gp_minimize : dict, default `{}`
Only applies to engine='skopt'.
Other keyword arguments (key, value mappings) to pass to skopt.gp_minimize().
**Deprecated in version 0.7.0**
Returns
-------
results : pandas DataFrame
Results for each combination of parameters.
column lags = lags configuration.
column params = parameters configuration.
column metric = metric value estimated for the combination of parameters.
additional n columns with param = value.
results_opt_best : optuna object (optuna), scipy object (skopt)
If optuna engine:
The best optimization result returned as a FrozenTrial optuna object.
If skopt engine:
The best optimization result returned as a OptimizeResult object.
**Deprecated in version 0.7.0**
"""
if return_best and exog is not None and (len(exog) != len(y)):
raise ValueError(
f'`exog` must have same number of samples as `y`. '
f'length `exog`: ({len(exog)}), length `y`: ({len(y)})'
)
if engine == 'skopt':
warnings.warn(
("The engine 'skopt' for `bayesian_search_forecaster` is deprecated "
"in favor of 'optuna' engine. To continue using it, use skforecast "
"0.6.0. The optimization will be performed using the 'optuna' engine.")
)
engine = 'optuna'
if engine not in ['optuna']:
raise ValueError(
f"""`engine` only allows 'optuna', got {engine}."""
)
results, results_opt_best = _bayesian_search_optuna(
forecaster = forecaster,
y = y,
exog = exog,
lags_grid = lags_grid,
search_space = search_space,
steps = steps,
metric = metric,
refit = refit,
initial_train_size = initial_train_size,
fixed_train_size = fixed_train_size,
gap = gap,
allow_incomplete_fold = allow_incomplete_fold,
n_trials = n_trials,
random_state = random_state,
return_best = return_best,
verbose = verbose,
kwargs_create_study = kwargs_create_study,
kwargs_study_optimize = kwargs_study_optimize
)
return results, results_opt_best