
model_selection

backtesting_forecaster(forecaster, y, steps, metric, initial_train_size=None, fixed_train_size=True, gap=0, allow_incomplete_fold=True, exog=None, refit=False, interval=None, n_boot=500, random_state=123, in_sample_residuals=True, verbose=False, show_progress=True)

Backtesting of forecaster model.

If refit is False, the model is trained only once using the initial_train_size first observations. If refit is True, the model is trained in each iteration increasing the training set. A copy of the original forecaster is created so it is not modified during the process.

Parameters:

forecaster : ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect
    Forecaster model. Required.
y : pandas Series
    Training time series. Required.
steps : int
    Number of steps to predict. Required.
metric : str, Callable, list
    Metric used to quantify the goodness of fit of the model. Required.

      • If string: {'mean_squared_error', 'mean_absolute_error', 'mean_absolute_percentage_error', 'mean_squared_log_error'}
      • If Callable: Function with arguments y_true, y_pred that returns a float.
      • If list: List containing multiple strings and/or Callables.
initial_train_size : int, default `None`
    Number of samples in the initial train split. If None and the forecaster is already trained, no initial train is done and all data is used to evaluate the model. However, the first len(forecaster.last_window) observations are needed to create the initial predictors, so no predictions are calculated for them. This is useful to backtest the model on the same data used to train it. None is only allowed when refit is False and the forecaster is already trained.
fixed_train_size : bool, default `True`
    If True, the train size doesn't increase but moves by `steps` in each iteration.
gap : int, default `0`
    Number of samples to be excluded after the end of each training set and before the test set.
allow_incomplete_fold : bool, default `True`
    The last fold is allowed to have fewer samples than the test_size. If False, the last fold is excluded.
exog : pandas Series, pandas DataFrame, default `None`
    Exogenous variable/s included as predictor/s. Must have the same number of observations as `y` and should be aligned so that y[i] is regressed on exog[i].
refit : bool, default `False`
    Whether to re-fit the forecaster in each iteration.
interval : list, default `None`
    Confidence of the estimated prediction interval. Sequence of percentiles to compute, each between 0 and 100 inclusive. For example, a 95% interval should be specified as `interval = [2.5, 97.5]`. If None, no intervals are estimated.
n_boot : int, default `500`
    Number of bootstrapping iterations used to estimate prediction intervals.
random_state : int, default `123`
    Seed for the random generator, so that bootstrap intervals are deterministic.
in_sample_residuals : bool, default `True`
    If True, residuals from the training data are used as a proxy of prediction error to create prediction intervals. If False, out_sample_residuals are used if they are already stored inside the forecaster.
verbose : bool, default `False`
    Print the number of folds and the index of training and validation sets used for backtesting.
show_progress : bool, default `True`
    Whether to show a progress bar.

Returns:

metrics_value : float, list
    Value(s) of the metric(s).
backtest_predictions : pandas DataFrame
    Predictions and their estimated interval if `interval` is not None.

      • column pred: predictions.
      • column lower_bound: lower bound of the interval.
      • column upper_bound: upper bound of the interval.
Source code in skforecast/model_selection/model_selection.py
def backtesting_forecaster(
    forecaster,
    y: pd.Series,
    steps: int,
    metric: Union[str, Callable, list],
    initial_train_size: Optional[int]=None,
    fixed_train_size: bool=True,
    gap: int=0,
    allow_incomplete_fold: bool=True,
    exog: Optional[Union[pd.Series, pd.DataFrame]]=None,
    refit: bool=False,
    interval: Optional[list]=None,
    n_boot: int=500,
    random_state: int=123,
    in_sample_residuals: bool=True,
    verbose: bool=False,
    show_progress: bool=True
) -> Tuple[Union[float, list], pd.DataFrame]:
    """
    Backtesting of forecaster model.

    If `refit` is False, the model is trained only once using the `initial_train_size`
    first observations. If `refit` is True, the model is trained in each iteration
    increasing the training set. A copy of the original forecaster is created so 
    it is not modified during the process.

    Parameters
    ----------
    forecaster : ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect
        Forecaster model.
    y : pandas Series
        Training time series.
    steps : int
        Number of steps to predict.
    metric : str, Callable, list
        Metric used to quantify the goodness of fit of the model.

            - If `string`: {'mean_squared_error', 'mean_absolute_error',
             'mean_absolute_percentage_error', 'mean_squared_log_error'}
            - If `Callable`: Function with arguments y_true, y_pred that returns 
            a float.
            - If `list`: List containing multiple strings and/or Callables.
    initial_train_size : int, default `None`
        Number of samples in the initial train split. If `None` and `forecaster` is 
        already trained, no initial train is done and all data is used to evaluate the 
        model. However, the first `len(forecaster.last_window)` observations are needed 
        to create the initial predictors, so no predictions are calculated for them. 
        This is useful to backtest the model on the same data used to train it.
        `None` is only allowed when `refit` is `False` and `forecaster` is already
        trained.
    fixed_train_size : bool, default `True`
        If True, train size doesn't increase but moves by `steps` in each iteration.
    gap : int, default `0`
        Number of samples to be excluded after the end of each training set and 
        before the test set.
    allow_incomplete_fold : bool, default `True`
        Last fold is allowed to have a smaller number of samples than the 
        `test_size`. If `False`, the last fold is excluded.
    exog : pandas Series, pandas DataFrame, default `None`
        Exogenous variable/s included as predictor/s. Must have the same
        number of observations as `y` and should be aligned so that y[i] is
        regressed on exog[i].
    refit : bool, default `False`
        Whether to re-fit the forecaster in each iteration.
    interval : list, default `None`
        Confidence of the estimated prediction interval. Sequence of percentiles
        to compute, each between 0 and 100 inclusive. For example, a 95%
        interval should be specified as `interval = [2.5, 97.5]`. If `None`,
        no intervals are estimated.
    n_boot : int, default `500`
        Number of bootstrapping iterations used to estimate prediction
        intervals.
    random_state : int, default `123`
        Sets a seed to the random generator, so that boot intervals are always 
        deterministic.
    in_sample_residuals : bool, default `True`
        If `True`, residuals from the training data are used as proxy of prediction 
        error to create prediction intervals.  If `False`, out_sample_residuals 
        are used if they are already stored inside the forecaster.
    verbose : bool, default `False`
        Print number of folds and index of training and validation sets used 
        for backtesting.
    show_progress : bool, default `True`
        Whether to show a progress bar.

    Returns
    -------
    metrics_value : float, list
        Value(s) of the metric(s).
    backtest_predictions : pandas DataFrame
        Value of predictions and their estimated interval if `interval` is not `None`.

            - column pred: predictions.
            - column lower_bound: lower bound of the interval.
            - column upper_bound: upper bound of the interval.

    """

    if type(forecaster).__name__ not in ['ForecasterAutoreg', 
                                         'ForecasterAutoregCustom', 
                                         'ForecasterAutoregDirect']:
        raise TypeError(
            ("`forecaster` must be of type `ForecasterAutoreg`, `ForecasterAutoregCustom` "
             "or `ForecasterAutoregDirect`, for all other types of forecasters "
             "use the functions available in the other `model_selection` modules.")
        )

    check_backtesting_input(
        forecaster            = forecaster,
        steps                 = steps,
        metric                = metric,
        y                     = y,
        initial_train_size    = initial_train_size,
        fixed_train_size      = fixed_train_size,
        gap                   = gap,
        allow_incomplete_fold = allow_incomplete_fold,
        refit                 = refit,
        interval              = interval,
        n_boot                = n_boot,
        random_state          = random_state,
        in_sample_residuals   = in_sample_residuals,
        verbose               = verbose,
        show_progress         = show_progress
    )

    if type(forecaster).__name__ == 'ForecasterAutoregDirect' and \
       forecaster.steps < steps + gap:
        raise ValueError(
            ("When using a ForecasterAutoregDirect, the combination of steps "
             f"+ gap ({steps+gap}) cannot be greater than the `steps` parameter "
             f"declared when the forecaster is initialized ({forecaster.steps}).")
        )

    if refit:
        metrics_values, backtest_predictions = _backtesting_forecaster_refit(
            forecaster            = forecaster,
            y                     = y,
            steps                 = steps,
            metric                = metric,
            initial_train_size    = initial_train_size,
            fixed_train_size      = fixed_train_size,
            gap                   = gap,
            allow_incomplete_fold = allow_incomplete_fold,
            exog                  = exog,
            interval              = interval,
            n_boot                = n_boot,
            random_state          = random_state,
            in_sample_residuals   = in_sample_residuals,
            verbose               = verbose,
            show_progress         = show_progress
        )
    else:
        metrics_values, backtest_predictions = _backtesting_forecaster_no_refit(
            forecaster            = forecaster,
            y                     = y,
            steps                 = steps,
            metric                = metric,
            initial_train_size    = initial_train_size,
            gap                   = gap,
            allow_incomplete_fold = allow_incomplete_fold,
            exog                  = exog,
            interval              = interval,
            n_boot                = n_boot,
            random_state          = random_state,
            in_sample_residuals   = in_sample_residuals,
            verbose               = verbose,
            show_progress         = show_progress
        )  

    return metrics_values, backtest_predictions

grid_search_forecaster(forecaster, y, param_grid, steps, metric, initial_train_size, fixed_train_size=True, gap=0, allow_incomplete_fold=True, exog=None, lags_grid=None, refit=False, return_best=True, verbose=True)

Exhaustive search over specified parameter values for a Forecaster object. Validation is done using time series backtesting.

Parameters:

forecaster : ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect
    Forecaster model. Required.
y : pandas Series
    Training time series values. Required.
param_grid : dict
    Dictionary with parameter names (str) as keys and lists of parameter settings to try as values. Required.
steps : int
    Number of steps to predict. Required.
metric : str, Callable, list
    Metric used to quantify the goodness of fit of the model. Required.

      • If string: {'mean_squared_error', 'mean_absolute_error', 'mean_absolute_percentage_error', 'mean_squared_log_error'}
      • If Callable: Function with arguments y_true, y_pred that returns a float.
      • If list: List containing multiple strings and/or Callables.
initial_train_size : int
    Number of samples in the initial train split. Required.
fixed_train_size : bool, default `True`
    If True, the train size doesn't increase but moves by `steps` in each iteration.
gap : int, default `0`
    Number of samples to be excluded after the end of each training set and before the test set.
allow_incomplete_fold : bool, default `True`
    The last fold is allowed to have fewer samples than the test_size. If False, the last fold is excluded.
exog : pandas Series, pandas DataFrame, default `None`
    Exogenous variable/s included as predictor/s. Must have the same number of observations as `y` and should be aligned so that y[i] is regressed on exog[i].
lags_grid : list of int, lists, numpy ndarray or range, default `None`
    Lists of lags to try. Only used if forecaster is an instance of ForecasterAutoreg or ForecasterAutoregDirect.
refit : bool, default `False`
    Whether to re-fit the forecaster in each iteration of backtesting.
return_best : bool, default `True`
    Refit the forecaster using the best parameters found, on the whole dataset.
verbose : bool, default `True`
    Print the number of folds used for cv or backtesting.

Returns:

results : pandas DataFrame
    Results for each combination of parameters.

      • column lags: lags configuration for each iteration.
      • column params: parameter configuration for each iteration.
      • column metric: metric value estimated for each iteration.
      • additional n columns with param = value.
Source code in skforecast/model_selection/model_selection.py
def grid_search_forecaster(
    forecaster,
    y: pd.Series,
    param_grid: dict,
    steps: int,
    metric: Union[str, Callable, list],
    initial_train_size: int,
    fixed_train_size: bool=True,
    gap: int=0,
    allow_incomplete_fold: bool=True,
    exog: Optional[Union[pd.Series, pd.DataFrame]]=None,
    lags_grid: Optional[list]=None,
    refit: bool=False,
    return_best: bool=True,
    verbose: bool=True
) -> pd.DataFrame:
    """
    Exhaustive search over specified parameter values for a Forecaster object.
    Validation is done using time series backtesting.

    Parameters
    ----------
    forecaster : ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect
        Forecaster model.
    y : pandas Series
        Training time series values. 
    param_grid : dict
        Dictionary with parameter names (`str`) as keys and lists of parameter
        settings to try as values.
    steps : int
        Number of steps to predict.
    metric : str, Callable, list
        Metric used to quantify the goodness of fit of the model.

            - If `string`: {'mean_squared_error', 'mean_absolute_error',
             'mean_absolute_percentage_error', 'mean_squared_log_error'}
            - If `Callable`: Function with arguments y_true, y_pred that returns 
            a float.
            - If `list`: List containing multiple strings and/or Callables.
    initial_train_size : int 
        Number of samples in the initial train split.
    fixed_train_size : bool, default `True`
        If True, train size doesn't increase but moves by `steps` in each iteration.
    gap : int, default `0`
        Number of samples to be excluded after the end of each training set and 
        before the test set.
    allow_incomplete_fold : bool, default `True`
        Last fold is allowed to have a smaller number of samples than the 
        `test_size`. If `False`, the last fold is excluded.
    exog : pandas Series, pandas DataFrame, default `None`
        Exogenous variable/s included as predictor/s. Must have the same
        number of observations as `y` and should be aligned so that y[i] is
        regressed on exog[i].
    lags_grid : list of int, lists, numpy ndarray or range, default `None`
        Lists of `lags` to try. Only used if forecaster is an instance of 
        `ForecasterAutoreg` or `ForecasterAutoregDirect`.
    refit : bool, default `False`
        Whether to re-fit the forecaster in each iteration of backtesting.
    return_best : bool, default `True`
        Refit the `forecaster` using the best found parameters on the whole data.
    verbose : bool, default `True`
        Print number of folds used for cv or backtesting.

    Returns
    -------
    results : pandas DataFrame
        Results for each combination of parameters.

            - column lags: lags configuration for each iteration.
            - column params: parameters configuration for each iteration.
            - column metric: metric value estimated for each iteration.
            - additional n columns with param = value.

    """

    param_grid = list(ParameterGrid(param_grid))

    results = _evaluate_grid_hyperparameters(
        forecaster            = forecaster,
        y                     = y,
        param_grid            = param_grid,
        steps                 = steps,
        metric                = metric,
        initial_train_size    = initial_train_size,
        fixed_train_size      = fixed_train_size,
        gap                   = gap,
        allow_incomplete_fold = allow_incomplete_fold,
        exog                  = exog,
        lags_grid             = lags_grid,
        refit                 = refit,
        return_best           = return_best,
        verbose               = verbose
    )

    return results

random_search_forecaster(forecaster, y, param_distributions, steps, metric, initial_train_size, fixed_train_size=True, gap=0, allow_incomplete_fold=True, exog=None, lags_grid=None, refit=False, n_iter=10, random_state=123, return_best=True, verbose=True)

Random search over specified parameter values or distributions for a Forecaster object. Validation is done using time series backtesting.

Parameters:

forecaster : ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect
    Forecaster model. Required.
y : pandas Series
    Training time series. Required.
param_distributions : dict
    Dictionary with parameter names (str) as keys and distributions or lists of parameters to try as values. Required.
steps : int
    Number of steps to predict. Required.
metric : str, Callable, list
    Metric used to quantify the goodness of fit of the model. Required.

      • If string: {'mean_squared_error', 'mean_absolute_error', 'mean_absolute_percentage_error', 'mean_squared_log_error'}
      • If Callable: Function with arguments y_true, y_pred that returns a float.
      • If list: List containing multiple strings and/or Callables.
initial_train_size : int
    Number of samples in the initial train split. Required.
fixed_train_size : bool, default `True`
    If True, the train size doesn't increase but moves by `steps` in each iteration.
gap : int, default `0`
    Number of samples to be excluded after the end of each training set and before the test set.
allow_incomplete_fold : bool, default `True`
    The last fold is allowed to have fewer samples than the test_size. If False, the last fold is excluded.
exog : pandas Series, pandas DataFrame, default `None`
    Exogenous variable/s included as predictor/s. Must have the same number of observations as `y` and should be aligned so that y[i] is regressed on exog[i].
lags_grid : list of int, lists, numpy ndarray or range, default `None`
    Lists of lags to try. Only used if forecaster is an instance of ForecasterAutoreg or ForecasterAutoregDirect.
refit : bool, default `False`
    Whether to re-fit the forecaster in each iteration of backtesting.
n_iter : int, default `10`
    Number of parameter settings sampled per lags configuration. n_iter trades off runtime versus quality of the solution.
random_state : int, default `123`
    Seed for the random sampling, for reproducible output.
return_best : bool, default `True`
    Refit the forecaster using the best parameters found, on the whole dataset.
verbose : bool, default `True`
    Print the number of folds used for cv or backtesting.

Returns:

results : pandas DataFrame
    Results for each combination of parameters.

      • column lags: lags configuration for each iteration.
      • column params: parameter configuration for each iteration.
      • column metric: metric value estimated for each iteration.
      • additional n columns with param = value.
Source code in skforecast/model_selection/model_selection.py
def random_search_forecaster(
    forecaster,
    y: pd.Series,
    param_distributions: dict,
    steps: int,
    metric: Union[str, Callable, list],
    initial_train_size: int,
    fixed_train_size: bool=True,
    gap: int=0,
    allow_incomplete_fold: bool=True,
    exog: Optional[Union[pd.Series, pd.DataFrame]]=None,
    lags_grid: Optional[list]=None,
    refit: bool=False,
    n_iter: int=10,
    random_state: int=123,
    return_best: bool=True,
    verbose: bool=True
) -> pd.DataFrame:
    """
    Random search over specified parameter values or distributions for a Forecaster 
    object. Validation is done using time series backtesting.

    Parameters
    ----------
    forecaster : ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect
        Forecaster model.
    y : pandas Series
        Training time series. 
    param_distributions : dict
        Dictionary with parameter names (`str`) as keys and 
        distributions or lists of parameters to try.
    steps : int
        Number of steps to predict.
    metric : str, Callable, list
        Metric used to quantify the goodness of fit of the model.

            - If `string`: {'mean_squared_error', 'mean_absolute_error',
             'mean_absolute_percentage_error', 'mean_squared_log_error'}
            - If `Callable`: Function with arguments y_true, y_pred that returns 
            a float.
            - If `list`: List containing multiple strings and/or Callables.
    initial_train_size : int 
        Number of samples in the initial train split.
    fixed_train_size : bool, default `True`
        If True, train size doesn't increase but moves by `steps` in each iteration.
    gap : int, default `0`
        Number of samples to be excluded after the end of each training set and 
        before the test set.
    allow_incomplete_fold : bool, default `True`
        Last fold is allowed to have a smaller number of samples than the 
        `test_size`. If `False`, the last fold is excluded.
    exog : pandas Series, pandas DataFrame, default `None`
        Exogenous variable/s included as predictor/s. Must have the same
        number of observations as `y` and should be aligned so that y[i] is
        regressed on exog[i]. 
    lags_grid : list of int, lists, numpy ndarray or range, default `None`
        Lists of `lags` to try. Only used if forecaster is an instance of 
        `ForecasterAutoreg` or `ForecasterAutoregDirect`.
    refit : bool, default `False`
        Whether to re-fit the forecaster in each iteration of backtesting.
    n_iter : int, default `10`
        Number of parameter settings that are sampled per lags configuration. 
        n_iter trades off runtime vs quality of the solution.
    random_state : int, default `123`
        Sets a seed to the random sampling for reproducible output.
    return_best : bool, default `True`
        Refit the `forecaster` using the best found parameters on the whole data.
    verbose : bool, default `True`
        Print number of folds used for cv or backtesting.

    Returns
    -------
    results : pandas DataFrame
        Results for each combination of parameters.

            - column lags: lags configuration for each iteration.
            - column params: parameters configuration for each iteration.
            - column metric: metric value estimated for each iteration.
            - additional n columns with param = value.

    """

    param_grid = list(ParameterSampler(param_distributions, n_iter=n_iter, random_state=random_state))

    results = _evaluate_grid_hyperparameters(
        forecaster            = forecaster,
        y                     = y,
        param_grid            = param_grid,
        steps                 = steps,
        metric                = metric,
        initial_train_size    = initial_train_size,
        fixed_train_size      = fixed_train_size,
        gap                   = gap,
        allow_incomplete_fold = allow_incomplete_fold,
        exog                  = exog,
        lags_grid             = lags_grid,
        refit                 = refit,
        return_best           = return_best,
        verbose               = verbose
    )

    return results

bayesian_search_forecaster(forecaster, y, search_space, steps, metric, initial_train_size, fixed_train_size=True, gap=0, allow_incomplete_fold=True, exog=None, lags_grid=None, refit=False, n_trials=10, random_state=123, return_best=True, verbose=True, engine='optuna', kwargs_create_study={}, kwargs_study_optimize={}, kwargs_gp_minimize='deprecated')

Bayesian optimization for a Forecaster object using time series backtesting and optuna library.

Parameters:

forecaster : ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect
    Forecaster model. Required.
y : pandas Series
    Training time series. Required.
search_space : Callable (optuna)
    Function with argument trial which returns a dictionary with parameter names (str) as keys and Trial objects from optuna (trial.suggest_float, trial.suggest_int, trial.suggest_categorical) as values. Required.
steps : int
    Number of steps to predict. Required.
metric : str, Callable, list
    Metric used to quantify the goodness of fit of the model. Required.

      • If string: {'mean_squared_error', 'mean_absolute_error', 'mean_absolute_percentage_error', 'mean_squared_log_error'}
      • If Callable: Function with arguments y_true, y_pred that returns a float.
      • If list: List containing multiple strings and/or Callables.
initial_train_size : int
    Number of samples in the initial train split. Required.
fixed_train_size : bool, default `True`
    If True, the train size doesn't increase but moves by `steps` in each iteration.
gap : int, default `0`
    Number of samples to be excluded after the end of each training set and before the test set.
allow_incomplete_fold : bool, default `True`
    The last fold is allowed to have fewer samples than the test_size. If False, the last fold is excluded.
exog : pandas Series, pandas DataFrame, default `None`
    Exogenous variable/s included as predictor/s. Must have the same number of observations as `y` and should be aligned so that y[i] is regressed on exog[i].
lags_grid : list of int, lists, numpy ndarray or range, default `None`
    Lists of lags to try. Only used if forecaster is an instance of ForecasterAutoreg or ForecasterAutoregDirect.
refit : bool, default `False`
    Whether to re-fit the forecaster in each iteration of backtesting.
n_trials : int, default `10`
    Number of parameter settings that are sampled in each lag configuration.
random_state : int, default `123`
    Seed for the sampling, for reproducible output.
return_best : bool, default `True`
    Refit the forecaster using the best parameters found, on the whole dataset.
verbose : bool, default `True`
    Print the number of folds used for cv or backtesting.
engine : str, default `'optuna'`
    Bayesian optimization runs through the optuna library.
kwargs_create_study : dict, default `{}`
    Only applies to engine='optuna'. Keyword arguments (key, value mappings) to pass to optuna.create_study.
kwargs_study_optimize : dict, default `{}`
    Only applies to engine='optuna'. Other keyword arguments (key, value mappings) to pass to study.optimize().
kwargs_gp_minimize : dict, default `'deprecated'`
    Only applies to engine='skopt'. Other keyword arguments (key, value mappings) to pass to skopt.gp_minimize(). Deprecated in version 0.7.0.

Returns:

results : pandas DataFrame
    Results for each combination of parameters.

      • column lags: lags configuration for each iteration.
      • column params: parameter configuration for each iteration.
      • column metric: metric value estimated for each iteration.
      • additional n columns with param = value.
results_opt_best : optuna object
    The best optimization result, returned as an optuna FrozenTrial object.

Source code in skforecast/model_selection/model_selection.py
def bayesian_search_forecaster(
    forecaster,
    y: pd.Series,
    search_space: Callable,
    steps: int,
    metric: Union[str, Callable, list],
    initial_train_size: int,
    fixed_train_size: bool=True,
    gap: int=0,
    allow_incomplete_fold: bool=True,
    exog: Optional[Union[pd.Series, pd.DataFrame]]=None,
    lags_grid: Optional[list]=None,
    refit: bool=False,
    n_trials: int=10,
    random_state: int=123,
    return_best: bool=True,
    verbose: bool=True,
    engine: str='optuna',
    kwargs_create_study: dict={},
    kwargs_study_optimize: dict={},
    kwargs_gp_minimize: Any='deprecated'
) -> Tuple[pd.DataFrame, object]:
    """
    Bayesian optimization for a Forecaster object using time series backtesting and 
    optuna library.

    Parameters
    ----------
    forecaster : ForecasterAutoreg, ForecasterAutoregCustom, ForecasterAutoregDirect
        Forecaster model.
    y : pandas Series
        Training time series. 
    search_space : Callable (optuna)
        Function with argument `trial` which returns a dictionary with parameter names 
        (`str`) as keys and Trial object from optuna (trial.suggest_float, 
        trial.suggest_int, trial.suggest_categorical) as values.
    steps : int
        Number of steps to predict.
    metric : str, Callable, list
        Metric used to quantify the goodness of fit of the model.

            - If `string`: {'mean_squared_error', 'mean_absolute_error',
             'mean_absolute_percentage_error', 'mean_squared_log_error'}
            - If `Callable`: Function with arguments y_true, y_pred that returns 
            a float.
            - If `list`: List containing multiple strings and/or Callables.
    initial_train_size : int 
        Number of samples in the initial train split.
    fixed_train_size : bool, default `True`
        If True, train size doesn't increase but moves by `steps` in each iteration.
    gap : int, default `0`
        Number of samples to be excluded after the end of each training set and 
        before the test set.
    allow_incomplete_fold : bool, default `True`
        Last fold is allowed to have a smaller number of samples than the 
        `test_size`. If `False`, the last fold is excluded.
    exog : pandas Series, pandas DataFrame, default `None`
        Exogenous variable/s included as predictor/s. Must have the same
        number of observations as `y` and should be aligned so that y[i] is
        regressed on exog[i]. 
    lags_grid : list of int, lists, numpy ndarray or range, default `None`
        Lists of `lags` to try. Only used if forecaster is an instance of 
        `ForecasterAutoreg` or `ForecasterAutoregDirect`.
    refit : bool, default `False`
        Whether to re-fit the forecaster in each iteration of backtesting.
    n_trials : int, default `10`
        Number of parameter settings that are sampled in each lag configuration.
    random_state : int, default `123`
        Sets a seed to the sampling for reproducible output.
    return_best : bool, default `True`
        Refit the `forecaster` using the best found parameters on the whole data.
    verbose : bool, default `True`
        Print number of folds used for cv or backtesting.
    engine : str, default `'optuna'`
        Bayesian optimization runs through the optuna library.
    kwargs_create_study : dict, default `{'direction':'minimize', 'sampler':TPESampler(seed=123)}`
        Only applies to engine='optuna'. Keyword arguments (key, value mappings) 
        to pass to optuna.create_study.
    kwargs_study_optimize : dict, default `{}`
        Only applies to engine='optuna'. Other keyword arguments (key, value mappings) 
        to pass to study.optimize().
    kwargs_gp_minimize : dict, default `{}`
        Only applies to engine='skopt'. Other keyword arguments (key, value mappings) 
        to pass to skopt.gp_minimize().
        **Deprecated in version 0.7.0**

    Returns
    -------
    results : pandas DataFrame
        Results for each combination of parameters.

            - column lags: lags configuration for each iteration.
            - column params: parameters configuration for each iteration.
            - column metric: metric value estimated for each iteration.
            - additional n columns with param = value.
    results_opt_best : optuna object (optuna)  
        The best optimization result returned as a FrozenTrial optuna object.

    """

    if return_best and exog is not None and (len(exog) != len(y)):
        raise ValueError(
            f'`exog` must have same number of samples as `y`. '
            f'length `exog`: ({len(exog)}), length `y`: ({len(y)})'
        )

    if engine == 'skopt':
        warnings.warn(
            ("The engine 'skopt' for `bayesian_search_forecaster` is deprecated "
             "in favor of 'optuna' engine. To continue using it, use skforecast "
             "0.6.0. The optimization will be performed using the 'optuna' engine.")
        )
        engine = 'optuna'

    if engine not in ['optuna']:
        raise ValueError(
            f"""`engine` only allows 'optuna', got {engine}."""
        )

    results, results_opt_best = _bayesian_search_optuna(
                                    forecaster            = forecaster,
                                    y                     = y,
                                    exog                  = exog,
                                    lags_grid             = lags_grid,
                                    search_space          = search_space,
                                    steps                 = steps,
                                    metric                = metric,
                                    refit                 = refit,
                                    initial_train_size    = initial_train_size,
                                    fixed_train_size      = fixed_train_size,
                                    gap                   = gap,
                                    allow_incomplete_fold = allow_incomplete_fold,
                                    n_trials              = n_trials,
                                    random_state          = random_state,
                                    return_best           = return_best,
                                    verbose               = verbose,
                                    kwargs_create_study   = kwargs_create_study,
                                    kwargs_study_optimize = kwargs_study_optimize
                                )

    return results, results_opt_best
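
The `search_space` argument must be a callable that receives an optuna `trial` and returns a dictionary mapping parameter names to values drawn with the trial's `suggest_*` methods. A minimal sketch of that contract is shown below; `DummyTrial` is a hypothetical stand-in (not part of optuna or skforecast) so the example runs without either library installed. With optuna available, a real `optuna.trial.Trial` is used in exactly the same way.

```python
import random

class DummyTrial:
    """Hypothetical stand-in exposing the optuna suggest_* methods used below."""
    def __init__(self, seed=123):
        self._rng = random.Random(seed)

    def suggest_float(self, name, low, high):
        # Draw a float uniformly from [low, high], as optuna does by default.
        return self._rng.uniform(low, high)

    def suggest_int(self, name, low, high):
        # Draw an integer from [low, high], inclusive on both ends.
        return self._rng.randint(low, high)

    def suggest_categorical(self, name, choices):
        # Pick one of the given choices.
        return self._rng.choice(choices)

def search_space(trial):
    # Keys are parameter names of the forecaster's regressor; values are
    # sampled through the trial object. The parameter names here are
    # illustrative (typical gradient-boosting hyperparameters).
    return {
        'n_estimators' : trial.suggest_int('n_estimators', 50, 200),
        'max_depth'    : trial.suggest_int('max_depth', 2, 10),
        'learning_rate': trial.suggest_float('learning_rate', 0.01, 0.3),
    }

params = search_space(DummyTrial())
print(sorted(params))
```

The resulting callable is passed directly as the `search_space` argument of `bayesian_search_forecaster`, which invokes it once per trial (`n_trials` times for each lag configuration in `lags_grid`).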