
ForecasterRnn

ForecasterRnn(regressor, levels, lags='auto', steps='auto', transformer_series=MinMaxScaler(feature_range=(0, 1)), weight_func=None, fit_kwargs={}, forecaster_id=None, n_jobs=None, transformer_exog=None)

Bases: ForecasterBase

This class turns any regressor compatible with the Keras API into a Keras RNN multi-series multi-step forecaster. A single model is created to forecast all time steps and series. Keras enables workflows on top of JAX, TensorFlow, or PyTorch. See documentation for more details.

Parameters:

Name Type Description Default
regressor regressor or pipeline compatible with the Keras API

An instance of a regressor or pipeline compatible with the Keras API.

required
levels (str, list)

Name of one or more time series to be predicted. This determines the series the forecaster will handle. If None, all series used during training will be available for prediction.

required
lags (int, list, str)

Lags used as predictors. If 'auto', lags used are from 1 to N, where N is extracted from the input layer self.regressor.layers[0].input_shape[0][1].

`'auto'`
transformer_series (object, dict)

An instance of a transformer (preprocessor) compatible with the scikit-learn preprocessing API with the methods fit, transform, fit_transform and inverse_transform. The transformation is applied to each series before training the forecaster. ColumnTransformers are not allowed since they do not have an inverse_transform method.

  • If single transformer: it is cloned and applied to all series.
  • If dict of transformers: a different transformer can be used for each series.
`sklearn.preprocessing.MinMaxScaler(feature_range=(0, 1))`
fit_kwargs dict

Additional arguments to be passed to the fit method of the regressor.

`{}`
forecaster_id (str, int)

Name used as an identifier of the forecaster.

`None`
steps (int, list, str)

Steps to be predicted. If 'auto', steps used are from 1 to N, where N is extracted from the output layer self.regressor.layers[-1].output_shape[1].

`'auto'`
transformer_exog Ignored

Not used, present here for API consistency by convention.

None
weight_func Ignored

Not used, present here for API consistency by convention.

None
n_jobs Ignored

Not used, present here for API consistency by convention.

None

Attributes:

Name Type Description
regressor regressor or pipeline compatible with the Keras API

An instance of a regressor or pipeline compatible with the Keras API. A single model is used to forecast all steps and series.

levels (str, list)

Name of one or more time series to be predicted. This determines the series the forecaster will handle. If None, all series used during training will be available for prediction.

steps numpy ndarray

Future steps the forecaster will predict when using method predict(). Since the number of steps is fixed by the regressor's output layer, this value must be defined before training.

lags numpy ndarray

Lags used as predictors.

transformer_series (object, dict)

An instance of a transformer (preprocessor) compatible with the scikit-learn preprocessing API with methods: fit, transform, fit_transform and inverse_transform. Transformation is applied to each series before training the forecaster. ColumnTransformers are not allowed since they do not have inverse_transform method.

transformer_series_ dict

Dictionary with the transformer for each series. It is created cloning the objects in transformer_series and is used internally to avoid overwriting.

transformer_exog Ignored

Not used, present here for API consistency by convention.

max_lag int

Maximum value of lag included in lags.

window_size int

Size of the window needed to create the predictors.

window_size_diff int

This attribute has the same value as window_size as this Forecaster doesn't support differentiation. Present here for API consistency.

last_window pandas DataFrame

Last window seen by the forecaster during training. It stores the values needed to predict the next step immediately after the training data.

index_type type

Type of index of the input used in training.

index_freq str

Frequency of the index of the input used in training.

training_range pandas Index

First and last values of index of the data used during training.

included_exog bool

Whether the forecaster has been trained using exogenous variables.

exog_type type

Type of exogenous variable/s used in training.

exog_dtypes dict

Type of each exogenous variable used in training. If transformer_exog is used, the dtypes are calculated after the transformation.

exog_col_names list

Names of the exogenous variables used during training.

series_col_names list

Names of the series used during training.

X_train_dim_names dict

Labels for the multi-dimensional arrays created internally for training.

y_train_dim_names dict

Labels for the multi-dimensional arrays created internally for training.

fit_kwargs dict

Additional arguments to be passed to the fit method of the regressor.

in_sample_residuals dict

Residuals of the models when predicting training data. Only stored up to 1000 values per model in the form {step: residuals}. If transformer_series is not None, residuals are stored in the transformed scale.

out_sample_residuals dict

Residuals of the models when predicting non training data. Only stored up to 1000 values per model in the form {step: residuals}. If transformer_series is not None, residuals are assumed to be in the transformed scale. Use set_out_sample_residuals() method to set values.

fitted bool

Tag to identify if the regressor has been fitted (trained).

creation_date str

Date of creation.

fit_date str

Date of last fit.

skforecast_version str

Version of skforecast library used to create the forecaster.

python_version str

Version of python used to create the forecaster.

forecaster_id (str, int)

Name used as an identifier of the forecaster.

history dict

Dictionary with the training history returned by Keras after fitting, containing the loss (and validation loss, if validation data was provided) for each epoch.

dropna_from_series Ignored

Not used, present here for API consistency by convention.

Source code in skforecast\ForecasterRnn\ForecasterRnn.py, lines 176-305
def __init__(
    self,
    regressor: object,
    levels: Union[str, list],
    lags: Optional[Union[int, list, str]] = "auto",
    steps: Optional[Union[int, list, str]] = "auto",
    transformer_series: Optional[Union[object, dict]] = MinMaxScaler(
        feature_range=(0, 1)
    ),
    weight_func: Optional[Callable] = None,
    fit_kwargs: Optional[dict] = {},
    forecaster_id: Optional[Union[str, int]] = None,
    n_jobs: Any = None,
    transformer_exog: Any = None
) -> None:
    self.levels = None
    self.transformer_series = transformer_series
    self.transformer_series_ = None
    self.transformer_exog = None
    self.weight_func = weight_func
    self.source_code_weight_func = None
    self.max_lag = None
    self.window_size = None
    self.last_window = None
    self.index_type = None
    self.index_freq = None
    self.training_range = None
    self.included_exog = False
    self.exog_type = None
    self.exog_dtypes = None
    self.exog_col_names = None
    self.series_col_names = None
    self.X_train_dim_names = None
    self.y_train_dim_names = None
    self.fitted = False
    self.creation_date = pd.Timestamp.today().strftime("%Y-%m-%d %H:%M:%S")
    self.fit_date = None
    self.skforecast_version = skforecast.__version__
    self.python_version = sys.version.split(" ")[0]
    self.forecaster_id = forecaster_id
    self.history = None
    self.dropna_from_series = False # Ignored in this forecaster

    # Infer parameters from the model
    self.regressor = regressor
    layer_init = self.regressor.layers[0]

    if lags == "auto":
        if keras.__version__ < "3.0":
            self.lags = np.arange(layer_init.input_shape[0][1]) + 1
        else:
            self.lags = np.arange(layer_init.output.shape[1]) + 1

        warnings.warn(
            "Setting `lags` = 'auto'. `lags` are inferred from the regressor " 
            "architecture. Avoid the warning with lags=lags."
        )
    elif isinstance(lags, int):
        self.lags = np.arange(lags) + 1
    elif isinstance(lags, list):
        self.lags = np.array(lags)
    else:
        raise TypeError(
            f"`lags` argument must be an int, list or 'auto'. Got {type(lags)}."
        )

    self.max_lag = np.max(self.lags)
    self.window_size = self.max_lag
    self.window_size_diff = self.max_lag

    layer_end = self.regressor.layers[-1]

    try:
        if keras.__version__ < "3.0":
            self.series = layer_end.output_shape[-1]
        else:
            self.series = layer_end.output.shape[-1]
    # If the output shape cannot be read, raise an informative error: the
    # model input should be defined as Input(shape=(lags, n_series)).
    except Exception:
        raise TypeError(
            "Input shape of the regressor should be Input(shape=(lags, n_series))."
        )

    if steps == "auto":
        if keras.__version__ < "3.0":
            self.steps = np.arange(layer_end.output_shape[1]) + 1
        else:
            self.steps = np.arange(layer_end.output.shape[1]) + 1
        warnings.warn(
            "`steps` default value = 'auto'. `steps` inferred from regressor "
            "architecture. Avoid the warning with steps=steps."
        )
    elif isinstance(steps, int):
        self.steps = np.arange(steps) + 1
    elif isinstance(steps, list):
        self.steps = np.array(steps)
    else:
        raise TypeError(
            f"`steps` argument must be an int, list or 'auto'. Got {type(steps)}."
        )

    self.max_step = np.max(self.steps)
    if keras.__version__ < "3.0":
        self.outputs = layer_end.output_shape[-1]
    else:
        self.outputs = layer_end.output.shape[-1]

    if not isinstance(levels, (list, str, type(None))):
        raise TypeError(
            f"`levels` argument must be a string, list or. Got {type(levels)}."
        )

    if isinstance(levels, str):
        self.levels = [levels]
    elif isinstance(levels, list):
        self.levels = levels
    else:
        raise TypeError(
            f"`levels` argument must be a string or a list. Got {type(levels)}."
        )

    self.series_val = None
    if "series_val" in fit_kwargs:
        self.series_val = fit_kwargs["series_val"]
        fit_kwargs.pop("series_val")

    self.fit_kwargs = check_select_fit_kwargs(
        regressor=self.regressor, fit_kwargs=fit_kwargs
    )
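
A minimal usage sketch is shown below. It assumes the import paths implied by the source locations on this page (skforecast.ForecasterRnn and skforecast.ForecasterRnn.utils), a hypothetical toy two-series DataFrame, and a model built with create_and_compile_model (documented at the end of this page); adapt names and sizes to your own data.

# Minimal sketch: wrap a Keras model in a ForecasterRnn. The toy data and
# model sizes are illustrative only.
import numpy as np
import pandas as pd
from skforecast.ForecasterRnn import ForecasterRnn
from skforecast.ForecasterRnn.utils import create_and_compile_model

# Two toy series with a daily DatetimeIndex
series = pd.DataFrame(
    {
        "series_1": np.sin(np.arange(200) / 10),
        "series_2": np.cos(np.arange(200) / 10),
    },
    index=pd.date_range("2020-01-01", periods=200, freq="D"),
)

# The model input is Input(shape=(lags, n_series)); the output is (steps, levels)
model = create_and_compile_model(
    series=series, lags=10, steps=5, levels="series_1",
    recurrent_units=32, dense_units=16,
)

# With the defaults lags='auto' and steps='auto', both are inferred from the
# model architecture and a warning reminds you of it.
forecaster = ForecasterRnn(regressor=model, levels="series_1")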

_create_lags(y)

Transforms a 1d array into a 2d array (X) and a 2d array (y). Each row in X is associated with a value of y and represents the lags that precede it.

Notice that the returned matrix X_data contains lag 1 in the first column, lag 2 in the second column, and so on.

Parameters:

Name Type Description Default
y numpy ndarray

1d numpy ndarray Training time series.

required

Returns:

Name Type Description
X_data numpy ndarray

2d numpy ndarray with the lagged values (predictors). Shape: (samples - max_lag - max_step + 1, len(lags))

y_data numpy ndarray

2d numpy ndarray with the values of the time series related to each row of X_data for each step. Shape: (samples - max_lag - max_step + 1, len(steps))

Source code in skforecast\ForecasterRnn\ForecasterRnn.py, lines 353-409
def _create_lags(
    self,
    y: np.ndarray
) -> Tuple[np.ndarray, np.ndarray]:
    """
    Transforms a 1d array into a 2d array (X) and a 2d array (y). Each row
    in X is associated with a value of y and represents the lags that
    precede it.

    Notice that the returned matrix `X_data` contains lag 1 in the first
    column, lag 2 in the second column, and so on.

    Parameters
    ----------
    y : numpy ndarray
        1d numpy ndarray Training time series.

    Returns
    -------
    X_data : numpy ndarray
        2d numpy ndarray with the lagged values (predictors).
        Shape: (samples - max_lag - max_step + 1, len(lags))
    y_data : numpy ndarray
        2d numpy ndarray with the values of the time series related to each
        row of `X_data` for each step.
        Shape: (samples - max_lag - max_step + 1, len(steps))

    """

    n_splits = len(y) - self.max_lag - self.max_step + 1  # rows of y_data
    if n_splits <= 0:
        raise ValueError(
            (
                f"The maximum lag ({self.max_lag}) must be less than the length "
                f"of the series minus the maximum of steps ({len(y)-self.max_step})."
            )
        )

    X_data = np.full(
        shape=(n_splits, (self.max_lag)), fill_value=np.nan, dtype=float
    )
    for i, lag in enumerate(range(self.max_lag - 1, -1, -1)):
        X_data[:, i] = y[self.max_lag - lag - 1 : -(lag + self.max_step)]

    y_data = np.full(
        shape=(n_splits, self.max_step), fill_value=np.nan, dtype=float
    )
    for step in range(self.max_step):
        y_data[:, step] = y[self.max_lag + step : self.max_lag + step + n_splits]

    # Get lags index
    X_data = X_data[:, self.lags - 1]

    # Get steps index
    y_data = y_data[:, self.steps-1]

    return X_data, y_data
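
To make the windowing concrete, the standalone NumPy sketch below reproduces the slicing performed by _create_lags for a hypothetical toy series of length 10 with max_lag=3 and max_step=2; it is illustrative only and does not call the forecaster.

# Standalone sketch of the slicing in _create_lags (toy values, not the API).
import numpy as np

y = np.arange(10, dtype=float)                 # 0, 1, ..., 9
max_lag, max_step = 3, 2
n_splits = len(y) - max_lag - max_step + 1     # 6 training samples

X_data = np.full((n_splits, max_lag), np.nan)
for i, lag in enumerate(range(max_lag - 1, -1, -1)):
    X_data[:, i] = y[max_lag - lag - 1 : -(lag + max_step)]

y_data = np.full((n_splits, max_step), np.nan)
for step in range(max_step):
    y_data[:, step] = y[max_lag + step : max_lag + step + n_splits]

print(X_data.shape, y_data.shape)  # (6, 3) (6, 2)
print(X_data[0], y_data[0])        # [0. 1. 2.] [3. 4.] -> window y[0:3] predicts y[3:5]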

create_train_X_y(series, exog=None)

Create training matrices. The resulting multi-dimensional matrices contain the target variable and predictors needed to train the model.

Parameters:

Name Type Description Default
series pandas DataFrame

Training time series.

required
exog Ignored

Not used, present here for API consistency by convention. This type of forecaster does not allow exogenous variables.

None

Returns:

Name Type Description
X_train ndarray

Training values (predictors) for each step. The resulting array has 3 dimensions: (time_points, n_lags, n_series)

y_train ndarray

Values (target) of the time series related to each row of X_train. The resulting array has 3 dimensions: (time_points, n_steps, n_levels)

dimension_names dict

Labels for the multi-dimensional arrays created internally for training.

Source code in skforecast\ForecasterRnn\ForecasterRnn.py, lines 411-538
def create_train_X_y(
    self, series: pd.DataFrame, exog: Any = None
) -> Tuple[np.ndarray, np.ndarray, dict]:
    """
    Create training matrices. The resulting multi-dimensional matrices contain
    the target variable and predictors needed to train the model.

    Parameters
    ----------
    series : pandas DataFrame
        Training time series.
    exog : Ignored
        Not used, present here for API consistency by convention. This type of
        forecaster does not allow exogenous variables.

    Returns
    -------
    X_train : np.ndarray
        Training values (predictors) for each step. The resulting array has
        3 dimensions: (time_points, n_lags, n_series)
    y_train : np.ndarray
        Values (target) of the time series related to each row of `X_train`.
        The resulting array has 3 dimensions: (time_points, n_steps, n_levels)
    dimension_names : dict
        Labels for the multi-dimensional arrays created internally for training.

    """

    if not isinstance(series, pd.DataFrame):
        raise TypeError(f"`series` must be a pandas DataFrame. Got {type(series)}.")

    series_col_names = list(series.columns)

    if not set(self.levels).issubset(set(series.columns)):
        raise ValueError(
            (
                f"`levels` defined when initializing the forecaster must be included "
                f"in `series` used for trainng. {set(self.levels) - set(series.columns)} "
                f"not found."
            )
        )

    if len(series) < self.max_lag + self.max_step:
        raise ValueError(
            (
                f"Minimum length of `series` for training this forecaster is "
                f"{self.max_lag + self.max_step}. Got {len(series)}. Reduce the "
                f"number of predicted steps, {self.max_step}, or the maximum "
                f"lag, {self.max_lag}, if no more data is available."
            )
        )

    if self.transformer_series is None:
        self.transformer_series_ = {serie: None for serie in series_col_names}
    elif not isinstance(self.transformer_series, dict):
        self.transformer_series_ = {
            serie: clone(self.transformer_series) for serie in series_col_names
        }
    else:
        self.transformer_series_ = {serie: None for serie in series_col_names}
        # Only elements already present in transformer_series_ are updated
        self.transformer_series_.update(
            (k, v)
            for k, v in deepcopy(self.transformer_series).items()
            if k in self.transformer_series_
        )
        series_not_in_transformer_series = set(series.columns) - set(
            self.transformer_series.keys()
        )
        if series_not_in_transformer_series:
            warnings.warn(
                (
                    f"{series_not_in_transformer_series} not present in "
                    f"`transformer_series`. No transformation is applied to "
                    f"these series."
                ),
                IgnoredArgumentWarning,
            )

    # Step 1: Create lags for all columns
    X_train = []
    y_train = []

    for i, serie in enumerate(series.columns):
        x = series[serie]
        check_y(y=x)
        x = transform_series(
            series=x,
            transformer=self.transformer_series_[serie],
            fit=True,
            inverse_transform=False,
        )
        X, _ = self._create_lags(x)
        X_train.append(X)

    for i, serie in enumerate(self.levels):
        y = series[serie]
        check_y(y=y)
        y = transform_series(
            series=y,
            transformer=self.transformer_series_[serie],
            fit=True,
            inverse_transform=False,
        )

        _, y = self._create_lags(y)
        y_train.append(y)

    X_train = np.stack(X_train, axis=2)
    y_train = np.stack(y_train, axis=2)

    train_index = series.index.to_list()[
        self.max_lag : (len(series.index.to_list()) - self.max_step + 1)
    ]
    dimension_names = {
        "X_train": {
            0: train_index,
            1: ["lag_" + str(l) for l in self.lags],
            2: series.columns.to_list(),
        },
        "y_train": {
            0: train_index,
            1: ["step_" + str(l) for l in self.steps],
            2: self.levels,
        },
    }

    return X_train, y_train, dimension_names
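
Continuing the sketch from the class example above, the call below builds the 3-dimensional training matrices; the shapes assume the hypothetical toy data (200 rows, 2 series), 10 lags, 5 steps and a single level.

# Continuing the sketch: create the training matrices and inspect them.
X_train, y_train, dim_names = forecaster.create_train_X_y(series=series)

print(X_train.shape)                # (186, 10, 2) -> (time_points, n_lags, n_series)
print(y_train.shape)                # (186, 5, 1)  -> (time_points, n_steps, n_levels)
print(dim_names["X_train"][1][:3])  # ['lag_1', 'lag_2', 'lag_3']
print(dim_names["y_train"][2])      # ['series_1']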

fit(series, store_in_sample_residuals=True, exog=None, suppress_warnings=False, store_last_window='Ignored')

Training Forecaster.

Additional arguments to be passed to the fit method of the regressor can be added with the fit_kwargs argument when initializing the forecaster.

Parameters:

Name Type Description Default
series pandas DataFrame

Training time series.

required
store_in_sample_residuals bool

If True, in-sample residuals will be stored in the forecaster object after fitting.

`True`
exog Ignored

Not used, present here for API consistency by convention.

None
suppress_warnings bool

If True, skforecast warnings will be suppressed during the fitting process. See skforecast.exceptions.warn_skforecast_categories for more information.

`False`
store_last_window Ignored

Not used, present here for API consistency by convention.

'Ignored'

Returns:

Type Description
None
Source code in skforecast\ForecasterRnn\ForecasterRnn.py, lines 541-620
def fit(
    self,
    series: pd.DataFrame,
    store_in_sample_residuals: bool = True,
    exog: Any = None,
    suppress_warnings: bool = False,
    store_last_window: str = "Ignored",
) -> None:
    """
    Training Forecaster.

    Additional arguments to be passed to the `fit` method of the regressor
    can be added with the `fit_kwargs` argument when initializing the forecaster.

    Parameters
    ----------
    series : pandas DataFrame
        Training time series.
    store_in_sample_residuals : bool, default `True`
        If `True`, in-sample residuals will be stored in the forecaster object
        after fitting.
    exog : Ignored
        Not used, present here for API consistency by convention.
    suppress_warnings : bool, default `False`
        If `True`, skforecast warnings will be suppressed during the fitting
        process. See skforecast.exceptions.warn_skforecast_categories for more
        information.
    store_last_window : Ignored
        Not used, present here for API consistency by convention.

    Returns
    -------
    None

    """

    set_skforecast_warnings(suppress_warnings, action='ignore')

    # Reset values in case the forecaster has already been fitted.
    self.index_type = None
    self.index_freq = None
    self.last_window = None
    self.included_exog = None
    self.exog_type = None
    self.exog_dtypes = None
    self.exog_col_names = None
    self.series_col_names = None
    self.X_train_dim_names = None
    self.y_train_dim_names = None
    self.in_sample_residuals = None
    self.fitted = False
    self.training_range = None

    self.series_col_names = list(series.columns)

    X_train, y_train, X_train_dim_names = self.create_train_X_y(series=series)
    self.X_train_dim_names = X_train_dim_names["X_train"]
    self.y_train_dim_names = X_train_dim_names["y_train"]

    if self.series_val is not None:
        X_val, y_val, _ = self.create_train_X_y(series=self.series_val)
        history = self.regressor.fit(
            x=X_train, y=y_train, validation_data=(X_val, y_val), **self.fit_kwargs
        )
    else:
        history = self.regressor.fit(x=X_train, y=y_train, **self.fit_kwargs)

    self.history = history.history
    self.fitted = True
    self.fit_date = pd.Timestamp.today().strftime("%Y-%m-%d %H:%M:%S")
    _, y_index = preprocess_y(y=series[self.levels], return_values=False)
    self.training_range = y_index[[0, -1]]
    self.index_type = type(y_index)
    if isinstance(y_index, pd.DatetimeIndex):
        self.index_freq = y_index.freqstr
    else:
        self.index_freq = y_index.step

    self.last_window = series.iloc[-self.max_lag :].copy()

    set_skforecast_warnings(suppress_warnings, action='default')
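
Continuing the sketch, standard Keras fit arguments (epochs, batch_size, verbose, ...) are passed through fit_kwargs when the forecaster is created. The special key series_val, a hold-out DataFrame with the same columns as series (hypothetical split below), is popped at initialization and used as validation data during fit.

# Continuing the sketch: train with Keras fit arguments and validation data.
series_train = series.iloc[:150]
series_val = series.iloc[150:]          # hypothetical hold-out split

forecaster = ForecasterRnn(
    regressor=model,
    levels="series_1",
    fit_kwargs={
        "epochs": 10,
        "batch_size": 32,
        "verbose": 0,
        "series_val": series_val,       # used as validation data in fit()
    },
)
forecaster.fit(series=series_train)
print(forecaster.history.keys())        # dict_keys(['loss', 'val_loss'])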

predict(steps=None, levels=None, last_window=None, exog=None, suppress_warnings=False)

Predict n steps ahead

Parameters:

Name Type Description Default
steps (int, list, None)

Predict n steps. The value of steps must be less than or equal to the value of steps defined when initializing the forecaster. Starts at 1.

  • If int: Only steps within the range of 1 to int are predicted.
  • If list: List of ints. Only the steps contained in the list are predicted.
  • If None: As many steps are predicted as were defined at initialization.
`None`
levels (str, list)

Name of one or more time series to be predicted. It must be included in levels defined when initializing the forecaster. If None, all the levels defined at initialization are predicted.

`None`
last_window pandas DataFrame

Series values used to create the predictors (lags) needed in the first iteration of the prediction (t + 1). If last_window = None, the values stored in self.last_window are used to calculate the initial predictors, and the predictions start right after training data.

`None`
exog Ignored

Not used, present here for API consistency by convention.

None
suppress_warnings bool

If True, skforecast warnings will be suppressed during the prediction process. See skforecast.exceptions.warn_skforecast_categories for more information.

`False`

Returns:

Name Type Description
predictions pandas DataFrame

Predicted values.

Source code in skforecast\ForecasterRnn\ForecasterRnn.py, lines 623-766
def predict(
    self,
    steps: Optional[Union[int, list]] = None,
    levels: Optional[Union[str, list]] = None,
    last_window: Optional[pd.DataFrame] = None,
    exog: Any = None,
    suppress_warnings: bool = False
) -> pd.DataFrame:
    """
    Predict n steps ahead

    Parameters
    ----------
    steps : int, list, None, default `None`
        Predict n steps. The value of `steps` must be less than or equal to the
        value of steps defined when initializing the forecaster. Starts at 1.

        - If `int`: Only steps within the range of 1 to int are predicted.
        - If `list`: List of ints. Only the steps contained in the list
        are predicted.
        - If `None`: As many steps are predicted as were defined at
        initialization.
    levels : str, list, default `None`
        Name of one or more time series to be predicted. It must be included
        in `levels` defined when initializing the forecaster. If `None`, all
        the levels defined at initialization are predicted.
    last_window : pandas DataFrame, default `None`
        Series values used to create the predictors (lags) needed in the
        first iteration of the prediction (t + 1).
        If `last_window = None`, the values stored in `self.last_window` are
        used to calculate the initial predictors, and the predictions start
        right after training data.
    exog : Ignored
        Not used, present here for API consistency by convention.
    suppress_warnings : bool, default `False`
        If `True`, skforecast warnings will be suppressed during the prediction
        process. See skforecast.exceptions.warn_skforecast_categories for more
        information.

    Returns
    -------
    predictions : pandas DataFrame
        Predicted values.

    """

    set_skforecast_warnings(suppress_warnings, action='ignore')

    if levels is None:
        levels = self.levels
    elif isinstance(levels, str):
        levels = [levels]
    if isinstance(steps, int):
        steps = list(np.arange(steps) + 1)
    elif steps is None:
        if isinstance(self.steps, int):
            steps = list(np.arange(self.steps) + 1)
        elif isinstance(self.steps, (list, np.ndarray)):
            steps = list(np.array(self.steps))
    elif isinstance(steps, list):
        steps = list(np.array(steps))

    for step in steps:
        if not isinstance(step, (int, np.int64, np.int32)):
            raise TypeError(
                (
                    f"`steps` argument must be an int, a list of ints or `None`. "
                    f"Got {type(steps)}."
                )
            )

    if last_window is None:
        last_window = self.last_window

    check_predict_input(
        forecaster_name=type(self).__name__,
        steps=steps,
        fitted=self.fitted,
        included_exog=self.included_exog,
        index_type=self.index_type,
        index_freq=self.index_freq,
        window_size=self.window_size,
        last_window=last_window,
        last_window_exog=None,
        exog=None,
        exog_type=None,
        exog_col_names=None,
        interval=None,
        alpha=None,
        max_steps=self.max_step,
        levels=levels,
        levels_forecaster=self.levels,
        series_col_names=self.series_col_names,
    )

    last_window = last_window.iloc[-self.window_size :,].copy()

    for serie_name in self.series_col_names:
        last_window_serie = transform_series(
            series=last_window[serie_name],
            transformer=self.transformer_series_[serie_name],
            fit=False,
            inverse_transform=False,
        )
        last_window_values, last_window_index = preprocess_last_window(
            last_window=last_window_serie
        )
        last_window.loc[:, serie_name] = last_window_values

    X = np.reshape(last_window.to_numpy(), (1, self.max_lag, last_window.shape[1]))
    predictions = self.regressor.predict(X, verbose=0)
    predictions_reshaped = np.reshape(
        predictions, (predictions.shape[1], predictions.shape[2])
    )

    # if len(self.levels) == 1:
    #     predictions_reshaped = np.reshape(predictions, (predictions.shape[1], 1))
    # else:
    #     predictions_reshaped = np.reshape(
    #         predictions, (predictions.shape[1], predictions.shape[2])
    #     )
    idx = expand_index(index=last_window_index, steps=max(steps))

    predictions = pd.DataFrame(
        data=predictions_reshaped[np.array(steps) - 1],
        columns=self.levels,
        index=idx[np.array(steps) - 1],
    )
    predictions = predictions[levels]

    for serie in levels:
        x = predictions[serie]
        check_y(y=x)
        x = transform_series(
            series=x,
            transformer=self.transformer_series_[serie],
            fit=False,
            inverse_transform=True,
        )
        predictions.loc[:, serie] = x

    set_skforecast_warnings(suppress_warnings, action='default')

    return predictions
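
Continuing the sketch, predictions start right after the training data when last_window is not provided; steps can also be restricted to a subset of the steps fixed by the model architecture.

# Continuing the sketch: predict after fitting.
predictions = forecaster.predict()            # all 5 steps for 'series_1'
subset = forecaster.predict(steps=[1, 3])     # only steps 1 and 3

# Predict from an explicit window (the last `window_size` rows are used)
predictions_lw = forecaster.predict(
    last_window=series_train.iloc[-forecaster.window_size:]
)
print(predictions.head())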

plot_history(ax=None, **fig_kw)

Plots the training and validation loss curves from the history object stored in the ForecasterRnn.

Parameters:

Name Type Description Default
ax Axes

Pre-existing ax for the plot. Otherwise, call matplotlib.pyplot.subplots() internally.

`None`
fig_kw dict

Other keyword arguments are passed to matplotlib.pyplot.subplots().

{}

Returns:

Name Type Description
fig Figure

Matplotlib Figure.

Source code in skforecast\ForecasterRnn\ForecasterRnn.py, lines 769-830
def plot_history(
    self, ax: matplotlib.axes.Axes = None, **fig_kw
) -> matplotlib.figure.Figure:
    """
    Plots the training and validation loss curves from the history object
    stored in the ForecasterRnn.

    Parameters
    ----------
    ax : matplotlib.axes.Axes, default `None`
        Pre-existing ax for the plot. Otherwise, call matplotlib.pyplot.subplots()
        internally.
    fig_kw : dict
        Other keyword arguments are passed to matplotlib.pyplot.subplots().

    Returns
    -------
    fig: matplotlib.figure.Figure
        Matplotlib Figure.

    """

    if ax is None:
        fig, ax = plt.subplots(1, 1, **fig_kw)
    else:
        fig = ax.get_figure()

    # Setting up the plot style

    if self.history is None:
        raise ValueError("ForecasterRnn has not been fitted yet.")

    # Plotting training loss
    ax.plot(
        range(1, len(self.history["loss"]) + 1),
        self.history["loss"],
        color="b",
        label="Training Loss",
    )

    # Plotting validation loss
    if "val_loss" in self.history:
        ax.plot(
            range(1, len(self.history["val_loss"]) + 1),
            self.history["val_loss"],
            color="r",
            label="Validation Loss",
        )

    # Labeling the axes and adding a title
    ax.set_xlabel("Epochs")
    ax.set_ylabel("Loss")
    ax.set_title("Training and Validation Loss")

    # Adding a legend
    ax.legend()

    # Displaying grid for better readability
    ax.grid(True, linestyle="--", alpha=0.7)

    # Setting x-axis ticks to integers only
    ax.set_xticks(range(1, len(self.history["loss"]) + 1))

    return fig
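
A short sketch of plotting the stored history (continuing the toy example); drawing on a pre-existing axes via the ax argument keeps the figure handling explicit.

# Continuing the sketch: plot the loss curves recorded during fit().
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(7, 3))
forecaster.plot_history(ax=ax)   # training and, if available, validation loss
plt.show()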

set_params(params)

Set new values for the parameters of the Keras model stored in the forecaster. The regressor is cloned and re-compiled with the new parameters.

Parameters:

Name Type Description Default
params dict

Parameters values.

required

Returns:

Type Description
None
Source code in skforecast\ForecasterRnn\ForecasterRnn.py, lines 1154-1176
def set_params(
    self, 
    params: dict
) -> None:  # TODO: add tests
    """
    Set new values for the parameters of the Keras model stored in the
    forecaster. The regressor is cloned and re-compiled with the new
    parameters.

    Parameters
    ----------
    params : dict
        Parameters values.

    Returns
    -------
    None

    """

    self.regressor = clone(self.regressor)
    self.regressor.reset_states()
    self.regressor.compile(**params)

set_fit_kwargs(fit_kwargs)

Set new values for the additional keyword arguments passed to the fit method of the regressor.

Parameters:

Name Type Description Default
fit_kwargs dict

Dict of the form {"argument": new_value}.

required

Returns:

Type Description
None
Source code in skforecast\ForecasterRnn\ForecasterRnn.py, lines 1179-1198
def set_fit_kwargs(
    self,
    fit_kwargs: dict
) -> None:
    """
    Set new values for the additional keyword arguments passed to the `fit`
    method of the regressor.

    Parameters
    ----------
    fit_kwargs : dict
        Dict of the form {"argument": new_value}.

    Returns
    -------
    None

    """

    self.fit_kwargs = check_select_fit_kwargs(self.regressor, fit_kwargs=fit_kwargs)
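
For example (continuing the sketch), the training schedule can be changed before re-fitting; the keys are ordinary Keras fit arguments and are filtered against the regressor's fit signature.

# Continuing the sketch: update the Keras fit arguments used by fit().
forecaster.set_fit_kwargs({"epochs": 25, "batch_size": 64, "verbose": 0})
forecaster.fit(series=series_train)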

set_lags(lags)

Not used, present here for API consistency by convention.

Returns:

Type Description
None
Source code in skforecast\ForecasterRnn\ForecasterRnn.py, lines 1201-1214
def set_lags(
    self, 
    lags: Any
) -> None:
    """
    Not used, present here for API consistency by convention.

    Returns
    -------
    None

    """

    pass

create_and_compile_model(series, lags, steps, levels=None, recurrent_layer='LSTM', recurrent_units=100, dense_units=64, activation='relu', optimizer=Adam(learning_rate=0.01), loss=MeanSquaredError(), compile_kwargs={})

Creates a neural network model for time series prediction with flexible recurrent layers.

Parameters:

Name Type Description Default
series pandas DataFrame

Input time series.

required
lags (int, list)

Number of lagged time steps to consider in the input, or a list of specific lag indices.

required
steps (int, list)

Number of steps to predict into the future, or a list of specific step indices.

required
levels (str, int, list)

Number of output levels (features) to predict, or a list of specific level indices. If None, defaults to the number of input series.

`None`
recurrent_layer str

Type of recurrent layer to be used ('LSTM' or 'RNN').

`'LSTM'`
recurrent_units (int, list)

Number of units in the recurrent layer(s). Can be an integer or a list of integers for multiple layers.

`100`
dense_units (int, list)

List of integers representing the number of units in each dense layer.

`64`
activation str

Activation function for the recurrent and dense layers.

`'relu'`
optimizer object

Optimization algorithm and learning rate.

`Adam(learning_rate=0.01)`
loss object

Loss function for model training.

`MeanSquaredError()`
compile_kwargs dict

Additional arguments for model compilation.

`{}`

Returns:

Name Type Description
model Model

Compiled neural network model.

Source code in skforecast\ForecasterRnn\utils.py, lines 24-194
def create_and_compile_model(
    series: pd.DataFrame,
    lags: Union[int, list],
    steps: Union[int, list],
    levels: Optional[Union[str, int, list]]=None,
    recurrent_layer: str="LSTM",
    recurrent_units: Union[int, list]=100,
    dense_units: Union[int, list]=64,
    activation: str="relu",
    optimizer: object=Adam(learning_rate=0.01),
    loss: object=MeanSquaredError(),
    compile_kwargs: dict={},
) -> keras.models.Model:
    """
    Creates a neural network model for time series prediction with flexible recurrent layers.

    Parameters
    ----------
    series : pandas DataFrame
        Input time series.
    lags : int, list
        Number of lagged time steps to consider in the input, or a list of 
        specific lag indices.
    steps : int, list
        Number of steps to predict into the future, or a list of specific step 
        indices.
    levels : str, int, list, default `None`
        Number of output levels (features) to predict, or a list of specific 
        level indices. If None, defaults to the number of input series.
    recurrent_layer : str, default `'LSTM'`
        Type of recurrent layer to be used ('LSTM' or 'RNN').
    recurrent_units : int, list, default `100`
        Number of units in the recurrent layer(s). Can be an integer or a 
        list of integers for multiple layers.
    dense_units : int, list, default `64`
        List of integers representing the number of units in each dense layer.
    activation : str, default `'relu'`
        Activation function for the recurrent and dense layers.
    optimizer : object, default `Adam(learning_rate=0.01)`
        Optimization algorithm and learning rate.
    loss : object, default `MeanSquaredError()`
        Loss function for model training.
    compile_kwargs : dict, default `{}` 
        Additional arguments for model compilation.

    Returns
    -------
    model : keras.models.Model
        Compiled neural network model.

    """

    if keras.__version__ > "3":
        print(f"keras version: {keras.__version__}")
        print(f"Using backend: {keras.backend.backend()}")
        if keras.backend.backend() == "tensorflow":
            import tensorflow
            print(f"tensorflow version: {tensorflow.__version__}")
        elif keras.backend.backend() == "torch":
            import torch
            print(f"torch version: {torch.__version__}")
        elif keras.backend.backend() == "jax":
            import jax
            print(f"jax version: {jax.__version__}")
        else:
            print("Backend not recognized")

    err_msg = f"`series` must be a pandas DataFrame. Got {type(series)}."

    if not isinstance(series, pd.DataFrame):
        raise TypeError(err_msg)

    n_series = series.shape[1]

    # Dense units must be a list, None or int
    if not isinstance(dense_units, (list, int, type(None))):
        raise TypeError(
            f"`dense_units` argument must be a list or int. Got {type(dense_units)}."
        )
    if isinstance(dense_units, int):
        dense_units = [dense_units]

    # Recurrent units must be a list or int
    if not isinstance(recurrent_units, (list, int)):
        raise TypeError(
            f"`recurrent_units` argument must be a list or int. Got {type(recurrent_units)}."
        )
    if isinstance(recurrent_units, int):
        recurrent_units = [recurrent_units]

    # Lags, steps and levels must be int or list
    if not isinstance(lags, (int, list)):
        raise TypeError(f"`lags` argument must be a list or int. Got {type(lags)}.")
    if not isinstance(steps, (int, list)):
        raise TypeError(f"`steps` argument must be a list or int. Got {type(steps)}.")
    if not isinstance(levels, (str, int, list, type(None))):
        raise TypeError(
            f"`levels` argument must be a string, list or int. Got {type(levels)}."
        )

    if isinstance(lags, list):
        lags = len(lags)
    if isinstance(steps, list):
        steps = len(steps)
    if isinstance(levels, list):
        levels = len(levels)
    elif isinstance(levels, (str)):
        levels = 1
    elif isinstance(levels, type(None)):
        levels = series.shape[1]
    elif isinstance(levels, int):
        pass
    else:
        raise TypeError(
            f"`levels` argument must be a string, list or int. Got {type(levels)}."
        )

    input_layer = Input(shape=(lags, n_series))
    x = input_layer

    # Dynamically create multiple recurrent layers if recurrent_units is a list
    if isinstance(recurrent_units, list):
        for units in recurrent_units[:-1]:  # All layers except the last one
            if recurrent_layer == "LSTM":
                x = LSTM(units, activation=activation, return_sequences=True)(x)
            elif recurrent_layer == "RNN":
                x = SimpleRNN(units, activation=activation, return_sequences=True)(x)
            else:
                raise ValueError(f"Invalid recurrent layer: {recurrent_layer}")
        # Last layer without return_sequences
        if recurrent_layer == "LSTM":
            x = LSTM(recurrent_units[-1], activation=activation)(x)
        elif recurrent_layer == "RNN":
            x = SimpleRNN(recurrent_units[-1], activation=activation)(x)
        else:
            raise ValueError(f"Invalid recurrent layer: {recurrent_layer}")
    else:
        # Single recurrent layer
        if recurrent_layer == "LSTM":
            x = LSTM(recurrent_units, activation=activation)(x)
        elif recurrent_layer == "RNN":
            x = SimpleRNN(recurrent_units, activation=activation)(x)
        else:
            raise ValueError(f"Invalid recurrent layer: {recurrent_layer}")

    # Dense layers
    if dense_units is not None:
        for nn in dense_units:
            x = Dense(nn, activation=activation)(x)

    # Output layer
    x = Dense(levels * steps, activation="linear")(x)
    # model = Model(inputs=input_layer, outputs=x)
    output_layer = keras.layers.Reshape((steps, levels))(x)
    model = Model(inputs=input_layer, outputs=output_layer)

    # Compile the model if optimizer, loss or compile_kwargs are passed
    if optimizer is not None or loss is not None or compile_kwargs:
        # Give priority to the parameters passed directly to the function: if
        # `compile_kwargs` also includes `optimizer` or `loss`, remove them
        # from `compile_kwargs` and raise a warning.
        if "optimizer" in compile_kwargs.keys():
            compile_kwargs.pop("optimizer")
            warnings.warn("`optimizer` passed in `compile_kwargs`. Ignoring it.")
        if "loss" in compile_kwargs.keys():
            compile_kwargs.pop("loss")
            warnings.warn("`loss` passed in `compile_kwargs`. Ignoring it.")

        model.compile(optimizer=optimizer, loss=loss, **compile_kwargs)

    return model
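
The sketch below builds a stacked LSTM that predicts two of the toy series from the earlier examples 7 steps ahead; the default optimizer and loss compile the model, and the final Reshape layer gives the (steps, levels) output expected by ForecasterRnn.

# Sketch: a stacked LSTM predicting two series 7 steps ahead, using the toy
# `series` DataFrame from the earlier examples.
from skforecast.ForecasterRnn.utils import create_and_compile_model

model = create_and_compile_model(
    series=series,
    lags=24,
    steps=7,
    levels=["series_1", "series_2"],
    recurrent_layer="LSTM",
    recurrent_units=[64, 32],     # two stacked LSTM layers
    dense_units=[16, 8],          # two dense layers
    activation="relu",
)
model.summary()                   # output shape: (None, 7, 2)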