Scikit-learn-compatible interface for foundation time-series models.
Currently supports Amazon Chronos-2, Google TimesFM 2.5, Salesforce
Moirai-2, and TabICLv2. For full skforecast ecosystem integration
(backtesting, model selection, etc.) use ForecasterFoundation
instead.
Parameters
----------
model_id : str
    HuggingFace model ID. The adapter is resolved automatically from
    the model_id prefix. Available model IDs:

    - Amazon Chronos-2 (supports exog): 'amazon/chronos-2',
      'autogluon/chronos-2-small', 'autogluon/chronos-2-synth'
    - Google TimesFM 2.5 (does not support exog):
      'google/timesfm-2.5-200m-pytorch'
    - Salesforce Moirai-2 (does not support exog):
      'Salesforce/moirai-2.0-R-small'
    - TabICLv2 (supports exog): 'soda-inria/tabicl'
**kwargs : Any
    Additional keyword arguments forwarded to the underlying adapter.
    Valid keys depend on the adapter selected by model_id. See the
    corresponding adapter class (ChronosAdapter, TimesFMAdapter,
    MoiraiAdapter, TabICLAdapter) for the full parameter list, or
    refer to the model documentation linked in the References section
    below.
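The prefix-based adapter resolution described above can be sketched as follows. This is an illustrative reconstruction, not skforecast's actual dispatch code: the `PREFIX_TO_ADAPTER` mapping and `resolve_adapter` helper are hypothetical names, built only from the model IDs listed in this table.

```python
# Hypothetical sketch of resolving an adapter from the model_id prefix.
# The mapping below is illustrative; the real table lives in skforecast.
PREFIX_TO_ADAPTER = {
    "amazon/chronos": "ChronosAdapter",
    "autogluon/chronos": "ChronosAdapter",
    "google/timesfm": "TimesFMAdapter",
    "Salesforce/moirai": "MoiraiAdapter",
    "soda-inria/tabicl": "TabICLAdapter",
}

def resolve_adapter(model_id: str) -> str:
    # Prefix match: "autogluon/chronos-2-small" hits "autogluon/chronos".
    for prefix, adapter in PREFIX_TO_ADAPTER.items():
        if model_id.startswith(prefix):
            return adapter
    raise ValueError(f"No adapter registered for model_id {model_id!r}.")
```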
The underlying adapter instance, instantiated automatically based on
the model_id prefix. The concrete type depends on the model — e.g.
ChronosAdapter for autogluon/chronos-* models.
Per-series dict of pandas DataFrame containing the last context_length
exog variables from the training data, stored during fit. None if
the adapter does not support exogenous variables or no exog was
provided. Mirrors adapter.context_exog_.
Each adapter imports its own backend library lazily (i.e. inside the
method that first needs it) rather than at module level. This means
that only the library required by the adapter you actually use needs to
be installed; the other foundation-model backends remain optional.
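The lazy-import pattern just described can be sketched in miniature. This is a generic illustration, not skforecast's code: the class name is hypothetical, and the standard-library `json` module stands in for an optional backend such as `chronos`.

```python
# Sketch of lazy backend loading: the (optional) backend library is only
# imported inside the method that first needs it, so constructing the
# adapter never requires the backend to be installed.
class LazyBackendAdapter:
    def __init__(self, model_id: str) -> None:
        self.model_id = model_id
        self._pipeline = None  # backend not touched yet

    def predict(self):
        if self._pipeline is None:
            try:
                import json as backend  # stand-in for e.g. `import chronos`
            except ImportError as exc:
                raise ImportError(
                    "The backend library is required for this adapter."
                ) from exc
            self._pipeline = backend
        return self._pipeline
```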
Context stored during fit, used as default context for predict if no
override is provided.
Returns
-------
context_exog_ : dict[str, pandas DataFrame] or None
    Per-series dict of pandas DataFrame containing the last
    context_length exog variables from the training data, stored
    during fit. None if the adapter does not support exogenous
    variables or no exog was provided. Mirrors adapter.context_exog_.
def _check_preprocess_context(
    self,
    series: pd.Series | pd.DataFrame | dict[str, pd.Series],
    exog: (
        pd.Series | pd.DataFrame | dict[str, pd.DataFrame | pd.Series | None] | None
    ) = None,
) -> tuple[
    dict[str, pd.Series],
    dict[str, pd.Index],
    list[str],
    dict[str, pd.DataFrame | None] | None,
    list[str] | None,
]:
    """
    Normalize and validate context input to a per-series dict.

    Parameters
    ----------
    series : pandas Series, pandas DataFrame, dict
        Time series to normalize and validate.

        - If `pandas Series`: single-series mode.
        - If wide `pandas DataFrame` or `dict[str, pandas Series]`:
          multi-series mode.
    exog : pandas Series, pandas DataFrame, dict, default None
        Exogenous variables aligned to `series`.

        - If `pandas Series` or `pandas DataFrame`: broadcast to all series.
        - If `dict`: per-series exogenous variables.

    Returns
    -------
    context : dict
        Per-series dict of pandas Series, trimmed to the last
        `context_length` observations.
    series_indexes : dict
        Index of each series before trimming.
    series_names_in_ : list
        Names of the series.
    context_exog : dict or None
        Per-series dict of exogenous DataFrames trimmed to the last
        `context_length` observations. `None` if `exog` is `None`.
    exog_names_in_ : list or None
        Names of the exogenous variables. `None` if `exog` is `None`.

    """
    series_dict, series_indexes = check_preprocess_series_foundation(series)
    series_names_in_ = list(series_dict.keys())

    if exog is not None:
        exog_dict, exog_names_in_ = check_preprocess_exog_multiseries(
            series_names_in_  = series_names_in_,
            series_index_type = type(series_indexes[series_names_in_[0]]),
            exog              = exog,
            exog_dict         = {name: None for name in series_names_in_},
        )
        # NOTE: As no trim is applied to the series, it is only needed to
        # align exog.
        series_dict, exog_dict = align_series_and_exog_multiseries(
            series_dict     = series_dict,
            exog_dict       = exog_dict,
            trim_series_nan = False,
        )

    context = {
        name: s.iloc[-self.context_length:] for name, s in series_dict.items()
    }

    if exog is not None:
        context_exog = {
            name: (e.iloc[-self.context_length:] if e is not None else None)
            for name, e in exog_dict.items()
        }
    else:
        context_exog = None
        exog_names_in_ = None

    return context, series_indexes, series_names_in_, context_exog, exog_names_in_
def fit(
    self,
    series: pd.Series | pd.DataFrame | dict[str, pd.Series],
    exog: (
        pd.Series | pd.DataFrame | dict[str, pd.DataFrame | pd.Series | None] | None
    ) = None,
) -> FoundationModel:
    """
    Fit the model by storing the training series and optional exog.

    Parameters
    ----------
    series : pandas Series, pandas DataFrame, dict
        Training time series.

        - If `pandas Series`: single-series mode.
        - If wide `pandas DataFrame` (each column = one series):
          multi-series mode.
        - If `dict[str, pandas Series]`: multi-series mode; keys are
          series names.
    exog : pandas Series, pandas DataFrame, dict, default None
        Historical exogenous variables aligned to `series`.

        - If `pandas Series` or `pandas DataFrame`: broadcast to all series.
        - If `dict`: per-series exogenous variables.

    Returns
    -------
    self : FoundationModel

    """
    self.index_type_               = None
    self.index_freq_               = None
    self.context_range_            = None
    self.series_names_in_          = None
    self.is_multiple_series_       = False
    self.exog_in_                  = False
    self.exog_names_in_            = None
    self.exog_names_in_per_series_ = None
    self.exog_type_in_             = None
    self.fit_date                  = None

    context, series_indexes, series_names_in_, context_exog, exog_names_in_ = (
        self._check_preprocess_context(series=series, exog=exog)
    )

    self.adapter.fit(context=context, context_exog=context_exog)

    self.series_names_in_ = series_names_in_
    self.is_multiple_series_ = len(series_names_in_) > 1
    if context_exog is not None and len(exog_names_in_) > 0:
        self.exog_in_ = True
        self.exog_names_in_ = exog_names_in_
        self.exog_names_in_per_series_ = {
            k: list(v.columns) if v is not None else None
            for k, v in context_exog.items()
        }
        self.exog_type_in_ = type(exog)

    self.fit_date = pd.Timestamp.today().strftime('%Y-%m-%d %H:%M:%S')
    self.context_range_ = {k: v[[0, -1]] for k, v in series_indexes.items()}
    self.index_type_ = type(series_indexes[series_names_in_[0]])
    if isinstance(series_indexes[series_names_in_[0]], pd.DatetimeIndex):
        self.index_freq_ = series_indexes[series_names_in_[0]].freq
    else:
        self.index_freq_ = series_indexes[series_names_in_[0]].step

    return self
@staticmethod
def _exog_to_dict(
    exog: pd.Series | pd.DataFrame | dict[str, pd.DataFrame | pd.Series | None],
    series_names_in: list[str],
) -> dict[str, pd.DataFrame | pd.Series | None]:
    """
    Normalize any supported exog format into a per-series dict.

    Parameters
    ----------
    exog : pandas Series, pandas DataFrame, dict
        Future exogenous variables in any supported format.

        - If `pandas Series` (flat index): broadcast to all series.
        - If `pandas Series` (MultiIndex): converted to dict, then keyed
          per series.
        - If `pandas DataFrame` (flat index): broadcast to all series.
        - If `pandas DataFrame` (MultiIndex / long-format): converted to
          dict per series ID.
        - If `dict`: used directly, missing series keys filled as `None`.
    series_names_in : list[str]
        Series names that define the output dict keys.

    Returns
    -------
    exog_dict : dict
        Per-series dict with exactly the keys in `series_names_in`.

    """
    if isinstance(exog, dict):
        return {name: exog.get(name, None) for name in series_names_in}

    if isinstance(exog, pd.Series):
        if isinstance(exog.index, pd.MultiIndex):
            exog = exog.to_frame()
        else:
            return {name: exog for name in series_names_in}

    # At this point exog is always a DataFrame (original or coerced)
    if isinstance(exog.index, pd.MultiIndex):
        if not isinstance(exog.index.levels[1], pd.DatetimeIndex):
            raise TypeError(
                "The second level of the MultiIndex in `exog` must be a "
                "pandas DatetimeIndex. "
                f"Found {type(exog.index.levels[1])}."
            )
        per_series = {
            sid: group.droplevel(0)
            for sid, group in exog.groupby(level=0, sort=False)
        }
        warnings.warn(
            "Passing a long-format DataFrame as `exog` requires "
            "additional internal transformations, which can increase "
            "computational time. It is recommended to use a dictionary "
            "of pandas Series or DataFrames instead.",
            InputTypeWarning,
            stacklevel=5,
        )
        return {name: per_series.get(name, None) for name in series_names_in}

    return {name: exog for name in series_names_in}
Normalize, broadcast, and align future exogenous variables to the
forecast horizon in a single pass.

Performs the full pipeline for future exog:

1. Type coercion: long-format MultiIndex Series/DataFrame is
   converted to a dict keyed by series ID.
2. Broadcast / dict normalisation: flat Series or DataFrame is
   broadcast to every series; a dict is filled with None for
   missing keys; None input produces an all-None dict.
3. Temporal alignment: each per-series exog is aligned to the
   forecast horizon using the resolved context. For DatetimeIndex
   data, exog is reindexed to the exact expected range (NaN-filling
   gaps). For other index types a length check and optional
   RangeIndex start verification are applied.

This function is self-contained: it does not depend on any
metadata stored at fit time. Alignment is driven entirely by the
context that will be used for prediction.
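The DatetimeIndex branch of step 3 can be sketched with plain pandas. This is a minimal illustration of the reindexing behaviour described above; the variable names (`context`, `future_exog`) are illustrative, not part of the skforecast API.

```python
import pandas as pd

# Sketch of DatetimeIndex alignment: the expected horizon starts one
# frequency step after the context ends, and exog is reindexed to that
# exact range, with missing timestamps becoming NaN.
steps = 3
context = pd.Series(
    [1.0, 2.0, 3.0],
    index=pd.date_range("2024-01-01", periods=3, freq="D"),
)
freq = context.index.freq
expected_idx = pd.date_range(
    start=context.index[-1] + freq, periods=steps, freq=freq
)

# Future exog missing the middle timestamp -> that row is NaN-filled.
future_exog = pd.DataFrame(
    {"temp": [10.0, 12.0]},
    index=pd.DatetimeIndex(["2024-01-04", "2024-01-06"]),
)
aligned = future_exog.reindex(expected_idx)
```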
Parameters
----------
steps : int
    Number of steps ahead to forecast.
context : dict[str, pandas Series]
    Per-series resolved context. Each value is a pandas Series whose
    index provides the reference end-point and frequency for
    alignment.
exog : pandas Series, pandas DataFrame, dict, default None
    Future exogenous variables in any supported format.

    - If None: returns {name: None ...} for every series.
    - If pandas Series (flat index): broadcast to all series.
    - If pandas Series (MultiIndex): converted to dict, then keyed
      per series.
    - If pandas DataFrame (flat index): broadcast to all series.
    - If pandas DataFrame (MultiIndex / long-format): converted to
      dict per series ID.
    - If dict: used directly, missing series keys filled as None.
series_names_in : list[str]
    Series names that define the output dict keys.

Returns
-------
exog_aligned : dict
    Per-series dict with exactly the keys in series_names_in. Each
    non-None value is a pandas DataFrame with exactly steps rows
    aligned to the forecast horizon. Series inputs are coerced to
    single-column DataFrames.

Raises
------
TypeError
    If exog is a long-format DataFrame whose second MultiIndex
    level is not a DatetimeIndex, or if exog is an unsupported
    type.
ValueError
    If a non-DatetimeIndex exog has fewer than steps rows, or if
    a RangeIndex exog does not start at the expected position.
Source code in skforecast/foundation/_foundation_model.py
def _prepare_future_exog(
    self,
    steps: int,
    context: dict[str, pd.Series],
    exog: (
        pd.Series | pd.DataFrame | dict[str, pd.DataFrame | pd.Series | None] | None
    ),
    series_names_in: list[str],
) -> dict[str, pd.DataFrame | None]:
    """
    Normalize, broadcast, and align future exogenous variables to the
    forecast horizon in a single pass.

    Performs the full pipeline for future exog:

    1. **Type coercion**: long-format MultiIndex Series/DataFrame is
       converted to a dict keyed by series ID.
    2. **Broadcast / dict normalisation**: flat Series or DataFrame is
       broadcast to every series; a dict is filled with `None` for
       missing keys; `None` input produces an all-None dict.
    3. **Temporal alignment**: each per-series exog is aligned to the
       forecast horizon using the resolved context. For `DatetimeIndex`
       data, exog is reindexed to the exact expected range (NaN-filling
       gaps). For other index types a length check and optional
       `RangeIndex` start verification are applied.

    This function is self-contained — it does not depend on any
    metadata stored at `fit` time. Alignment is driven entirely by the
    context that will be used for prediction.

    Parameters
    ----------
    steps : int
        Number of steps ahead to forecast.
    context : dict[str, pandas Series]
        Per-series resolved context. Each value is a pandas Series whose
        index provides the reference end-point and frequency for
        alignment.
    exog : pandas Series, pandas DataFrame, dict, default None
        Future exogenous variables in any supported format.

        - If `None`: returns `{name: None ...}` for every series.
        - If `pandas Series` (flat index): broadcast to all series.
        - If `pandas Series` (MultiIndex): converted to dict, then
          keyed per series.
        - If `pandas DataFrame` (flat index): broadcast to all series.
        - If `pandas DataFrame` (MultiIndex / long-format): converted
          to dict per series ID.
        - If `dict`: used directly, missing series keys filled as
          `None`.
    series_names_in : list[str]
        Series names that define the output dict keys.

    Returns
    -------
    exog_aligned : dict
        Per-series dict with exactly the keys in `series_names_in`. Each
        non-None value is a pandas DataFrame with exactly `steps` rows
        aligned to the forecast horizon. Series inputs are coerced to
        single-column DataFrames.

    Raises
    ------
    TypeError
        If `exog` is a long-format DataFrame whose second MultiIndex
        level is not a `DatetimeIndex`, or if `exog` is an unsupported
        type.
    ValueError
        If a non-DatetimeIndex exog has fewer than `steps` rows, or if
        a `RangeIndex` exog does not start at the expected position.

    """
    # Early return: no exog provided
    if exog is None:
        return {name: None for name in series_names_in}

    # Type guard
    if not isinstance(exog, (pd.Series, pd.DataFrame, dict)):
        raise TypeError(
            f"`exog` must be a pandas Series, DataFrame, dict, or None. "
            f"Got {type(exog)}."
        )

    # Normalize any input format (Series, DataFrame, dict) into a
    # per-series dict keyed by series name.
    exog_dict = self._exog_to_dict(exog, series_names_in)

    # Determine index type and freq once from the first context series.
    # All series share the same type and freq, and are non-empty with a
    # valid freq/step (guaranteed by check_preprocess_series upstream).
    first_ctx = next(iter(context.values()))
    is_datetime_ctx = isinstance(first_ctx.index, pd.DatetimeIndex)
    freq = first_ctx.index.freq if is_datetime_ctx else first_ctx.index.step

    # Align each series' exog to its forecast horizon
    exog_aligned = {}
    nan_filled_series = []
    for name in series_names_in:
        e = exog_dict.get(name)
        if e is None:
            exog_aligned[name] = None
            continue
        if isinstance(e, pd.Series):
            e = e.to_frame()

        ctx = context[name]
        ref_end = ctx.index[-1]
        label = f"`exog` for series '{name}'"

        # DatetimeIndex: reindex to the exact expected date range,
        # filling gaps with NaN.
        if is_datetime_ctx and isinstance(e.index, pd.DatetimeIndex):
            expected_idx = pd.date_range(
                start=ref_end + freq, periods=steps, freq=freq
            )
            e_aligned = e.reindex(expected_idx)
            if e_aligned.isnull().any(axis=None):
                nan_filled_series.append(name)
            exog_aligned[name] = e_aligned
        else:
            # RangeIndex / other: length check + optional start validation,
            # then truncate to the forecast horizon.
            if len(e) < steps:
                raise ValueError(
                    f"{label} must have at least {steps} values. "
                    f"Got {len(e)}."
                )
            if isinstance(e.index, pd.RangeIndex):
                expected_start = ref_end + freq
                if e.index[0] != expected_start:
                    raise ValueError(
                        f"To make predictions {label} must start one step "
                        f"ahead of `context`.\n"
                        f"    `context` ends at: {ref_end}.\n"
                        f"    {label} starts at: {e.index[0]}.\n"
                        f"    Expected index: {expected_start}."
                    )
            exog_aligned[name] = e.iloc[:steps]

    # Batch warning for all series whose exog had missing timestamps
    if nan_filled_series:
        warnings.warn(
            f"`exog` for series {nan_filled_series} has been reindexed "
            f"to match the expected forecast horizon. Missing timestamps "
            f"were filled with NaN.",
            MissingValuesWarning,
            stacklevel=4,
        )

    return exog_aligned
levels : str, list, default None
    Subset of series to predict. If None, all series in context are
    predicted.
context : pandas Series, pandas DataFrame, dict, default None
    Override the stored context with this window.

    - If pandas Series: single-series override.
    - If wide pandas DataFrame or dict[str, pandas Series]:
      multi-series override.
context_exog : pandas Series, pandas DataFrame, dict, default None
    Historical exog corresponding to context.
exog : pandas Series, pandas DataFrame, dict, default None
    Future known exogenous variables for the forecast horizon.

    - If pandas Series or pandas DataFrame: broadcast to all series.
    - If dict: per-series exogenous variables.
quantiles : list, tuple, default None
    Quantile levels to return, e.g. [0.1, 0.5, 0.9]. If None,
    returns a point forecast (median).
check_inputs : bool, default True
    If True, the context and context_exog inputs are validated
    and normalized via _check_preprocess_context. If False,
    context must already be a dict[str, pandas Series] and
    context_exog must be a dict[str, pandas DataFrame | None]
    or None. This argument is created for internal use and is not
    recommended to be changed.
Returns
-------
predictions : pandas DataFrame
    Value of predictions. The DataFrame includes the following columns:

    - level: Name of the series.
    - pred: Predicted values (point forecast, median).

    If quantiles is not None, the pred column is replaced by
    one column per quantile level (e.g., q_0.1, q_0.5, q_0.9).
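The long-format layout described here (one row per step and series, step-major) can be assembled as in the sketch below. The toy prediction values and variable names are illustrative; this mirrors the assembly idea, not skforecast's exact code.

```python
import numpy as np
import pandas as pd

# Sketch of building the long-format output: the level column is tiled
# step-major, and per-series prediction arrays of shape (steps, n_cols)
# are packed into one matrix whose ravel order matches the level column.
steps, series_names = 2, ["s1", "s2"]
raw = {"s1": np.array([[1.0], [2.0]]), "s2": np.array([[10.0], [20.0]])}

level_col = np.tile(series_names, steps)        # s1, s2, s1, s2
pred_matrix = np.empty((steps, len(series_names), 1))
for i, name in enumerate(series_names):
    pred_matrix[:, i, :] = raw[name]
pred_matrix = pred_matrix.reshape(-1, 1)        # step-major flattening

predictions = pd.DataFrame({"level": level_col, "pred": pred_matrix[:, 0]})
```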
Notes
-----
Foundation models are pre-trained and do not learn from the data passed
to fit. The fit method only stores context (the last context_length
observations) and metadata. This leads to four distinct behaviors
depending on the combination of is_fitted and context:

- Not fitted, context=None: raises ValueError. There is no context
  available for prediction.
- Fitted, context=None: uses the context and context_exog_ stored
  during fit. If the user supplies context_exog, it is ignored with a
  warning.
- Not fitted, context provided (zero-shot mode): the model uses
  context and context_exog (if provided) as context for prediction.
- Fitted, context provided: stored context is ignored; the provided
  context and context_exog (if provided) are used for prediction.
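The four fitted/context combinations reduce to a small dispatch, sketched below with a hypothetical `resolve_context` helper (not part of the skforecast API):

```python
# Sketch of the fitted/context dispatch: user-supplied context always
# wins; the stored context is only a fallback for fitted models.
def resolve_context(is_fitted, stored_context, context):
    if not is_fitted and context is None:
        raise ValueError("Call `fit` before `predict`, or pass `context`.")
    if context is not None:
        return context          # zero-shot mode or override
    return stored_context       # fitted, no override
```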
Source code in skforecast/foundation/_foundation_model.py
def predict(
    self,
    steps: int,
    levels: str | list[str] | None = None,
    context: pd.Series | pd.DataFrame | dict[str, pd.Series] | None = None,
    context_exog: (
        pd.Series | pd.DataFrame | dict[str, pd.DataFrame | pd.Series | None] | None
    ) = None,
    exog: (
        pd.Series | pd.DataFrame | dict[str, pd.DataFrame | pd.Series | None] | None
    ) = None,
    quantiles: list[float] | tuple[float] | None = None,
    check_inputs: bool = True,
) -> pd.DataFrame:
    """
    Predict n steps ahead.

    Parameters
    ----------
    steps : int
        Number of steps ahead to forecast.
    levels : str, list, default None
        Subset of series to predict. If `None`, all series in `context`
        are predicted.
    context : pandas Series, pandas DataFrame, dict, default None
        Override the stored context with this window.

        - If `pandas Series`: single-series override.
        - If wide `pandas DataFrame` or `dict[str, pandas Series]`:
          multi-series override.
    context_exog : pandas Series, pandas DataFrame, dict, default None
        Historical exog corresponding to `context`.
    exog : pandas Series, pandas DataFrame, dict, default None
        Future known exogenous variables for the forecast horizon.

        - If `pandas Series` or `pandas DataFrame`: broadcast to all
          series.
        - If `dict`: per-series exogenous variables.
    quantiles : list, tuple, default None
        Quantile levels to return, e.g. `[0.1, 0.5, 0.9]`. If `None`,
        returns a point forecast (median).
    check_inputs : bool, default True
        If `True`, the `context` and `context_exog` inputs are validated
        and normalized via `_check_preprocess_context`. If `False`,
        `context` must already be a `dict[str, pandas Series]` and
        `context_exog` must be a `dict[str, pandas DataFrame | None]`
        or `None`. This argument is created for internal use and is not
        recommended to be changed.

    Returns
    -------
    predictions : pandas DataFrame
        Value of predictions. The DataFrame includes the following columns:

        - level: Name of the series.
        - pred: Predicted values (point forecast, median).

        If `quantiles` is not `None`, the `pred` column is replaced by
        one column per quantile level (e.g., `q_0.1`, `q_0.5`, `q_0.9`).

    Notes
    -----
    Foundation models are pre-trained and do not learn from the data
    passed to `fit`. The `fit` method only stores context (the last
    `context_length` observations) and metadata. This leads to four
    distinct behaviors depending on the combination of `is_fitted` and
    `context`:

    - **Not fitted, `context=None`**: raises `ValueError`. There is no
      context available for prediction.
    - **Fitted, `context=None`**: uses the context and `context_exog_`
      stored during `fit`. If the user supplies `context_exog`, it is
      ignored with a warning.
    - **Not fitted, `context` provided (zero-shot mode)**: The model uses
      `context` and `context_exog` (if provided) as context for
      prediction.
    - **Fitted, `context` provided**: Stored context is ignored, the
      provided `context` and `context_exog` (if provided) are used for
      prediction.

    """
    if not self.is_fitted and context is None:
        raise ValueError("Call `fit` before `predict`, or pass `context`.")
    if not isinstance(steps, (int, np.integer)) or steps < 1:
        raise ValueError("`steps` must be a positive integer.")

    if quantiles is not None:
        if not isinstance(quantiles, (list, tuple)):
            raise TypeError(
                "`quantiles` must be a `list` or `tuple`. For example, quantiles "
                "0.1, 0.5, and 0.9 should be as `quantiles = [0.1, 0.5, 0.9]`."
            )
        for q in quantiles:
            if not 0.0 <= q <= 1.0:
                raise ValueError(
                    f"All quantiles must be between 0 and 1. Got {q}."
                )

    # Context (past data)
    if context is None:
        if context_exog is not None:
            warnings.warn(
                "`context_exog` is ignored when `context` is not provided. "
                "The stored `context_exog_` from `fit` is used instead.",
                IgnoredArgumentWarning,
                stacklevel=3,
            )
        context = self.adapter.context_
        series_names_in = self.series_names_in_
        context_exog = self.adapter.context_exog_
    elif check_inputs:
        context, _, series_names_in, context_exog, _ = self._check_preprocess_context(
            series=context,
            exog=context_exog,
        )
    else:
        series_names_in = list(context.keys())

    if levels is not None:
        requested_levels = [levels] if isinstance(levels, str) else list(levels)
        unknown = [lv for lv in requested_levels if lv not in series_names_in]
        if unknown:
            raise ValueError(
                f"`levels` {unknown} not found in available series "
                f"{list(series_names_in)}."
            )
        series_names_in = requested_levels
        context = {name: context[name] for name in requested_levels}
        if context_exog is not None:
            context_exog = {
                name: context_exog.get(name) for name in requested_levels
            }

    # Future exog
    if not self.allow_exog:
        has_exog = (exog is not None) or (context_exog is not None)
        if has_exog:
            warnings.warn(
                f"{type(self.adapter).__name__} does not currently "
                "support covariates. `exog` and `context_exog` "
                "are ignored.",
                IgnoredArgumentWarning,
                stacklevel=3,
            )
            exog = None
            context_exog = None
    else:
        if check_inputs:
            exog = self._prepare_future_exog(
                steps=steps,
                context=context,
                exog=exog,
                series_names_in=series_names_in,
            )

    # Adapter returns dict[str, np.ndarray] with shape (steps, n_q)
    raw_predictions = self.adapter.predict(
        steps=steps,
        context=context,
        context_exog=context_exog,
        exog=exog,
        quantiles=quantiles,
    )

    # Build long-format DataFrame from raw predictions
    n_series = len(series_names_in)
    per_series_indices = [
        expand_index(context[name].index, steps=steps)
        for name in series_names_in
    ]
    if n_series == 1:
        long_index = per_series_indices[0]
    else:
        idx_arr = np.column_stack(
            [idx.to_numpy() for idx in per_series_indices]
        ).ravel()
        long_index = (
            pd.DatetimeIndex(idx_arr)
            if isinstance(per_series_indices[0], pd.DatetimeIndex)
            else pd.Index(idx_arr)
        )
    level_col = np.tile(series_names_in, steps)

    col_names = ["pred"] if quantiles is None else [f"q_{q}" for q in quantiles]
    n_cols = len(col_names)

    # Pre-allocate (steps, n_series, n_cols), fill per series, then reshape
    # to step-major (steps * n_series, n_cols) — one allocation instead of
    # one per quantile, and the ravel order matches level_col / long_index.
    pred_matrix = np.empty((steps, n_series, n_cols), dtype=np.float64)
    for i, name in enumerate(series_names_in):
        pred_matrix[:, i, :] = raw_predictions[name]
    pred_matrix = pred_matrix.reshape(steps * n_series, n_cols)

    predictions: dict[str, np.ndarray] = {"level": level_col}
    for j, col in enumerate(col_names):
        predictions[col] = pred_matrix[:, j]
    predictions = pd.DataFrame(predictions, index=long_index)

    return predictions
Get parameters for this estimator (sklearn-compatible).

Parameters
----------
deep : Any, default None
    Not used, present here for API consistency by convention.

Returns
-------
params : dict
    Parameter names mapped to their current values.

Notes
-----
Required so that sklearn.base.clone can create an unfitted copy
of this object, which is used internally by deepcopy_forecaster
during backtesting. The pre-loaded pipeline is intentionally excluded
so that clones are created without copying heavy model weights; the
pipeline is reloaded lazily on the first predict call.
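The clone mechanism relies on get_params alone: sklearn-style cloning rebuilds an unfitted estimator as `type(est)(**est.get_params())`, so anything heavy kept out of the params dict never gets copied. A minimal sketch with a hypothetical estimator class:

```python
# Sketch of why get_params excludes the pipeline: cloning reconstructs
# the object purely from get_params, so heavy state is not duplicated.
class TinyEstimator:
    def __init__(self, model_id="amazon/chronos-2"):
        self.model_id = model_id
        self._pipeline = "heavy model weights"  # never exposed via get_params

    def get_params(self, deep=None):
        return {"model_id": self.model_id}

est = TinyEstimator(model_id="autogluon/chronos-2-small")
clone = type(est)(**est.get_params())  # fresh, unfitted copy
```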
Source code in skforecast/foundation/_foundation_model.py
def get_params(self, deep: Any = None) -> dict:
    """
    Get parameters for this estimator (sklearn-compatible).

    Parameters
    ----------
    deep : Any, default None
        Not used, present here for API consistency by convention.

    Returns
    -------
    params : dict
        Parameter names mapped to their current values.

    Notes
    -----
    Required so that `sklearn.base.clone` can create an unfitted copy
    of this object, which is used internally by `deepcopy_forecaster`
    during backtesting. The pre-loaded pipeline is intentionally excluded
    so that clones are created without copying heavy model weights; the
    pipeline is reloaded lazily on the first `predict` call.

    """
    return self.adapter.get_params()
def set_params(self, **params) -> FoundationModel:
    """
    Set parameters for this estimator (sklearn-compatible). After calling
    this method, the FoundationModel is reset to an unfitted state.

    Parameters
    ----------
    **params :
        Estimator parameters forwarded to the underlying adapter's
        `set_params`. Use `model_id` to change the model ID. All other
        keys are adapter-specific.

    Returns
    -------
    self : FoundationModel
        The same object with updated parameters.

    """
    try:
        self.adapter.set_params(**params)
    except ValueError as exc:
        raise ValueError(
            str(exc).replace(type(self.adapter).__name__, "FoundationModel")
        ) from exc

    self.index_type_               = None
    self.index_freq_               = None
    self.context_range_            = None
    self.series_names_in_          = None
    self.is_multiple_series_       = False
    self.exog_in_                  = False
    self.exog_names_in_            = None
    self.exog_names_in_per_series_ = None
    self.exog_type_in_             = None
    self.fit_date                  = None
    self.adapter.context_          = None
    self.adapter.context_exog_     = None
    self.adapter.is_fitted         = False

    return self
model_id : str
    HuggingFace model ID, e.g. "autogluon/chronos-2-small".
pipeline : BaseChronosPipeline, default None
    Pre-loaded pipeline instance. If None, the pipeline is loaded
    lazily on the first call to predict.
context_length : int, default 8192
    Maximum number of historical observations to use as context. At fit
    time only the last context_length observations are stored. At
    predict time, if context is longer than context_length it is
    trimmed to this length; if it is shorter, all available observations
    are used as-is. Defaults to 8192, which matches the maximum context
    window of Chronos. Must be a positive integer.
predict_kwargs : dict, default None
    Additional keyword arguments forwarded to the pipeline's
    predict_quantiles method.
device_map : str, default 'auto'
    Device placement for the model. "auto" selects the best
    available accelerator (CUDA > MPS > CPU). Also accepts explicit
    values such as "cuda", "mps", or "cpu", forwarded to
    BaseChronosPipeline.from_pretrained.
torch_dtype : object, default None
    Torch dtype forwarded to BaseChronosPipeline.from_pretrained.
cross_learning : bool, default False
    If True, Chronos shares information across all series in
    the batch when predicting in multi-series mode. Forwarded
    directly to predict_quantiles. Ignored in single-series mode.
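The context_length trimming behaviour can be illustrated with plain pandas. This is a sketch of the rule described above (trailing window kept, shorter series passed through); the small limit is purely illustrative.

```python
import pandas as pd

# Sketch of context_length trimming: keep only the trailing window;
# series shorter than the limit pass through unchanged.
context_length = 5
long_series = pd.Series(range(10))
short_series = pd.Series(range(3))

trimmed_long = long_series.iloc[-context_length:]
trimmed_short = short_series.iloc[-context_length:]  # shorter: used as-is
```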
def __init__(
    self,
    model_id: str,
    *,
    pipeline: Any | None = None,
    context_length: int = 8192,
    predict_kwargs: dict[str, Any] | None = None,
    device_map: str = "auto",
    torch_dtype: Any | None = None,
    cross_learning: bool = False,
) -> None:
    """
    Initialise the adapter.

    Parameters
    ----------
    model_id : str
        HuggingFace model ID, e.g. "autogluon/chronos-2-small".
    pipeline : BaseChronosPipeline, default None
        Pre-loaded pipeline instance. If `None`, the pipeline is loaded
        lazily on the first call to `predict`.
    context_length : int, default 8192
        Maximum number of historical observations to retain as context.
        At `fit` time only the last `context_length` observations of
        `series` (and `exog`) are stored. At `predict` time, if
        `context` is longer than `context_length` it is trimmed to this
        length before inference; if it is shorter, all available
        observations are passed as-is and the model handles reduced
        context gracefully. Defaults to 8192, which matches the maximum
        context window of Chronos. Must be a positive integer.
    predict_kwargs : dict, default None
        Additional keyword arguments forwarded verbatim to the
        pipeline's `predict_quantiles` method.
    device_map : str, default 'auto'
        Device placement for the model. `"auto"` selects the best
        available accelerator (CUDA > MPS > CPU). Also accepts explicit
        values such as `"cuda"`, `"mps"`, or `"cpu"`, forwarded to
        `BaseChronosPipeline.from_pretrained`.
    torch_dtype : object, default None
        Torch dtype forwarded to `BaseChronosPipeline.from_pretrained`
        (e.g. `torch.bfloat16`).
    cross_learning : bool, default False
        If `True`, Chronos shares information across all series in the
        batch when predicting in multi-series mode. Forwarded directly
        to `predict_quantiles`. Ignored in single-series mode.

    """
    if not isinstance(context_length, int) or context_length < 1:
        raise ValueError(
            f"`context_length` must be a positive integer. "
            f"Got {context_length!r}."
        )

    self.model_id       = model_id
    self._pipeline      = pipeline
    self.context_       = None
    self.context_exog_  = None
    self.context_length = context_length
    self.predict_kwargs = predict_kwargs or {}
    self.device_map     = device_map
    self.torch_dtype    = torch_dtype
    self.cross_learning = cross_learning
    self.is_fitted      = False
def set_params(self, **params) -> ChronosAdapter:
    """
    Set adapter parameters. Resets the pipeline when a device or dtype
    param changes, since those are baked into the loaded pipeline.

    Parameters
    ----------
    **params :
        Valid keys: `model_id`, `cross_learning`, `context_length`,
        `device_map`, `torch_dtype`, `predict_kwargs`.

    Returns
    -------
    self : ChronosAdapter

    """
    valid = {
        'model_id',
        'cross_learning',
        'context_length',
        'device_map',
        'torch_dtype',
        'predict_kwargs',
    }
    invalid = set(params) - valid
    if invalid:
        raise ValueError(
            f"Invalid parameter(s) for ChronosAdapter: {sorted(invalid)}. "
            f"Valid parameters are: {sorted(valid)}."
        )

    pipeline_reset_keys = {'model_id', 'device_map', 'torch_dtype'}
    if params.keys() & pipeline_reset_keys:
        self._pipeline = None

    for key, value in params.items():
        if key == 'predict_kwargs':
            self.predict_kwargs = value or {}
        elif key == 'context_length':
            if not isinstance(value, int) or value < 1:
                raise ValueError(
                    f"`context_length` must be a positive integer. Got {value!r}."
                )
            self.context_length = value
        else:
            setattr(self, key, value)

    return self
def fit(
    self,
    context: dict[str, pd.Series],
    context_exog: dict[str, pd.DataFrame | pd.Series | None],
) -> ChronosAdapter:
    """
    Store the training series and optional historical exogenous variables.
    No model training occurs since Chronos is a zero-shot inference model.

    All input normalization and validation is performed upstream by
    `FoundationModel`; this method receives canonical dicts only.

    Parameters
    ----------
    context : dict of pandas Series
        Normalized training series, one entry per series.
    context_exog : dict of pandas DataFrame, pandas Series, or None
        Per-series historical exogenous variables (past covariates).

    Returns
    -------
    self : ChronosAdapter

    """
    self.context_ = context
    self.context_exog_ = context_exog
    self.is_fitted = True

    return self
def predict(
    self,
    steps: int,
    context: dict[str, pd.Series],
    context_exog: dict[str, pd.DataFrame | pd.Series | None],
    exog: dict[str, pd.DataFrame | pd.Series | None],
    quantiles: list[float] | tuple[float] | None,
) -> dict[str, np.ndarray]:
    """
    Generate predictions using the Chronos pipeline.

    All input normalization, validation, and context trimming is
    performed upstream by `FoundationModel`; this method receives
    pre-processed dicts only.

    Parameters
    ----------
    steps : int
        Number of steps ahead to forecast.
    context : dict
        Per-series context windows (already trimmed to `context_length`).
    context_exog : dict
        Per-series past covariates (already trimmed).
    exog : dict
        Per-series future covariates for the forecast horizon.
    quantiles : list of float or None
        Quantile levels to return. If `None`, a point forecast (median,
        quantile 0.5) is produced.

    Returns
    -------
    predictions : dict
        Keys are series names. Each value is a 2-D array of shape
        `(steps, n_quantiles)`.

    """
    # NOTE: the pipeline is loaded lazily here so that the adapter can be
    # instantiated and fitted without requiring Chronos to be installed.
    self._load_pipeline()

    series_names_in = list(context.keys())
    quantile_levels = list(quantiles) if quantiles is not None else [0.5]

    inputs_list = [
        self._build_chronos_input(
            context=context[name].to_numpy(),
            context_exog=context_exog[name] if context_exog is not None else None,
            exog=exog[name] if exog is not None else None,
        )
        for name in series_names_in
    ]

    quantile_preds, _ = self._pipeline.predict_quantiles(
        inputs=inputs_list,
        prediction_length=steps,
        quantile_levels=quantile_levels,
        cross_learning=self.cross_learning if len(series_names_in) > 1 else False,
        **self.predict_kwargs,
    )

    predictions: dict[str, np.ndarray] = {}
    for i, name in enumerate(series_names_in):
        q_arr = quantile_preds[i].squeeze(0)
        if hasattr(q_arr, "detach"):
            q_arr = q_arr.detach().cpu().numpy()
        else:
            q_arr = np.asarray(q_arr)
        predictions[name] = q_arr

    return predictions
def _load_pipeline(self) -> None:
    """
    Load the Chronos pipeline into `self._pipeline` if not already set.

    Returns
    -------
    None

    Raises
    ------
    ImportError
        If `chronos-forecasting` >=2.0 is not installed.

    Notes
    -----
    The pipeline is imported lazily from `chronos` and instantiated via
    `BaseChronosPipeline.from_pretrained`, which auto-dispatches to the
    correct pipeline class based on the model config. Optional
    `device_map` and `torch_dtype` stored at initialisation are
    forwarded to the constructor. This method is a no-op when
    `self._pipeline` is already populated.
    """
    if self._pipeline is not None:
        return

    try:
        from chronos import BaseChronosPipeline
    except ImportError as exc:
        raise ImportError(
            "chronos-forecasting >=2.0 is required. "
            "Install it with `pip install chronos-forecasting`."
        ) from exc

    kwargs: dict[str, Any] = {}
    kwargs["device_map"] = self.device_map
    if self.torch_dtype is not None:
        kwargs["torch_dtype"] = self.torch_dtype

    self._pipeline = BaseChronosPipeline.from_pretrained(self.model_id, **kwargs)
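The lazy-import convention used throughout these adapters can be sketched in isolation. The helper name `lazy_import` below is hypothetical (the adapters inline this try/except pattern rather than sharing a helper); it only illustrates why a missing backend library does not surface until the first call that needs it.

```python
import importlib


def lazy_import(module_name: str, install_hint: str):
    """Import a backend library on first use, with an actionable error."""
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"`{module_name}` is required. Install it with `{install_hint}`."
        ) from exc


# An available module imports normally; a missing one raises an
# ImportError that tells the user how to install the backend.
math_mod = lazy_import("math", "already in the stdlib")
```

Because the import happens inside the method body, constructing and fitting an adapter never touches the backend library at all.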
@staticmethod
def _to_covariate_array(col_data: Any) -> np.ndarray:
    """
    Convert a covariate column to a numpy array.

    Numeric columns (int, float) and boolean columns are cast to
    `float32`. All other dtypes (object, string, Categorical) are left
    as-is so that Chronos can handle them as categorical covariates
    natively.

    Parameters
    ----------
    col_data : array-like
        A single covariate column (e.g. a pandas Series or 1-D array).

    Returns
    -------
    col_array : numpy ndarray
        A 1-D numpy array. Numeric/bool are cast to `float32`. Others
        keep their original dtype (typically `object` for string and
        categorical data).
    """
    # Handle pandas Series first to correctly process nullable extension
    # dtypes (pd.Int64Dtype, pd.Float64Dtype, pd.BooleanDtype): np.asarray()
    # on those produces dtype=object with pd.NA sentinels instead of float32.
    if isinstance(col_data, pd.Series):
        if pd.api.types.is_numeric_dtype(col_data) or pd.api.types.is_bool_dtype(col_data):
            return col_data.astype(np.float32).to_numpy()
        return col_data.to_numpy()

    # Fallback for numpy arrays, lists, etc.
    arr = np.asarray(col_data)
    if arr.dtype.kind in ("i", "u", "f", "b"):  # integer, unsigned int, float, bool
        return arr.astype(np.float32)
    return arr
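The casting rules above can be checked quickly with a standalone re-implementation (for illustration only, not the adapter itself), including the nullable-dtype case that motivates the Series-first branch:

```python
import numpy as np
import pandas as pd


def to_covariate_array(col_data):
    """Numeric/bool -> float32; everything else passes through unchanged."""
    if isinstance(col_data, pd.Series):
        if pd.api.types.is_numeric_dtype(col_data) or pd.api.types.is_bool_dtype(col_data):
            return col_data.astype(np.float32).to_numpy()
        return col_data.to_numpy()
    arr = np.asarray(col_data)
    if arr.dtype.kind in ("i", "u", "f", "b"):
        return arr.astype(np.float32)
    return arr


# Nullable Int64 with a missing value: pd.NA becomes NaN in float32,
# instead of the object array that np.asarray() would have produced.
nullable_int = pd.Series([1, 2, None], dtype="Int64")
print(to_covariate_array(nullable_int).dtype)           # float32
print(to_covariate_array(pd.Series(["a", "b"])).dtype)  # object
```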
def _build_chronos_input(
    self,
    context: np.ndarray,
    context_exog: pd.DataFrame | pd.Series | None = None,
    exog: pd.DataFrame | pd.Series | None = None,
) -> dict[str, Any]:
    """
    Build the input dict consumed by the pipeline's `predict_quantiles`
    method.

    Parameters
    ----------
    context : numpy ndarray
        1-D array of observed time series values used as context. Must be
        castable to `float32`.
    context_exog : pandas DataFrame, pandas Series, default None
        Historical exogenous variables whose index is aligned to
        `context`. Each column (or the single Series, referenced by
        its name) becomes an entry in the returned "past_covariates"
        dict. Numeric and boolean columns are cast to `float32`; string
        and categorical columns are passed as-is and handled natively
        by Chronos.
    exog : pandas DataFrame, pandas Series, default None
        Future-known exogenous variables covering the forecast horizon.
        Must have exactly `prediction_length` rows. Each column
        becomes an entry in the returned "future_covariates" dict.
        Numeric and boolean columns are cast to `float32`; string and
        categorical columns are passed as-is.

    Returns
    -------
    input_dict : dict
        Dictionary with mandatory key "target" (1-D `float32`
        `numpy ndarray`) and optional keys "past_covariates" and
        "future_covariates", each mapping column names to 1-D
        arrays (`float32` for numeric/bool columns, `object` dtype
        for string/categorical columns).
    """
    input_dict = {"target": np.asarray(context, dtype=np.float32)}

    if context_exog is not None:
        df = (
            context_exog
            if isinstance(context_exog, pd.DataFrame)
            else context_exog.to_frame()
        )
        input_dict["past_covariates"] = {
            col: ChronosAdapter._to_covariate_array(df[col]) for col in df.columns
        }

    if exog is not None:
        df = exog if isinstance(exog, pd.DataFrame) else exog.to_frame()
        input_dict["future_covariates"] = {
            col: ChronosAdapter._to_covariate_array(df[col]) for col in df.columns
        }

    return input_dict
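The shape of the resulting input dict can be illustrated with a simplified standalone sketch (covariate casting reduced to a plain `to_numpy()` here; the adapter applies the full `_to_covariate_array` rules):

```python
import numpy as np
import pandas as pd


def build_input(context, context_exog=None, exog=None):
    """Assemble the {'target', 'past_covariates', 'future_covariates'} dict."""
    input_dict = {"target": np.asarray(context, dtype=np.float32)}
    for key, data in (("past_covariates", context_exog), ("future_covariates", exog)):
        if data is not None:
            df = data if isinstance(data, pd.DataFrame) else data.to_frame()
            input_dict[key] = {col: df[col].to_numpy() for col in df.columns}
    return input_dict


d = build_input(
    context=[1.0, 2.0, 3.0],                               # past target values
    context_exog=pd.DataFrame({"temp": [10.0, 11.0, 12.0]}),  # aligned to context
    exog=pd.DataFrame({"temp": [13.0, 14.0]}),                # forecast horizon rows
)
print(sorted(d))  # ['future_covariates', 'past_covariates', 'target']
```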
Parameters:

model_id : str
    HuggingFace model ID, e.g. "google/timesfm-2.5-200m-pytorch".
model : object, default None
    Pre-loaded and compiled TimesFM model instance. If None, the
    model is loaded and compiled lazily on the first predict call.
context_length : int, default 512
    Maximum number of historical observations to use as context. At fit
    time only the last context_length observations are stored. At
    predict time, if context is longer than context_length it
    is trimmed to this length; if it is shorter, all available
    observations are used as-is. Must be a positive integer. TimesFM
    supports up to 16_384.
max_horizon : int, default 512
    Maximum forecast horizon. If predict is called with
    steps > max_horizon, a ValueError is raised. The model is
    compiled lazily for the exact requested steps (up to this
    ceiling) to avoid unnecessary decode iterations. Must be a
    positive integer.
forecast_config_kwargs : dict, default None
    Additional keyword arguments forwarded verbatim to
    timesfm.ForecastConfig at compile time. Supported keys:
    normalize_inputs, use_continuous_quantile_head,
    force_flip_invariance, infer_is_positive,
    fix_quantile_crossing. Do not include max_context or
    max_horizon here — those are controlled by the corresponding
    adapter parameters.

Notes

TimesFM supports only the fixed quantile levels
[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]. Requesting any
other level raises a ValueError.

Covariate support (via TimesFM's forecast_with_covariates) is not
yet implemented. Passing exog or context_exog issues an
IgnoredArgumentWarning and the values are discarded.
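The fixed-quantile restriction can be checked with a float-tolerant membership test, mirroring the validation described above (a standalone sketch, not the adapter code):

```python
SUPPORTED_QUANTILES = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]


def validate_quantiles(quantiles):
    """Raise ValueError for any level outside the fixed TimesFM set."""
    for q in quantiles:
        # Exact float equality is fragile (e.g. 0.1 + 0.2 != 0.3), so
        # membership is tested with a small tolerance instead.
        if not any(abs(q - sq) < 1e-9 for sq in SUPPORTED_QUANTILES):
            raise ValueError(f"Unsupported quantile level: {q!r}")


validate_quantiles([0.1, 0.5, 0.9])  # passes silently
```

Any level not in the fixed grid, such as 0.15 or 0.95, raises immediately rather than being interpolated.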
def __init__(
    self,
    model_id: str,
    *,
    model: Any | None = None,
    context_length: int = 512,
    max_horizon: int = 512,
    forecast_config_kwargs: dict[str, Any] | None = None,
) -> None:
    """
    Initialise the adapter.

    Parameters
    ----------
    model_id : str
        HuggingFace model ID, e.g. "google/timesfm-2.5-200m-pytorch".
    model : object, default None
        Pre-loaded and compiled TimesFM model instance. If `None`, the
        model is loaded and compiled lazily on the first `predict` call.
    context_length : int, default 512
        Maximum number of historical observations to retain as context.
        At `fit` time only the last `context_length` observations of
        `series` are stored. At `predict` time, if `context` is
        longer than `context_length` it is trimmed to this length;
        if it is shorter, all available observations are passed as-is.
        Must be a positive integer.
    max_horizon : int, default 512
        Maximum forecast horizon. If `predict` is called with
        `steps > max_horizon`, a `ValueError` is raised. The model
        is compiled lazily for the exact requested `steps` (up to
        this ceiling) to avoid unnecessary decode iterations. Must
        be a positive integer.
    forecast_config_kwargs : dict, default None
        Additional keyword arguments forwarded verbatim to
        `timesfm.ForecastConfig` at compile time.
    """
    if not isinstance(context_length, int) or context_length < 1:
        raise ValueError(
            f"`context_length` must be a positive integer. Got {context_length!r}."
        )
    if not isinstance(max_horizon, int) or max_horizon < 1:
        raise ValueError(
            f"`max_horizon` must be a positive integer. Got {max_horizon!r}."
        )

    self.model_id = model_id
    self._model = model
    self.context_ = None
    self.context_exog_ = None
    self.context_length = context_length
    self.max_horizon = max_horizon
    self.forecast_config_kwargs = (
        dict(forecast_config_kwargs) if forecast_config_kwargs else {}
    )
    self.is_fitted = False
def set_params(self, **params) -> TimesFMAdapter:
    """
    Set adapter parameters. Resets the model when parameters that affect
    compilation change (`model_id`, `context_length`, `max_horizon`,
    `forecast_config_kwargs`).

    Parameters
    ----------
    **params :
        Valid keys: `model_id`, `context_length`, `max_horizon`,
        `forecast_config_kwargs`.

    Returns
    -------
    self : TimesFMAdapter
    """
    valid = {'model_id', 'context_length', 'max_horizon', 'forecast_config_kwargs'}
    invalid = set(params) - valid
    if invalid:
        raise ValueError(
            f"Invalid parameter(s) for TimesFMAdapter: {sorted(invalid)}. "
            f"Valid parameters are: {sorted(valid)}."
        )

    model_reset_keys = {
        'model_id', 'context_length', 'max_horizon', 'forecast_config_kwargs'
    }
    if params.keys() & model_reset_keys:
        self._model = None

    for key, value in params.items():
        if key == 'context_length':
            if not isinstance(value, int) or value < 1:
                raise ValueError(
                    f"`context_length` must be a positive integer. Got {value!r}."
                )
            self.context_length = value
        elif key == 'max_horizon':
            if not isinstance(value, int) or value < 1:
                raise ValueError(
                    f"`max_horizon` must be a positive integer. Got {value!r}."
                )
            self.max_horizon = value
        elif key == 'forecast_config_kwargs':
            self.forecast_config_kwargs = dict(value) if value else {}
        else:
            setattr(self, key, value)

    return self
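The cache-invalidation pattern behind `set_params` can be sketched with a toy class (hypothetical names; the real adapter also validates values and re-loads the model lazily in `predict`):

```python
class CachedModelHolder:
    """Null the cached model whenever a compilation-affecting key changes."""

    RESET_KEYS = {"model_id", "context_length"}

    def __init__(self, model_id, context_length=512):
        self.model_id = model_id
        self.context_length = context_length
        self._model = "compiled-model"  # stands in for a loaded backend model

    def set_params(self, **params):
        if params.keys() & self.RESET_KEYS:
            self._model = None  # re-loaded lazily on the next predict call
        for key, value in params.items():
            setattr(self, key, value)
        return self


holder = CachedModelHolder("google/timesfm-2.5-200m-pytorch")
holder.set_params(context_length=1024)
print(holder._model)  # None: the cached model was invalidated
```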
def fit(
    self,
    context: dict[str, pd.Series],
    context_exog: Any,
) -> TimesFMAdapter:
    """
    Store the training series.

    No model training occurs since TimesFM is a zero-shot inference model.
    All input normalization and validation is performed upstream by
    `FoundationModel`; this method receives canonical dicts only.

    Parameters
    ----------
    context : dict pandas Series
        Normalized training series, one entry per series.
    context_exog : Any
        Not used, present here for API consistency by convention.

    Returns
    -------
    self : TimesFMAdapter
    """
    self.context_ = context
    self.is_fitted = True
    return self
def predict(
    self,
    steps: int,
    context: dict[str, pd.Series],
    context_exog: Any,
    exog: Any,
    quantiles: list[float] | tuple[float] | None,
) -> dict[str, np.ndarray]:
    """
    Generate predictions using the TimesFM model.

    All input normalization, validation, and context trimming is performed
    upstream by `FoundationModel`; this method receives pre-processed
    dicts only.

    Parameters
    ----------
    steps : int
        Number of steps ahead to forecast.
    context : dict
        Per-series context windows (already trimmed to `context_length`).
    context_exog : Any
        Not used, present here for API consistency by convention.
    exog : Any
        Not used, present here for API consistency by convention.
    quantiles : list of float or None
        Quantile levels. Must be a subset of `SUPPORTED_QUANTILES`.

    Returns
    -------
    predictions : dict
        Keys are series names. Each value is a 2-D array of shape
        `(steps, n_quantiles)`.

    Raises
    ------
    ValueError
        If a requested quantile level is not in `SUPPORTED_QUANTILES`
        or `steps` exceeds `max_horizon`.
    """
    if quantiles is not None:
        quantile_list = list(quantiles)
        for q in quantile_list:
            if not any(abs(q - sq) < 1e-9 for sq in self.SUPPORTED_QUANTILES):
                raise ValueError(
                    f"TimesFM only supports quantile levels "
                    f"{self.SUPPORTED_QUANTILES}. Got {q!r}. "
                    f"Quantile interpolation is not supported."
                )
    else:
        quantile_list = None

    if steps > self.max_horizon:
        raise ValueError(
            f"`steps` ({steps}) exceeds `max_horizon` ({self.max_horizon})."
        )

    self._load_model()
    self._ensure_compiled(steps)

    series_names_in = list(context.keys())
    inputs_list = [context[name].to_numpy() for name in series_names_in]

    point_forecast, quantile_forecast = self._model.forecast(
        horizon=steps,
        inputs=inputs_list,
    )
    # point_forecast   : (n_series, steps)
    # quantile_forecast: (n_series, steps, 10) — idx 0 = mean, 1-9 = q0.1-q0.9

    predictions: dict[str, np.ndarray] = {}
    for i, name in enumerate(series_names_in):
        if quantile_list is None:
            # Point forecast: shape (steps, 1)
            predictions[name] = np.asarray(point_forecast[i]).reshape(-1, 1)
        else:
            q_indices = [round(q * 10) for q in quantile_list]
            qf = np.asarray(quantile_forecast[i])
            predictions[name] = qf[:, q_indices]  # (steps, n_quantiles)
    return predictions
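The `round(q * 10)` index arithmetic on the `(steps, 10)` per-series output can be checked on a toy array (illustrative values only; column 0 holds the mean, columns 1-9 the quantiles 0.1 through 0.9):

```python
import numpy as np

steps = 4
# Toy stand-in for one series of TimesFM's quantile output.
qf = np.arange(steps * 10, dtype=np.float64).reshape(steps, 10)

quantile_list = [0.1, 0.5, 0.9]
# round() guards against float noise: 0.9 * 10 is 9.000000000000002.
q_indices = [round(q * 10) for q in quantile_list]  # [1, 5, 9]
selected = qf[:, q_indices]                          # shape (steps, 3)
print(q_indices, selected.shape)
```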
def _load_model(self) -> None:
    """
    Load (but do not compile) the TimesFM model into `self._model`
    if not already set.

    Returns
    -------
    None

    Raises
    ------
    ImportError
        If `timesfm[torch]` is not installed.

    Notes
    -----
    The model is imported lazily from `timesfm` and loaded via
    `TimesFM_2p5_200M_torch.from_pretrained`. Compilation is deferred to
    `_ensure_compiled`, which is called from `predict` with the actual
    forecast horizon so that the compiled decode graph is sized exactly
    for the requested number of steps rather than the (much larger)
    `max_horizon` ceiling. This method is a no-op when `self._model` is
    already populated.
    """
    if self._model is not None:
        return

    try:
        import timesfm
    except ImportError as exc:
        raise ImportError(
            "timesfm is required for TimesFMAdapter. "
            "Install it with `pip install git+https://github.com/google-research/timesfm.git`."
        ) from exc

    # Workaround for a compatibility issue between huggingface_hub and
    # timesfm: huggingface_hub's `from_pretrained` passes `proxies` and
    # `resume_download` to `_from_pretrained`, but timesfm's
    # `_from_pretrained` does not declare them as explicit parameters, so
    # they fall into **model_kwargs and are forwarded to __init__, raising
    # a TypeError. A local subclass overrides `_from_pretrained` to absorb
    # those kwargs without modifying any global state.
    class _TimesFMCompat(timesfm.TimesFM_2p5_200M_torch):
        @classmethod
        def _from_pretrained(cls, *, proxies=None, resume_download=None, **kwargs):  # type: ignore[override]
            return super()._from_pretrained(**kwargs)

    self._model = _TimesFMCompat.from_pretrained(self.model_id)
def _ensure_compiled(self, steps: int) -> None:
    """
    Compile the model for the given forecast horizon if not already
    compiled for at least `steps` steps.

    Parameters
    ----------
    steps : int
        The forecast horizon that the model must support.

    Returns
    -------
    None

    Notes
    -----
    This is separated from `_load_model` so that compilation uses the
    *actual* number of requested forecast steps rather than `max_horizon`.
    TimesFM's compiled decode always runs `forecast_config.max_horizon`
    autoregressive decode iterations regardless of the requested horizon;
    the true horizon is only used to *slice* the output afterwards. When
    the compiled `max_horizon` is large (e.g. the default 512) but
    `steps` is small (e.g. 12), the model performs up to
    `(max_horizon - 1) // output_patch_len` unnecessary extra transformer
    forward passes per inference call. Compiling here with
    `max_horizon = steps` reduces those wasted passes to zero for the
    typical backtesting case where `steps` is constant across folds.

    If the model was already compiled for a horizon `>= steps` (e.g. a
    pre-compiled model passed via the `model` constructor argument), this
    method is a no-op.
    """
    fc = getattr(self._model, 'forecast_config', None)
    if fc is not None and steps <= fc.max_horizon:
        return

    import timesfm

    self._model.compile(
        timesfm.ForecastConfig(
            max_context=self.context_length,
            max_horizon=steps,
            **self.forecast_config_kwargs,
        )
    )
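The savings described in the notes can be quantified with a bit of arithmetic. The `output_patch_len = 128` value below is an assumption for illustration only; check the actual model config for the real patch length.

```python
import math


def decode_passes(max_horizon: int, output_patch_len: int) -> int:
    """Autoregressive forward passes needed to cover max_horizon steps."""
    return math.ceil(max_horizon / output_patch_len)


OUTPUT_PATCH_LEN = 128  # assumed patch length, for illustration only

# Compiled for the default 512-step ceiling vs. for the actual 12-step
# request: the difference is pure wasted work on every inference call.
passes_default = decode_passes(512, OUTPUT_PATCH_LEN)  # 4 passes
passes_exact = decode_passes(12, OUTPUT_PATCH_LEN)     # 1 pass
print(passes_default - passes_exact, "wasted forward passes avoided per call")
```

Note that `(512 - 1) // 128 == 3` matches the docstring's upper bound on wasted passes for this configuration.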
Parameters:

model_id : str
    HuggingFace model ID, e.g. "Salesforce/moirai-2.0-R-small".
    Must be a Salesforce/moirai-2.0-R-{small,base,large} variant.
module : object, default None
    Pre-loaded Moirai2Module instance. If None, the module is
    loaded lazily on the first call to predict.
context_length : int, default 2048
    Maximum number of historical observations to use as context. At fit
    time only the last context_length observations are stored. At
    predict time, if context is longer than context_length
    it is trimmed to this length; if it is shorter, all available
    observations are used as-is. Must be a positive integer.
device : str, default 'auto'
    Device placement for the model. "auto" selects the best
    available accelerator (CUDA > MPS > CPU). Also accepts explicit
    values such as "cuda", "mps", or "cpu".

Notes

Moirai supports only the fixed quantile levels
[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]. Requesting any
other level raises a ValueError.

Covariate support via the high-level Moirai2Forecast.predict() API
is not functional: the padding/truncation loop inside predict()
clips every list-valued field — including feat_dynamic_real — to
context_length, discarding the future portion that future
covariates require. Passing exog or context_exog issues an
IgnoredArgumentWarning and the values are discarded.
def __init__(
    self,
    model_id: str,
    *,
    module: Any | None = None,
    context_length: int = 2048,
    device: str = "auto",
) -> None:
    """
    Initialise the adapter.

    Parameters
    ----------
    model_id : str
        HuggingFace model ID, e.g. `"Salesforce/moirai-2.0-R-small"`.
    module : object, default None
        Pre-loaded `Moirai2Module` instance. If `None`, the module
        is loaded lazily on the first call to `predict`.
    context_length : int, default 2048
        Maximum number of historical observations to retain as context.
        At `fit` time only the last `context_length` observations of
        `series` are stored. At `predict` time, if `context`
        is longer than `context_length` it is trimmed to this length;
        if it is shorter, all available observations are passed as-is.
        Must be a positive integer.
    device : str, default 'auto'
        Device placement for the model. `"auto"` selects the best
        available accelerator (CUDA > MPS > CPU). Also accepts
        explicit values such as `"cuda"`, `"mps"`, or `"cpu"`.
    """
    if not isinstance(context_length, int) or context_length < 1:
        raise ValueError(
            f"`context_length` must be a positive integer. "
            f"Got {context_length!r}."
        )

    self.model_id = model_id
    self._module = module
    self.context_ = None
    self.context_exog_ = None
    self.context_length = context_length
    self.device = device
    self._forecast_obj = None
    self.is_fitted = False
def set_params(self, **params) -> MoiraiAdapter:
    """
    Set adapter parameters. Resets the module and forecast object when
    `model_id`, `context_length`, or `device` changes.

    Parameters
    ----------
    **params :
        Valid keys: `model_id`, `context_length`, `device`.

    Returns
    -------
    self : MoiraiAdapter
    """
    valid = {'model_id', 'context_length', 'device'}
    invalid = set(params) - valid
    if invalid:
        raise ValueError(
            f"Invalid parameter(s) for MoiraiAdapter: {sorted(invalid)}. "
            f"Valid parameters are: {sorted(valid)}."
        )

    if params.keys() & {'model_id', 'context_length', 'device'}:
        self._module = None
        self._forecast_obj = None

    for key, value in params.items():
        if key == 'context_length':
            if not isinstance(value, int) or value < 1:
                raise ValueError(
                    f"`context_length` must be a positive integer. "
                    f"Got {value!r}."
                )
            self.context_length = value
        else:
            setattr(self, key, value)

    return self
def fit(
    self,
    context: dict[str, pd.Series],
    context_exog: Any,
) -> MoiraiAdapter:
    """
    Store the training series.

    No model training occurs since Moirai is a zero-shot inference model.
    All input normalization and validation is performed upstream by
    `FoundationModel`; this method receives canonical dicts only.

    Parameters
    ----------
    context : dict pandas Series
        Normalized training series, one entry per series.
    context_exog : Any
        Not used, present here for API consistency by convention.

    Returns
    -------
    self : MoiraiAdapter
    """
    self.context_ = context
    self.is_fitted = True
    return self
def predict(
    self,
    steps: int,
    context: dict[str, pd.Series],
    context_exog: Any,
    exog: Any,
    quantiles: list[float] | tuple[float] | None,
) -> dict[str, np.ndarray]:
    """
    Generate predictions using Moirai.

    All input normalization, validation, and context trimming is performed
    upstream by `FoundationModel`; this method receives pre-processed
    dicts only.

    Parameters
    ----------
    steps : int
        Number of steps ahead to forecast.
    context : dict pandas Series
        Per-series context windows (already trimmed to `context_length`).
    context_exog : Any
        Not used, present here for API consistency by convention.
    exog : Any
        Not used, present here for API consistency by convention.
    quantiles : list of float or None
        Quantile levels. Must be a subset of `SUPPORTED_QUANTILES`.

    Returns
    -------
    predictions : dict
        Keys are series names. Each value is a 2-D array of shape
        `(steps, n_quantiles)`.

    Raises
    ------
    ValueError
        If a requested quantile level is not in `SUPPORTED_QUANTILES`.
    """
    if quantiles is not None:
        quantile_list = list(quantiles)
        for q in quantile_list:
            if not any(abs(q - sq) < 1e-9 for sq in self.SUPPORTED_QUANTILES):
                raise ValueError(
                    f"Moirai only supports quantile levels "
                    f"{self.SUPPORTED_QUANTILES}. Got {q!r}. "
                    f"Quantile interpolation is not supported."
                )
    else:
        quantile_list = None

    quantile_levels = quantile_list if quantile_list is not None else [0.5]
    q_indices = [
        next(i for i, sq in enumerate(self.SUPPORTED_QUANTILES) if abs(q - sq) < 1e-9)
        for q in quantile_levels
    ]

    series_names_in = list(context.keys())
    inputs_list = [
        context[name].to_numpy(dtype=np.float32).reshape(-1, 1)
        for name in series_names_in
    ]

    raw = self._run_inference(inputs_list, steps)

    predictions: dict[str, np.ndarray] = {}
    for i, name in enumerate(series_names_in):
        predictions[name] = raw[i][q_indices, :].T  # (steps, n_quantiles)
    return predictions
def _load_module(self) -> None:
    """
    Load the `Moirai2Module` into `self._module` if not already set.

    Returns
    -------
    None

    Raises
    ------
    ImportError
        If `uni2ts` is not installed.

    Notes
    -----
    The module is imported lazily from `uni2ts` and instantiated via
    `Moirai2Module.from_pretrained`, then set to evaluation mode.
    This method is a no-op when `self._module` is already populated.
    """
    if self._module is not None:
        return

    try:
        from uni2ts.model.moirai2 import Moirai2Module
    except ImportError as exc:
        raise ImportError(
            "uni2ts is required for MoiraiAdapter. "
            "Install it with `pip install uni2ts`."
        ) from exc

    self._module = Moirai2Module.from_pretrained(self.model_id)
    self._module.eval()
def _ensure_forecast_obj(self) -> None:
    """
    Build the `Moirai2Forecast` inference wrapper if not already set.

    Returns
    -------
    None

    Raises
    ------
    ImportError
        If `uni2ts` is not installed.

    Notes
    -----
    Calls `_load_module` then wraps `self._module` in a
    `Moirai2Forecast` with `prediction_length=1` (overridden
    per-call via `hparams_context`), sets it to evaluation mode,
    and moves it to the device specified by `self.device`.
    This method is a no-op when `self._forecast_obj` is already
    populated.
    """
    if self._forecast_obj is not None:
        return

    self._load_module()
    from uni2ts.model.moirai2 import Moirai2Forecast

    self._forecast_obj = Moirai2Forecast(
        module=self._module,
        prediction_length=1,
        context_length=self.context_length,
        target_dim=1,
        feat_dynamic_real_dim=0,
        past_feat_dynamic_real_dim=0,
    ).eval()

    resolved_device = _resolve_torch_device(self.device)
    if resolved_device == "mps":
        warnings.warn(
            "MPS device is not supported by Moirai because the uni2ts "
            "library uses float64 operations internally. Falling back "
            "to CPU.",
            stacklevel=6,
        )
        resolved_device = "cpu"
    self._forecast_obj.to(resolved_device)
def _run_inference(
    self,
    inputs_list: list[np.ndarray],
    steps: int,
) -> np.ndarray:
    """
    Run batched inference with `Moirai2Forecast`.

    Parameters
    ----------
    inputs_list : list of numpy ndarray
        List of 2-D arrays with shape `(T, 1)`, one per series. Each
        array holds `float32` values.
    steps : int
        Forecast horizon.

    Returns
    -------
    raw : numpy ndarray
        Array of shape `(n_series, 9, steps)` containing quantile
        forecasts for the 9 fixed levels in `SUPPORTED_QUANTILES` order.
    """
    self._ensure_forecast_obj()
    with self._forecast_obj.hparams_context(prediction_length=steps):
        raw = self._forecast_obj.predict(inputs_list)
    return raw
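The selection step that turns the raw `(n_series, 9, steps)` output into per-series `(steps, n_quantiles)` arrays can be verified on a toy array (illustrative shapes and random values only):

```python
import numpy as np

SUPPORTED_QUANTILES = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]

n_series, steps = 2, 5
raw = np.random.default_rng(0).normal(size=(n_series, 9, steps))

quantile_levels = [0.1, 0.5, 0.9]
# Map each requested level to its row along the fixed 9-quantile axis,
# using a tolerance rather than exact float equality.
q_indices = [
    next(i for i, sq in enumerate(SUPPORTED_QUANTILES) if abs(q - sq) < 1e-9)
    for q in quantile_levels
]
pred = raw[0][q_indices, :].T  # first series -> (steps, n_quantiles)
print(q_indices, pred.shape)
```

The transpose is what flips the quantile-major layout of the model output into the steps-major layout the adapters return.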
Adapter for TabICL zero-shot time-series foundation models.

Parameters:

model_id : str
    HuggingFace model ID, e.g. "soda-inria/tabicl".
model : object, default None
    Pre-instantiated TabICLForecaster instance. If None, a new
    instance is created lazily on the first call to predict. Intended
    for testing only.
context_length : int, default 4096
    Maximum number of historical observations to use as context. At fit
    time only the last context_length observations are stored. At
    predict time, if context is longer than context_length it is
    trimmed to this length; if it is shorter, all available observations
    are used as-is. Must be a positive integer.
point_estimate : str, default 'mean'
    Method used to derive the point forecast from the TabICL output.
    Accepted values: 'mean', 'median'.
tabicl_config : dict, default None
    Additional keyword arguments forwarded verbatim to
    TabICLRegressor at inference time. If None, defaults to empty
    dict (TabICL's own defaults).
temporal_features : list, default None
    List of TimeTransform instances applied to the time series before
    inference. If None, TabICL uses its default transforms:
    [IndexEncoder(), DatetimeEncoder(), AutoPeriodicEncoder()]. Pass
    an empty list to disable all temporal feature engineering.

Attributes:

_model : object
    Internal TabICLForecaster instance. None until the first call
    to predict, after which it is cached for reuse.

Notes

TabICL supports arbitrary quantile levels (any float in [0, 1]),
unlike models with fixed quantile sets such as TimesFM or Moirai.

Covariate support is available: extra columns in context and exog
are forwarded as covariates. TabICL uses only the intersection of columns
present in both context and future data (missing values are filled with
NaN).

Series with a RangeIndex are accepted. Internally, TabICL requires
datetime timestamps, so a synthetic daily DatetimeIndex (starting
2000-01-01) is used. Calendar-based transforms
(DatetimeEncoder, AutoPeriodicEncoder) will not be meaningful for
such series; consider passing temporal_features=[] or
temporal_features=[IndexEncoder()] in that case.
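The RangeIndex fallback described above can be sketched with pandas (a standalone illustration of the substitution, not the adapter's internal code):

```python
import pandas as pd

# A series with a plain RangeIndex, as a user might pass it in.
y = pd.Series([10.0, 12.0, 11.0, 13.0])

if isinstance(y.index, pd.RangeIndex):
    # Substitute a synthetic daily DatetimeIndex starting 2000-01-01 so
    # that downstream code can rely on datetime timestamps. Calendar
    # features derived from this index are synthetic, hence the advice
    # to disable calendar-based transforms for such series.
    y.index = pd.date_range("2000-01-01", periods=len(y), freq="D")

print(y.index[0], y.index.freqstr)
```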
def __init__(
    self,
    model_id: str,
    *,
    model: Any | None = None,
    context_length: int = 4096,
    point_estimate: str = "mean",
    tabicl_config: dict[str, Any] | None = None,
    temporal_features: list[Any] | None = None,
) -> None:
    """
    Initialise the adapter.

    Parameters
    ----------
    model_id : str
        HuggingFace model ID, e.g. `"soda-inria/tabicl"`.
    model : object, default None
        Pre-instantiated `TabICLForecaster` instance. If `None`, a new
        instance is created lazily on the first call to `predict`.
        Intended for testing only.
    context_length : int, default 4096
        Maximum number of historical observations to retain as context.
        At `fit` time only the last `context_length` observations of
        `series` (and `exog`) are stored. At `predict` time, if
        `context` is longer than `context_length` it is trimmed to this
        length before inference; if it is shorter, all available
        observations are passed as-is. Must be a positive integer.
    point_estimate : str, default 'mean'
        Method used to derive the point forecast. Accepted values:
        `'mean'`, `'median'`.
    tabicl_config : dict, default None
        Additional keyword arguments forwarded verbatim to
        `TabICLRegressor` at inference time.
    temporal_features : list, default None
        List of `TimeTransform` instances applied before inference. If
        `None`, TabICL uses its defaults. Pass `[]` to disable all
        temporal feature engineering.
    """
    if not isinstance(context_length, int) or context_length < 1:
        raise ValueError(
            f"`context_length` must be a positive integer. Got {context_length!r}."
        )
    if point_estimate not in ("mean", "median"):
        raise ValueError(
            f"`point_estimate` must be 'mean' or 'median'. Got {point_estimate!r}."
        )

    self.model_id = model_id
    self._model = model
    self.context_ = None
    self.context_exog_ = None
    self.context_length = context_length
    self.point_estimate = point_estimate
    self.tabicl_config = dict(tabicl_config) if tabicl_config else {}
    self.temporal_features = temporal_features
    self.is_fitted = False
Keys: model_id, context_length, point_estimate,
tabicl_config, temporal_features. tabicl_config is
returned as None when no additional config was set (i.e.
when the internal dict is empty).
def get_params(self) -> dict:
    """
    Return the adapter's constructor parameters.

    Returns
    -------
    params : dict
        Keys: `model_id`, `context_length`, `point_estimate`,
        `tabicl_config`, `temporal_features`. `tabicl_config` is
        returned as `None` when no additional config was set (i.e.
        when the internal dict is empty).
    """
    return {
        "model_id": self.model_id,
        "context_length": self.context_length,
        "point_estimate": self.point_estimate,
        "tabicl_config": self.tabicl_config or None,
        "temporal_features": self.temporal_features,
    }
Set adapter parameters. Resets the model when any parameter changes,
since the TabICLForecaster is instantiated lazily on the first
predict call using the current adapter state.
def set_params(self, **params) -> TabICLAdapter:
    """
    Set adapter parameters. Resets the model when any parameter changes,
    since the `TabICLForecaster` is instantiated lazily on the first
    `predict` call using the current adapter state.

    Parameters
    ----------
    **params :
        Valid keys: `model_id`, `context_length`, `point_estimate`,
        `tabicl_config`, `temporal_features`.

    Returns
    -------
    self : TabICLAdapter
    """
    valid = {
        "model_id",
        "context_length",
        "point_estimate",
        "tabicl_config",
        "temporal_features",
    }
    invalid = set(params) - valid
    if invalid:
        raise ValueError(
            f"Invalid parameter(s) for TabICLAdapter: {sorted(invalid)}. "
            f"Valid parameters are: {sorted(valid)}."
        )

    validated = {}
    for key, value in params.items():
        if key == "context_length":
            if not isinstance(value, int) or value < 1:
                raise ValueError(
                    f"`context_length` must be a positive integer. Got {value!r}."
                )
            validated[key] = value
        elif key == "point_estimate":
            if value not in ("mean", "median"):
                raise ValueError(
                    f"`point_estimate` must be 'mean' or 'median'. Got {value!r}."
                )
            validated[key] = value
        elif key == "tabicl_config":
            validated[key] = dict(value) if value else {}
        else:
            validated[key] = value

    actually_changed = {
        k: v for k, v in validated.items() if getattr(self, k) != v
    }
    if actually_changed:
        self._model = None
    for key, value in actually_changed.items():
        setattr(self, key, value)

    return self
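The change-detection-then-reset pattern can be sketched on a toy class (the `LazyAdapter` class below is hypothetical, standing in for the adapter's lazy model cache):

```python
# Toy sketch of set_params' change detection: only parameters whose value
# actually differs trigger a reset of the lazily instantiated model.
class LazyAdapter:
    def __init__(self):
        self.point_estimate = "mean"
        self._model = "loaded"  # stands in for a cached TabICLForecaster

    def set_params(self, **params):
        changed = {k: v for k, v in params.items() if getattr(self, k) != v}
        if changed:
            self._model = None  # forces re-instantiation on next predict
        for key, value in changed.items():
            setattr(self, key, value)
        return self

adapter = LazyAdapter()
adapter.set_params(point_estimate="mean")    # unchanged value: model kept
print(adapter._model)
adapter.set_params(point_estimate="median")  # changed value: model reset
print(adapter._model)
```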
def fit(
    self,
    context: dict[str, pd.Series],
    context_exog: dict[str, pd.DataFrame | pd.Series | None] | None,
) -> TabICLAdapter:
    """
    Store the training series and optional historical exogenous variables.

    No model training occurs since TabICL is a zero-shot inference model.
    All input normalization and validation is performed upstream by
    `FoundationModel`; this method receives canonical dicts only.

    Parameters
    ----------
    context : dict pandas Series
        Normalized training series, one entry per series.
    context_exog : dict pandas DataFrame, pandas Series, or None
        Per-series historical exogenous variables (past covariates).

    Returns
    -------
    self : TabICLAdapter
    """
    self.context_ = context
    self.context_exog_ = context_exog
    self.is_fitted = True

    return self
def predict(
    self,
    steps: int,
    context: dict[str, pd.Series],
    context_exog: dict[str, pd.DataFrame | pd.Series | None] | None,
    exog: dict[str, pd.DataFrame | pd.Series | None] | None,
    quantiles: list[float] | tuple[float] | None,
) -> dict[str, np.ndarray]:
    """
    Generate predictions using TabICL.

    All input normalization, validation, and context trimming is
    performed upstream by `FoundationModel`; this method receives
    pre-processed dicts only.

    Parameters
    ----------
    steps : int
        Number of steps ahead to forecast.
    context : dict pandas Series
        Per-series context windows (already trimmed to `context_length`).
    context_exog : dict pandas DataFrame, pandas Series, or None
        Per-series past covariates (already trimmed).
    exog : dict pandas DataFrame, pandas Series, or None
        Per-series future covariates for the forecast horizon.
    quantiles : list of float or None
        Quantile levels to return. If `None`, a point forecast is
        produced (shape `(steps, 1)`). Accepts any float in `[0, 1]`.

    Returns
    -------
    predictions : dict
        Keys are series names. Each value is a 2-D numpy ndarray of
        shape `(steps, n_quantiles)`.
    """
    self._load_model()

    quantile_list = list(quantiles) if quantiles is not None else None
    tabicl_quantiles = (
        quantile_list
        if quantile_list is not None
        else [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
    )

    series_names_in = list(context.keys())
    first_series = next(iter(context.values()))
    is_datetime = isinstance(first_series.index, pd.DatetimeIndex)
    if not is_datetime:
        warnings.warn(
            "TabICLAdapter received series with a non-DatetimeIndex. "
            "TabICL requires datetime timestamps internally; a synthetic "
            "daily DatetimeIndex (starting 2000-01-01) will be used. "
            "Calendar-based temporal features (DatetimeEncoder, "
            "AutoPeriodicEncoder) will not be meaningful for "
            "integer-indexed data. Consider passing "
            "`temporal_features=[]` to disable calendar feature "
            "transforms.",
            # stacklevel=3: TabICLAdapter.predict → FoundationModel.predict → user
            stacklevel=3,
        )

    context_df = self._build_context_df(
        series_names=series_names_in,
        context=context,
        context_exog=context_exog,
        is_datetime=is_datetime,
    )
    future_df = self._build_future_df(
        series_names=series_names_in,
        context=context,
        exog=exog,
        steps=steps,
        is_datetime=is_datetime,
    )

    result_df = self._model.predict_df(
        context_df=context_df,
        future_df=future_df,
        quantiles=tabicl_quantiles,
    )

    # result_df is a plain DataFrame with MultiIndex (item_id, timestamp).
    # columns: "target" (str) and quantile levels as float column names.
    predictions: dict[str, np.ndarray] = {}
    for name in series_names_in:
        group = result_df.loc[name]  # DataFrame indexed by timestamp
        if quantile_list is None:
            predictions[name] = group["target"].to_numpy().reshape(-1, 1)
        else:
            predictions[name] = group[quantile_list].to_numpy()

    return predictions
Load the TabICLForecaster into self._model if not already set.
Returns:
Type
Description
None
Raises:
Type
Description
ImportError
If tabicl[forecast] is not installed.
Notes
The model is imported lazily from tabicl and instantiated with
the current adapter parameters. This method is a no-op when
self._model is already populated (either by a prior call or by
the model test-injection parameter).
def _load_model(self) -> None:
    """
    Load the `TabICLForecaster` into `self._model` if not already set.

    Returns
    -------
    None

    Raises
    ------
    ImportError
        If `tabicl[forecast]` is not installed.

    Notes
    -----
    The model is imported lazily from `tabicl` and instantiated with
    the current adapter parameters. This method is a no-op when
    `self._model` is already populated (either by a prior call or by
    the `model` test-injection parameter).
    """
    if self._model is not None:
        return

    try:
        from tabicl.forecast import TabICLForecaster
    except ImportError as exc:
        raise ImportError(
            "tabicl[forecast] is required for TabICLAdapter. "
            "Install it with `pip install tabicl[forecast]`."
        ) from exc

    self._model = TabICLForecaster(
        max_context_length=self.context_length,
        temporal_features=self.temporal_features,
        point_estimate=self.point_estimate,
        tabicl_config=self.tabicl_config or {},
    )
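The lazy-import-with-friendly-error pattern generalises beyond this adapter. A generic sketch (the module name below is deliberately fake so the error path fires; it is not a real dependency):

```python
# Sketch of the lazy-import pattern used by _load_model: the backend is
# imported inside the function that needs it, so it stays optional, and a
# missing install produces an actionable ImportError. The module name is
# hypothetical to demonstrate the failure path.
def load_backend():
    try:
        from nonexistent_forecast_backend import Forecaster  # hypothetical
    except ImportError as exc:
        raise ImportError(
            "nonexistent_forecast_backend is required for this adapter. "
            "Install it with `pip install nonexistent-forecast-backend`."
        ) from exc
    return Forecaster()

try:
    load_backend()
except ImportError as err:
    message = str(err)

print(message)
```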
For DatetimeIndex series the original index is returned. For
RangeIndex series a synthetic daily DatetimeIndex starting at
2000-01-01 is created so that TabICL's requirement for datetime
timestamps is satisfied.
Parameters:
Name
Type
Description
Default
series
pandas Series
The context series.
required
is_datetime
bool
Whether the series has a DatetimeIndex.
required
Returns:
Name
Type
Description
timestamps
pandas DatetimeIndex
Datetime timestamps aligned with the series values.
def _get_timestamps(self, series: pd.Series, is_datetime: bool) -> pd.DatetimeIndex:
    """
    Return datetime timestamps for a context series.

    For `DatetimeIndex` series the original index is returned. For
    `RangeIndex` series a synthetic daily `DatetimeIndex` starting at
    2000-01-01 is created so that TabICL's requirement for datetime
    timestamps is satisfied.

    Parameters
    ----------
    series : pandas Series
        The context series.
    is_datetime : bool
        Whether the series has a `DatetimeIndex`.

    Returns
    -------
    timestamps : pandas DatetimeIndex
        Datetime timestamps aligned with the series values.
    """
    if is_datetime:
        return series.index

    return pd.date_range("2000-01-01", periods=len(series), freq="D")
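The `RangeIndex` fallback boils down to one `pd.date_range` call, sketched here on a toy integer-indexed series:

```python
import pandas as pd

# An integer-indexed (RangeIndex) series has no timestamps of its own, so a
# synthetic daily DatetimeIndex starting at 2000-01-01 is generated for it.
series = pd.Series([10.0, 11.0, 12.0])  # default RangeIndex

timestamps = pd.date_range("2000-01-01", periods=len(series), freq="D")
print(timestamps[0].date(), timestamps[-1].date())  # 2000-01-01 2000-01-03
```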
Return datetime timestamps for the forecast horizon.
For DatetimeIndex series the horizon is appended at the inferred
frequency. For RangeIndex series the synthetic daily timeline
(2000-01-01 + len(context) days) is extended by steps days.
Parameters:
Name
Type
Description
Default
series
pandas Series
The context series (used to determine the end timestamp and
frequency).
required
steps
int
Number of steps ahead.
required
is_datetime
bool
Whether the series has a DatetimeIndex.
required
Returns:
Name
Type
Description
timestamps
pandas DatetimeIndex
Datetime timestamps for the steps forecast steps.
def _get_future_timestamps(
    self, series: pd.Series, steps: int, is_datetime: bool
) -> pd.DatetimeIndex:
    """
    Return datetime timestamps for the forecast horizon.

    For `DatetimeIndex` series the horizon is appended at the inferred
    frequency. For `RangeIndex` series the synthetic daily timeline
    (2000-01-01 + len(context) days) is extended by `steps` days.

    Parameters
    ----------
    series : pandas Series
        The context series (used to determine the end timestamp and
        frequency).
    steps : int
        Number of steps ahead.
    is_datetime : bool
        Whether the series has a `DatetimeIndex`.

    Returns
    -------
    timestamps : pandas DatetimeIndex
        Datetime timestamps for the `steps` forecast steps.
    """
    if is_datetime:
        freq = series.index.freq
        if freq is None:
            freq = pd.tseries.frequencies.to_offset(pd.infer_freq(series.index))
        timestamps = pd.date_range(
            start=series.index[-1] + freq,
            periods=steps,
            freq=freq,
        )
    else:
        n = len(series)
        timestamps = pd.date_range(
            start=pd.Timestamp("2000-01-01") + pd.Timedelta(days=n),
            periods=steps,
            freq="D",
        )

    return timestamps
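For the `DatetimeIndex` branch, the horizon starts one frequency step after the last context timestamp. A sketch with a monthly index:

```python
import pandas as pd

# Monthly (month-start) context index: Jan through May 2024.
context_index = pd.date_range("2024-01-01", periods=5, freq="MS")
freq = context_index.freq  # <MonthBegin>

# The forecast horizon begins one step after the last context timestamp.
future = pd.date_range(start=context_index[-1] + freq, periods=3, freq=freq)
print(list(future.strftime("%Y-%m-%d")))  # ['2024-06-01', '2024-07-01', '2024-08-01']
```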
def _build_context_df(
    self,
    series_names: list,
    context: dict[str, pd.Series],
    context_exog: dict[str, pd.DataFrame | None] | None,
    is_datetime: bool,
) -> pd.DataFrame:
    """
    Build a long-format context DataFrame expected by TabICL.

    Each series' observations become rows with `item_id`, `timestamp`,
    `target`, and optional exogenous covariate columns.

    Parameters
    ----------
    series_names : list
        Ordered list of series names.
    context : dict pandas Series
        Per-series context windows.
    context_exog : dict or None
        Per-series historical exogenous variables.
    is_datetime : bool
        Whether the series have a `DatetimeIndex`.

    Returns
    -------
    context_df : pandas DataFrame
        Long-format DataFrame with columns `item_id`, `timestamp`,
        `target`, and any exogenous columns.
    """
    context_df = []
    for name in series_names:
        series = context[name]
        n = len(series)
        part = pd.DataFrame(
            {
                "item_id": np.full(n, name),
                "timestamp": np.asarray(self._get_timestamps(series, is_datetime)),
                "target": series.to_numpy(dtype=float),
            }
        )
        exog_entry = (
            context_exog.get(name) if context_exog is not None else None
        )
        if exog_entry is not None:
            part = pd.concat([part, exog_entry.reset_index(drop=True)], axis=1)
        context_df.append(part)

    context_df = pd.concat(context_df, ignore_index=True)

    return context_df
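The long-format layout can be sketched without the adapter: each series contributes one row per observation, and the per-series frames are stacked (toy data, no exogenous columns):

```python
import numpy as np
import pandas as pd

# Two toy context series with 2 observations each.
context = {
    "series_1": pd.Series([1.0, 2.0], index=pd.date_range("2024-01-01", periods=2, freq="D")),
    "series_2": pd.Series([5.0, 6.0], index=pd.date_range("2024-01-01", periods=2, freq="D")),
}

# Stack into the long format TabICL expects: item_id / timestamp / target.
parts = []
for name, series in context.items():
    parts.append(
        pd.DataFrame(
            {
                "item_id": np.full(len(series), name),
                "timestamp": np.asarray(series.index),
                "target": series.to_numpy(dtype=float),
            }
        )
    )
context_df = pd.concat(parts, ignore_index=True)

print(context_df.shape)           # (4, 3)
print(list(context_df.columns))   # ['item_id', 'timestamp', 'target']
```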
def _build_future_df(
    self,
    series_names: list,
    context: dict[str, pd.Series],
    exog: dict[str, pd.DataFrame | None] | None,
    steps: int,
    is_datetime: bool,
) -> pd.DataFrame:
    """
    Build a long-format future DataFrame expected by TabICL.

    Each series' forecast horizon becomes rows with `item_id`,
    `timestamp`, and optional future exogenous covariate columns.

    Parameters
    ----------
    series_names : list
        Ordered list of series names.
    context : dict pandas Series
        Per-series context windows (used to derive future timestamps).
    exog : dict or None
        Per-series future exogenous variables covering the forecast
        horizon.
    steps : int
        Number of steps ahead.
    is_datetime : bool
        Whether the series have a `DatetimeIndex`.

    Returns
    -------
    future_df : pandas DataFrame
        Long-format DataFrame with columns `item_id`, `timestamp`, and
        any future exogenous columns.
    """
    future_df = []
    for name in series_names:
        series = context[name]
        part = pd.DataFrame(
            {
                "item_id": np.full(steps, name),
                "timestamp": np.asarray(
                    self._get_future_timestamps(series, steps, is_datetime)
                ),
            }
        )
        future_exog = exog.get(name) if exog is not None else None
        if future_exog is not None:
            part = pd.concat([part, future_exog.reset_index(drop=True)], axis=1)
        future_df.append(part)

    future_df = pd.concat(future_df, ignore_index=True)

    return future_df