
Commit 51136e2

Update publications
1 parent 1ee8534 commit 51136e2

3 files changed: 17 additions & 2 deletions


Lines changed: 15 additions & 0 deletions (new file)
@@ -0,0 +1,15 @@
+---
+title: "Chronos-2: From Univariate to Universal Forecasting"
+date: 2025-10-01
+publishDate: 2025-10-01
+authors: ["Abdul Fatir Ansari, Oleksandr Shchur, Jaris Küken, **Andreas Auer**, Boran Han, Pedro Mercado, Syama Sundar Rangapuram, Huibin Shen, Lorenzo Stella, Xiyuan Zhang, Mononito Goswami, Shubham Kapoor, Danielle C. Maddix, Pablo Guerron, Tony Hu, Junming Yin, Nick Erickson, Prateek Mutalik Desai, Hao Wang, Huzefa Rangwala, George Karypis, Yuyang Wang, Michael Bohlke-Schneider"]
+publication_types: ["2"]
+abstract: "Pretrained time series models have enabled inference-only forecasting systems that produce accurate predictions without task-specific training. However, existing approaches largely focus on univariate forecasting, limiting their applicability in real-world scenarios where multivariate data and covariates play a crucial role. We present Chronos-2, a pretrained model capable of handling univariate, multivariate, and covariate-informed forecasting tasks in a zero-shot manner. Chronos-2 employs a group attention mechanism that facilitates in-context learning (ICL) through efficient information sharing across multiple time series within a group, which may represent sets of related series, variates of a multivariate series, or targets and covariates in a forecasting task. These general capabilities are achieved through training on synthetic datasets that impose diverse multivariate structures on univariate series. Chronos-2 delivers state-of-the-art performance across three comprehensive benchmarks: fev-bench, GIFT-Eval, and Chronos Benchmark II. On fev-bench, which emphasizes multivariate and covariate-informed forecasting, Chronos-2's universal ICL capabilities lead to substantial improvements over existing models. On tasks involving covariates, it consistently outperforms baselines by a wide margin. Case studies in the energy and retail domains further highlight its practical advantages. The in-context learning capabilities of Chronos-2 establish it as a general-purpose forecasting model that can be used \"as is\" in real-world forecasting pipelines."
+featured: true
+publication: "Technical Report"
+links:
+- icon_pack: ai
+  icon: arxiv
+  name: Paper
+  url: 'https://arxiv.org/abs/2510.15821'
+---

content/publication/2025-tirex-workshop/index.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ authors: ["**Andreas Auer**, Patrick Podest, Daniel Klotz, Sebastian Böck, Gün
 publication_types: ["2"]
 abstract: "In-context learning, the ability of large language models to perform tasks using only examples provided in the prompt, has recently been adapted for time series forecasting. This paradigm enables zero-shot prediction, where past values serve as context for forecasting future values, making powerful forecasting tools accessible to non-experts and increasing the performance when training data are scarce. Most existing zero-shot forecasting approaches rely on transformer architectures, which, despite their success in language, often fall short of expectations in time series forecasting, where recurrent models like LSTMs frequently have the edge. Conversely, while LSTMs are well-suited for time series modeling due to their state-tracking capabilities, they lack strong in-context learning abilities. We introduce TiRex that closes this gap by leveraging xLSTM, an enhanced LSTM with competitive in-context learning skills. Unlike transformers, state-space models, or parallelizable RNNs such as RWKV, TiRex retains state-tracking, a critical property for long-horizon forecasting. To further facilitate its state-tracking ability, we propose a training-time masking strategy called CPM. TiRex sets a new state of the art in zero-shot time series forecasting on the HuggingFace benchmarks GiftEval and Chronos-ZS, outperforming significantly larger models including TabPFN-TS (Prior Labs), Chronos Bolt (Amazon), TimesFM (Google), and Moirai (Salesforce) across both short- and long-term forecasts."
 featured: true
-publication: "[Spotlight] Workshop on Foundation Models for Structured Data @ ICML 2025"
+publication: "Workshop on Foundation Models for Structured Data @ ICML 2025 [Spotlight - Oral]"
 links:
 - icon_pack: ai
   icon: arxiv

content/publication/2025-zs-classification/index.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ authors: ["**Andreas Auer**, Daniel Klotz, Sebastian Böck, Sepp Hochreiter"]
 publication_types: ["2"]
 abstract: "Recent research on time series foundation models has primarily focused on forecasting, leaving it unclear how generalizable their learned representations are. In this study, we examine whether frozen pre-trained forecasting models can provide effective representations for classification. To this end, we compare different representation extraction strategies and introduce two model-agnostic embedding augmentations. Our experiments show that the best forecasting models achieve classification accuracy that matches or even surpasses that of state-of-the-art models pre-trained specifically for classification. Moreover, we observe a positive correlation between forecasting and classification performance. These findings challenge the assumption that task-specific pre-training is necessary, and suggest that learning to forecast may provide a powerful route toward constructing general-purpose time series foundation models."
 featured: true
-publication: "Recent Advances in Time Series Foundation Models (BERT2S) @ NeurIPS 2025"
+publication: "Recent Advances in Time Series Foundation Models (BERT2S) @ NeurIPS 2025 [Spotlight - Oral]"
 links:
 - icon_pack: ai
   icon: arxiv
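
The front-matter fixes above (the stray `**` in the authors string, the quoting inside the abstract) are the kind of errors a small lint pass catches before committing. Below is a minimal sketch of such a check, assuming the usual Hugo layout `content/publication/*/index.md` and PyYAML; the required-field list is inferred from the entries in this commit, not from any official schema:

```python
# Sanity-check Hugo publication front matter (a sketch; the field list
# below is assumed from the entries in this commit).
from pathlib import Path

import yaml  # PyYAML

REQUIRED = ["title", "date", "authors", "publication_types", "abstract", "publication"]


def check(path: Path) -> list[str]:
    text = path.read_text(encoding="utf-8")
    # Front matter is the YAML block between the leading '---' fences.
    parts = text.split("---", 2)
    if len(parts) < 3:
        return ["missing front matter fences"]
    try:
        meta = yaml.safe_load(parts[1])
    except yaml.YAMLError as exc:
        # e.g. an unescaped inner double quote in a quoted abstract
        return [f"YAML parse error: {exc}"]
    if not isinstance(meta, dict):
        return ["front matter is not a mapping"]
    return [f"missing field: {key}" for key in REQUIRED if key not in meta]


if __name__ == "__main__":
    for page in sorted(Path("content/publication").glob("*/index.md")):
        for problem in check(page):
            print(f"{page}: {problem}")
```

Run from the site root; an unescaped inner quote then surfaces as a YAML parse error at lint time rather than as a broken publication page.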
