Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
Free Energy Mixer accepted to ICLR 2026
Free Energy Mixer has been accepted to ICLR 2026.
Links: OpenReview page · Code
ZeroS accepted to NeurIPS 2025 (Spotlight)
ZeroS: Zero‑Sum Linear Attention for Efficient Transformers has been accepted to NeurIPS 2025 (Spotlight).
Links: NeurIPS page · OpenReview PDF
Two papers accepted to ICML 2025
Two papers, WAVE and SAMoVAR, have been accepted to ICML 2025 (Poster).
ICTSP accepted to ICLR 2025
In‑context Time Series Predictor has been accepted to ICLR 2025 (Poster).
Links: OpenReview page · arXiv · Slides
CATS accepted to ICML 2024
CATS: Enhancing Multivariate Time Series Forecasting by Constructing Auxiliary Time Series as Exogenous Variables has been accepted to ICML 2024 (Poster).
Links: PMLR page · PDF · arXiv · Code
ARM accepted to ICLR 2024
ARM: Refining Multivariate Forecasting with Adaptive Temporal‑Contextual Learning has been accepted to ICLR 2024 (Poster).
Links: ICLR proceedings page · OpenReview PDF · arXiv
Portfolio
Publications
ARM: Refining Multivariate Forecasting with Adaptive Temporal‑Contextual Learning
Published in ICLR 2024 (Poster), 2024
Presents ARM with AUEL, Random Dropping, and multi‑kernel local smoothing to better capture series‑wise patterns and inter‑series dependencies for long‑term multivariate TSF.
CATS: Enhancing Multivariate Time Series Forecasting by Constructing Auxiliary Time Series as Exogenous Variables
Published in ICML 2024 (Poster), PMLR 235: 32990–33006, 2024
Constructs Auxiliary Time Series (ATS) as exogenous inputs to capture inter‑series relations; identifies continuity, sparsity, and variability principles; improves multivariate TSF even with simple predictors.
In‑context Time Series Predictor
Published in ICLR 2025 (Poster), 2025
Reformulates TSF as in‑context learning by constructing tokens of (lookback, future) task pairs, enabling Transformers to adapt predictors from context without parameter updates.
WAVE: Weighted Autoregressive Varying Gate for Time Series Forecasting
Published in ICML 2025 (Poster), PMLR 267: 40464–40490, 2025
Adds ARMA structure to autoregressive attention via a weighted varying gate, decoupling long‑range and local effects and improving TSF quality without increasing asymptotic complexity.
Linear Transformers as VAR Models: Aligning Autoregressive Attention Mechanisms with Autoregressive Forecasting
Published in ICML 2025 (Poster), PMLR 267: 40848–40867, 2025
Shows that a linear attention layer can be interpreted as a dynamic VAR; proposes SAMoVAR to realign multi‑layer Transformers with autoregressive forecasting for improved interpretability and accuracy.
ZeroS: Zero‑Sum Linear Attention for Efficient Transformers
Published in NeurIPS 2025 (Spotlight), 2025
Introduces Zero‑Sum Linear Attention (ZeroS), which removes the uniform zero‑order term and reweights residuals to enable stable positive/negative attention weights, allowing contrastive operations within a single layer while retaining O(N) complexity.
Free Energy Mixer
Published in ICLR 2026, 2026
Introduces Free Energy Mixer (FEM), which interprets (q,k) attention scores as a prior and performs a log-sum-exp free-energy readout to reweight values at the channel level, enabling a smooth transition from mean aggregation to selective channel-wise retrieval without increasing asymptotic complexity.
