MTM: A Multi-Scale Token Mixing Transformer for Irregular Multivariate Time Series Classification

This is the PyTorch and Lightning implementation of our paper: MTM: A Multi-Scale Token Mixing Transformer for Irregular Multivariate Time Series Classification.

If you find this repo useful, please consider citing our paper:

@inproceedings{zhong2025mtm,
    author    = {Zhong, Shuhan and Zhuo, Weipeng and Song, Sizhe and Li, Guanyao and Yu, Zhongyi and Chan, S.-H. Gary},
    title     = {A Multi-Scale Token Mixing Transformer for Irregular Multivariate Time Series Classification},
    booktitle = {Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2},
    year      = {2025},
    month     = {August},
    doi       = {10.1145/3711896.3737058},
}

Table of Contents

  • Abstract
  • Dependency Setup
  • Dataset Preparation
  • Run MTM
  • Baselines

Abstract

Irregular multivariate time series (IMTS) is characterized by the lack of synchronized observations across its different channels. In this paper, we point out that this channel-wise asynchrony can lead to poor channel-wise modeling of existing deep learning methods. To overcome this limitation, we propose MTM, a multi-scale token mixing transformer for the classification of IMTS. We find that the channel-wise asynchrony can be alleviated by down-sampling the time series to coarser timescales, and propose to incorporate a masked concat pooling in MTM that gradually down-samples IMTS to enhance the channel-wise attention modules. Meanwhile, we propose a novel channel-wise token mixing mechanism which proactively chooses important tokens from one channel and mixes them with other channels, to further boost the channel-wise learning of our model. Through extensive experiments on real-world datasets and comparison with state-of-the-art methods, we demonstrate that MTM consistently achieves the best performance on all the benchmarks, with improvements of up to 3.8% in AUPRC for classification.

(Figure: MTM architecture overview)
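To make the multi-scale intuition concrete, below is a minimal, hypothetical sketch of masked down-sampling for an IMTS tensor. The function name, the pooling factor, and the mean-over-observed pooling rule are illustrative assumptions, not the exact masked concat pooling used in MTM.

    # Illustrative sketch only: NOT the exact pooling operator used in MTM.
    import torch

    def masked_downsample(values: torch.Tensor, mask: torch.Tensor, factor: int = 2):
        """Down-sample an irregular series to a coarser timescale.

        values: (B, T, C) observed values, zeros where unobserved
        mask:   (B, T, C) 1 where a value was observed, 0 otherwise
        Returns pooled values and mask of shape (B, T // factor, C).
        """
        B, T, C = values.shape
        T_trim = (T // factor) * factor                     # drop a ragged tail, if any
        v = values[:, :T_trim].reshape(B, -1, factor, C)
        m = mask[:, :T_trim].reshape(B, -1, factor, C).float()
        pooled_mask = m.amax(dim=2)                         # window counts as observed if any step is
        pooled_vals = (v * m).sum(dim=2) / m.sum(dim=2).clamp(min=1)  # mean over observed steps
        return pooled_vals, pooled_mask

For example, with factor=2 a channel observed only at odd steps and a channel observed only at even steps share observed windows after pooling, which is the sense in which coarser timescales alleviate channel-wise asynchrony.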

Dependency Setup

  1. Download and install conda (https://conda.io/projects/conda/en/latest/user-guide/install/index.html)
  2. Create a conda virtual environment
    conda create -n mtm python=3.11
    conda activate mtm
  3. Install the correct version of PyTorch from the PyTorch official website
  4. Install other required packages:
    pip install -r requirements.txt
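After installation, you can sanity-check the environment with a short script like the one below (illustrative; depending on the version pinned in requirements.txt, the Lightning package may be importable as lightning or pytorch_lightning):

    # Quick environment sanity check (illustrative).
    import torch

    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())

    try:
        import lightning as L            # newer Lightning releases
        print("Lightning:", L.__version__)
    except ImportError:
        import pytorch_lightning as pl   # older releases
        print("PyTorch Lightning:", pl.__version__)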

Dataset Preparation

Please download the preprocessed P12, P19, and PAM datasets from Raindrop's GitHub repository.

Please structure the directory as follows:

path/to/MTM/
└─dataset/
  └─raindrop/
    ├─P12data/
    ├─P19data/
    └─PAMAP2data/
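As a quick check that the files are in place, you can run a snippet like the following from the repository root (the folder names simply mirror the tree above; whether main.py requires exactly these names is an assumption):

    # Verify that the downloaded Raindrop datasets are where the tree above expects them.
    from pathlib import Path

    root = Path("dataset/raindrop")
    for name in ("P12data", "P19data", "PAMAP2data"):
        path = root / name
        print(f"{path}: {'found' if path.is_dir() else 'MISSING'}")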

Run MTM

Use python main.py to run the experiments, and pass -h or --help for the full list of arguments.

Example training commands:

  • Run all benchmarks
    python main.py classification
  • Run specific benchmarks
    python main.py classification --dataset p19 pam --subset 1 2 3

Logs, results, and model checkpoints will be saved under path/to/MTM/logs/.
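To queue several runs back to back, you can wrap the CLI in a small driver script such as the sketch below. It only uses the arguments shown above; the p12 dataset key is an assumption based on the dataset names.

    # Run several benchmarks in sequence by invoking the CLI shown above.
    import subprocess

    runs = [
        ["--dataset", "p12"],
        ["--dataset", "p19", "--subset", "1", "2", "3"],
        ["--dataset", "pam"],
    ]
    for extra in runs:
        subprocess.run(["python", "main.py", "classification", *extra], check=True)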

Baselines
