Fully Adaptive Multi-Scale Spatial-Temporal Recurrent Networks for Traffic Flow Prediction
| Published in | IEEE Transactions on Artificial Intelligence, pp. 1 - 14 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | IEEE, 2025 |
| ISSN | 2691-4581 |
| DOI | 10.1109/TAI.2025.3610568 |
| Summary: | Traffic flow prediction is one of the most fundamental tasks of intelligent transportation systems. Complex and dynamic spatial-temporal dependencies make traffic flow prediction quite challenging. Although existing spatial-temporal graph neural networks have shown prominent performance, they often rely on a static graph with the same set of edge weights across different time points, which greatly limits the representational power of spatial graph structures, and they lack the capability to capture spatial-temporal patterns at different time scales, from a single time point to multiple time points. In this paper, we propose a fully adaptive Multi-Scale Spatial-Temporal Recurrent Network for traffic flow prediction, namely MSSTRN, which consists of two different recurrent neural networks: a single-step gated recurrent unit and a multi-step gated recurrent unit, to fully capture the complex spatial-temporal information in traffic data across different time steps. We integrate node embeddings with temporal position information at multiple scales to construct fully adaptive graphs and propose adaptive position graph convolution networks for capturing spatial dependencies in specific temporal contexts. Moreover, we propose a spatial-temporal position-aware attention mechanism that unifies adaptive graph convolutions and self-attention for joint spatial-temporal dependency learning. Extensive experiments on four real-world traffic datasets show that our model outperforms 23 baselines by significant margins in prediction accuracy. |
|---|---|
| ISSN: | 2691-4581 |
| DOI: | 10.1109/TAI.2025.3610568 |
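
As a rough illustration of the fully adaptive graph construction described in the summary, the sketch below derives a time-specific adjacency matrix from learnable node embeddings combined with a temporal position embedding and applies it in a single graph convolution step. This is not the authors' implementation; all names, dimensions, and the softmax(ReLU(·)) similarity are assumptions made for illustration only.

```python
# Minimal sketch (assumed, not the MSSTRN code) of an adaptive, time-dependent
# adjacency built from node embeddings plus a temporal position embedding.
import torch
import torch.nn.functional as F

num_nodes, embed_dim, steps = 307, 16, 12  # assumed sizes for illustration

node_emb = torch.nn.Parameter(torch.randn(num_nodes, embed_dim))  # learnable node embeddings
time_emb = torch.nn.Parameter(torch.randn(steps, embed_dim))      # temporal position embeddings

def adaptive_adjacency(t: int) -> torch.Tensor:
    """Build a time-specific adjacency matrix from node + temporal embeddings."""
    e_t = node_emb + time_emb[t]      # inject the temporal position into each node embedding
    scores = F.relu(e_t @ e_t.T)      # pairwise node similarity at time step t
    return F.softmax(scores, dim=1)   # row-normalized adaptive adjacency

def adaptive_graph_conv(x: torch.Tensor, t: int, weight: torch.Tensor) -> torch.Tensor:
    """One adaptive position graph convolution step: A_t @ X @ W."""
    return adaptive_adjacency(t) @ x @ weight

# Example: propagate 2-dimensional node features at time step t = 3
x = torch.randn(num_nodes, 2)
w = torch.randn(2, 8)
out = adaptive_graph_conv(x, t=3, weight=w)  # shape: (307, 8)
```

Because the adjacency is recomputed from embeddings that depend on the time step, edge weights can differ across time points, which is the limitation of static graphs that the abstract highlights.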