The scarcity of labeled data for intelligent diagnosis of non-linear technical systems is a common obstacle to developing robust and reliable real-world applications. Several deep learning approaches have been developed to address this challenge, including self-supervised learning, representation learning, and transfer learning. Due largely to their powerful attention mechanisms, transformers excel at capturing long-term dependencies across multichannel and multi-modal signals in sequential data, making them suitable candidates for time series modeling. Despite this potential, studies applying transformers to diagnostic functions, especially to signal reconstruction through representation learning, remain limited. This paper aims to narrow this gap by identifying the requirements and potential of transformer self-attention mechanisms for developing auto-associative inference engines that learn exclusively from healthy behavioral data. We apply a transformer backbone to signal reconstruction using simulated data from a simplified powertrain. Feedback from these experiments, together with the reviewed evidence from the literature, leads us to conclude that both autoencoder and autoregressive approaches benefit from transformer backbones.
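To make the auto-associative reconstruction idea concrete, the sketch below shows one plausible way to train a transformer encoder to reconstruct multichannel healthy signals and to score deviations by reconstruction error. It is a minimal illustration, not the paper's implementation: the PyTorch framing, the SignalReconstructor module, and all dimensions and hyperparameters are assumptions introduced here.

```python
# Illustrative sketch of an auto-associative transformer reconstructor
# trained only on healthy multichannel signals (assumed architecture and
# hyperparameters; not the implementation described in the paper).
import torch
import torch.nn as nn

class SignalReconstructor(nn.Module):
    def __init__(self, n_channels=8, d_model=64, n_heads=4, n_layers=3, max_len=256):
        super().__init__()
        self.input_proj = nn.Linear(n_channels, d_model)                # per-time-step embedding
        self.pos_emb = nn.Parameter(torch.zeros(1, max_len, d_model))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.output_proj = nn.Linear(d_model, n_channels)               # back to signal space

    def forward(self, x):  # x: (batch, time, channels)
        h = self.input_proj(x) + self.pos_emb[:, : x.size(1)]
        h = self.encoder(h)            # self-attention captures long-range dependencies
        return self.output_proj(h)     # reconstructed signals

model = SignalReconstructor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# healthy_batch stands in for windows of simulated healthy powertrain signals.
healthy_batch = torch.randn(32, 256, 8)      # (batch, time, channels), placeholder data
for _ in range(10):                          # illustrative training loop
    optimizer.zero_grad()
    recon = model(healthy_batch)
    loss = criterion(recon, healthy_batch)   # auto-associative reconstruction objective
    loss.backward()
    optimizer.step()

# At inference, an elevated reconstruction error flags behavior that deviates
# from the learned healthy regime.
with torch.no_grad():
    test_window = torch.randn(1, 256, 8)
    score = (model(test_window) - test_window).pow(2).mean()
```

An autoregressive variant would instead predict masked or future samples from past context rather than reproducing the full window, but the training objective still uses healthy data only.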