I. The authors propose a method called Autowarp that learns a warping distance for comparing unlabeled time series using a sequence-to-sequence autoencoder, rather than relying on hand-crafted metrics, which may not generalize across different datasets.
II. Autowarp trains an autoencoder on the time series data and then learns a warping distance over the original series that closely matches the Euclidean distance between their latent encodings.
III. The learned Autowarp distance outperforms existing hand-crafted metrics on two benchmark time series datasets, obtaining a lower SSPD error on the Taxicab Mobility dataset and a higher EDR accuracy on the Australian Sign Language dataset.
Autowarp: Learning a Warping Distance from Unlabeled Time Series Using Sequence-to-Sequence Autoencoders
Abubakar Abid (a12d@stanford.edu) ● James Zou (jamesz@stanford.edu)
NeurIPS 2018
Takeaway: We can learn a good distance metric from unlabeled time-series data instead of hand-crafting one, and the learned distance often performs better than default metrics like DTW!
Background

• Unlabeled time series are everywhere! Examples: unlabeled data from EEG sensors, GPS trajectories, or disease trajectories from patients.
• We need a metric to compare these unlabeled time series.
• Problem: the sources of noise in time series can be very complex (e.g. resampling), making it difficult to find a single metric that works well across all time series data.
• What's the best distance metric for GPS trajectories collected from cabs in San Francisco? [figure: map of taxicab GPS traces]
• Let's learn the best metric from the time series themselves!

Methodology

• Intuition: Similar trajectories should have similar latent representations in an autoencoder trained to predict the next time point. The warping distance family acts as regularization.
• Autowarp consists of 2 steps (a minimal code sketch of both steps appears below, after the Results section):
  I. Feed the trajectories into a sequence-to-sequence autoencoder.
  II. Find a warping distance over the original trajectories that is most similar to the Euclidean distance between their latent representations.
• We implement this efficiently and with certain theoretical guarantees; see our paper at https://arxiv.org/abs/1810.10107 for details.

Results

• We compared Autowarp to hand-crafted metrics on the datasets their original authors had used to validate them:
  1. SSPD on the Taxicab Mobility dataset [figure: comparison plot; lower is better]
  2. EDR on the Australian Sign Language dataset [figure: comparison plot; higher is better]
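Since both the takeaway and Step II refer to warping distances, here is a generic textbook sketch of dynamic time warping (DTW), the canonical member of that family, for 1-D series. This is illustrative code, not the paper's implementation:

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D series (textbook version).

    cost[i, j] holds the cheapest alignment of a[:i] with b[:j]; each step
    either matches a point from both series or "warps" by repeating a point
    from one of them, so series of different lengths remain comparable.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1],  # match both points
                                 cost[i - 1, j],      # warp: repeat b[j-1]
                                 cost[i, j - 1])      # warp: repeat a[i-1]
    return cost[n, m]

# A phase-shifted, resampled sine stays close to the original under DTW,
# even though the two series do not have the same length.
print(dtw(np.sin(np.linspace(0, 6, 50)), np.sin(np.linspace(0.5, 6.5, 60))))
```

Because points may be aligned many-to-one, DTW remains small for series that are shifted or resampled versions of each other, which a fixed pointwise comparison would penalize heavily.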
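And a minimal end-to-end sketch of the two-step procedure from the Methodology section. Everything here is a simplification for illustration: encode is a hypothetical stand-in for the trained sequence-to-sequence autoencoder's encoder, the warping family is reduced to a single made-up clipping parameter eps, and agreement with latent distances is measured by plain correlation rather than the criterion used in the paper (see the arXiv link above for the actual algorithm):

```python
import numpy as np
from itertools import combinations

def encode(ts, dim=8):
    """Stand-in for Step I (hypothetical). In Autowarp the latent code comes
    from a trained sequence-to-sequence autoencoder; here we just subsample
    the series so the sketch runs end to end without any training."""
    idx = np.linspace(0, len(ts) - 1, dim).astype(int)
    return ts[idx]

def warp(a, b, eps):
    """A hypothetical one-parameter warping family: DTW whose pointwise cost
    is clipped at eps, sliding between DTW-like (large eps) and more
    edit-distance-like (small eps) behavior."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = min(abs(a[i - 1] - b[j - 1]), eps)
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    return cost[n, m]

def autowarp_sketch(series, eps_grid=(0.1, 0.3, 1.0, 3.0)):
    """Step II: pick the member of the warping family whose pairwise
    distances agree best with Euclidean distances between latent codes."""
    latents = [encode(ts) for ts in series]
    pairs = list(combinations(range(len(series)), 2))
    latent_d = np.array([np.linalg.norm(latents[i] - latents[j]) for i, j in pairs])
    best_eps, best_agreement = None, -np.inf
    for eps in eps_grid:
        warped_d = np.array([warp(series[i], series[j], eps) for i, j in pairs])
        agreement = np.corrcoef(warped_d, latent_d)[0, 1]
        if agreement > best_agreement:
            best_eps, best_agreement = eps, agreement
    return best_eps

# Toy usage: noisy sine waves of varying length, mimicking resampled trajectories.
rng = np.random.default_rng(0)
series = []
for _ in range(10):
    n = int(rng.integers(40, 80))
    series.append(np.sin(np.linspace(0, 6, n)) + 0.1 * rng.standard_normal(n))
print(autowarp_sketch(series))
```

The grid search stands in for the paper's continuous optimization over the warping family; the key idea it preserves is that the latent space, not any labels, supplies the supervision for choosing the distance.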