Hello,
I wanted to build up some intuition about how the soft-DTW cost function works by playing around with some fake data. I started by comparing scaled and shifted cosines to see what numbers soft-DTW would produce. However, when I compared a cosine with itself, I couldn't understand why I was getting a negative result. Shouldn't the cumulative alignment cost of a signal with itself be zero? Could someone help me understand why the result is negative?
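For reference, my (possibly wrong) reading of the soft-DTW paper is that the hard minimum over alignments is replaced by the soft-min, `-gamma * log(sum_i exp(-a_i / gamma))`, which sits strictly below the hard minimum and can be negative even when every candidate cost is zero. A tiny sketch of what I mean (`soft_min` here is my own illustrative helper, not part of `sdtw`):

```python
import numpy as np

def soft_min(a, gamma=1.0):
    # Soft-min operator from the soft-DTW paper:
    # min^gamma(a) = -gamma * log(sum_i exp(-a_i / gamma)).
    a = np.asarray(a, dtype=float)
    return -gamma * np.log(np.sum(np.exp(-a / gamma)))

# Even when every candidate cost is exactly zero, the soft-min of
# n candidates is -gamma * log(n), which is negative for n > 1.
print(soft_min([0.0, 0.0, 0.0]))              # -log(3) ~= -1.0986
print(soft_min([0.0, 0.0, 0.0], gamma=0.01))  # -0.01 * log(3), nearly 0
```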
Also, when I compared two identically shifted cosines, one scaled by 0.5 and the other by 2.0, the two scores were very different from each other. Why does amplitude scaling have such a strong effect on the score? Should I be normalizing my data before using soft-DTW?
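For concreteness, this is the kind of per-series z-normalization I have in mind (a minimal sketch; `z_normalize` is my own helper, not an `sdtw` utility):

```python
import numpy as np

def z_normalize(x):
    # Rescale a 1-D signal to zero mean and unit variance
    # (my own helper for illustration, not an sdtw utility).
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

t = np.arange(100)
# A positive rescaling drops out entirely after z-normalization,
# so the 0.5x and 2.0x cosines become the same series.
a = z_normalize(0.5 * np.cos(t + 1))
b = z_normalize(2.0 * np.cos(t + 1))
print(np.allclose(a, b))  # True
```

After this step the 0.5x and 2.0x cosines are the same series, so any remaining score difference would have to come from the shift rather than the amplitude.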
I tried the following:

```python
import numpy as np
from sdtw import SoftDTW
from sdtw.distance import SquaredEuclidean

t = np.arange(100)
X = np.cos(t).reshape(-1, 1)            # reference cosine
Y = 0.5 * np.cos(t + 1).reshape(-1, 1)  # shifted, scaled down
Z = 2.0 * np.cos(t + 1).reshape(-1, 1)  # same shift, scaled up

# Pairwise squared-Euclidean cost matrices
Dxx = SquaredEuclidean(X, X)
Dxy = SquaredEuclidean(X, Y)
Dxz = SquaredEuclidean(X, Z)

sdtw_xx = SoftDTW(Dxx, gamma=1.0)
sdtw_xy = SoftDTW(Dxy, gamma=1.0)
sdtw_xz = SoftDTW(Dxz, gamma=1.0)

print(sdtw_xx.compute())
print(sdtw_xy.compute())
print(sdtw_xz.compute())
```
I got the following output:

```
-104.37094339998296
-106.85481189044374
-17.672483911503427
```
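For completeness, here is the sanity check I was planning to run next: if the negative self-comparison score really comes from the soft-min smoothing, it should approach the hard-DTW value of 0 as gamma shrinks (same `sdtw` API as in my snippet above; I haven't run this yet, so I can't vouch for the exact numbers):

```python
import numpy as np
from sdtw import SoftDTW
from sdtw.distance import SquaredEuclidean

t = np.arange(100)
X = np.cos(t).reshape(-1, 1)

# Self-comparison score for shrinking gamma; if the negativity is the
# soft-min smoothing, these values should rise toward 0.
for gamma in [1.0, 0.1, 0.01]:
    print(gamma, SoftDTW(SquaredEuclidean(X, X), gamma=gamma).compute())
```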