
Commit 48ab064

Merge pull request #760 from WenjieDu/dev
Release v1.0
2 parents 4f4e9b4 + 2a7577d commit 48ab064

File tree

23 files changed: +913 -34 lines changed

.github/workflows/testing_ci.yml

Lines changed: 1 addition & 1 deletion

@@ -60,7 +60,7 @@ jobs:
 run: |
   pip install torch-geometric torch-scatter torch-sparse -f "https://data.pyg.org/whl/torch-${{ matrix.pytorch-version }}+cpu.html"
   pip install pypots[dev]
-  pip install numpy==1.24 # many libs not compatible with numpy 2.0. Note 3.12 requests for numpy>=2.0
+  pip install numpy==1.26.4 # many libs not compatible with numpy 2.0. Note 3.12 requests for numpy>=2.0
   pip install pandas==1.5 # fix pandas version to avoid installing pandas 2.0, the same reason with numpy
   python_site_path=`python -c "import site; print(site.getsitepackages()[0])"`
   echo "python site-packages path: $python_site_path"

README.md

Lines changed: 2 additions & 2 deletions

@@ -142,7 +142,7 @@ The paper references and links are all listed at the bottom of this file.
 | Neural Net | Koopa🧑‍🔧[^29] || | | | | `2023 - NeurIPS` |
 | Neural Net | Crossformer🧑‍🔧[^16] || | | || `2023 - ICLR` |
 | Neural Net | TimesNet[^14] |||| || `2023 - ICLR` |
-| Neural Net | PatchTST🧑‍🔧[^18] || | | || `2023 - ICLR` |
+| Neural Net | PatchTST🧑‍🔧[^18] || | | || `2023 - ICLR` |
 | Neural Net | ETSformer🧑‍🔧[^19] || | | || `2023 - ICLR` |
 | Neural Net | MICN🧑‍🔧[^27] ||| | | | `2023 - ICLR` |
 | Neural Net | DLinear🧑‍🔧[^17] ||| | || `2023 - AAAI` |
@@ -157,7 +157,7 @@ The paper references and links are all listed at the bottom of this file.
 | Neural Net | Pyraformer🧑‍🔧[^26] || | | || `2022 - ICLR` |
 | Neural Net | Raindrop[^5] | | || | | `2022 - ICLR` |
 | Neural Net | FEDformer🧑‍🔧[^20] || | | || `2022 - ICML` |
-| Neural Net | Autoformer🧑‍🔧[^15] || | | || `2021 - NeurIPS` |
+| Neural Net | Autoformer🧑‍🔧[^15] || | | || `2021 - NeurIPS` |
 | Neural Net | CSDI[^12] ||| | | | `2021 - NeurIPS` |
 | Neural Net | Informer🧑‍🔧[^21] || | | || `2021 - AAAI` |
 | Neural Net | US-GAN[^10] || | | | | `2021 - AAAI` |

README_zh.md

Lines changed: 2 additions & 2 deletions

@@ -129,7 +129,7 @@ PyPOTS当前支持多变量POTS数据的插补, 预测, 分类, 聚类以及异
 | Neural Net | Koopa🧑‍🔧[^29] || | | | | `2023 - NeurIPS` |
 | Neural Net | Crossformer🧑‍🔧[^16] || | | || `2023 - ICLR` |
 | Neural Net | TimesNet[^14] |||| || `2023 - ICLR` |
-| Neural Net | PatchTST🧑‍🔧[^18] || | | || `2023 - ICLR` |
+| Neural Net | PatchTST🧑‍🔧[^18] || | | || `2023 - ICLR` |
 | Neural Net | ETSformer🧑‍🔧[^19] || | | || `2023 - ICLR` |
 | Neural Net | MICN🧑‍🔧[^27] ||| | | | `2023 - ICLR` |
 | Neural Net | DLinear🧑‍🔧[^17] ||| | || `2023 - AAAI` |
@@ -144,7 +144,7 @@ PyPOTS当前支持多变量POTS数据的插补, 预测, 分类, 聚类以及异
 | Neural Net | Pyraformer🧑‍🔧[^26] || | | || `2022 - ICLR` |
 | Neural Net | Raindrop[^5] | | || | | `2022 - ICLR` |
 | Neural Net | FEDformer🧑‍🔧[^20] || | | || `2022 - ICML` |
-| Neural Net | Autoformer🧑‍🔧[^15] || | | || `2021 - NeurIPS` |
+| Neural Net | Autoformer🧑‍🔧[^15] || | | || `2021 - NeurIPS` |
 | Neural Net | CSDI[^12] ||| | | | `2021 - NeurIPS` |
 | Neural Net | Informer🧑‍🔧[^21] || | | || `2021 - AAAI` |
 | Neural Net | US-GAN[^10] || | | | | `2021 - AAAI` |

docs/index.rst

Lines changed: 2 additions & 2 deletions

@@ -171,7 +171,7 @@ The paper references are all listed at the bottom of this readme file.
 +----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
 | Neural Net | TimesNet :cite:`wu2023timesnet` |||| || ``2023 - ICLR`` |
 +----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
-| Neural Net | PatchTST🧑‍🔧 :cite:`nie2023patchtst` || | | || ``2023 - ICLR`` |
+| Neural Net | PatchTST🧑‍🔧 :cite:`nie2023patchtst` || | | || ``2023 - ICLR`` |
 +----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
 | Neural Net | ETSformer🧑‍🔧 :cite:`woo2023etsformer` || | | | | ``2023 - ICLR`` |
 +----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
@@ -201,7 +201,7 @@ The paper references are all listed at the bottom of this readme file.
 +----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
 | Neural Net | FEDformer🧑‍🔧 :cite:`zhou2022fedformer` || | | | | ``2022 - ICML`` |
 +----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
-| Neural Net | Autoformer🧑‍🔧 :cite:`wu2021autoformer` || | | || ``2021 - NeurIPS`` |
+| Neural Net | Autoformer🧑‍🔧 :cite:`wu2021autoformer` || | | || ``2021 - NeurIPS`` |
 +----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
 | Neural Net | CSDI :cite:`tashiro2021csdi` ||| | | | ``2021 - NeurIPS`` |
 +----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+

docs/pypots.classification.rst

Lines changed: 18 additions & 0 deletions

@@ -37,6 +37,15 @@ pypots.classification.itransformer
    :show-inheritance:
    :inherited-members:

+pypots.classification.patchtst
+----------------------------------
+
+.. automodule:: pypots.classification.patchtst
+   :members:
+   :undoc-members:
+   :show-inheritance:
+   :inherited-members:
+
 pypots.classification.csai
 ----------------------------------

@@ -55,6 +64,15 @@ pypots.classification.timesnet
    :show-inheritance:
    :inherited-members:

+pypots.classification.autoformer
+----------------------------------
+
+.. automodule:: pypots.classification.autoformer
+   :members:
+   :undoc-members:
+   :show-inheritance:
+   :inherited-members:
+
 pypots.classification.ts2vec
 ----------------------------------

pypots/classification/__init__.py

Lines changed: 4 additions & 0 deletions

@@ -5,10 +5,12 @@
 # Created by Wenjie Du <[email protected]>
 # License: BSD-3-Clause

+from .autoformer import Autoformer
 from .brits import BRITS
 from .csai import CSAI
 from .grud import GRUD
 from .itransformer import iTransformer
+from .patchtst import PatchTST
 from .raindrop import Raindrop
 from .saits import SAITS
 from .tefn import TEFN
@@ -25,4 +27,6 @@
     "TimesNet",
     "iTransformer",
     "TEFN",
+    "PatchTST",
+    "Autoformer",
 ]
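
With these exports in place, both new classifiers can be imported straight from pypots.classification. Below is a minimal, hypothetical usage sketch: the model hyperparameters mirror the _Autoformer core shown further down in this diff, but the high-level wrapper signature, the dict-style dataset format, and the classify() call are assumptions based on the usual PyPOTS classifier interface and may differ from the released API.

# Hypothetical usage sketch, not the exact released API.
import numpy as np
from pypots.classification import Autoformer  # export added in this commit

# Toy partially-observed dataset: 32 samples, 24 steps, 5 features, ~10% missing (NaN).
X = np.random.randn(32, 24, 5)
X[np.random.rand(*X.shape) < 0.1] = np.nan
y = np.random.randint(0, 2, size=32)

# Hyperparameters mirror the _Autoformer core below; training options are assumed defaults.
model = Autoformer(
    n_steps=24,
    n_features=5,
    n_classes=2,
    n_layers=2,
    d_model=64,
    n_heads=4,
    d_ffn=128,
    factor=3,
    moving_avg_window_size=5,
    dropout=0.1,
)
model.fit({"X": X, "y": y})        # PyPOTS models are trained on dict-style datasets
labels = model.classify({"X": X})  # classify() is the usual PyPOTS classifier call (assumed here)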
pypots/classification/autoformer/__init__.py

Lines changed: 24 additions & 0 deletions

@@ -0,0 +1,24 @@
+"""
+The package of the partially-observed time-series classification model Autoformer.
+
+
+Refer to the paper
+`Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long.
+Autoformer: Decomposition transformers with autocorrelation for long-term series forecasting.
+In Advances in Neural Information Processing Systems, volume 34, pages 22419–22430. Curran Associates, Inc., 2021.
+<https://proceedings.neurips.cc/paper/2021/file/bcc0d400288793e8bdcd7c19a8ac0c2b-Paper.pdf>`_
+
+Notes
+-----
+This implementation is inspired by the official one https://github.com/thuml/Autoformer
+
+"""
+
+# Created by Wenjie Du <[email protected]>
+# License: BSD-3-Clause
+
+from .model import Autoformer
+
+__all__ = [
+    "Autoformer",
+]
Lines changed: 90 additions & 0 deletions

@@ -0,0 +1,90 @@
+"""
+
+"""
+
+# Created by Wenjie Du <[email protected]>
+# License: BSD-3-Clause
+
+import torch
+import torch.nn as nn
+
+from ...nn.modules import ModelCore
+from ...nn.modules.loss import Criterion
+from ...nn.modules.autoformer import AutoformerEncoder
+from ...nn.modules.saits import SaitsEmbedding
+
+
+class _Autoformer(ModelCore):
+
+    def __init__(
+        self,
+        n_classes: int,
+        n_steps: int,
+        n_features: int,
+        n_layers: int,
+        d_model: int,
+        n_heads: int,
+        d_ffn: int,
+        factor: int,
+        moving_avg_window_size: int,
+        dropout: float,
+        training_loss: Criterion,
+        validation_metric: Criterion,
+    ):
+        super().__init__()
+
+        self.n_steps = n_steps
+        self.d_model = d_model
+        self.n_layers = n_layers
+        self.training_loss = training_loss
+        if validation_metric.__class__.__name__ == "Criterion":
+            # in this case, we need validation_metric.lower_better in _train_model() so only pass Criterion()
+            # we use training_loss as validation_metric for concrete calculation process
+            self.validation_metric = self.training_loss
+        else:
+            self.validation_metric = validation_metric
+
+        self.saits_embedding = SaitsEmbedding(
+            n_features * 2,
+            d_model,
+            with_pos=False,
+            dropout=dropout,
+        )
+        self.encoder = AutoformerEncoder(
+            n_layers,
+            d_model,
+            n_heads,
+            d_ffn,
+            factor,
+            moving_avg_window_size,
+            dropout,
+            "relu",
+        )
+        self.projection = nn.Linear(d_model * n_steps, n_classes)
+
+    def forward(
+        self,
+        inputs: dict,
+        calc_criterion: bool = False,
+    ) -> dict:
+        X, missing_mask = inputs["X"], inputs["missing_mask"]
+        enc_out = self.saits_embedding(X, missing_mask)
+
+        # Autoformer encoder processing
+        enc_out, attns = self.encoder(enc_out)
+        logits = self.projection(enc_out.reshape(-1, self.n_steps * self.d_model))
+        classification_proba = torch.softmax(logits, dim=1)
+
+        results = {
+            "classification_proba": classification_proba,
+            "logits": logits,
+        }
+
+        if calc_criterion:
+            if self.training:  # if in the training mode (the training stage), return loss result from training_loss
+                # `loss` is always the item for backward propagating to update the model
+                results["loss"] = self.training_loss(logits, inputs["y"])
+            else:  # if in the eval mode (the validation stage), return metric result from validation_metric
+                results["metric"] = self.validation_metric(logits, inputs["y"])
+
+        return results
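
For orientation, the core above concatenates X with its missing mask (hence the SaitsEmbedding input width of n_features * 2), encodes the sequence with an AutoformerEncoder, flattens all time steps, and projects to class logits. The self-contained sketch below reproduces that data flow with a plain torch.nn.TransformerEncoder standing in for the PyPOTS-internal AutoformerEncoder; it only illustrates the shapes, not the Autoformer decomposition and auto-correlation blocks.

# Self-contained shape sketch of the same data flow, with a vanilla TransformerEncoder
# in place of PyPOTS' AutoformerEncoder.
import torch
import torch.nn as nn


class TinyTSClassifier(nn.Module):
    def __init__(self, n_steps, n_features, n_classes, d_model=64, n_heads=4, n_layers=2, dropout=0.1):
        super().__init__()
        # SAITS-style embedding: observed values and missing mask are concatenated
        # on the feature axis, hence the `n_features * 2` input width.
        self.embedding = nn.Linear(n_features * 2, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dropout=dropout, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Flatten all time steps before the classification head, as in the core above.
        self.projection = nn.Linear(d_model * n_steps, n_classes)
        self.n_steps, self.d_model = n_steps, d_model

    def forward(self, X, missing_mask):
        # X, missing_mask: (batch, n_steps, n_features); missing values assumed zero-filled upstream.
        enc_in = self.embedding(torch.cat([X, missing_mask], dim=-1))
        enc_out = self.encoder(enc_in)                                   # (batch, n_steps, d_model)
        logits = self.projection(enc_out.reshape(-1, self.n_steps * self.d_model))
        return torch.softmax(logits, dim=1)                              # (batch, n_classes)


# Quick shape check with dummy data.
model = TinyTSClassifier(n_steps=24, n_features=5, n_classes=2)
X = torch.zeros(8, 24, 5)
mask = torch.ones(8, 24, 5)
print(model(X, mask).shape)  # torch.Size([8, 2])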
