High MAE on METR-LA #29
violet1108 asked this question in Q&A (Unanswered)
Replies: 1 comment
-
Try tuning hyperparameters.
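A minimal sketch of what such tuning might look like. `train_and_eval` is a hypothetical stand-in for one training run of `main.py`; here it just scores a dummy surface so the sketch is runnable, and the grid is centered on the defaults used in the question (lr=0.001, droprate=0.5):

```python
import itertools

def train_and_eval(lr, droprate):
    """Hypothetical stand-in: run one training cycle with these settings
    and return the validation MAE. Replace with the project's own loop."""
    # Dummy surface whose minimum sits at lr=5e-4, droprate=0.3.
    return abs(lr - 5e-4) * 1000 + abs(droprate - 0.3)

def grid_search():
    best_mae, best_cfg = float("inf"), None
    # Small grid around the question's defaults.
    for lr, dr in itertools.product([1e-3, 5e-4, 1e-4], [0.3, 0.5]):
        mae = train_and_eval(lr, dr)
        if mae < best_mae:
            best_mae, best_cfg = mae, (lr, dr)
    return best_cfg

print(grid_search())  # → (0.0005, 0.3) on the dummy surface
```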
-
I ran main.py with the following training config:
Namespace(enable_cuda=True, seed=42, dataset='metr-la', n_his=12, n_pred=3, time_intvl=5, Kt=3, stblock_num=2, act_func='glu', Ks=3, graph_conv_type='graph_conv', gso_type='sym_norm_lap', enable_bias=True, droprate=0.5, lr=0.001, weight_decay_rate=0.001, batch_size=32, epochs=1000, opt='adamw', step_size=10, gamma=0.95, patience=10)
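For reference, the horizon implied by this config is worth spelling out, since published METR-LA numbers are tabulated per horizon. With `n_pred=3` steps of `time_intvl=5` minutes, the model consumes 60 minutes of history and predicts 15 minutes ahead:

```python
# Input window and forecast horizon implied by the config above.
n_his, n_pred, time_intvl = 12, 3, 5
history_minutes = n_his * time_intvl   # 60 minutes of input
horizon_minutes = n_pred * time_intvl  # 15-minute prediction horizon
print(history_minutes, horizon_minutes)  # → 60 15
```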
But the result is not good; I got the following output:
Epoch: 088 | Lr: 0.00066342043128906215 | Train loss: 0.209980 | Val loss: 0.259182 | GPU occupy: 610.627072 MiB
EarlyStopping counter: 10 out of 10
Early stopping
Dataset metr-la | Test loss 0.250559 | MAE 4.598410 | RMSE 9.115868 | WMAPE 0.09052190
In the GraphWaveNet paper, the STGCN model achieves an MAE of 2.88 on the METR-LA dataset, but this code yields 4.598. Is something wrong? Thank you for answering my questions.
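One thing worth checking (an assumption about the gap, not a confirmed cause): GraphWaveNet-style METR-LA benchmarks report a *masked* MAE that excludes the zero entries marking missing sensor readings, while a plain MAE averages those zeros in and can come out much higher. A sketch of the two metrics in NumPy:

```python
import numpy as np

def mae(pred, true):
    # Plain MAE: averages over every entry, including readings stored as 0.
    return np.abs(pred - true).mean()

def masked_mae(pred, true, null_val=0.0):
    # Masked MAE: entries equal to null_val (missing readings) are excluded,
    # as in GraphWaveNet-style evaluation.
    mask = true != null_val
    return np.abs(pred - true)[mask].mean()

# Toy example: two missing readings (zeros) inflate the plain MAE.
true = np.array([60.0, 0.0, 55.0, 0.0])
pred = np.array([58.0, 50.0, 57.0, 45.0])
print(mae(pred, true))         # → 24.75 (zeros dominate)
print(masked_mae(pred, true))  # → 2.0 (only real readings counted)
```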