Hello developers,

I am getting the following error:
```
2025/11/16 10:50:05 I - New best test loss found (1.09429e-01), checkpointing
2025/11/16 10:50:05 I - Sharding callback duration: 24 microseconds
Traceback (most recent call last):
  File "/home/2222/micromamba/envs/grace/bin/gracemaker", line 47, in <module>
    main(sys.argv[1:], strategy=strategy, strategy_desc=strategy_desc)
  File "/home/2222/micromamba/envs/grace/lib/python3.11/site-packages/tensorpotential/cli/gracemaker.py", line 558, in main
    train_adam(
  File "/home/2222/micromamba/envs/grace/lib/python3.11/site-packages/tensorpotential/cli/train.py", line 579, in train_adam
    callback_list.on_epoch_end(tp.epoch, epoch_end_metrics)
  File "/home/2222/micromamba/envs/grace/lib/python3.11/site-packages/keras/src/callbacks/callback_list.py", line 171, in on_epoch_end
    callback.on_epoch_end(epoch, logs)
  File "/home/2222/micromamba/envs/grace/lib/python3.11/site-packages/tensorpotential/cli/train_callbacks.py", line 136, in on_epoch_end
    if self.monitor_op(current, self.best):
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not callable
```
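For context, here is a minimal, self-contained sketch of how I read the failure. The class below is hypothetical (it is not the actual `train_callbacks.py` code), but it shows how a checkpoint-style callback whose `monitor_op` never gets assigned raises exactly this `TypeError` in `on_epoch_end`:

```python
import numpy as np


class BestMetricCheckpoint:
    """Hypothetical stand-in for a best-metric checkpoint callback."""

    def __init__(self, monitor: str):
        self.monitor = monitor
        self.best = None
        self.monitor_op = None  # stays None unless the metric name is recognized
        if monitor.endswith("loss"):
            # "smaller is better" metrics
            self.monitor_op = np.less
            self.best = np.inf

    def on_epoch_end(self, epoch: int, logs: dict):
        current = logs.get(self.monitor)
        # If monitor_op was never assigned, this is the crashing line:
        # TypeError: 'NoneType' object is not callable
        if self.monitor_op(current, self.best):
            self.best = current
            print(f"epoch {epoch}: new best {current:.5e}, checkpointing")


cb = BestMetricCheckpoint(monitor="test_metric")  # name not recognized above
cb.on_epoch_end(1, {"test_metric": 0.109429})     # -> TypeError, as in my run
```

So my guess is that something in my run leaves `self.monitor_op` unset, but I cannot tell from my config what triggers it.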
This is my input YAML file:
"""
seed: 1
cutoff: 6.0
data:
filename: train_data.extxyz
test_filename: test_data.extxyz
#reference_energy: {Cl:}
reference_energy: {Al: -1.23, Li: -3.56}
save_dataset: False
stress_units: eV/A3 # eV/A3 (default) or GPa or kbar or -kbar
potential:
If elements not provided - determined automatically from data
preset: GRACE_1LAYER # LINEAR, FS, GRACE_1LAYER, GRACE_2LAYER
For custom model from model.py::custom_model
custom: model.custom_model
keywords-arguments that will be passed to preset or custom function
kwargs: {lmax: 3, n_rad_max: 20, max_order: 3, n_mlp_dens: 10}
#shift: False # True/False
scale: True # False/True or float
fit:
loss: {
energy: { weight: 1, type: huber , delta: 0.01 },
forces: { weight: 100, type: huber , delta: 0.01 },
stress: { weight: 0.1, type: huber , delta: 0.01 },
}
maxiter: 500 # Number of epochs / iterations
optimizer: Adam
opt_params: {
learning_rate: 0.01,
amsgrad: True,
use_ema: True,
ema_momentum: 0.99,
weight_decay: null,
clipvalue: 1.0,
}
for learning-rate reduction
learning_rate_reduction: { patience: 5, factor: 0.98, min: 5.0e-4, stop_at_min: True, resume_lr: True, }
optimizer: L-BFGS-B
opt_params: { "maxcor": 100, "maxls": 20 }
needed for low-energy tier metrics and for "convex_hull"-based distance of energy-based weighting scheme
compute_convex_hull: False
batch_size: 4 # Important hyperparameter for Adam and irrelevant (but must be) for L-BFGS-B
test_batch_size: 16 # test batch size (optional)
jit_compile: True
eval_init_stats: False # to evaluate initial metrics
train_max_n_buckets: 10 # max number of buckets (group of batches of same shape) in train set
test_max_n_buckets: 5 # same for test
checkpoint_freq: 2 # frequency for REGULAR checkpoints.
save_all_regular_checkpoints: True # to store ALL regular checkpoints
progressbar: True # show batch-evaluation progress bar
train_shuffle: True # shuffle train batches on every epoch
"""