Please provide as much of the following information as possible when asking a question:
Basic information

- Operating system: Ubuntu 20.04
- Python version: 3.8.10
- PyTorch version: 1.10.2
- bert4torch version: 0.2.7.post2
- Pretrained model: bert-base
Core code

```python
adversarial_train = AdversarialTraining('fgm')
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler, adversarial_train = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler, adversarial_train
)  # apart from adding the adversarial-training object to prepare, no other code was changed
model.fit(train_dataloader,
          epochs=num_epochs,
          steps_per_epoch=10,
          callbacks=[evaluator, AccelerateCallback(accelerator)],
          verbose=verbose)
```
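For reference, FGM-style adversarial training adds a gradient-direction perturbation to the embedding weights, runs a second forward/backward pass, and then restores the original weights. A minimal sketch of that pattern in plain PyTorch, assuming a BERT-style model whose embedding parameters contain `word_embeddings` in their names (a generic illustration, not bert4torch's internal implementation; the class and parameter names here are assumptions):

```python
import torch

class FGM:
    """Fast Gradient Method: perturb the embedding matrix along its gradient."""
    def __init__(self, model, emb_name='word_embeddings', epsilon=1.0):
        self.model = model
        self.emb_name = emb_name   # substring identifying the embedding parameters (assumed name)
        self.epsilon = epsilon     # perturbation radius
        self.backup = {}

    def attack(self):
        # Called after the normal backward(): add r = eps * g / ||g|| to the embeddings.
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0 and not torch.isnan(norm):
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self):
        # Called after the adversarial backward(): put the original weights back.
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}
```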
Output information

Training invariably hangs at the evaluate stage of epoch 1; GPU memory stays allocated and utilization sits at 100%. After removing the adversarial training, everything runs normally again. The same problem occurs with rdrop, fgm, pgd, etc.
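One plausible mechanism for this kind of hang (an assumption, not something confirmed in this thread): under accelerate/DDP, every `backward()` participates in a gradient all-reduce, and adversarial training doubles the number of backward passes per step. If the ranks fall out of step on how many collectives they issue, for example around the transition into evaluation, each process blocks inside a collective that never completes, which would look exactly like the symptom above. A hedged sketch of the per-step pattern, reusing the hypothetical `FGM` class from the sketch above:

```python
# One training step with FGM under multi-GPU data parallelism
# (compute_loss, fgm, optimizer, batch are hypothetical names).
loss = compute_loss(model, batch)
loss.backward()                       # backward #1: DDP fires a gradient all-reduce
fgm.attack()                          # perturb the embeddings along the gradient
loss_adv = compute_loss(model, batch)
loss_adv.backward()                   # backward #2: a second all-reduce on every rank
fgm.restore()                         # remove the perturbation
optimizer.step()
optimizer.zero_grad()
# All ranks must run both backward passes in lockstep; if one rank skips a
# pass (e.g. around the evaluate boundary), the others block inside the
# collective forever, with GPU utilization pinned at 100%.
```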
Leaving this here as a pending optimization item so I don't forget hahaha, will take a look when I have time.