Hello, I'd like to ask: after training, why do the test output images look like this? Each input produces about three outputs; the first one looks like six images stitched together, and the rest are all black-and-white. I only made a small change to the model file, so why does this happen?
I tested with the author's original code, and this result appears whenever the VGG16 pretrained weights are not used. But once I modify the model, I can no longer load the pretrained weights. Is there any way around this?
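For reference, one common workaround is to load only the pretrained tensors whose names and shapes still match the modified model. This is a minimal sketch, not code from this repo, and it assumes the project is PyTorch-based; `MyModifiedNet` and the checkpoint filename are hypothetical placeholders.

```python
# Sketch: load the subset of pretrained VGG16 weights that still fit a modified model.
import torch

def load_matching_weights(model, pretrained_path):
    pretrained = torch.load(pretrained_path, map_location="cpu")
    model_state = model.state_dict()
    # Keep only parameters that exist in the modified model with the same shape.
    filtered = {k: v for k, v in pretrained.items()
                if k in model_state and v.shape == model_state[k].shape}
    model_state.update(filtered)
    model.load_state_dict(model_state)
    print(f"loaded {len(filtered)}/{len(model_state)} tensors from the pretrained checkpoint")
    return model

# Hypothetical usage:
# model = MyModifiedNet()
# model = load_matching_weights(model, "vgg16-397923af.pth")
```

Layers you added or renamed stay randomly initialized, so they still need training, but the untouched backbone layers keep their pretrained values.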
If modifying the backbone breaks it, I suggest leaving the backbone alone and modifying the decoder after it instead. Another idea is to build a two-branch network: one branch is VGG16, and the other branch you can design however you like; the features from the two branches interact and are fused for prediction. That may also let you change the feature-extraction network as you planned (see the sketch below).
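A rough sketch of that two-branch idea, assuming a PyTorch project: branch A keeps the stock torchvision VGG16 features so the ImageNet weights load cleanly, branch B is a free-form custom extractor, and the two feature maps are concatenated and fused before a small decoder head. Layer names, channel counts, and the fusion scheme are illustrative, not taken from this repo.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class TwoBranchNet(nn.Module):
    def __init__(self, out_channels=1):
        super().__init__()
        # Branch A: unmodified VGG16 backbone, pretrained weights stay usable.
        self.vgg = vgg16(weights=VGG16_Weights.DEFAULT).features  # 512 ch, 1/32 resolution
        # Branch B: custom branch, free to change without breaking pretraining.
        self.custom = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 512, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Fusion: concatenate the two 512-channel maps, then a light decoder head.
        self.fuse = nn.Conv2d(512 + 512, 512, 1)
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, out_channels, 1),
        )

    def forward(self, x):
        fa = self.vgg(x)      # pretrained features
        fb = self.custom(x)   # custom features (same spatial size for 224x224 input)
        fused = self.fuse(torch.cat([fa, fb], dim=1))
        return self.decoder(fused)

# Quick shape check:
# net = TwoBranchNet()
# print(net(torch.randn(1, 3, 224, 224)).shape)
```

The key point is that only the custom branch and decoder are trained from scratch, so the pretrained VGG16 weights are preserved exactly as in the original code.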
OK, thanks. I have one more question: what do the two avg values stand for, and why do they keep decreasing during training?
Hi, OP, could you share a contact method? I really can't get the environment set up; could I contact you to walk through the correct setup steps?