Using external activation functions #981
Comments
Hi, thanks for opening this issue. You mention observed vs. expected values. Where are the expected values coming from?
Hi @Giuseppe5, I apologize for the delay. I had to look into the basics of quantizing a model before I could explain my doubts here. Use case: train a spiking CNN with a Leaky or LIF activation function in PyTorch and SNNTorch, then use the INT8 weights of the trained model in my VHDL design.
From your previous reply, I understand that there is no problem in using SNNTorch along with Brevitas. However, I have been trying to calculate some channel outputs manually and compare them with the output of the quantized model, and this is where I see a difference between observed and expected values.
I have a few assumptions about why this is happening. The quant and dequant stubs between each layer in a fake-quantized model would not allow such a comparison: since this is a fake-quantized model, I would have to generate a true INT8 model to compare against my manual calculations. If that is indeed the problem, can I generate a true INT8 model on which I can run an inference pass that uses only INT8 values and no FP32 values?
Note: the input to the spiking conv model consists of 0s and 1s (spike or no spike), which makes the manual calculation easy; the MAC reduces to a pure accumulation. Thank you!
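A minimal sketch of what "fake quantization" means arithmetically may help frame the comparison. This is an illustrative round trip, not Brevitas internals, and the weights, spikes, and scale below are hypothetical placeholder values:

```python
# Illustrative sketch of the fake-quantization round trip (not Brevitas internals):
# a fake-quantized tensor is stored in FP32, but every value lies on an INT8 grid
# defined by a scale and zero point. A hand-computed INT8 result only matches if
# the same scale/zero-point and the same rounding/clamping are applied.
import torch

def fake_quantize(x: torch.Tensor, scale: float, zero_point: int = 0,
                  qmin: int = -128, qmax: int = 127) -> torch.Tensor:
    """Quantize to the INT8 grid and immediately dequantize back to FP32."""
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)  # integer representation
    return (q - zero_point) * scale                                   # dequantized FP32 value

# With binary spike inputs (0/1), the convolution reduces to summing the integer
# weights selected by the spikes; the FP32 output of the fake-quantized layer should
# equal that integer sum times the weight scale, before the next quant step rescales it.
weights_int = torch.tensor([12, -7, 30], dtype=torch.int32)  # hypothetical INT8 weights
spikes = torch.tensor([1, 0, 1], dtype=torch.int32)          # hypothetical binary input
scale = 0.05                                                 # hypothetical weight scale
acc_int = int((weights_int * spikes).sum())                  # pure integer accumulation
print(acc_int, acc_int * scale)                              # integer result vs. FP32 equivalent
```

If the manually accumulated integers are compared directly against the FP32 activations of the fake-quantized model without dividing out the scale (and accounting for the requantization step between layers), an apparent mismatch is expected even when the underlying integer arithmetic is identical.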
I still have a few questions about the setup.
If this is still an issue, please feel free to re-open and we'd be more than happy to help!
I have a spiking convolutional neural network. It uses the Leaky (leaky integrate-and-fire) neuron from the SNNTorch library as the activation function. Is it possible to use activation functions like those from SNNTorch together with Brevitas? An example architecture is given below.
Is it possible to use Brevitas along with such custom activation functions?
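The original architecture snippet is not reproduced in this thread. As an illustration only, a minimal sketch of combining Brevitas quantized layers with an SNNTorch Leaky neuron might look like the following; the layer sizes, bit widths, and the beta value are placeholder assumptions, not the poster's model:

```python
# Illustrative sketch (not the poster's original architecture) of mixing
# Brevitas quantized conv layers with SNNTorch Leaky neurons as activations.
import torch
import torch.nn as nn
import snntorch as snn
import brevitas.nn as qnn

class QuantSpikingNet(nn.Module):
    def __init__(self, beta: float = 0.9):
        super().__init__()
        self.conv1 = qnn.QuantConv2d(1, 8, kernel_size=3, weight_bit_width=8)
        self.lif1 = snn.Leaky(beta=beta)      # SNNTorch leaky integrate-and-fire neuron
        self.conv2 = qnn.QuantConv2d(8, 16, kernel_size=3, weight_bit_width=8)
        self.lif2 = snn.Leaky(beta=beta)

    def forward(self, x):
        mem1 = self.lif1.init_leaky()         # reset membrane potentials
        mem2 = self.lif2.init_leaky()
        cur1 = self.conv1(x)
        spk1, mem1 = self.lif1(cur1, mem1)    # spikes are ordinary 0/1 FP32 tensors
        cur2 = self.conv2(spk1)
        spk2, mem2 = self.lif2(cur2, mem2)
        return spk2, mem2
```

Because the SNNTorch neuron just consumes and produces ordinary tensors, it sits between Brevitas layers like any other PyTorch activation; the question in the thread is whether this affects the quantization results.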
The purpose of quantizing my model is to extract the INT8 weights and use them in the simulation of a VHDL design I have written. I have recorded the INT8 weights of the quantized spiking convolutional layer (conv2). However, I observe a difference between the values seen after the activation function and the expected values. I would like to know whether Brevitas supports using custom activation functions and, if so, whether any additional configuration is needed.
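For the weight-extraction step, a hedged sketch of the usual pattern is shown below. The exact API may differ between Brevitas versions; the pattern assumed here is that a quantized layer exposes `quant_weight()` returning a QuantTensor with `.int()`, `.scale`, and `.zero_point`, and `model.conv2` is a hypothetical layer name:

```python
# Hedged sketch of pulling the integer weight representation out of a Brevitas
# layer for export to a VHDL testbench. API details are assumptions and may
# vary between Brevitas versions.
import torch

def export_int8_weights(layer) -> torch.Tensor:
    qw = layer.quant_weight()          # QuantTensor: fake-quantized FP32 values + metadata
    w_int = qw.int()                   # integer (INT8) representation of the weights
    print("scale:", qw.scale, "zero point:", qw.zero_point)
    return w_int

# w_int = export_int8_weights(model.conv2)    # e.g. the quantized spiking conv layer
# torch.save(w_int, "conv2_int8_weights.pt")  # or dump as text for the VHDL simulation
```

Keeping the scale and zero point alongside the integer weights is what lets the hardware-side accumulation be related back to the FP32 activations the fake-quantized model produces.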