Replies: 1 comment
-
To elaborate a bit: my understanding from the FINN-R paper is that PE always multiplies the number of BRAMs, whereas in the Bin-Packing paper, PE and SIMD both scale the number of BRAMs equally. But maybe I'm just extrapolating too much?
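A quick numeric sketch of the two readings (the 512-deep x 36-wide BRAM geometry is a hypothetical stand-in for one RAMB36 aspect ratio, and the helper names are mine, not from either paper):

```python
import math

# Hypothetical BRAM geometry (one aspect ratio of a Xilinx RAMB36:
# 512 words deep, 36 bits wide). Real primitives offer several aspect
# ratios; these constants are assumptions, not values from either paper.
BRAM_WIDTH = 36
BRAM_DEPTH = 512

def brams_per_pe_split(pe, simd, w, wmem):
    # FINN-R reading: one weight memory per PE, so PE multiplies the
    # count and only SIMD*W gets packed into each memory's width.
    return pe * math.ceil(simd * w / BRAM_WIDTH) * math.ceil(wmem / BRAM_DEPTH)

def brams_single_wide(pe, simd, w, wmem):
    # Width-only reading of the Bin-Packing description: one memory of
    # width PE*SIMD*W, so PE and SIMD are interchangeable in the count.
    return math.ceil(pe * simd * w / BRAM_WIDTH) * math.ceil(wmem / BRAM_DEPTH)

# Two foldings with the same PE*SIMD product (so the depth WMEM is the
# same), 4-bit weights, WMEM = 512 words.
for pe, simd in [(4, 2), (2, 4)]:
    print(f"PE={pe}, SIMD={simd}: "
          f"per-PE split = {brams_per_pe_split(pe, simd, 4, 512)} BRAMs, "
          f"single wide memory = {brams_single_wide(pe, simd, 4, 512)} BRAM(s)")
```

Under the per-PE split, swapping PE and SIMD changes the count (4 vs. 2 BRAMs here) because each PE pays its own ceil() on the width; under the single-wide-memory reading, both foldings cost the same 1 BRAM.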
-
Hi,
In the FINN-R paper, the formula for calculating the number of BRAMs required by the weight memory of a given layer is:
[formula image from the paper, not reproduced here]
It explains that "This memory volume is split into separate memories, one for each processing element."
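Reconstructing the formula's shape from that description (my reading, not a verbatim quote from the paper): each PE gets its own memory of width SIMD * W and depth WMEM, which would give

PE * ceil((SIMD * W) / BRAM_width) * ceil(WMEM / BRAM_depth)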
However, in the "Evolutionary Bin Packing for Memory-Efficient Dataflow Inference Acceleration on FPGA" paper, this is explained:
[excerpt from the paper, not reproduced here]
If PE * SIMD * W is the memory word width, then shouldn't the number of BRAMs required to implement this just be ceil((PE * SIMD * W) / BRAM_width) if we only consider the width dimension for simplicity?
Or does the Bin Packing paper just make a simplification, without getting into the FINN-specific implementation detail that each PE requires a separate memory? I am confused; could someone kindly explain the exact behavior?
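To make the question concrete with hypothetical numbers (PE = 4, SIMD = 2, W = 4, BRAM_width = 36): the width-only count would be ceil((4 * 2 * 4) / 36) = ceil(32 / 36) = 1 BRAM, whereas a per-PE split needs PE * ceil((SIMD * W) / BRAM_width) = 4 * ceil(8 / 36) = 4.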