
Analytical FIFO sizing #1185


Open

wants to merge 17 commits into base: dev

Conversation

lstasytis

FIFO sizing is an extremely time-consuming process in terms of CPU cycles, since it currently requires RTL simulation of the model to determine FIFO depths by tracking its runtime behavior.

Currently, FINN uses two main approaches for FIFO sizing:

  • The global sizing algorithm (AutoFIFOSizingMethod.LARGEFIFO_RTLSIM), which incrementally tightens FIFO sizes while rerunning RTLSIM until the entire model reaches a steady state.
  • The characteristic-function-based approach (AutoFIFOSizingMethod.CHARACTERIZE), which RTL-simulates individual nodes and constructs a characteristic function for each one, then uses phase-shifting to determine an optimal FIFO size: the input characteristic function is shifted forward cycle by cycle until it reaches a steady state with the output stream.

Between these two options, the latter (characteristic-based sizing) can be dramatically sped up by computing the characteristic functions of individual nodes analytically, rather than by using RTLSIM.
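To make the phase-shifting idea concrete, here is a minimal sketch under simplifying assumptions (monotone cumulative token counts, a single input and output stream); the function name and array encoding are illustrative, not FINN's actual implementation:

```python
# Hypothetical sketch of characteristic-function FIFO sizing.
# prod[t] = tokens the producer has written to the stream by cycle t,
# cons[t] = tokens the consumer would have read by cycle t if never starved.

def size_fifo(prod, cons):
    """Shift the consumer's characteristic forward until the producer
    never has to stall, then return the maximum backlog as FIFO depth."""
    n = len(prod)
    for shift in range(n):
        # consumer starts 'shift' cycles late; before that it reads nothing
        shifted = [0] * shift + cons[: n - shift]
        # valid alignment: consumer never reads more tokens than produced
        if all(c <= p for c, p in zip(shifted, prod)):
            # FIFO must hold the worst-case backlog of unread tokens
            return max(p - c for p, c in zip(prod, shifted))
    return max(prod)  # fallback: buffer the entire stream

# toy example: producer emits 1 token/cycle, consumer reads in bursts
prod = [1, 2, 3, 4, 5, 6, 7, 8]
cons = [4, 4, 4, 8, 8, 8, 8, 8]
print(size_fifo(prod, cons))  # -> 4
```

The toy traces need a shift of 4 cycles before the consumer never overruns the producer, leaving a worst-case backlog of 4 tokens.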

This can be accomplished by manually reviewing the RTL (or HLS) code of each node and constructing a Python model function that reproduces only the loop behavior of the AXI-stream reads and writes, filling in the characteristic function array.
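As an illustration of what such a model function can look like, below is a sketch for an imaginary padding-like node; the node, its loop structure, and all names are hypothetical and only mimic the general idea of reproducing the stream access pattern:

```python
# Illustrative analytical characteristic model for an imaginary node
# that pads an H x W feature map to (H+2*pad) x (W+2*pad): it mirrors
# only the AXI-stream read/write pattern, not the actual arithmetic.

def derive_characteristic(h, w, pad=1):
    in_chr, out_chr = [], []
    reads = writes = 0
    for row in range(h + 2 * pad):
        for col in range(w + 2 * pad):
            is_pad = (
                row < pad or row >= h + pad or col < pad or col >= w + pad
            )
            if not is_pad:
                reads += 1   # real pixel consumed from the input stream
            writes += 1      # one padded pixel written every cycle
            in_chr.append(reads)
            out_chr.append(writes)
    return in_chr, out_chr

in_chr, out_chr = derive_characteristic(2, 2)
print(in_chr[-1], out_chr[-1])  # 4 real pixels read, 16 pixels written
```

The resulting cumulative arrays are exactly what the phase-shifting step consumes, so no RTL simulation of the node is needed.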

The PR adds these Python-based model functions to the custom_ops classes of the following nodes:

  • channelwise_op
  • convolutioninputgenerator
  • fmpadding
  • labelselect
  • matrixvectoractivation
  • pool
  • streamingdatawidthconverter (generalized variant, very conservative estimate)
  • streamingmaxpool
  • thresholding
  • vectorvectoractivation

Additionally, it modifies builder/build_dataflow_config.py and builder/build_dataflow_steps.py so that Vivado is not called in the FIFO sizing step unless a node has no characteristic function (in which case Vivado is called to characterize only that node). This is achieved by introducing a new 'ipgen_ignore' node attribute, which is set to true for all analytically characterized nodes once FIFO sizing starts and forces the FINN compiler to skip calling Vivado. The attribute is set back to false once analytic FIFO sizing is finished, so Vivado can be called again.
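A simplified sketch of how such a toggle can work, using plain dicts in place of real ONNX nodes; only the 'ipgen_ignore' attribute name comes from the PR, everything else here is hypothetical:

```python
# Sketch of the ipgen skip logic. 'ipgen_ignore' is the attribute name
# from the PR; the node representation and helper are illustrative.

def set_ipgen_ignore(model_nodes, value):
    """Toggle 'ipgen_ignore' on analytically characterized nodes so the
    compiler skips (or re-enables) Vivado calls for them."""
    for node in model_nodes:
        if node.get("has_analytic_characteristic", False):
            node["ipgen_ignore"] = value

nodes = [
    {"name": "MVAU_0", "has_analytic_characteristic": True},
    {"name": "CustomNode_0", "has_analytic_characteristic": False},
]

# before analytic FIFO sizing: skip Vivado for characterized nodes
set_ipgen_ignore(nodes, True)
print(nodes[0].get("ipgen_ignore"), nodes[1].get("ipgen_ignore"))  # True None

# after sizing finishes: reset so ipgen runs normally for all nodes
set_ipgen_ignore(nodes, False)
```

Nodes without an analytical characteristic are left untouched, so they still go through the normal Vivado characterization path.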

Improvements to be made:

  • The remaining nodes in FINN should be characterized as necessary.
  • There may exist parameter configurations of the convolutioninputgenerator and streamingmaxpool nodes for which the characteristic function is inaccurate, since an exhaustive search is difficult to automate.
  • Currently, the FIFO sizing test checks the analytical function for exact equality with the RTLSIM output. However, small latencies introduced at the start or end of the characteristic function by HLS do not change the final FIFO sizes, so the test should compare the characteristic functions in a more relaxed manner. This would then also allow an exhaustive test of all possible configurations for the nodes.
  • The characteristic FIFO sizing algorithm lacks support for branching networks. This is independent of the individual characteristic functions and should be addressed in the main FIFO sizing transformation (transformation/derive_characteristic.py).
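The relaxed comparison suggested above could look roughly like the following sketch, which accepts two traces as equivalent if one matches the other up to a small constant shift (all names and the tolerance scheme are illustrative, not part of the PR):

```python
# Sketch of a relaxed characteristic-function comparison: two cumulative
# traces are considered equivalent if one matches the other after
# shifting by at most `tol` cycles, absorbing small HLS-introduced
# start/end latencies that do not change the final FIFO sizes.

def traces_match(analytical, rtlsim, tol=2):
    n = min(len(analytical), len(rtlsim))
    for shift in range(-tol, tol + 1):
        ok = True
        for t in range(n):
            # compare against the shifted rtlsim trace, clamped at the ends
            s = min(max(t + shift, 0), n - 1)
            if analytical[t] != rtlsim[s]:
                ok = False
                break
        if ok:
            return True
    return False

exact = [0, 1, 2, 3, 4, 4]
delayed = [0, 0, 1, 2, 3, 4]   # one extra cycle of startup latency
print(traces_match(exact, delayed))  # True
```

An exhaustive per-node configuration sweep could then tolerate such benign offsets instead of failing on exact comparison.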

lstasytis and others added 6 commits September 16, 2024 14:22
…tgenerator, fmpadding, labelselect, matrixvectoractivation, pool, streamingdatawidthconverter (generalized variant, very conservative estimate), streamingmaxpool, thresholding, vectorvectoractivation
Signed-off-by: lstasytis <[email protected]>
@lstasytis lstasytis marked this pull request as ready for review October 15, 2024 16:11
@auphelia auphelia self-requested a review October 24, 2024 08:37
…ing and specific cases of streamingmaxpool and slidingwindow
Collaborator

@fpjentzsch fpjentzsch left a comment


I did not run the tests for thresholding & vvau yet.

In general we probably want to reduce the number of tests that go into the default suite. Currently the largest test-case counts are:

  • ConvInpGen: 20,736
  • Thresholding: 4,608
  • VVAU: 2,304

Maybe we can still include an extended test suite via pytest.fixture or pytest_generate_tests functions @auphelia?

"largefifo_rtlsim_cpp",
"characterize_analytic",
"characterize_rtl",
],
)
@pytest.mark.parametrize("topology", ["tfc", "cnv"])
def test_fifosizing_linear(method, topology):
Collaborator


After incorporating some of my other comments, this test fails for me in the cases [cnv, characterize_analytic] and [cnv, characterize_rtl] during "step_set_fifo_depths" with

Exception: Error in simulation! Takes too long to produce output. Consider setting the LIVENESS_THRESHOLD env.var. to a larger value.

and

%Warning: ./MVAU_hls_6_Matrix_Vector_Activate_Stream_Batch_p_ZL7threshs_0_ROM_AUTO_1R.dat:0: $readmem file not found

I don't have an explicit LIVENESS_THRESHOLD set, so it should default to 10k.

Does this PR depend on the new auto-folding PR or other PRs?
Could the "internal_decoupled" mode be the culprit (see my other comment)?

Author

@lstasytis lstasytis Jan 23, 2025


With the mvau fix now in, the simulation error does not show up anymore for me, however I still get the

$readmem file not found

warning as well, even though all the tests pass and I manually checked that each FIFO sizing strategy generates the same FIFO depths across the entire model. This warning is not present in the current dev build, so something is breaking, even though it does not affect the FIFO sizing. I'm a bit stumped about what is going on here. If the missing mem files are replicas of the weight stream, then for the purposes of FIFO sizing they are not really relevant, since they are parallel streams.

lstasytis added 2 commits January 23, 2025 12:16
adjusting for multiple output streams in duplicatestream
@fpjentzsch
Collaborator

Regarding moving the step_set_fifo_depths before codegen/ipgen (which you discussed before I believe @lstasytis @auphelia):
We had to fix this issue in FINN+ to allow node-by-node rtlsim after FIFO insertion: eki-project#85

Longterm, to aid design space exploration, we are thinking about splitting this step into a FIFO-Sizing and a FIFO-Insertion part. Or a "fast" (analytical FIFO-Sizing) and "slow" (old FIFO-Sizing and Insertion) part. Or simply integrating fast analytical FIFO-Sizing into the folding step (so it can be used for early resource estimation) and doing FIFO-Insertion as part of the backend.

@lstasytis
Author

Regarding moving the step_set_fifo_depths before codegen/ipgen (which you discussed before I believe @lstasytis @auphelia): We had to fix this issue in FINN+ to allow node-by-node rtlsim after FIFO insertion: eki-project#85

Longterm, to aid design space exploration, we are thinking about splitting this step into a FIFO-Sizing and a FIFO-Insertion part. Or a "fast" (analytical FIFO-Sizing) and "slow" (old FIFO-Sizing and Insertion) part. Or simply integrating fast analytical FIFO-Sizing into the folding step (so it can be used for early resource estimation) and doing FIFO-Insertion as part of the backend.

The issue with splitting the 'fast' and 'slow' flows is that they are not mutually exclusive: the generation of token access vectors (traces) is done on a per-node basis and uses either rtlsim or the tree model of a node, depending on a) what is available and b) what is chosen. This means there are going to be many situations where either:

a.) a new node is introduced
b.) a node is updated
c.) the model tree is determined to be inaccurate for sizing

In these situations it can be beneficial to perform the tree-based 'fast' flow for all nodes except those affected by the aforementioned cases.

What I would push for, assuming we integrate these traces further into FINN over time, is to decouple trace generation from its use for FIFO sizing. Right now I already have them as two separate transformations, both performed in step_set_fifo_depths, but it could make sense to split them entirely; a step_generate_traces could then use either the fast backend or the slow backend depending on what the FIFO sizing step needs (large_rtlsim etc. would require all nodes to be synthesized, while analytical sizing might, depending on a flag choosing whether trace generation is done with the tree models or with the simulations).

The traces have a lot of potential in FINN outside of FIFO sizing: we can use them to build very accurate performance models of nodes, characterize their behavior, get precise period lengths, etc.
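For instance, extracting a node's steady-state period from a cumulative token trace could be sketched as follows (illustrative only; this is not code from the PR):

```python
# Sketch: recover the steady-state period of a node from its cumulative
# token trace by finding the shortest repeating pattern in the per-cycle
# increments after a warmup phase.

def steady_state_period(trace, warmup):
    # per-cycle increments of the cumulative trace, skipping warmup cycles
    incs = [b - a for a, b in zip(trace, trace[1:])][warmup:]
    for p in range(1, len(incs) // 2 + 1):
        if all(incs[i] == incs[i % p] for i in range(len(incs))):
            return p
    return None  # no period found within the observed window

# toy node: produces 2 tokens every 3 cycles after a 3-cycle warmup
trace = [0, 0, 0, 1, 2, 2, 3, 4, 4, 5, 6, 6]
print(steady_state_period(trace, warmup=3))  # -> 3
```

Such derived quantities (period, throughput per period) are exactly the kind of performance-model inputs the traces could feed beyond FIFO sizing.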
