How to set the reuse factor #775
In general, the reuse factor (RF) for every layer will depend on preferences, such as latency vs. resources utilised, and on the target architecture. An RF equal to one means all multiplications are executed in parallel. So, in general, for a layer performing `n_mult` multiplications with a reuse factor of `reuse_factor`, we expect the layer to use approximately `n_mult / reuse_factor` DSPs. For Dense layers, `n_mult = n_in * n_out`; for Conv2D layers, `n_mult` per output pixel is `filt_height * filt_width * n_channels * n_filters`.

In general, increasing the RF means splitting the multiplications across more clock cycles: RF = 1 is equivalent to all multiplications done in parallel, while RF = 4 means the multiplications are done across four clock cycles, using (approximately) four times fewer DSPs (other resources are reduced as well, but the reduction is not linear). So for a Dense layer, the expected latency is equal to the RF (plus some overhead to retrieve the weights from BRAM, if using the Resource strategy). For Conv2D layers, the expected latency is proportional to the product of the number of output pixels (height x width) and the RF. For example, `convolution_layer_2` below has a 3x3 kernel, 32 input channels and 14 filters, so it performs 3 x 3 x 32 x 14 = 4032 multiplications per output pixel: RF = 1 would use roughly 4032 DSPs, while RF = 4 would use roughly 1008 DSPs at a latency proportional to 11 x 11 x 4 clock cycles.

Depending on your target device, latency constraints etc., the RF should be set accordingly for every layer. In your case, the model is fairly small, and very high-end FPGAs (~8k DSPs) could accommodate the entire model with RF = 1. However, if the goal is to save resources, it might be worth increasing the RF. Examples of changing the reuse factor, precision etc., as well as a more in-depth explanation of RF and DSP usage, are available in the tutorial.

One possible approach is to generate the configuration (a Python dict) from the model with a default reuse factor, and then overwrite it for individual layers:
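A minimal sketch using hls4ml's `config_from_keras_model` utility (`model` is assumed to be your Keras model; the layer names match the summary in the question below):

```python
import hls4ml

# Generate an hls4ml config (a Python dict) from the Keras model, with
# per-layer ('name') granularity and a default reuse factor of 2
config = hls4ml.utils.config_from_keras_model(
    model, granularity='name', default_reuse_factor=2
)

# Overwrite the reuse factor for individual layers by name
config['LayerName']['convolution_layer_1']['ReuseFactor'] = 1
config['LayerName']['dense_layer']['ReuseFactor'] = 4
```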
The above code generates an hls4ml config (a Python dictionary with all the necessary information for model conversion) with a default reuse factor of 2 for every layer. The reuse factor is then overwritten for specific layers by changing their entries in the dictionary.
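The customised config can then be passed on to conversion. A short usage sketch with hls4ml's `convert_from_keras_model` converter (the output directory and FPGA part below are placeholders; set them for your target):

```python
# Convert the Keras model to an HLS project using the customised config
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='my_hls_project',  # hypothetical output directory
    part='xcu250-figd2104-2L-e'   # example FPGA part; set to your device
)
hls_model.compile()
```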
Hello everyone!
If I have this CNN, is there a way to find the right reuse factor values for every layer?
For example, I can use 1 as the reuse factor for the first Conv2D layer, but what value should the reuse factor of the second layer be?
```
Layer (type)                        Output Shape         Param #
=================================================================
convolution_layer_1 (Conv2D)        (None, 26, 26, 32)   320
ReLu_1 (Activation)                 (None, 26, 26, 32)   0
max_pooling_layer_1 (MaxPooling2D)  (None, 13, 13, 32)   0
convolution_layer_2 (Conv2D)        (None, 11, 11, 14)   4046
ReLu_2 (Activation)                 (None, 11, 11, 14)   0
max_pooling_layer_2 (MaxPooling2D)  (None, 5, 5, 14)     0
flatten_layer (Flatten)             (None, 350)          0
dropout_layer (Dropout)             (None, 350)          0
dense_layer (Dense)                 (None, 10)           3510
softmax (Activation)                (None, 10)           0
=================================================================
Total params: 7,876
Trainable params: 7,876
Non-trainable params: 0
```