BRAM and LUTs are overutilized #758
Replies: 9 comments 3 replies
-
Unlikely to solve your problem since the activation tensors are simply too large, but you can try:

```python
hls4ml.model.optimizer.get_optimizer('vivadoaccelerator:fifo_depth_optimization').configure(profiling_fifo_depth=100_000)  # This will cause even larger use of BRAMs, but is only temporary
hls_config['Flows'] = ['vivadoaccelerator:fifo_depth_optimization']
...
report = model.build(csim=True, synth=True, cosim=True, validation=False, export=True, vsynth=True, reset=True, bitfile=True)
```

then see what the Vivado synthesis report says. This will optimize the sizes of the FIFOs used for activation tensors (which consume most of the BRAMs). You'll likely have to reduce the size of your model (add pooling, get rid of
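(A small sketch of how one might read those numbers back: the dictionary returned by `model.build()` above contains the parsed reports, and an existing project directory can also be re-parsed with `hls4ml.report.read_vivado_report`. The `'hls_prj'` path is a placeholder, and the exact report keys vary between hls4ml versions.)

```python
import hls4ml

# Re-parse the reports from an already built project directory
# ('hls_prj' is a placeholder for the output_dir used at conversion time).
report = hls4ml.report.read_vivado_report('hls_prj')

# The key names differ between hls4ml versions; printing the whole dict is the
# easiest way to locate the BRAM_18K and LUT figures from Vivado synthesis.
print(report)
```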
-
Hi @vloncar, I am using Vivado as the backend and was trying to optimize the FIFO depth to overcome the overutilization issue. I am using the commands below as you mentioned, but I am getting the following key error. I am using hls4ml version 0.6.0.
even if I use
Can you please help me in resolving this issue? Thank you in advance.
-
Hi @vloncar, I tried to use the main branch, and when I converted the model it took 18 hours just to do the C simulation, even though I didn't explicitly request C/RTL simulation while converting the model. Below is the code for that:
After running the above cell, it took almost 18 hours to finish and the output is below:
I don't know why it performed C/RTL simulation during the conversion of the model and took as long as 18 hours. One more thing: when I try to generate the bitstream file with the following command using the main branch, I get the following error:
Can you please help me in resolving this issue? Thank you in advance.
-
Hi, can anyone look into this issue and help me resolve it? I am stuck and have no idea how to proceed. Thank you in advance.
-
there is no
-
Hi @vloncar, thanks for the reply. I just want to port my model onto the ZCU104, but if I use the accelerator backend the ZCU104 is not supported, since it is not listed in the supported board files, so I am using Vivado as the backend with the part number for the ZCU104. Is there any way to port the model onto the ZCU104 by generating a bitstream file? And do you have any idea why the model conversion takes so long (nearly 19 hours) when I include fifo_depth_optimization?
-
There is PR #752 that adds support for it. You can look into those changes and follow them up, or just use that branch. As for the FIFO depth optimization, look into the logs. First the model is created with large FIFO depths and synthesized, then co-simulation is run and the actual occupancy of the FIFOs is extracted. The model is then re-synthesized with those depths, giving you the final result. This obviously takes more time. The co-simulation can also take a significant amount of time depending on the number of samples in the dataset you provided (if you provided no dataset, it will likely work with a single sample of zeros, producing incorrect FIFO depths).
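A minimal sketch of that flow, assuming a recent main-branch hls4ml and the VivadoAccelerator backend. Here `keras_model` stands for your trained Keras model, the `zcu104` board name assumes the support added in PR #752, `hls_prj` is a placeholder output directory, and `X_test.npy` / `y_test.npy` are placeholder files for a small, representative test set so that co-simulation measures realistic FIFO occupancies (parameter names may differ slightly between versions):

```python
import hls4ml

config = hls4ml.utils.config_from_keras_model(keras_model, granularity='name')
config['Flows'] = ['vivadoaccelerator:fifo_depth_optimization']
hls4ml.model.optimizer.get_optimizer('vivadoaccelerator:fifo_depth_optimization').configure(
    profiling_fifo_depth=100_000  # start with large depths; they are shrunk after co-simulation
)

hls_model = hls4ml.converters.convert_from_keras_model(
    keras_model,
    hls_config=config,
    backend='VivadoAccelerator',
    board='zcu104',               # assumes the ZCU104 board definition from PR #752
    output_dir='hls_prj',         # placeholder project directory
    input_data_tb='X_test.npy',   # placeholder: small, representative samples so that
    output_data_tb='y_test.npy',  # co-simulation sees realistic activation streams
)

# cosim=True is essential: the optimizer reads the observed FIFO occupancies from this run
report = hls_model.build(csim=False, synth=True, cosim=True, export=True)
```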
-
Hi @vloncar, thanks for the reply. Can I know how to use that branch? Do I need to do a pip install? If so, can you please provide me with the branch link; I am not able to get it. Thanks in advance.
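(For reference, installing hls4ml from a development branch is usually done with a Git-based pip install. The branch name for PR #752 is not given in this thread, so `main` below is only an example.)

```
pip install git+https://github.com/fastmachinelearning/hls4ml.git@main
```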
-
Hi @vloncar, thanks for the reply. Sorry, I am really confused about what to do. When I try to generate the bitstream using the accelerator backend with the main branch, I get the following attribute error:
Sorry for troubling you with so many questions; I tried to figure it out, but I am not sure how to resolve this.
-
Hello everyone, I am trying to synthesize a model for image denoising. My model summary() is below:
Every layer has fewer than 4096 parameters; my configuration is shown as follows:
I was trying to synthesize the above model for the ZCU104 using Vivado as the backend, but during synthesis and IP export the model overutilizes the BRAM_18K and LUTs. The utilization report of the model is shown below:
Do you have any ideas for reducing the number of BRAMs and LUTs while maintaining proper utilization?
Also, what is the command for generating a bitstream file with Vivado as the backend? With vivadoaccelerator as the backend we use
hls4ml.templates.VivadoAcceleratorBackend.make_bitfile(hls_model)
to generate the bitstream file, but how do we get the bitstream file when using Vivado as the backend? Is there any such command? If so, please help me in this regard. I am using Vivado 2019.2. Thank you in advance.
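(For context, the VivadoAccelerator bitfile flow referenced above looks roughly like the sketch below; it assumes the project has already been converted with the VivadoAccelerator backend and a supported board. The plain Vivado backend only produces the exported HLS IP, which then has to be integrated into a Vivado block design manually.)

```python
# Sketch: build the project (exporting the IP), then stitch the accelerator design
# and generate the bitstream. make_bitfile is the call quoted in the question above.
hls_model.build(csim=False, synth=True, export=True)
hls4ml.templates.VivadoAcceleratorBackend.make_bitfile(hls_model)
```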