Description
What problem does this solve or what need does it fill?
This would allow a camera to render an array of views to a texture array. The primary need this fills is greatly improved performance on Vulkan for stereo rendering, since both views can be rendered in a single pass, which is particularly relevant for AR/VR. It could also potentially speed up shadow map or light map rendering, although I have not looked into this.
What solution would you like?
A field on the `Camera` component to specify the position or other information of the multiple views, as well as changing `ExtractedView` to `ExtractedViews` and changing the relevant fields to arrays. Additionally, since this requires changes across most shaders to work properly, I would like to create more shader utilities to avoid having to add shader defs to every shader, and possibly to allow multiview emulation on graphics APIs that do not support multiview.
Additional context
I implemented a proof of concept for this a while back here: https://github.com/awtterpip/bevy/tree/multiview
To test this proof of concept, run the `3d_scene` example and press `c` to change which camera view is blitted to the window. Please note that both camera views are rendered at once; this only changes which view you are seeing.
If needed I can update the proof of concept to the latest Bevy version.