VRWorks - 360 Video - Multiband Blending
ROIs and Laplacian Image Generation
The first step of our multiband blending implementation is the computation of the Region of Interest (ROI) in the output buffer corresponding to each input camera feed. The next step is to project each input into the corresponding ROI. This computation takes the camera parameters and the desired resolution of the output into account. Thereafter, the Laplacian pyramid is generated for each of the projected inputs.
The projected frames are finally blended at each level using masks and the final output is synthesized.
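The analysis and synthesis steps above can be sketched in a minimal 1-D form. This is an illustrative pure-Python sketch, not the actual CUDA implementation: the `smooth`, `downsample`, and `upsample` helpers and the [1, 2, 1]/4 binomial filter are assumptions standing in for the real 2-D kernels.

```python
# Minimal 1-D Laplacian pyramid sketch (the real implementation works
# on 2-D projected ROIs with CUDA kernels).

def smooth(signal):
    """Blur with a [1, 2, 1] / 4 binomial filter (edges clamped)."""
    n = len(signal)
    return [(signal[max(i - 1, 0)] + 2 * signal[i] + signal[min(i + 1, n - 1)]) / 4
            for i in range(n)]

def downsample(signal):
    """Blur, then keep every other sample."""
    return smooth(signal)[::2]

def upsample(signal, size):
    """Zero-insert up to `size`, then blur (the 2x gain restores amplitude)."""
    up = [0.0] * size
    up[::2] = signal[:(size + 1) // 2]
    return [2 * v for v in smooth(up)]

def laplacian_pyramid(signal, levels):
    """Each level stores the detail lost by one downsample;
    the final level stores the low-pass residual."""
    pyramid = []
    current = list(signal)
    for _ in range(levels - 1):
        down = downsample(current)
        up = upsample(down, len(current))
        pyramid.append([a - b for a, b in zip(current, up)])
        current = down
    pyramid.append(current)
    return pyramid

def synthesize(pyramid):
    """Invert the pyramid: upsample the residual, add back each detail level."""
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        up = upsample(current, len(detail))
        current = [a + b for a, b in zip(up, detail)]
    return current
```

By construction, synthesizing an unmodified pyramid reconstructs the input; blending happens by modifying the detail levels before synthesis.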
Masks determine the path that the seams will follow. The masks are computed at the base level, and a Gaussian pyramid of each mask is generated for blending at every level. The width of the blended region grows at each successively down-sampled level.
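A 1-D sketch of the mask pyramid and per-level blending, under the same illustrative assumptions as above (a [1, 2, 1]/4 binomial blur standing in for the actual filter; function names are hypothetical):

```python
# Minimal 1-D sketch of mask-driven per-level blending (illustrative;
# the real implementation blends 2-D Laplacian levels with CUDA kernels).

def smooth(values):
    """[1, 2, 1] / 4 binomial blur with clamped edges."""
    n = len(values)
    return [(values[max(i - 1, 0)] + 2 * values[i] + values[min(i + 1, n - 1)]) / 4
            for i in range(n)]

def gaussian_pyramid(mask, levels):
    """Base-level mask plus successively blurred-and-decimated copies.
    The blur widens the seam transition at each coarser level."""
    pyramid = [list(mask)]
    for _ in range(levels - 1):
        pyramid.append(smooth(pyramid[-1])[::2])
    return pyramid

def blend_pyramids(lap_a, lap_b, mask_pyramid):
    """Per-level convex combination: mask weights input A, (1 - mask) input B."""
    return [[m * a + (1 - m) * b for m, a, b in zip(mw, la, lb)]
            for mw, la, lb in zip(mask_pyramid, lap_a, lap_b)]
```

A hard 0/1 seam at the base level becomes progressively softer at coarser levels, so low frequencies blend over a wide region while fine detail blends over a narrow one.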
Number of Levels
All the pyramids used have the same number of levels. The current implementation computes the number of levels from the output buffer resolution such that at the lowest level the smallest surface dimension is no less than 16 pixels (capped at 8 levels).
Multiband blending is very sensitive to the type of filter used, both for downsampling and upsampling. The repeated upsampling and downsampling required can cause minor artifacts in the smaller levels to be greatly amplified as the pyramid is synthesized.
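This amplification effect can be illustrated with a small 1-D demo: a single bad sample at a coarse level contaminates a growing neighborhood with every synthesis upsample. The zero-insertion + [1, 2, 1]/4 blur used here is an assumption standing in for the actual filter.

```python
def upsample(signal, size):
    """Zero-insert up to `size`, then blur with [1, 2, 1]/4 (2x gain)."""
    up = [0.0] * size
    up[::2] = signal[:(size + 1) // 2]
    return [2 * (up[max(i - 1, 0)] + 2 * up[i] + up[min(i + 1, size - 1)]) / 4
            for i in range(size)]

coarse = [0.0] * 4
coarse[1] = 1.0           # single-sample artifact at the coarsest level

level = coarse
for size in (8, 16, 32):  # synthesize back up to the base resolution
    level = upsample(level, size)

spread = sum(1 for v in level if abs(v) > 1e-12)
# The lone bad sample now contaminates 15 of the 32 base-level samples.
```

A sharper or poorly normalized filter makes this spread worse, which is why the choice of downsampling and upsampling filters matters so much.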
CUDA Streams and Multi-GPU Scaling
Our multiband implementation maps very well to CUDA streams and is a good candidate for multi-GPU scaling. Most of the processing is performed on a per-camera basis; only the final blending and synthesis stages require inputs from all of the camera pipelines.
Once the ROIs are generated, each camera pipeline projects its image into the base level of its image pyramid, generates the Gaussian and Laplacian pyramids, and the results are then blended and synthesized. With this approach no synchronization is needed until the blending stage, which means the per-camera CUDA streams can execute on different GPUs.