In every implementation of SIFT that I have encountered, the Pyramid of Gaussians is computed by recursively applying Gaussian blurring to the image, as is done here.
That makes a lot of sense on sequential processors such as a CPU or DSP.
In my application I would like to implement a very fast SIFT on an FPGA, which is a parallel, hardware-configurable device. To speed up the construction of the Pyramid of Gaussians I would like to apply an independent bank of filters with increasing variance directly to the original image, so that all of the Gaussian-blurred images are computed at the same time.
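My current thinking (which I would like confirmed) is that cascaded Gaussian blurs add in variance, so the direct-from-original sigma of each level is the root of the summed squared incremental sigmas. Below is a sketch of how I would compute the filter-bank sigmas, assuming the usual SIFT scheme where level k has absolute blur sigma0 * 2**(k/S); the parameter values (sigma0 = 1.6, S = 3) are just illustrative, and I use `scipy.ndimage.gaussian_filter` only as a software check of the equivalence:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative SIFT-style parameters (not from any particular implementation)
sigma0 = 1.6   # base blur of the first pyramid level
S = 3          # scales per octave

# Absolute blur of level k relative to the ORIGINAL image --
# these are the sigmas the parallel filter bank would use.
target = [sigma0 * 2 ** (k / S) for k in range(S + 3)]

# A recursive implementation instead blurs level k-1 by an increment.
# Since variances of cascaded Gaussians add:
#   sigma_inc_k = sqrt(sigma_k**2 - sigma_{k-1}**2)
inc = [target[0]] + [
    np.sqrt(target[k] ** 2 - target[k - 1] ** 2)
    for k in range(1, len(target))
]

# Software sanity check: cascading the incremental blurs should match
# one direct blur whose variance is the sum of the incremental variances.
rng = np.random.default_rng(0)
img = rng.random((64, 64))

cascaded = img.copy()
for s in inc:
    cascaded = gaussian_filter(cascaded, s)

direct = gaussian_filter(img, np.sqrt(sum(s ** 2 for s in inc)))
print(np.max(np.abs(cascaded - direct)))  # small residual from truncation
```

Is this variance-addition reasoning the correct way to derive the filter-bank sigmas, or am I missing something (e.g. effects of kernel truncation on the hardware)?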
I am aware that this may be somewhat more computationally expensive, but my main concern is speed.
Can someone please provide some insight, or code, showing how to work out the new sigmas so that the end results of the two implementations are essentially the same?