My attempts at random sampling have been *partially* successful.
Here is the original Mona Lisa Image:
This is the image after diffraction through a cloth with a 100-micron weave. It was generated with uniform sampling (as shown in the preliminary paper submitted to SIGGRAPH Asia):
This is the result of using random sampling to randomize ray source positions. The same random numbers are used for each row of the image, which gives it a striped look:
Finally, using better-quality random numbers (randomized throughout the image, not just across each row):
Performance tradeoffs:
Random sampling in CUDA is currently implemented as a lookup into a 1D array of random numbers copied from the CPU to the GPU. This means two additional reads from a global memory array, which drastically increases the execution time, from around 33 seconds (no randomization) to 90 seconds (with the two random-number lookups). I attribute this heavy performance hit to slow global memory reads. Hopefully 1D texture lookups will mitigate it; a sketch of the two access paths follows below.
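For reference, here is a minimal sketch (not this project's actual code) contrasting the two access paths: reading the CPU-generated random numbers straight from global memory versus fetching the same buffer through a cached 1D texture object. The kernel names and the jitter output are placeholders for illustration.

```cuda
// Sketch: global-memory lookup vs. 1D texture-object fetch of a precomputed
// random-number pool. Names (jitterGlobal, jitterTexture) are hypothetical.
#include <cuda_runtime.h>
#include <cstdlib>
#include <vector>

// Global-memory version: each thread reads two random offsets straight from DRAM.
__global__ void jitterGlobal(const float* rnd, int rndCount, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float dx = rnd[(2 * i)     % rndCount];   // uncached global read
    float dy = rnd[(2 * i + 1) % rndCount];   // second uncached global read
    out[i] = dx + dy;                         // stand-in for the jittered ray source
}

// Texture version: the same numbers fetched through the (cached) texture path.
__global__ void jitterTexture(cudaTextureObject_t rndTex, int rndCount, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float dx = tex1Dfetch<float>(rndTex, (2 * i)     % rndCount);
    float dy = tex1Dfetch<float>(rndTex, (2 * i + 1) % rndCount);
    out[i] = dx + dy;
}

int main()
{
    const int n = 1 << 20, rndCount = 1 << 16;
    float *dRnd, *dOut;
    cudaMalloc(&dRnd, rndCount * sizeof(float));
    cudaMalloc(&dOut, n * sizeof(float));

    // Fill the pool with CPU-generated random numbers, as described in the post.
    std::vector<float> hRnd(rndCount);
    for (int k = 0; k < rndCount; ++k) hRnd[k] = rand() / float(RAND_MAX);
    cudaMemcpy(dRnd, hRnd.data(), rndCount * sizeof(float), cudaMemcpyHostToDevice);

    // Wrap the same device buffer in a 1D texture object.
    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeLinear;
    res.res.linear.devPtr = dRnd;
    res.res.linear.desc = cudaCreateChannelDesc<float>();
    res.res.linear.sizeInBytes = rndCount * sizeof(float);
    cudaTextureDesc td = {};
    td.readMode = cudaReadModeElementType;
    cudaTextureObject_t rndTex = 0;
    cudaCreateTextureObject(&rndTex, &res, &td, nullptr);

    jitterGlobal<<<(n + 255) / 256, 256>>>(dRnd, rndCount, dOut, n);
    jitterTexture<<<(n + 255) / 256, 256>>>(rndTex, rndCount, dOut, n);
    cudaDeviceSynchronize();

    cudaDestroyTextureObject(rndTex);
    cudaFree(dRnd);
    cudaFree(dOut);
    return 0;
}
```

The idea is that the texture path goes through a read-only cache, so repeated reads of a small random-number pool should hurt far less than uncached global loads.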
The next step is to add bilinear texture sampling at these random source points; currently I apply no filtering to the texture lookup. That means learning how to use textures in CUDA (see the sketch below). Another avenue I am exploring is writing a GLSL shader to do the same; preliminary results are very promising (around 5 seconds for the entire process, compared to 33 seconds with CUDA).
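As a starting point for the bilinear step, here is a hedged sketch of how hardware bilinear filtering can be obtained from a 2D CUDA texture: the image is copied into a `cudaArray`, the texture is created with `cudaFilterModeLinear`, and sampling at a jittered, non-integer coordinate returns an interpolated value for free. The function and buffer names are illustrative, not taken from the actual renderer.

```cuda
// Sketch: hardware bilinear filtering via a 2D texture object.
// sampleJittered and makeBilinearTexture are hypothetical names.
#include <cuda_runtime.h>

__global__ void sampleJittered(cudaTextureObject_t img, const float2* jitter,
                               float* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int idx = y * width + x;
    // Normalized coordinates plus a sub-pixel jitter; the texture unit
    // performs the bilinear interpolation in hardware.
    float u = (x + 0.5f + jitter[idx].x) / width;
    float v = (y + 0.5f + jitter[idx].y) / height;
    out[idx] = tex2D<float>(img, u, v);
}

// Host-side setup: copy the image into a cudaArray and create a texture
// object with linear filtering and normalized coordinates.
cudaTextureObject_t makeBilinearTexture(const float* hostImg, int width, int height)
{
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaArray_t arr;
    cudaMallocArray(&arr, &desc, width, height);
    cudaMemcpy2DToArray(arr, 0, 0, hostImg, width * sizeof(float),
                        width * sizeof(float), height, cudaMemcpyHostToDevice);

    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeArray;
    res.res.array.array = arr;

    cudaTextureDesc td = {};
    td.addressMode[0] = cudaAddressModeClamp;
    td.addressMode[1] = cudaAddressModeClamp;
    td.filterMode = cudaFilterModeLinear;      // hardware bilinear filtering
    td.readMode = cudaReadModeElementType;
    td.normalizedCoords = 1;

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &res, &td, nullptr);
    return tex;
}
```

A GLSL fragment shader would get the same interpolation simply by sampling a `GL_LINEAR`-filtered texture, which is part of why that route looks attractive.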