I was interested in implementing some form of adaptive sampling for the Monte Carlo path tracer in Chunky. To the uninitiated it might seem simple – you often notice while rendering that some parts of the scene always converge quicker, while noise remains in the more difficult regions. For example, the following scene was rendered with 500 samples per pixel:
The sky in this scene converges after only a couple of samples, and due to the way light-emitting objects are rendered the torch has converged almost as quickly. Other areas take much longer to converge. You can still see noise on the non-transparent areas of the windows.
So the naive idea is that if you could render fewer samples in the areas where the image converges quickly, and spend those samples on the more difficult areas, you could increase the convergence rate of the whole image. Well, it turns out it is not that simple. I did what I usually do: I tried to implement it myself, then started reading articles about the problem. From the articles I read, there appears to be no huge benefit from adaptive sampling. Some researchers have had positive results, but the difference compared to uniform sampling (or fixed sampling, as some call it) is marginal. Plus, it introduces bias into the render, which is undesirable.
I’m posting my results here anyway. I figure someone might find them interesting.
My first idea was that I should use confidence intervals for the sample color at each pixel to guide the adaptive sampling, but then I figured that I probably wouldn’t need that (wrong!). Why not just measure the difference in sample values over a number of frames for each pixel, then focus on sampling those pixels with the largest difference?
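The idea can be sketched in a few lines (Python here for brevity – Chunky itself is Java, and all the names below are my own, not Chunky's):

```python
import numpy as np

def accumulate_difference(prev_avg, new_avg, diff_accum):
    """Accumulate the per-pixel change in the running sample average.

    prev_avg, new_avg: (H, W, 3) running pixel color averages before and
    after the latest frame; diff_accum: (H, W) accumulated difference.
    Pixels with a large accumulated difference would get sampled more.
    """
    diff_accum += np.abs(new_avg - prev_avg).sum(axis=2)
    return diff_accum
```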
Per-pixel adaptive sampling introduces significant overhead, which can of course be optimized, but I opted for a simpler method. The rendered image is already split into tiles (in this case 8 by 8 pixels). I averaged the 5-sample difference over each tile. Then, every 5th frame, I selected the half of the tiles with the largest sample variance over the previous 5 frames to be rendered for the next five frames. This is the result:
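The tile selection step looks roughly like this (again a Python sketch with my own names, not Chunky's actual code):

```python
import numpy as np

TILE = 8    # tile edge length in pixels, as in the post
PERIOD = 5  # tiles are re-selected every 5th frame

def select_tiles(diff_map):
    """Average a per-pixel difference map over TILE x TILE tiles and
    return a boolean mask marking the half of the tiles with the
    largest average difference; only those tiles would be rendered
    for the next PERIOD frames.
    """
    h, w = diff_map.shape
    th, tw = h // TILE, w // TILE
    # Average the per-pixel difference over each tile.
    tile_avg = diff_map[:th * TILE, :tw * TILE] \
        .reshape(th, TILE, tw, TILE).mean(axis=(1, 3))
    # Keep the top half of the tiles by average difference.
    k = tile_avg.size // 2
    threshold = np.partition(tile_avg.ravel(), -k)[-k]
    return tile_avg >= threshold
```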
The frame above was taken after the same number of samples as the first one: 500 SPP on average, though of course it varies between tiles. I was disappointed to see that the image looks a bit worse than the first one. Since the sample differences are averaged over a tile, a single pixel with a huge difference gets smoothed out by the other pixels and goes unnoticed by the algorithm. Look at the torch, for example: there are some noisy pixels in an otherwise converged area, so the sample difference for those noisy pixels is outweighed by the mostly non-noisy neighboring pixels.
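A toy example (my own illustrative numbers) shows how the averaging hides a single noisy pixel:

```python
import numpy as np

# One very noisy pixel in an otherwise converged 8x8 tile: the pixel's
# sample difference is large, but the tile average barely moves.
tile_diff = np.zeros((8, 8))
tile_diff[3, 4] = 10.0  # hypothetical difference for one noisy pixel

print(tile_diff.max())   # 10.0 -- the pixel clearly stands out
print(tile_diff.mean())  # 0.15625 -- but the tile average stays near zero
```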
I continued by scrapping the tile-based approach and instead tried using the 50-sample average per pixel. This turned out to provide equally disappointing results:
It’s really difficult for me to see any improvement in the previous render over the first one, which used uniform sampling. With the additional overhead of the adaptive sampling algorithm I got a lower effective sample rate. I was quite disappointed. Finally, I read some articles on the subject and found that some researchers had used the confidence interval approach, with mild success. Adaptive sampling really does not seem to do much for path tracing; the cost-to-benefit ratio is too low for me to spend more time on this.
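For completeness, here is roughly what the confidence-interval criterion looks like: keep sampling a pixel until a confidence interval for its mean sample value is narrow enough. The parameter names and thresholds are my own; this is a sketch of the general idea, not any particular paper's method.

```python
import numpy as np

def pixel_converged(samples, z=1.96, tol=0.01):
    """Treat a pixel as converged when the half-width of an approximate
    95% confidence interval for its mean sample value drops below tol.
    """
    n = len(samples)
    if n < 2:
        return False  # can't estimate variance from a single sample
    half_width = z * np.std(samples, ddof=1) / np.sqrt(n)
    return half_width < tol
```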