Progressive Photon Mapping

During the weekend I implemented a progressive photon mapper for the rendering course I am currently taking. It was really fun to finally build one; it’s something I have been wanting to do for quite a while.

The idea of photon mapping is to trace photons from the light sources in the scene and then gather the radiance for points visible from the camera. This is a two-pass technique, as opposed to the one-pass path tracing technique where rays are only traced from the camera.
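
To make the two-pass structure concrete, here is a minimal toy sketch in Java. It is not the course renderer: the scene is just a point light above a floor plane, the sampling is simplified, and all names are illustrative. Pass one traces photons from the light and stores the hit points; pass two estimates irradiance at a point by gathering the photons within a fixed radius.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class PhotonMapSketch {
    static class Photon {
        final double x, z;     // hit position on the floor plane (y = 0)
        final double power;    // photon power (flux)
        Photon(double x, double z, double power) {
            this.x = x; this.z = z; this.power = power;
        }
    }

    public static void main(String[] args) {
        Random rng = new Random(0);
        int numPhotons = 100000;
        double lightY = 1.0;      // point light at (0, 1, 0)
        double lightPower = 1.0;
        List<Photon> photonMap = new ArrayList<Photon>();

        // Pass 1: emit photons from the light and store where they hit the floor.
        for (int i = 0; i < numPhotons; ++i) {
            // Pick a random downward direction (simplified sampling).
            double dx = rng.nextGaussian();
            double dy = -Math.abs(rng.nextGaussian());
            double dz = rng.nextGaussian();
            double len = Math.sqrt(dx*dx + dy*dy + dz*dz);
            dx /= len; dy /= len; dz /= len;
            // Intersect the ray (0, lightY, 0) + t*(dx, dy, dz) with the plane y = 0.
            double t = -lightY / dy;
            photonMap.add(new Photon(t * dx, t * dz, lightPower / numPhotons));
        }

        // Pass 2: estimate irradiance at a query point by gathering photons
        // within a fixed radius (a kd-tree would replace this linear scan).
        double qx = 0.0, qz = 0.0, radius = 0.1;
        double flux = 0;
        for (Photon p : photonMap) {
            double ddx = p.x - qx, ddz = p.z - qz;
            if (ddx*ddx + ddz*ddz <= radius*radius) {
                flux += p.power;
            }
        }
        System.out.println("Estimated irradiance at origin: "
                + flux / (Math.PI * radius * radius));
    }
}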

Photon mapping performs much better for rendering caustics than path tracing does. I rendered the image below to demonstrate caustics generated from a point light source shining through a water surface. The bright spots on the bottom of the pool are caused by the waves focusing photons from the light source:

Image rendered using progressive photon mapping.

The caustics seen in the image above could not have been rendered correctly with path tracing, since they come from a point light source. The scene could be lit with a very small area light to approximate the same effect; however, a very small area light source generally performs extremely poorly in path tracing.

Path tracing is still a very nice technique because it is so simple to implement and simple to parallelize. Photon mapping does seem a bit trickier to parallelize in a smart way. It would probably be inefficient to let parallel threads render into the same photon map. Instead, one could have separate photon maps for each thread and merge the results into a final output image. I look forward to experimenting with this a bit.
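
As a rough illustration of that per-thread idea, here is a small Java sketch where each worker traces photons into its own private list and the lists are merged at the end. The tracePhotons method is a hypothetical placeholder for real photon tracing; none of this is actual renderer code.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelPhotonSketch {
    static class Photon {
        final double x, y, z, power;
        Photon(double x, double y, double z, double power) {
            this.x = x; this.y = y; this.z = z; this.power = power;
        }
    }

    // Hypothetical worker: trace a batch of photons into a private list.
    static List<Photon> tracePhotons(int count, long seed) {
        Random rng = new Random(seed);
        List<Photon> local = new ArrayList<Photon>();
        for (int i = 0; i < count; ++i) {
            // Placeholder for real photon tracing.
            local.add(new Photon(rng.nextDouble(), 0, rng.nextDouble(), 1.0 / count));
        }
        return local;
    }

    public static void main(String[] args) throws Exception {
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<List<Photon>>> futures = new ArrayList<Future<List<Photon>>>();
        for (int t = 0; t < threads; ++t) {
            final long seed = t;
            futures.add(pool.submit(new Callable<List<Photon>>() {
                public List<Photon> call() {
                    return tracePhotons(100000, seed);
                }
            }));
        }
        // Merge the per-thread photon maps into one map (or one output image).
        List<Photon> merged = new ArrayList<Photon>();
        for (Future<List<Photon>> f : futures) {
            merged.addAll(f.get());
        }
        pool.shutdown();
        System.out.println("Total photons: " + merged.size());
    }
}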

Rendering Course

I am taking a course right now in photorealistic rendering. I am working in Java, and started with the path tracer applet I wrote a while back as a simple starting point. I had to rewrite a lot of the architecture. The path tracing applet was really a very tiny system, with lots of hacks to keep the code minimal. I rewrote the rendering architecture to be more generalized so that I can implement all the course assignments in it.

Path Tracing

I was thinking about documenting my progress here on the blog. I’ve wanted to write some introductory articles about path tracing or rendering in general and this would be a good time to do that. Unfortunately I have very little free time to spend on writing about rendering. We’ll see what happens.

This week I implemented a simple bounding volume hierarchy and Wavefront .obj file loading. You can see an elephant object I loaded in the second image above. I also implemented Whitted raytracing, but it is disappointingly slow.

Comparing Collections in Java

In a project I’m currently working on I had to compare two Java Collections to see if they contained the same elements, ignoring element order. The usual place for such general Collection-related library functions is java.util.Collections; however, I did not find a fitting method there.

Apache Commons Algorithm

An answer on Stack Overflow pointed me to a method in an Apache Commons library which did exactly what I wanted. The method is called isEqualCollection and the implementation I looked at can be found in Apache Commons Collections 4.0. The Apache library uses this algorithm to compare two collections A and B:

  1. Test that A and B have equal size; if the sizes differ, then A is not equal to B and we are done.
  2. Build a frequency table for collection A.
  3. Build a frequency table for collection B.
  4. Test that both frequency tables have the same number of rows; if not, then A is not equal to B and we are done.
  5. For each element in the frequency table of A, check that the element has the same frequency as in B (by a lookup in the frequency table of B).
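
To make the comparison with my version below concrete, here is a rough sketch of that two-table approach in Java. This is my own paraphrase of the steps above, not the actual Apache Commons source; the method and helper names are illustrative, and the usual java.util imports are assumed.

boolean isEqualCollectionTwoTables(Collection<?> a, Collection<?> b) {
    if (a.size() != b.size()) {
        return false;                                     // step 1: sizes must match
    }
    Map<Object,Integer> freqA = buildFrequencyTable(a);   // step 2
    Map<Object,Integer> freqB = buildFrequencyTable(b);   // step 3
    if (freqA.size() != freqB.size()) {
        return false;                                     // step 4: same number of rows
    }
    for (Map.Entry<Object,Integer> entry : freqA.entrySet()) {
        if (!entry.getValue().equals(freqB.get(entry.getKey()))) {
            return false;                                 // step 5: frequencies must agree
        }
    }
    return true;
}

// Illustrative helper: count the occurrences of each element.
Map<Object,Integer> buildFrequencyTable(Collection<?> c) {
    Map<Object,Integer> freq = new HashMap<Object,Integer>();
    for (Object o : c) {
        Integer count = freq.get(o);
        freq.put(o, count == null ? 1 : count + 1);
    }
    return freq;
}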

Improved Algorithm

Disclaimer: This article focuses on the Apache Commons algorithm. I have not looked at any other implementations, and I am sure that others have already thought about the same improvements. You should see this only as an exercise I did for fun, not an optimal implementation by any means. A complete analysis would have required much more work.

The frequency tables give the number of times an element x occurs in each collection. We can improve the memory usage of the algorithm by using a single frequency table rather than two. Let fA(x) be the number of occurrences of x in A, and let fB(x) be the number of occurrences of x in B. We only need to compute d(x) = fA(x) - fB(x): if there is any x for which d(x) is not zero, then x occurs more times in A (when d(x) > 0) or in B (when d(x) < 0). The modified algorithm stores d(x) in a frequency table which is initialized by counting occurrences of elements in A, as in step 2 above; then, for each element in B, we subtract one from that element’s frequency count.

Using a single frequency table still requires iterating over both collections in order to populate the table correctly, but we now have an easy way to add a fail-fast condition while iterating over the elements in B: if the frequency of any element drops below zero, then we know that the element occurs more often in B than in A. This new fail-fast condition may improve the average running time, but it does not affect the worst-case running time.

A slight improvement in worst-case running time comes from removing the table lookup in the final loop: the final loop now only needs to iterate over the frequency differences and check that every difference is zero.

Let’s review the improved algorithm:

  1. Test that A and B have equal size; if the sizes differ, then A is not equal to B and we are done.
  2. Build a frequency table for collection A.
  3. For each element x in B:
    • subtract one from row x in the frequency table (if x has no row, then A is not equal to B and we are done)
    • if the frequency of x is now negative, we are done (A is not equal to B)
  4. For each row x in the frequency difference table, check that the frequency difference is zero.

Implementation

Below is my implementation of the improved algorithm. Note that I use a HashMap to implement the frequency table; it could be replaced by an IdentityHashMap if you want to use reference equality rather than object equality as the element equality condition.

boolean isEqualCollection(Collection<?> a, Collection<?> b) {
    if (a.size() != b.size()) {
        return false;
    }
    // Frequency table: counts occurrences of each element in a,
    // then is decremented once for each element in b.
    Map<Object,Integer> map = new HashMap<Object,Integer>();
    for (Object o: a) {
        Integer val = map.get(o);
        int count;
        if (val != null) {
            count = val.intValue();
        } else {
            count = 0;
        }
        map.put(o, Integer.valueOf(count+1));
    }
    for (Object o: b) {
        Integer val = map.get(o);
        int count;
        if (val != null) {
            count = val.intValue();
            if (count == 0) {
                // Fail fast: o occurs more often in b than in a.
                return false;
            }
        } else {
            // Fail fast: o does not occur in a at all.
            return false;
        }
        map.put(o, Integer.valueOf(count-1));
    }
    // All frequency differences must be zero for the collections to be equal.
    for (Integer count: map.values()) {
        if (count.intValue() != 0) {
            return false;
        }
    }
    return true;
}
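
As a quick usage sketch (with made-up test data, assuming the usual java.util imports):

List<String> a = Arrays.asList("x", "y", "y", "z");
List<String> b = Arrays.asList("y", "z", "x", "y");
List<String> c = Arrays.asList("x", "y", "z", "z");
System.out.println(isEqualCollection(a, b)); // true: same elements, different order
System.out.println(isEqualCollection(a, c)); // false: the frequencies of "y" and "z" differ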

In conclusion, the above implementation is slightly more efficient than the Apache Commons implementation because it uses less memory (by using only one HashMap, rather than two), avoids redundant map lookups (in the final loop), and has an additional fail-fast condition.

Chunky Documentation to Replace the Wiki

The Chunky Wiki was intended to provide documentation for users. Unfortunately I had to disable editing for regular users because the Wiki was being swamped by bots. Even with CAPTCHAs in place the Wiki received regular spam edits, and we did not have enough volunteers to patrol the Wiki and keep the spam away.

Right now you can’t edit the Wiki unless you get permission from me, and that really defeats the purpose of the Wiki. I am grateful for the contributions from the few people who did help to fight the spam and update the Wiki, but the Wiki is not meeting its original goals right now so I have decided to replace it.

The Wiki will be replaced by regular static HTML pages. However, the HTML pages will be generated from a Git repository which will be managed on GitHub and I will try to encourage people to contribute to the project! The documentation is not ready to replace the Wiki yet, and even lacks any style whatsoever, but you can have a sneak peek at it right here (note: the URL will change to the current Wiki URL).

If you want to help out with the documentation please feel free to submit pull requests for the GitHub repository. It’s easy – you can even edit the pages right there on GitHub!