Breaking code by switching input streams

This summer I was refactoring code for reading class files in a Java compiler. The code used a custom input stream whose implementation did essentially what BufferedInputStream from the Java standard library does. The custom stream seemed redundant, so I decided to replace it with the standard library class. However, doing this caused many errors in the compiler's regression test suite.

This seemed odd. Replacing one input stream with another should have no effect on the code using the stream, right? After taking a closer look at the custom input stream, it really did seem to follow the contract in the InputStream documentation, so why didn't it work?

The first step to solving a compiler bug is usually to create a minimal test case: the smallest input program that exposes the same error. I could not do that in this case, because the inputs that were breaking the compiler were class files – a binary file format that is difficult to edit by hand.

Another way to start debugging a problem is to try to get closer to the root cause – even if you don't know exactly where the root cause is located, you can often make an educated guess. I assumed that, whatever the root cause was, it made the bytecode reader output an incorrect sequence of bytes to the class file parser at some point. Figuring out where that divergence happened should help locate the cause. I put some printouts in the class file parser to log the bytes it received from the bytecode reader, then ran the compiler on a failing test and diffed the output between the working compiler and the faulty version. It was then easy to see that the outputs diverged just a few hundred bytes into one particular class file.

When I knew where in the class file the outputs diverged, I could run the compiler in a debugger with a breakpoint set right before it reached that point. Single-stepping call by call, I saw that the problems started right after a call to InputStream.skip(). I looked at the documentation again and saw that skip() is not guaranteed to skip the specified number of bytes. The bytecode reader seemed to assume that it always did, because there was only one call to skip(). The correct way to use skip() is to call it in a loop, using the return value, which indicates how many bytes were actually skipped:

long remaining = bytesToSkip;
while (remaining > 0) {
  long skipped = in.skip(remaining);
  if (skipped <= 0) break; // Avoid looping forever at end of stream.
  remaining -= skipped;
}

The original, custom-made input stream always skipped the specified number of bytes. When it was replaced by BufferedInputStream, which does not, things went wrong in the code that assumed all input streams worked the way the custom-made one did!
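
For completeness, here is a sketch of a reusable helper built around that loop. This is my own illustration, not code from the compiler; the skipFully name and the fall-back to read() when skip() returns zero are my additions:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SkipFully {
  // Skips exactly n bytes, or throws if the stream ends first.
  // skip() may legally skip fewer bytes (or none), so loop until done.
  static void skipFully(InputStream in, long n) throws IOException {
    while (n > 0) {
      long skipped = in.skip(n);
      if (skipped <= 0) {
        // skip() returned 0: read a single byte to detect end of stream.
        if (in.read() == -1) {
          throw new IOException("Unexpected end of stream");
        }
        skipped = 1; // read() consumed one byte.
      }
      n -= skipped;
    }
  }

  public static void main(String[] args) throws IOException {
    byte[] data = new byte[1000];
    data[999] = 42;
    InputStream in = new BufferedInputStream(new ByteArrayInputStream(data));
    skipFully(in, 999);
    System.out.println(in.read()); // Prints 42: the first byte after the skipped region.
  }
}
```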

This is a valuable lesson in programming: even if you replace one class that follows a specification with another class that follows the same specification, you might still break your code! This was not the first time I've had this kind of problem, and I suspect it is a common and especially difficult-to-diagnose class of error. A recent and widespread example is HashSet ordering in Java 8, where the iteration order changed from the Java 7 implementation. This is all within the specification for a HashSet, but many programs inadvertently depended on the order being deterministic, and switching to Java 8 broke them.
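
The standard fix for the HashSet case is to make the ordering dependency explicit. The snippet below is my own illustration, not from any particular codebase: LinkedHashSet guarantees insertion order as part of its specification, so code that needs a stable order should ask for it:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class SetOrder {
  public static void main(String[] args) {
    // HashSet makes no promise about iteration order, and the order has
    // changed between Java versions. LinkedHashSet iterates in insertion
    // order by specification, so the order below is guaranteed.
    Set<String> ordered = new LinkedHashSet<>();
    ordered.add("banana");
    ordered.add("apple");
    ordered.add("cherry");
    List<String> seen = new ArrayList<>(ordered);
    System.out.println(seen); // Prints [banana, apple, cherry].
  }
}
```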

Player Rendering

Last year I started adding entity rendering support to Chunky, with one of the big goals being player rendering. Adding entity support was difficult because it required large changes in the rendering code – previously, Chunky could only render things that fit exactly into a single block; anything else broke the Octree renderer.

In version 1.3.4 I had finally added a working entity renderer, and since then I’ve been adding new entities in each version. Today version 1.3.7 was released with initial player rendering support.


The problem with rendering player entities is that there are so many features you might expect to come with them: armor, skins (custom player textures), equipped items (bow, sword, pickaxe), capes, and so on. You might also expect to be able to reposition and pose the players, adjusting arm and leg directions. Most of this is not supported yet, but I'll try to add some of these features in the next releases of Chunky.

Chunky 2015 Fall Update

In this installment of Chunky news I will discuss the current development progress and my plans for the future.

Current Development

I recently uploaded a snapshot for the upcoming version 1.3.6, which will include sign text rendering, bug fixes, and rendering for some new block types and previously unsupported blocks. The new entity rendering system lets me implement rendering of some block types that were really difficult to render before, which I'm really excited about. However, I will not tackle more complex entities such as players and animals/mobs for the 1.3.6 release.

1.3.6 Preview

I recently experimented with a new rendering technique that proved to be very successful. It’s a neat trick that can drastically cut render time for complex scenes with difficult lighting. The idea is to split up the render into two separate passes: sunlight and emitter passes. Splitting these two light sources makes it possible to blur the emitter light where it is too noisy.
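
The two-pass idea can be sketched roughly as follows. This is my own toy illustration, not Chunky's actual renderer: the per-pixel sample buffers, the box blur, and the simple additive compositing are all assumptions made for the sketch:

```java
public class TwoPassCompose {
  // Box-blurs a grayscale light buffer with the given radius.
  static double[][] boxBlur(double[][] img, int radius) {
    int h = img.length, w = img[0].length;
    double[][] out = new double[h][w];
    for (int y = 0; y < h; y++) {
      for (int x = 0; x < w; x++) {
        double sum = 0;
        int count = 0;
        for (int dy = -radius; dy <= radius; dy++) {
          for (int dx = -radius; dx <= radius; dx++) {
            int yy = y + dy, xx = x + dx;
            if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
              sum += img[yy][xx];
              count++;
            }
          }
        }
        out[y][x] = sum / count;
      }
    }
    return out;
  }

  // Combines a clean sunlight pass with a blurred emitter pass:
  // the emitter light is noisier, so only that pass is smoothed.
  static double[][] compose(double[][] sun, double[][] emitter, int blurRadius) {
    double[][] blurred = boxBlur(emitter, blurRadius);
    int h = sun.length, w = sun[0].length;
    double[][] out = new double[h][w];
    for (int y = 0; y < h; y++) {
      for (int x = 0; x < w; x++) {
        out[y][x] = sun[y][x] + blurred[y][x];
      }
    }
    return out;
  }

  public static void main(String[] args) {
    double[][] sun = new double[3][3];
    double[][] emitter = new double[3][3];
    emitter[1][1] = 9.0; // One bright, noisy emitter sample.
    double[][] result = compose(sun, emitter, 1);
    System.out.println(result[1][1]); // Prints 1.0: the spike spread over 9 pixels.
  }
}
```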

I posted about this rendering technique on reddit, and this imgur album sums up the technique. I plan on writing a more detailed tutorial for this rendering technique to post on the Chunky website. I’m also thinking about adding a black sky mode in Chunky to make the scene setup for the emitter pass a bit faster.

Future Plans

I often have ideas about neat features to implement in Chunky. The one I am most excited about at the moment is plugin support. I have vaguely thought about it before but never really considered it feasible; with the current launcher, however, it would actually be pretty simple to implement. The thing that really blocks plugins right now is that I'd have no way of signing/verifying plugins, and unverified plugins would be a huge security concern for their users. On a related note, I really want to add verification for the core Chunky binaries.

My current priority list for future projects is this:

  • Signing and verifying Chunky core libraries
  • Player and animal/mob rendering (needs posing UI)
  • Custom models from resource packs
  • Plugin support
  • Translation support
  • Translating the Chunky website
  • Distributed rendering
  • GPU rendering

GPU Rendering

Although GPU rendering seems to be the feature most requested by users, I don't have it that high on my own priority list, for these reasons:

  • The main priority for Chunky is to reach feature parity with Minecraft: every Minecraft block and entity needs to be rendered correctly. GPU rendering would require re-implementing a lot of what the software renderer now handles, so I'd rather complete the software renderer first.
  • GPU rendering will take a lot of time to implement, and I think my time is better spent on other things like supporting new Minecraft features.
  • Although everyone who requests GPU rendering seems to believe it would give a huge boost to rendering speed, my estimates are more modest. I believe it will be faster, just not incredibly much faster.
  • I believe distributed rendering will give a much higher return on investment when considering time spent implementing versus net rendering efficiency increase.
  • Aside from implementing the GPU renderer itself, a lot of code in Chunky would need to be restructured to integrate it smoothly into the existing render framework.

Two Months in Silicon Valley

I have been an intern at Google in Mountain View for nearly two months now. I’ve reached the midpoint of my stay here so I want to share some of my experiences of living in Silicon Valley.

I have had an absolutely amazing time so far. On the plane from Sweden to California I almost couldn't believe this was happening. I never imagined that I could work at Google, so the first days at work seemed almost unreal. I've learned so much working here and I've met many friendly people. I feel really lucky.

I have never lived outside of Sweden before, so it is interesting to note the differences between living in Sweden and the USA. The first differences I noticed were the much wider roads, larger cars/trucks, and more talkative people. The California weather is very different from Sweden's: much hotter and drier, mostly. There were a few rainy days last month, but according to the locals those were freak occurrences. Unfortunately, health care and housing are extremely expensive here compared to Sweden.

California is much less bike-friendly than Sweden, but there is one nice bike path that I ride to work every day. The path goes along the shoreline, where there are lots of birds. I've seen geese, egrets, pelicans, and other birds I don't know the names of.

Geese on a bike path

To get around by bike you often have to ride on the road next to cars. Usually the roads have a bike lane, but it still feels much less safe than an isolated bike path. Speaking of cars, there are many Teslas, Priuses, and self-driving cars on the streets here.

My favourite thing about California is the nature and landscape. Palm trees are a common sight here, and all trees are huge compared to trees in Sweden. There are also plenty of awesome places to hike on weekends. On my third weekend here I went to Yosemite with some friends. Yosemite is probably the most beautiful place I’ve visited so far, and hiking there was one of the best experiences I’ve had.

Here are some pictures from places I’ve visited in California:

Yosemite Falls

Yosemite Stream

Lake Tahoe

Golden Gate Bridge

Mt Tamalpais

Chunky 1.3.5 Feature Spotlight

I’m happy to announce that Chunky 1.3.5 has finally been released after a slow trickle of features and bug fixes over the past several months! This post will highlight some of the notable changes since 1.3.4 with pretty pictures!

Mac bundle

I am now building Mac bundles again for each release. A Mac bundle is a simple way of installing Chunky on your Mac, if you have one. However, I currently only test it with the Oracle Java distribution on a 64-bit machine, so it might not work on other Mac systems. There are some minor problems with the Chunky UI on Mac left to improve in the future.

Mac Bundle


Paintings

Paintings are now rendered by Chunky. This was not a highly requested feature, but it was one that relied on entity support. With the entity rendering framework in place, it will be easier to add rendering of other entities in future updates!


Camera view indicator

Chunky now renders an approximate outline of the camera’s field of view on the 2D map, seen as a yellow rectangle in the screen shot below. You can right-click on the map to bring up the context menu and select all visible chunks inside the camera view indicator! This makes it much easier to load the right chunks for a specific view. Hilly terrain can still be problematic because the camera view indicator is based on the assumption that the world is flat (and supported by elephants).

Select visible chunks demo

New icons

The icon for the “Water” tab in the Render Controls dialog used to be loaded directly from the current texture pack. This caused the tab to become ridiculously large when using a high-resolution texture pack. The tab now has a new fixed-size icon, so it keeps its intended size. A few other things in the UI got new icons as well.

Tab Icons

Single color textures

A new option to render each texture as a solid color was added. It can be used to create a fun look:

Shoreline with single color textures

Improved fog

The fog and atmosphere rendering system in Chunky was a hastily designed piece of junk. It has been rewritten to be simpler and faster. The new fog system allows everything from very light haze (aerial perspective) to thick fog to be emulated with a single fog density slider. You can even choose the color of the fog! Check out my previous post for more info about fog.


Render bugs fixed

These rendering bugs were fixed:

  • Certain configurations of stairs rendered incorrectly.
  • Some mushroom blocks rendered with wrong textures.
  • Fog did not work correctly around stairs.
  • Fixed lighting bug for underwater blocks.
  • Fixed rendering of buttons using the new button orientations (since MC 1.8 buttons can be placed on the top/bottom of a block).

Note that some of these fixes require reloading the chunks in scenes created with Chunky 1.3.4.

Stair rendering issues

Scroll bars

The Render Preview window now has scroll bars, and no longer stretches the rendered image to fit the window. This seems like a small thing, but I was constantly annoyed that changing the window size stretched the preview image. Scroll bars make a lot of sense on a small screen or when rendering a huge image.

Scroll bars in the render preview window

Rendering Fog

Simulating real fog is surprisingly complicated, and computationally expensive. The way real fog behaves comes from how light scatters in air when it interacts with suspended particles (aerosols). The same phenomena occur in clear weather and are responsible for making the sky look the way it does.

The light scattering that happens in the sky is usually categorised into two types: Mie scattering and Rayleigh scattering. Mie scattering describes how large particles (relative to the wavelength of visible light) scatter light, while Rayleigh scattering deals with smaller particles. Rayleigh scattering affects shorter wavelengths more, i.e. blue light is scattered more than red light, which is the root cause of the sky being mostly blue.
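
The wavelength dependence is easy to quantify: Rayleigh scattering intensity is proportional to 1/λ⁴. A tiny calculation, my own illustration with 450 nm and 700 nm as representative wavelengths for blue and red light, shows why blue dominates:

```java
public class Rayleigh {
  // Rayleigh scattering intensity is proportional to 1 / lambda^4.
  static double relativeScatter(double lambdaNm) {
    return Math.pow(1.0 / lambdaNm, 4);
  }

  public static void main(String[] args) {
    double blue = relativeScatter(450); // Blue light, ~450 nm.
    double red = relativeScatter(700);  // Red light, ~700 nm.
    // Blue is scattered roughly six times more strongly than red.
    System.out.printf("Blue scatters %.1f times more than red%n", blue / red);
  }
}
```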

I was interested in implementing physically based fog rendering, using equations for Mie/Rayleigh scattering, in Chunky. The version currently in the stable release of Chunky (1.3.4) was a half-assed implementation of that. I did some experimentation in a separate Java project to see if I could produce a better scattering implementation. The result was rather nice:

Sky rendered by light scattering simulation. The sun is not explicitly rendered, though it is visible due to the elliptical phase function giving a sharp increase in incoming light from the direction of the sun.

The advantage of simulating light scattering for fog rendering is getting the right color at the horizon. In video games you can often notice a clear border between objects on the horizon and the sky. If both the sky and the fog are rendered by light scattering simulation, you get a nicer transition from horizon to sky.

The problem with using scattering equations for fog simulation is that it’s too computationally expensive, and does not allow tweaking the fog color. I wanted to allow changing the color of fog, contrary to the way real fog works where the color is an indirect result of the light scattering behaviour.

Most equations for light scattering involve an exponential extinction factor. If you really simplify the equations you can make believable fog by using the extinction factor to blend in a fixed fog color. This entirely short-circuits the whole simulation aspect of fog rendering and is much less computationally demanding. I have implemented this system in the latest version of Chunky, and it seems to be working rather well.
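
The simplified scheme can be sketched like this. This is my own minimal illustration of exponential extinction blending, not Chunky's actual implementation; the function name and RGB-array representation are assumptions:

```java
public class SimpleFog {
  // Blends a fixed fog color into a surface color using the
  // exponential extinction factor exp(-density * distance).
  // The further away the surface, the more fog color dominates.
  static double[] applyFog(double[] surface, double[] fogColor,
                           double density, double distance) {
    double transmittance = Math.exp(-density * distance);
    double[] out = new double[3];
    for (int i = 0; i < 3; i++) {
      out[i] = transmittance * surface[i] + (1 - transmittance) * fogColor[i];
    }
    return out;
  }

  public static void main(String[] args) {
    double[] surface = {1.0, 0.5, 0.25};
    double[] fogColor = {0.8, 0.8, 0.9};
    double[] foggy = applyFog(surface, fogColor, 0.02, 100.0);
    System.out.printf("%.3f %.3f %.3f%n", foggy[0], foggy[1], foggy[2]);
  }
}
```

With zero density the surface color is unchanged; as density times distance grows, the result converges to the fog color.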

There is, however, a problem with my new fog rendering system: inscatter light intensity (the fog color blending factor) is based on the fraction of lit fog along each light path, i.e. how many inscattering events happen on the light path in proportion to all simulated events. This causes light shaft intensity to drop off too much when there is a lot of unlit space around the light shaft. The undesired effect can be illustrated with a room where the camera looks through a light shaft at two walls that are far apart:

The light path to the near wall has proportionally more inscattering events when path tracing than the light path that hits the far wall, though they should both have the same inscatter contributions.

The fog color scaling with room depth can be “fixed” by multiplying each inscatter contribution by the light path length. However, that fix makes the inscatter component much larger for distant objects and decreases the convergence rate. If I were better at solving differential equations I might be able to work out how to make the inscatter component the same as in the no-shadows case while using the simpler equation, but that would still mean slower convergence (grain/noise in the render). I think I will just leave the current system as it is, because the rendering error is difficult to notice and the fog looks reasonably nice. Here is a comparison between two inscatter equations I’ve been testing:

Different inscatter equations

The lower part of the image above was rendered with an improved equation that tries to compensate for the light path length. It took much longer to render than the simpler equation, used for the top part of the image. The improved equation gives slightly nicer results, but they might not be worth the increased rendering time.
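
The drop-off described above can be shown with a toy calculation. This is my own illustration with made-up distances: a light shaft of fixed thickness makes up a larger fraction of a short path than of a long one, while weighting each contribution by the path length makes the two contributions equal, as they should be:

```java
public class InscatterEstimators {
  // Fraction of the light path that passes through lit fog: this is
  // the estimator that makes near walls look brighter than far walls.
  static double fractionLit(double litLength, double pathLength) {
    return litLength / pathLength;
  }

  // Length-weighted inscatter: the fraction multiplied by the total
  // path length, which reduces to the lit segment length itself.
  static double lengthWeighted(double litLength, double pathLength) {
    return fractionLit(litLength, pathLength) * pathLength;
  }

  public static void main(String[] args) {
    double lit = 2.0;      // Thickness of the light shaft along the path.
    double nearWall = 5.0; // Distance to the near wall.
    double farWall = 50.0; // Distance to the far wall.
    System.out.println(fractionLit(lit, nearWall));    // Prints 0.4
    System.out.println(fractionLit(lit, farWall));     // Prints 0.04
    System.out.println(lengthWeighted(lit, nearWall)); // Prints 2.0
    System.out.println(lengthWeighted(lit, farWall));  // Prints 2.0
  }
}
```

The fraction-based estimator gives the near wall ten times the inscatter of the far wall; the length-weighted estimator gives both walls the same contribution.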


Here are a couple of nice articles describing light scatter simulation:

The first article describes one of the most widely used methods of simulating a day sky. It is not perfect, but I used it in Chunky for the “simulated” sky model.