I am Not Scared of the AI Singularity 2017-03-10

One common idea when discussing the future of Artificial Intelligence (AI) is that, if humans ever develop an AI that reaches above some threshold of reasoning power, then the AI will increase its own reasoning power and become progressively “smarter” until human intelligence pales in comparison.

The following quote is from the synopsis of Nick Bostrom’s Superintelligence on Wikipedia:

[…] once human-level machine intelligence is developed, a “superintelligent” system that “greatly exceeds the cognitive performance of humans in virtually all domains of interest” would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.

This idea, that an AI singularity will occur if we just develop an AI that is “smart” enough, is attractive but naive. With our current technology, I don’t believe that type of scenario is possible.

Let’s dissect the idea a bit. My interpretation of the core hypothesis is this:

AI Singularity Hypothesis
If there is an AI running on a processing unit, and the AI is intelligent enough to take control of other processing units, it could take advantage of those other processing units to increase its own computing power and thereby increase its own intelligence.

The hypothesis assumes something I think we should not take for granted: namely that adding more computing power will necessarily increase reasoning power.

One of the main problems in large-scale computation today is communication. Many interesting computational problems are bottlenecked not by the available computational power, but by the communication latency between processing units. Some problems can be solved in parallel trivially because they require no communication between processing units; all other problems are fundamentally harder, because communication must also be handled efficiently in order to share partial results between the participating processing units.
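To make the distinction concrete, here is a minimal sketch (a hypothetical example, not from the post) of a parallel sum: each worker computes its partial sum with no communication at all, but the final reduction step is communication that cannot be avoided. For a communication-bound problem, that second phase is where adding more processing units stops helping.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.LongStream;

public class ParallelSum {
    // Sum 1..n using several workers. The per-worker sums need no
    // communication; combining them at the end does.
    public static long sum(long n, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Long>> partials = new ArrayList<>();
        long chunk = n / workers;
        for (int w = 0; w < workers; w++) {
            final long lo = w * chunk + 1;
            final long hi = (w == workers - 1) ? n : lo + chunk - 1;
            // Embarrassingly parallel phase: each range is independent.
            partials.add(pool.submit(() -> LongStream.rangeClosed(lo, hi).sum()));
        }
        long total = 0;
        // Communication phase: partial results must be gathered in one
        // place before the answer exists. For communication-bound
        // problems this step dominates, and adding workers makes it worse.
        for (Future<Long> f : partials) {
            total += f.get();
        }
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sum(1000, 4)); // prints 500500
    }
}
```

Summation only needs one round of gathering, so it parallelizes well; the argument above is that reasoning-like workloads may need many such rounds, constantly, between all units.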

Superintelligence <-> Supercommunication

I believe that superintelligent AI is not trivially parallelizable, and that it in fact requires a high degree of communication between processing units; hence it will be bottlenecked by communication. This is of course speculation, but the opposite, which is exactly what the AI Singularity Hypothesis assumes, should not be taken for granted either.

If communication is the bottleneck for superintelligent AI, then spreading the computation over more processing units won’t help: it would increase the amount of communication needed, making the bottleneck worse. What you need instead is a very compact processing unit with very fast, short-distance communication.

Consider the human brain: a very dense network of highly interconnected neurons. It seems like the ideal structure for communication-intensive computation, which suggests that human-level reasoning requires a lot of communication. It is hard for me to imagine that it would not.

I am of course just speculating about AI here. I am not an expert in this field, I have merely written a few simple machine learning applications in my spare time. However, I felt like I had to voice my opinion because it always annoys me when people take it for granted that an AI with higher-than-human level intelligence would automatically become all-powerful.


Chunky Development Update 1 2017-03-04

It has been a while since I wrote a general update about Chunky development, so this seems like a good time for one.

I decided to start a new naming scheme for these posts, with incremental numbering. This is not the first Chunky development update, just the first one with the new naming scheme.

Recent News

To summarize what happened since the last update post:

During Christmas I worked mostly on bug fixes for version 1.4.2, and since then I have not had time to work on Chunky because of a substantially increased workload in my PhD studies: writing an article, presenting at a conference, and so on.

Current Focus

After the first version of the plugin architecture was integrated in Chunky I noticed more interest from people to get involved in Chunky development. I want to make it easier to contribute patches to Chunky, and make it easier for new people to start working on the code. I think this is really important for the long-term improvement of Chunky.

My short-term goals for Chunky right now are:

  • Improve testing and developer documentation to make it easier for new developers to start working on the code.
  • Integrate the plugin manager into the launcher.
  • Make emittance and other material properties customizable on a per-block per-scene basis.
  • Add sprite rendering.
  • Improve fog rendering, as described in this issue.

Sprite Rendering

During Christmas I was tinkering with a Minecraft-style sprite renderer, which renders textures as 3D objects, just like held items are rendered in the game.

It works pretty well so far; it just needs to be integrated into Chunky:

Sprite object rendering

GitHub Contributions

Recently I got some pull requests from leMaik and electron93 on GitHub, including some nice additions from leMaik.

Several people have been helping by reporting issues and commenting on each other’s issues on GitHub and Reddit. That really helps, because common problems are resolved much faster when users help each other.

Currently, leMaik is working on improving the plugin system to make it more usable.

Long-term Goals

The long-term goals for Chunky are currently:

  • Custom models from resource packs.
  • Add animal/mob rendering.
  • Signing and verifying Chunky core libraries.
  • Translation support.
  • Demo GPU rendering plugin.
  • Distributed rendering plugin.

The plugin architecture is important for future development because it shifts the focus of core Chunky development from adding new features toward improving the existing architecture and making it more flexible for plugin development. I have removed GPU rendering from my long-term goals for Chunky because ideally it can be developed as a separate plugin. I still want to develop a nice demo plugin for GPU rendering that someone could build upon to create a full-featured GPU renderer.


Broville v11 2016-12-06

There has been an incredible amount of human effort put into building amazing things in Minecraft. Some players build huge structures on their own while others band together in communities to make something truly impressive together. One of the most dedicated such communities I know of is the group that has been building Broville v11 over the past few years. Last week their efforts finally became publicly available with the release of Broville v11. You can download it here.

Broville Waterfront


Java Plugin Architecture 2016-10-16

In this post I will show how the plugin system in Chunky is being implemented.

Plugin support is surprisingly easy to add to a Java application thanks to the Java reflection API. Java reflection allows runtime type introspection and dynamic code loading. One of the more exotic use cases for Java reflection is generating executable code at runtime; a compiler hosted on and targeting the JVM could even use reflection for compile-time code execution. You don’t need anything nearly so exotic to implement a simple plugin system.
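As a minimal sketch of the idea (the `Plugin` interface and class names here are hypothetical illustrations, not Chunky’s actual API): look up a class by name at runtime and instantiate it reflectively. A real plugin system would discover plugin jars on disk and load them through a `URLClassLoader` instead of the application classpath, but the reflective core is the same.

```java
import java.lang.reflect.Constructor;

public class PluginDemo {
    // Hypothetical plugin interface for illustration only.
    public interface Plugin {
        String name();
    }

    // Stand-in plugin; in a real system this class would live in a
    // separate jar discovered at startup and loaded via URLClassLoader.
    public static class HelloPlugin implements Plugin {
        public String name() { return "hello"; }
    }

    // Load a plugin class by name and instantiate it via reflection.
    public static Plugin load(String className) throws Exception {
        Class<?> cls = Class.forName(className);
        Constructor<?> ctor = cls.getDeclaredConstructor();
        return (Plugin) ctor.newInstance();
    }

    public static void main(String[] args) throws Exception {
        Plugin p = load("PluginDemo$HelloPlugin");
        System.out.println(p.name()); // prints "hello"
    }
}
```

The cast to `Plugin` is what ties dynamically loaded code back into the host application: the host only ever depends on the interface, never on any concrete plugin class.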


Project Updates 2016-08-17

I am home again from California, and my plants survived! This post is a collection of updates on some of my projects that I have posted about here on my blog before.


I have started coding on Chunky again, and I tried livestreaming some of my Chunky coding on Twitch. I streamed last evening from 20:00 to 24:00 CEST, and I’m planning to stream around the same time the rest of the week. Feel free to ask questions in the chat if you join my stream!

In Chunky I am working on the UI code, trying to finish up the new JavaFX UI. Here is a screenshot from last evening while I was working on a new Color Picker dialog:

Color Picker progress

My current Chunky development priorities are:

  1. Finish the new JavaFX UI.
  2. Add support for Minecraft 1.10 blocks.
  3. Add support for custom block models.
  4. Plugin system.


Plant Watering System

My automatic plant watering system was a success. My plants were completely alone in my apartment for about 5 weeks, and the watering system kept them alive. Here is the moisture graph from one of my plants:

Moisture - whole summer

I made a timelapse video of my plants using a Raspberry Pi camera. The video shows one image per day, taken at the same time of day, and the images cover about one and a half months of real time. My fiancée came home and moved the camera a bit at the end of the video.


As mentioned, I was in California for work again this summer. Here are some pictures:

Golden Gate Bridge





My thoughts on Virtual Reality 2016-08-14

I recently bought the HTC Vive, and after converting my living room into a Virtual Reality cave I have been trying out lots of VR experiences.

The best thing about the Vive is its very accurate motion tracking of the head-mounted display and the motion controllers. The most impressive game I have tried so far is a bow-and-arrow minigame in Valve’s The Lab, in which you defend a gate from oncoming stick-figure attackers using a bow and arrow. You hold the bow in one hand and an arrow in the other. Shooting arrows with the virtual bow feels very intuitive, and I became deeply immersed, feeling like I was almost there in the virtual game world.

The Lab - Image from Steam Store

Too little content

Sadly, most of the games available for the Vive so far have very little content. Two of the best paid games available now are Space Pirate Trainer and Hover Junkers, but neither has much to offer in terms of gameplay. In Space Pirate Trainer you stand on a platform shooting flying robots that attack in waves – there is no enemy variety and no other levels, so even though the shooting action is cool it gets boring quickly. Hover Junkers does not have much more content: it similarly has a mode where you shoot flying robots attacking in waves, with slightly more enemy variety, plus a multiplayer deathmatch mode where you fly around a small arena shooting at other players.

I was very excited to try racing and flying games in VR. They seemed like a perfect fit for the platform – ideally, VR would simply make them more immersive. So far there seem to be very few racing games with VR support: Project Cars is one, and DiRT Rally another (for the Oculus Rift). I tried Project Cars and it made me slightly motion sick after a few races. It also struggles to keep a high framerate, which I’ve noticed is very important for reducing motion sickness and maintaining immersion. I think VR might work much better with slower-paced driving games like Euro Truck Simulator 2, which probably won’t make you motion sick as quickly.

Graphics, framerate, latency

Having high fidelity graphics does not seem at all important for VR immersion. The stereoscopic graphics make everything seem real, even if the textures have very low resolution or even if the 3D models have few polygons. High and consistent framerate and low latency matters much more for VR immersion. As soon as the framerate dips you instantly notice it, and worst of all it can make you feel motion sick. Latency is equally important – while looking around in VR it is very easy to notice the delay between your head movement and the world rotating around you. The Vive seems to handle motion tracking latency just about right, making the latency unnoticeable – at least to me – when the game I’m playing is running smoothly. It is similarly important to have fast tracking and low latency visualization of the motion controllers. When everything is tracked with low latency I can really believe that I’m swinging a lightsaber in front of me.

Speaking of lightsabers, there is a free VR demo on Steam called Trials on Tatooine. The game is very short – about 5 minutes – and has lots of framerate and latency problems. It seems like ILM tried to make it look very pretty before trying to make it run decently. During some short periods it ran okay, and the lightsaber felt very responsive and cool, but even though there is very little stuff happening on screen it still manages to tank the framerate and have terrible latency most of the time. Trials on Tatooine is an excellent example of how poor the experience gets when you sacrifice framerate and latency for graphics.

Trials on Tatooine

Creative tools

Tilt Brush is an interesting VR tool for painting in 3D, though painting 3D lines does not seem very useful to me; I’d rather use something more like ZBrush to sculpt in VR. VR could be a huge win for authoring 3D content. The industry-standard 3D modeling tools are, as far as I know, all limited to drawing shapes on 2D screens – I can imagine many new 3D modeling tools being made specifically to exploit VR.

Vive’s Head Mounted Display

The Vive has a 2160×1200 pixel display – each eye sees roughly 1080×1200 pixels. When I use the display there is noticeable blurring when I look up or down, outside the center of the eyepieces. It’s also surprising how few pixels make up distant objects. The first few times I used the HMD I felt like I had become short-sighted because I couldn’t make out distant objects clearly – there are simply not enough pixels for faraway things.

There is also an effect that people describe as the “screen door” – a slight mesh pattern in your vision caused by the pixel grid of the display. I got used to it very quickly, and it really doesn’t detract much from the experience. A more noticeable artifact of the pixel grid is pronounced false chromatic aberration from the pixel geometry of the display. It is not noticeable in the center of vision, but quite noticeable near the edges of my vision, where the lens blurring enhances it.

All in all, things that are reasonably close and near my viewing direction are rendered very nicely. I have not tried the Oculus Rift so I can’t say how the displays compare.

Developing for VR

I’ve become interested in coding something for the HTC Vive now that I’ve tried some of the existing games. I’m not entirely sure what I’d make yet, but one idea was to make some kind of VR exploration tool for Minecraft. It would be very cool to be able to fly around a Minecraft world in VR.


VR is very cool, but it’s overpriced and there are too few games for it right now.
