Uninteresting Code Metrics 2017-05-25

I was browsing the rapid7/metasploit-framework repository on GitHub and noticed that the README displayed a “Code Climate” badge. The badge showed a number of unknown significance: 2.7.

It was clear from the name that the badge had something to do with static analysis. Clicking on the badge leads to this page:

Yikes! 3694 seems like a pretty large issue count. This leads me to believe that the developers either don’t care about their Code Climate score (but then why display the badge in the README?) or don’t find the Code Climate issues useful.

During my first internship at Google I worked on the static analysis team. One thing I learned from the team was that static analysis findings are only useful if they are actionable and have a low false positive rate. This paper from the team lists those two properties as the first requirements for selecting static analyses.

I looked at the issues Code Climate reported for the repository in question, and it seems Code Climate mostly reports non-actionable issues regarding code complexity. The other type of issue is duplicated code. I looked at several of the reported duplicated code fragments; they were all pretty simple blocks of no more than five lines, and most had only two occurrences. The duplicated code issues seemed nitpicky.

Complexity Score

Code complexity metrics (including line counts) are not useful for fixing real problems in code. If you try to enforce some maximum complexity, you will end up with pointless rewrites whose only goal is to improve the score, and those rewrites can themselves introduce needless bugs.

Another issue is that the complexity score itself is entirely arbitrary. What does it mean that a method has a complexity of 300? Is that too much? Where do you draw the line? Making the computation of the complexity score more sophisticated only makes it harder to figure out which part of the code to change to minimize the score.
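To illustrate how arbitrary such scores are, here is a made-up example (mine, not one of Code Climate’s findings): two equivalent Java methods where a cyclomatic-complexity counter would score the first much higher than the second, even though moving the branching into a lookup table changes nothing about what the code does.

```java
import java.util.Map;

public class ComplexityScore {
    // Cyclomatic complexity is high here: every case is a decision point.
    static int daysSwitch(String month) {
        switch (month) {
            case "feb": return 28;
            case "apr": case "jun": case "sep": case "nov": return 30;
            default: return 31;
        }
    }

    // The same logic hidden in a table: cyclomatic complexity near 1,
    // yet the behavior is identical.
    static final Map<String, Integer> DAYS =
            Map.of("feb", 28, "apr", 30, "jun", 30, "sep", 30, "nov", 30);

    static int daysTable(String month) {
        return DAYS.getOrDefault(month, 31);
    }

    public static void main(String[] args) {
        System.out.println(daysSwitch("feb") + " == " + daysTable("feb"));
    }
}
```

A tool that penalizes the first version rewards mechanically relocating branches into data structures, which says nothing about whether the code got better.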


Duplicated Code

Sometimes it can be useful to find that some replicated piece of code can be refactored into a common method. However, this often leads to introducing a more abstract method that handles more cases, increasing your code complexity: typically a new parameter is added to the method, along with some conditional statements. Consider for example Figure 1 from Narasimhan (2015):

In my opinion the above refactoring is not useful. The more general method will run slower than the original method due to the new conditional branch, unless it is inlined by the compiler. The more general method is also harder to read and has introduced a boolean trap.
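Since the figure is not reproduced here, the following hypothetical Java sketch (my own example, not the code from the paper) shows the general pattern: two simple methods get merged into one “general” method with a boolean parameter.

```java
public class BooleanTrap {
    // Before: two small, obvious methods.
    static double priceWithTax(double price) { return price * 1.25; }
    static double priceWithoutTax(double price) { return price; }

    // After: one "general" method with a boolean parameter. At the call
    // site, price(100, true) no longer says what true means -- the
    // boolean trap -- and every call now evaluates an extra branch.
    static double price(double amount, boolean includeTax) {
        if (includeTax) {
            return amount * 1.25;
        }
        return amount;
    }

    public static void main(String[] args) {
        System.out.println(priceWithTax(100));  // 125.0
        System.out.println(price(100, true));   // 125.0 -- but what does 'true' mean here?
    }
}
```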

No Comments on Uninteresting Code Metrics
Categories: Uncategorized

Voxxed Days Zürich 2017 2017-04-30

In February I visited Zürich to give a talk at a software developer conference called Voxxed Days Zürich. The event was very fun, and I’m thankful to the organizers who invited me to give a talk.

My talk was about a Java compiler, ExtendJ, which I have been working with extensively during my PhD studies. My PhD subject is declarative compiler construction with Attribute Grammars, and ExtendJ has been very useful as a platform for my research projects. I have used ExtendJ both as a testbed for programming language experiments and as a benchmark for Reference Attribute Grammars.

The goal of my talk at the Voxxed Days event was to show some of the cool features in ExtendJ. I also wanted to briefly show some of the aspects of the design of ExtendJ that are interesting or unusual compared to most other compilers.

The slides from my talk are available here. The presentation was recorded and can be watched via YouTube:

No Comments on Voxxed Days Zürich 2017
Categories: Uncategorized

I am Not Scared of the AI Singularity 2017-03-10

One common idea when discussing the future of Artificial Intelligence (AI) is that, if humans ever develop an AI that reaches above some threshold of reasoning power, then the AI will increase its own reasoning power and become progressively “smarter” until human intelligence pales in comparison.

The following quote is from the synopsis of Nick Bostrom’s Superintelligence on Wikipedia:

[…] once human-level machine intelligence is developed, a “superintelligent” system that “greatly exceeds the cognitive performance of humans in virtually all domains of interest” would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.

The idea that an AI singularity will occur if we just develop an AI that is “smart” enough is attractive but naive. With our current technology, I don’t believe that type of scenario is possible.

Let’s deconstruct the idea a bit. My interpretation of the core hypothesis is this:

AI Singularity Hypothesis
If there is an AI running on a processing unit, and the AI is intelligent enough to take control of other processing units, it could take advantage of those other processing units to increase its own computing power and thereby increase its own intelligence.

The hypothesis assumes something I think we should not take for granted: namely that adding more computing power will necessarily increase reasoning power.

One of the main problems in large-scale computation today is communication. Many interesting computational problems are bottlenecked not by the available computational power, but by the communication latency between processing units. Some problems can be parallelized trivially because they require no communication between processing units; all other problems are fundamentally more difficult because you also need to handle communication efficiently, since partial results must be shared between the participating processing units.

Superintelligence <-> Supercommunication

I believe that superintelligent AI is not trivially parallelizable, and that it in fact requires a high degree of communication between processing units, hence it will be bottlenecked by communication. This is of course speculation, but the opposite, which the AI Singularity Hypothesis takes for granted, should not be assumed either.

If communication is the bottleneck for superintelligent AI, then it won’t help to spread the computation over more processing units. That would increase the amount of communication needed, working against the bottleneck. What you need instead is a very compact processing unit with very fast, short-distance communication.
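To make this concrete, here is a toy model (entirely my own invention, with made-up constants) in which the compute time shrinks with the number of processing units while the communication cost grows with it:

```java
public class CommBottleneck {
    // Toy model: a fixed amount of work split over n units, plus a
    // communication cost that grows with the number of units. All
    // constants are invented for illustration.
    static double time(double work, double commCost, int n) {
        return work / n + commCost * n; // compute shrinks, communication grows
    }

    public static void main(String[] args) {
        double work = 100.0, comm = 1.0;
        for (int n : new int[] {1, 5, 10, 20, 100}) {
            System.out.printf("n=%3d  time=%7.2f%n", n, time(work, comm, n));
        }
    }
}
```

In this model the total time is minimized around n = sqrt(work/comm); past that point, adding processing units makes the computation slower, not faster.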

Consider the human brain: a very dense network of highly interconnected neurons, seemingly an ideal structure for communication-intensive computation. This might indicate that human-level reasoning requires a great deal of communication; it is hard for me to imagine otherwise.

I am of course just speculating about AI here. I am not an expert in this field, I have merely written a few simple machine learning applications in my spare time. However, I felt like I had to voice my opinion because it always annoys me when people take it for granted that an AI with higher-than-human level intelligence would automatically become all-powerful.

2 Comments on I am Not Scared of the AI Singularity
Categories: Uncategorized

Chunky Development Update 1 2017-03-04

It was a while since I wrote a general update about Chunky development, so it seems like a good time to write one.

I decided to start a new naming scheme for these posts, with incremental numbering. This is not the first Chunky Development update, just the first one with the new naming scheme. Here are the most recent previous update posts:

Recent News

To summarize what happened since the last update post:

During Christmas I worked mostly on bugfixes for version 1.4.2, and since then I have not had time to work on Chunky because of a substantially increased workload in my PhD studies: writing an article, presenting at a conference, and so on.

Current Focus

After the first version of the plugin architecture was integrated in Chunky I noticed more interest from people to get involved in Chunky development. I want to make it easier to contribute patches to Chunky, and make it easier for new people to start working on the code. I think this is really important for the long-term improvement of Chunky.

My short-term goals for Chunky right now are:

  • Improve testing and developer documentation to make it easier for new developers to start working on the code.
  • Integrate the plugin manager into the launcher.
  • Make emittance and other material properties customizable on a per-block per-scene basis.
  • Add sprite rendering.
  • Improve fog rendering, as described in this issue.

Sprite Rendering

During Christmas I was tinkering with a Minecraft style sprite renderer, which renders textures as 3D objects, just like held items in the game are rendered.

It works pretty well so far; it just needs to be integrated into Chunky:

Sprite object rendering

GitHub Contributions

Recently I got some pull requests from leMaik and electron93 on GitHub. Some nice additions from leMaik are:

Several people have been helping by reporting issues and commenting on each other’s issues on GitHub and Reddit. That really helps, because common problems can be resolved much faster when users help each other.

Currently, leMaik is working on improving the plugin system to make it more usable.

Long-term Goals

The long-term goals for Chunky are currently:

  • Custom models from resource packs.
  • Add animal/mob rendering.
  • Signing and verifying Chunky core libraries.
  • Translation support.
  • Demo GPU rendering plugin.
  • Distributed rendering plugin.

The plugin architecture is important for future development because it shifts the focus for the core Chunky development from adding new features towards improving the existing architecture and making it more flexible for plugin development. I have removed GPU rendering from my long-term goals for Chunky because ideally it could be developed as a separate plugin. I still want to develop a nice demo plugin for GPU rendering which someone could build upon to create a full-featured GPU renderer.

No Comments on Chunky Development Update 1
Categories: Uncategorized

Broville v11 2016-12-06

There has been an incredible amount of human effort put into building amazing things in Minecraft. Some players build huge structures on their own while others band together in communities to make something truly impressive together. One of the most dedicated such communities I know of is the group that has been building Broville v11 over the past few years. Last week their efforts finally became publicly available with the release of Broville v11. You can download it here.

Broville Waterfront

No Comments on Broville v11
Categories: Uncategorized

Java Plugin Architecture 2016-10-16

In this post I will show how the plugin system in Chunky is being implemented.

Plugin support is surprisingly easy to add to a Java application thanks to the Java reflection API. Java reflection allows runtime type introspection and dynamic code loading. Some of the more exotic use cases for Java reflection involve generating executable code at runtime. A compiler hosted on and targeting the JVM could even use reflection for compile-time code execution. You don’t need anything nearly so exotic to implement a simple plugin system.
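For example, here is a minimal sketch of reflection-based plugin loading. The Plugin interface, the method names, and the class names are all invented for illustration; Chunky’s actual plugin API looks different.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;

// Hypothetical plugin interface that every plugin class must implement.
interface Plugin {
    String name();
    void attach(); // called once after the plugin is instantiated
}

// Stands in for a class that would normally live in a separate plugin jar.
class HelloPlugin implements Plugin {
    public String name() { return "hello"; }
    public void attach() { System.out.println("plugin attached: " + name()); }
}

public class PluginLoader {
    // Instantiate a plugin class by name using the given class loader.
    static Plugin load(ClassLoader loader, String className) throws Exception {
        Class<?> cls = Class.forName(className, true, loader);
        // Check the class actually implements Plugin before casting,
        // so a bad jar fails with a clear error instead of a ClassCastException.
        if (!Plugin.class.isAssignableFrom(cls)) {
            throw new IllegalArgumentException(className + " does not implement Plugin");
        }
        return (Plugin) cls.getDeclaredConstructor().newInstance();
    }

    // For an external jar, construct a class loader over the jar path.
    static ClassLoader jarLoader(Path jar) throws Exception {
        return new URLClassLoader(new URL[] { jar.toUri().toURL() },
                PluginLoader.class.getClassLoader());
    }

    public static void main(String[] args) throws Exception {
        // Here the "plugin" is on our own classpath; with a real plugin
        // jar we would pass jarLoader(path) instead.
        Plugin p = load(PluginLoader.class.getClassLoader(), "HelloPlugin");
        p.attach();
    }
}
```

The key pieces are Class.forName for dynamic loading, isAssignableFrom for type introspection, and URLClassLoader for pulling code out of a jar that was not on the classpath at startup.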

No Comments on Java Plugin Architecture
Categories: Uncategorized