The Kobalt diaries: Parallel builds

I’ve always wondered why the traditional build systems we use on the JVM (Maven, Gradle) don’t support parallel builds (edit: this is not quite true, see the comments), so now that I have developed my own build system, I decided to find out whether there was some particular difficulty to this problem that I had overlooked. It turned out to be easier than I anticipated, and to come with interesting benefits. Here is a preliminary report of my findings.

Defining parallel builds

While most builds are sequential by nature (task B can’t usually run until task A has completed), projects that contain multiple sub projects can greatly benefit from the performance boost of parallel builds. This speed-up will obviously not apply to single projects or to sub projects that have direct dependencies on each other (although this is not quite true, as we will see below). If you’re curious, the engine that powers Kobalt’s parallel builds is an improved version of TestNG’s own DynamicGraphExecutor, which I described in detail in this post.
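
To give a rough idea of what such an engine does, here is a minimal sketch of dynamic graph scheduling (this is not Kobalt’s actual code, and it does no error handling or cycle detection): nodes whose dependencies have all completed are submitted to a thread pool, and every completion can unblock new nodes.

import java.util.concurrent.Callable
import java.util.concurrent.ExecutorCompletionService
import java.util.concurrent.Executors

class DynamicGraph<T>(private val dependencies: Map<T, Set<T>>) {
    private val done = mutableSetOf<T>()
    private val submitted = mutableSetOf<T>()

    // A node is ready when everything it depends on has completed.
    fun readyNodes() = dependencies.keys.filter { node ->
        node !in done && node !in submitted && done.containsAll(dependencies[node] ?: emptySet())
    }

    fun markSubmitted(node: T) { submitted.add(node) }
    fun markDone(node: T) { submitted.remove(node); done.add(node) }
    fun isFinished() = done.size == dependencies.size
}

fun <T> runGraph(graph: DynamicGraph<T>, threadCount: Int, work: (T) -> Unit) {
    val executor = Executors.newFixedThreadPool(threadCount)
    val completion = ExecutorCompletionService<T>(executor)
    var inFlight = 0
    while (!graph.isFinished()) {
        // Submit everything that is currently unblocked.
        graph.readyNodes().forEach { node ->
            graph.markSubmitted(node)
            completion.submit(Callable { work(node); node })
            inFlight++
        }
        // Wait for one task to finish, then loop to see what it unblocked.
        if (inFlight > 0) {
            graph.markDone(completion.take().get())
            inFlight--
        }
    }
    executor.shutdown()
}

Kobalt’s actual executor does more than this (error handling, the scheduling summary shown below), but the idea is the same: as soon as a project finishes, anything it was blocking becomes eligible and is picked up by an idle thread.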

In order to measure the efficacy of parallel builds, I needed a more substantial project, so I picked ktor, Kotlin’s web framework developed by JetBrains. ktor is interesting because it contains more than twenty sub projects with various dependencies on each other. Here is a dependency graph:

You can see that core is the main project that everybody else depends on. After that, the dependencies open up and we have the potential to build some of these projects in parallel: locations, samples, etc…

I started by converting the ktor build from Maven to Kobalt. Right now, ktor is made of about twenty-five pom.xml files that add up to a thousand lines of build files. Kobalt’s build file (Build.kt) is one file of about two hundred lines, and you can find it here. Because this build file is pure Kotlin, it can completely eliminate the redundancies and maximize the reuse of the boilerplate code that most sub projects define.
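
To illustrate the kind of reuse this enables, here is a hypothetical sketch (not the actual ktor Build.kt; the version number and dependency are placeholders): the settings shared by all sub projects simply live in a regular Kotlin function that each project calls.

// Hypothetical helper: settings shared by every sub project, defined once.
val kotlinVersion = "1.0.2"

fun Project.commonSettings() {
    version = "0.1"   // placeholder version
    dependencies {
        compile("org.jetbrains.kotlin:kotlin-stdlib:$kotlinVersion")
    }
}

val core = project {
    name = "ktor-core"
    commonSettings()
}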

Extracting the dependencies from Build.kt is trivial thanks to Kobalt’s convenient syntax to define dependencies:

$ grep project kobalt/src/Build.kt
val core = project {
val features = project(core) {
val tomcat = project(core, servlet) {

We see the core project at the top of the dependency graph. Then features depends on core, tomcat depends on both core and servlet and so on…

So without further ado, let’s launch the build in parallel mode and let’s see what happens:

$ ./kobaltw --parallel assemble

At the end of a parallel build, Kobalt optionally displays a summary of the way it scheduled the various tasks. Here is what we get after building the project from scratch:

║  Time (sec) ║ Thread 39           ║ Thread 40           ║ Thread 41           ║ Thread 42           ║
║  0          ║ core                ║                     ║                     ║                     ║
║  45         ║ core (45)           ║                     ║                     ║                     ║
║  45         ║                     ║ ktor-locations      ║                     ║                     ║
║  45         ║                     ║                     ║ ktor-netty          ║                     ║
║  45         ║                     ║                     ║                     ║ ktor-samples        ║
║  45         ║ ktor-hosts          ║                     ║                     ║                     ║
║  45         ║                     ║                     ║                     ║                     ║
║  45         ║ ktor-hosts (0)      ║                     ║                     ║                     ║
║  45         ║ ktor-servlet        ║                     ║                     ║                     ║
║  45         ║                     ║                     ║                     ║                     ║
║  45         ║                     ║                     ║                     ║                     ║
║  45         ║                     ║                     ║                     ║ ktor-samples (0)    ║
║  45         ║                     ║                     ║                     ║ ktor-freemarker     ║
║  49         ║                     ║                     ║                     ║ ktor-freemarker (3) ║

PARALLEL BUILD SUCCESSFUL (68 seconds, sequential build would have taken 97 seconds)

The “Time” column on the left describes at what time (in seconds) each task was scheduled. Each project appears twice: when they start and when they finish (and when they do, the time they took is appended in parentheses).

Analyzing the thread scheduling

As you can see, core is scheduled first and since all the other projects depend on it, Kobalt can’t build anything else until that project completes, so the other threads remain idle during that time. When that build completes forty-five seconds later, Kobalt determines that quite a few projects are now eligible to build, so they get scheduled on all the idle threads: ktor-locations, ktor-netty, etc… The first to complete is ktor-hosts and Kobalt immediately schedules the next one on that thread.

Finally, Kobalt gives you the complete time of the build and also how long a sequential build would have taken, calculated by simply adding all the individual project times. It’s an approximation (maybe these projects would have been built faster if they weren’t competing with other build threads) but in my experience, it’s very close to what you actually get if you perform the same build sequentially.

Obviously, the gain from parallel builds is highly dependent on the size of your projects. For example, if project C depends on projects A and B and these two projects are very small, the gain from parallelizing that build will be marginal. However, if A and B are both large projects, you could see your total build time divided by two. Another big factor in the gain you can expect is whether you use an SSD. Since all these parallel builds are generating a lot of files in various directories concurrently, I suspect that a physical hard drive will be much slower than an SSD (I haven’t tested this, I only have SSDs around).

Taking things further

When project B depends on project A, it certainly looks like you can’t start building B until A has completed, right? Actually, that’s not quite true. It’s possible to parallelize (or more accurately, vectorize) such builds too. For example, suppose you launch ./kobaltw test on these two projects:

$ ./kobaltw test
----- A:compile
----- A:assemble
----- A:test
----- B:compile
----- B:assemble
----- B:test

But the dependency of B on A is not on the full build: B certainly doesn’t need to wait for A to have run its tests before it can start building. In this case, B is ready to build as soon as A is done assembling (i.e. has created A.jar). Here, we could envision scheduling threads at the task level rather than at the project level. What we could really do is:

║  Time (sec) ║ Thread 39           ║ Thread 40           ║
║             ║ A:compile           ║                     ║
║             ║ A:assemble          ║                     ║
║             ║ A:test              ║ B:compile           ║
║             ║                     ║ B:assemble          ║
║             ║                     ║ B:test              ║

As you can see above, Kobalt schedules B:compile as soon as A:assemble has completed while starting A:test on a separate thread, resulting in a clear reduction in build time.

This task-based approach can improve build times significantly since tests (especially functional tests) can take minutes to run.

Wrapping up

I started implementing parallel builds mostly as a curiosity and with very low expectations, but I ended up being very surprised by how well they work and how much they improve my build times, even when just considering project-based concurrency and not task-based concurrency. I’d be curious to hear back from Kobalt users on how well this new feature performs on their own projects.

And if you haven’t tried Kobalt yet, it’s extremely easy to get started.

Ad-hoc polymorphism in Kotlin

Even though Kotlin doesn’t natively support ad-hoc polymorphism today, it’s actually pretty straightforward to achieve with little effort. Doing so is not as straightforward as it is in Haskell, obviously, but it’s been simple enough that I haven’t really encountered situations in Kotlin where the lack of native support in the language was a showstopper. In this article, I present two techniques you can use to leverage ad-hoc polymorphism in Kotlin.

Extending for “fun” and some profit

Here is what the Haskell wiki has to say about ad-hoc polymorphism:

Despite the similarity of the name, Haskell’s type classes are quite different from the classes of most object-oriented languages. They have more in common with interfaces, in that they specify a series of methods or values by their type signature, to be implemented by an instance declaration.

Before we get into details, let’s define exactly what we are trying to do and why we’re trying to do it. One intuitive way of looking at ad-hoc polymorphism is that it enables us to retroactively make values conform to a certain type. This is a bit abstract but I’m pretty sure you have encountered this problem many times before even if you never realized it. Let’s look at a simple example.

A library typically defines types and then offers functions that take parameters and return values of those types. The only way to make use of that library is to find a way to bring your objects into that library’s object world, or in other words, to be able to convert your objects to objects that this library expects and vice versa. You will find many examples of type classes if you do a simple search: values that behave like a Number, Functor, Applicative, Monad, Monoid, Equality, Comparability, etc… In order not to repeat what’s already out there and in an effort to remain focused on concrete problems, I’m going to pick a different field: JSON.

The need to parse JSON and also convert your objects to JSON is pretty much universal, so in all likeliness, you are already using a JSON library in your code. And at some point, you have had to ask yourself a very simple question: “How do I convert my current objects to JSON so I can use this library”. The library probably defines some kind of JsonObject type and most of its API is defined in terms of this type, either with functions accepting parameters of that type or returning such values:

interface JsonObject {
    fun toJson() : String
}

In order to leverage this library, converting your objects so they conform to this interface is very important, and once you have converted your objects, you gain full access to all the functionality that this library offers, such as pretty printing JSON, doing search/replaces in JSON, reshaping JSON objects from one form to another, etc…

If you own (i.e. you are the author of) these classes, doing so is very easy. For example, you can modify your class to implement that interface:

class Account : JsonObject {
    override fun toJson() : String { ... }
}

The advantage of this approach is that you can now pass all your Account values directly to functions of the JSON library that accept a JsonObject. This is very useful, but the downside is that you have now polluted your class with a concern that makes your design more bloated. If you extend this approach to other types, very soon your Account class will implement multiple interfaces serving various concerns, and you will have tied your business logic to a lot of dependencies (i.e. you now need this JSON library in order to compile your Account…).

Another approach is simply to write a function that converts your business objects to JsonObjects:

fun Account.toJson() : JsonObject { ... }

This defines an extension function that adds the method toJson() directly on the Account class, where it belongs. The extension function buys us two very important benefits:

  • Since it’s an extension function, its implementation runs on the Account instance itself (in other words, this is of type Account).
  • This function is defined without making any modification to the Account class itself. Not only does it leave your class untouched and unpolluted with unrelated concerns, you can also apply this approach to classes that you don’t own. This is extremely important and a critical step toward ad-hoc polymorphism.

With this approach, we have gained separation of concerns and a great amount of flexibility, but we have lost some typing power: we can no longer pass an instance of Account to a function accepting a JsonObject parameter, we need to call toJson() on that instance first.
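
To make this concrete, here is a minimal sketch (Account, its fields and prettyPrint() are made up for illustration, and the JSON string is assembled by hand just to keep the example short):

class Account(val id: String, val balance: Int)

// Extension function: Account itself knows nothing about JsonObject.
fun Account.toJson(): JsonObject = object : JsonObject {
    override fun toJson() = """{"id": "${this@toJson.id}", "balance": ${this@toJson.balance}}"""
}

// A function from the hypothetical JSON library: it only knows about JsonObject.
fun prettyPrint(json: JsonObject) = println(json.toJson())

fun main(args: Array<String>) {
    // The conversion is explicit: we can't pass the Account directly.
    prettyPrint(Account("a-1", 150).toJson())
}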

This is how far we can go with Kotlin today and this fits in Kotlin’s general design principle to stay away from implicit conversions, a decision I’ve come to respect greatly after several years writing Kotlin code.

Escaping the tyranny of nominal typing

Let’s look at another approach to implement ad hoc polymorphism. Consider the following simple function that saves an object to a database:

fun persist(person: Person) {
    db.save(person.id, person)
}

The object is saved to the database and associated with its id. A little later, I want to persist an Account object, and in the spirit of proper software engineering, I’d rather generalize my existing code than write a second persist() function. So I make my function more generic and in the process, I discover that in order to be persisted, my Account instance needs to be able to give me its id. After refactoring, my code now looks like this:

interface Id {
    val id : String
}

class Person : Id {
    override val id: String get() = "1"
}

fun persist(id: Id) { ... }

But now, I find myself having to make Account implement Id, which is exactly what I am trying to avoid with ad hoc polymorphism (either because I think it’s bad design or more simply because the class Account is not mine, so I can’t modify it). The realization here is that these type names get in the way of my goal and I’d rather keep all these concepts separate.

What if, instead, the persist() method accepted an additional parameter (a function) that lets me obtain an id from the object being persisted?

fun <T> persist(o: T, toId: (T) -> String)

// Persist a Person: easy since Person extends Id
persist(person, { person -> person.id })

// Persist an Account: need to get an id some other way
persist(account, { account -> getAnIdForAccountSomehow(account) })

This new approach has a few interesting characteristics:

  1. Notice how completely generic the persist() method has become: it doesn’t reference Person, Account and not even Id, even though it needs some sort of id in order to operate. This function is literally applicable “for all” types (I am intentionally using double quotes here, some of you will probably immediately understand what “for all” means in this context).
  2. We have detached the ability to provide an id from our types. You still have the option to implement this functionality in your types (like Person does) but it’s now entirely optional (like Account shows). This gives you a lot of flexibility since you are no longer forced to use the id supplied by the class and you can also be more creative in your testing (e.g. trying to save two different objects but forcing them to have the same id in order to test for collision error cases).

This approach makes a drastic step toward a more functional solution to the problem of ad hoc polymorphism: we depend less on types and more on functions. As you can see, this approach provides some interesting benefits.
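
Here is the whole thing as a small, self-contained sketch (Db and the ids are stand-ins for whatever persistence layer you actually use):

object Db {
    fun save(id: String, o: Any) = println("Saved $o under id $id")
}

// Fully generic: no mention of Person, Account or even Id.
fun <T : Any> persist(o: T, toId: (T) -> String) = Db.save(toId(o), o)

interface Id { val id: String }
class Person(override val id: String) : Id
class Account(val number: Int)           // not ours: no Id to implement

fun main(args: Array<String>) {
    persist(Person("1")) { it.id }                   // use the type's own id
    persist(Account(42)) { "account-${it.number}" }  // supply an id externally
}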

An ad-hoc polymorphism proposal for Kotlin

When I reflected on this problem a while ago, it occurred to me that ad-hoc polymorphism has a lot in common with Kotlin’s extension functions: an extension function adds a function to a type outside of the definition of that type, and ad-hoc polymorphism makes a type extend another type outside of the definition of that type. I came up with the concept of “Extension types” and I gave a quick overview of this idea in this article. Extension types would allow us to make types retroactively implement other types with this made-up syntax:

// Not legal Kotlin
override class Account: JsonObject<Account>

The rest of the interface would be implemented with extension functions, as demonstrated in the link above. The downside of this proposal is that it adds some form of implicit conversion, something that is at odds with Kotlin’s current design, so it’s unlikely this proposal will ever go past the strawman stage, but I thought it would be interesting to draw a parallel between extension functions and extension types.

Does Kotlin really need ad-hoc polymorphism?

The more I think about it, the more convinced I am that the value offered by ad-hoc polymorphism is very closely tied to the language you’re using it in. In other words, it’s not a universal tool but one that’s heavily dependent on how well supported it is in your language. Ad-hoc polymorphism is obviously a critical component of Haskell and it has given rise to high amounts of reuse and elegant abstractions in that language but I’m not sure Kotlin would benefit as much from it.

Another important aspect of deciding how useful ad-hoc polymorphism would be in a language is whether that language supports higher kinds (type families). Without higher kinds, your ability to abstract is limited, which lessens the value of ad-hoc polymorphism significantly. And since Kotlin doesn’t support higher kinds as of this writing, the importance of native support for ad-hoc polymorphism is questionable, or at least, certainly not as high a priority as other features.

At any rate, I have used the two techniques described above in my own code bases with reasonable benefit, so I hope they will be useful to others as well.

Neural networks in Kotlin (part 2)

In the previous installment, I introduced a mystery class NeuralNetwork which is capable of calculating different results depending on the data that you train it with. In this article, we’ll take a closer look at this neural network and crunch a few numbers.

Neural networks overview

A neural network is a graph of neurons made of successive layers. The graph is typically split in three parts: the leftmost column of neurons is called the “input layer”, the rightmost column of neurons is the “output layer” and all the neurons in-between are the “hidden” layer. This hidden layer is the most important part of your graph since it’s responsible for making the calculations. There can be any number of hidden layers and any number of neurons in each of them (note that the Kotlin class I wrote for this series of articles only uses one hidden layer).

Each edge that connects two neurons has a weight which is used to calculate the output of the downstream neuron. The calculation is a weighted sum: each input value is multiplied by the weight of its edge, and the results are added together. Let’s take a look at a quick example:

This network has two input values, one hidden layer of size two and one output. Our first calculation is therefore:

w11-output = 2 * 0.1 + (-3) * (-0.2) = 0.8
w12-output = 2 * (-0.4) + (-3) * 0.3 = -1.7

We’re not quite done: the actual outputs of neurons (also called “activations”) are typically passed to a normalization function first. To get a general intuition for this function, you can think of it as a way to constrain the outputs within the [-1, 1] range, which prevents the values flowing through the network from overflowing or underflowing. Also, it’s useful in practice for this function to have additional properties connected to its derivative but I’ll skip over this part for now. This function is called the “activation function” and the implementation I used in the NeuralNetwork class is the hyperbolic tangent, tanh.
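
In code, the output of a single neuron therefore looks something like this (a sketch, not the actual NeuralNetwork implementation):

// Weighted sum of the inputs, squashed by the activation function.
fun neuronOutput(inputs: DoubleArray, weights: DoubleArray): Double {
    var sum = 0.0
    for (i in inputs.indices) sum += inputs[i] * weights[i]
    return Math.tanh(sum)
}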

In order to remain general, I’ll just refer to the activation function as f(). We therefore refine our first calculations as follows:

w11-output = f(2 * 0.1 + (-3) * (-0.2))
w12-output = f(2 * (-0.4) + (-3) * 0.3)

There are a few additional details to this calculation in actual neural networks but I’ll skip those for now.

Now that we have all our activations for the hidden layer, we are ready to move to the next layer, which happens to be the output layer, so we’re almost done:

output = f(0.1 * w11-output - 0.2 * w12-output)
       = 0.42

As you can see, calculating the output of a neural network is fairly straightforward and fast, much faster than actually training that network. Once you have created your network and you are satisfied with its results, you can just pass around the characteristics of that network (weights, sizes, …) and any device (even phones) can then use that network.

Revisiting the xor network

Let’s go back to the xor network we created in the first episode. I created this network as follows:

NeuralNetwork(inputSize = 2, hiddenSize = 2, outputSize = 1)

We only need two inputs (the two bits) and one output (the result of a xor b). These two values are fixed. What is not fixed is the size of the hidden layer, and I decided to pick 2 here, for no particular reason. It’s interesting to tweak these values and see whether your neural network performs better or worse based on them, and there is actually a great deal of both intuition and arbitrary choice that goes into these decisions. The values that you use to configure your network before you run it are called “hyperparameters”, in contrast to the other values which get updated while your network runs (e.g. the weights).

Let’s now take a look at the weights that our xor network came up with, which you can display by running the Kotlin application with --log 2:

Input weights:
-1.21 -3.36 
-1.20 -3.34 
1.12 1.67 

Output weights:
3.31
-2.85

Let’s put these values on the visual representation of our graph to get a better idea:

You will notice that the network above contains a neuron called “bias” that I haven’t introduced yet. And I’m not going to introduce it just yet, beyond saying that this bias helps the network avoid edge cases and learn more rapidly. For now, just accept it as an additional neuron whose output is not influenced by the previous layers.

Let’s run the graph manually on the input (1,0), which should produce 1:

hidden1-1 = 1 * -1.21
hidden1-2 = 0 * -1.20
bias1     = 1 * 1.12

output1 = tanh(-1.21 + 1.12) = -0.09

hidden2-1 = 1 * -3.36
hidden2-2 = 0 * -3.34
bias2     = 1 * 1.67

output2 = tanh(-3.36 + 1.67) = -0.93

// Now that we have the outputs of the hidden layer, we can calculate
// our final result by combining them with the output weights:

finalOutput = tanh(output1 * 3.31 + output2 * (-2.85))
            = 0.98

We have just verified that if we input (1,0) into the network, we’ll receive 0.98 in output. Feel free to calculate the other three inputs yourself or maybe just run the NeuralNetwork class with a log level of 2, which will show you all these calculations.
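
If you prefer, the manual calculation above can be reproduced in a few lines of Kotlin (this is just a check of the arithmetic using the weights dumped by the network, not the NeuralNetwork class itself):

fun main(args: Array<String>) {
    // Input (1, 0), hidden layer weights plus the bias neuron
    val output1 = Math.tanh(1 * -1.21 + 0 * -1.20 + 1 * 1.12)   // ≈ -0.09
    val output2 = Math.tanh(1 * -3.36 + 0 * -3.34 + 1 * 1.67)   // ≈ -0.93
    // Output layer weights
    val result = Math.tanh(output1 * 3.31 + output2 * -2.85)    // ≈ 0.98
    println("xor(1, 0) ≈ $result")
}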

Revisiting the parity network

So the calculations hold up but it’s still a bit hard to understand where these weights come from and why they interact in the way they do. Elucidating this will be the topic of the next installment but before I wrap up this article, I’d like to take a quick look at the parity network because its content might look a bit more intuitive to the human eye, while the xor network detailed above still seems mysterious.

If you train the parity network and you ask the NeuralNetwork class to dump its output, here are the weights that you’ll get:

Input weights:
0.36 -0.36 
0.10 -0.09 
0.30 -0.30 
-2.23 -1.04 
0.57 -0.58 

Output weights:

If you pay attention, you will notice an interesting detail about these numbers: the two weights leaving each of the first three input neurons cancel each other out, while the two weights leaving the fourth input reinforce each other. It’s pretty clear that the network has learned that when you are testing the parity of a number in binary format, the only bit that really matters is the least significant one (the last one). All the others can simply be ignored, so the network has learned from the training data that the best way to get close to the desired output is to only pay attention to that last bit and cancel all the other ones out so they don’t take part in the final result.

Wrapping up

This last result is quite remarkable if you think about it, because it really looks like the network learned how to test parity at the formal level (“The output of the network should be the value of the least significant bit, ignore all the others”), inferring that result just from the numeric data we trained it with. Understanding how the network learned how to modify itself to reach this level will be the topic of the next installment of the series.

Neural Network in Kotlin

It’s hard not to hear about machine learning and neural networks these days since the practice is being applied to an increasingly wide variety of problems. Neural networks can be intimidating and look downright magical to the untrained (ah!) eye, so I’m going to attempt to dispel these fears by demonstrating how these mysterious networks operate. And since there are already so many tutorials on the subject, I’m going to take a different approach and go from top to bottom.


In this first series of articles, I will start by running a very simple network on two simple problems, show you that they work and then walk through the network to explain what happened. Then I’ll backtrack to deconstruct the logic behind the network and why it works.

The neural network I’ll be using in this article is a simple one I wrote. No TensorFlow, no Torch, no Theano. Just some basic Kotlin code. The original version was about 230 lines but it’s a bit bigger now that I have broken it up into separate classes and added comments. The whole project can be found on github under the temporary “nnk” name. In particular, here is the source of the neural network we’ll be using.

I will be glossing over a lot of technical terms in this introduction in order to focus on the numeric aspect, but I’m hoping to get into more details as we slowly peel back the layers. For now, we’ll just look at the network as a black box that gets fed input values and outputs values.

The main characteristic of a neural network is that it starts completely empty but it can be taught to solve problems. We do this by feeding it values and telling it what the expected output is. We iterate over this approach many times, changing these inputs/expected parameters and as we do that, the network updates its knowledge to come up with answers that are as close to the expected answers as possible. This phase is called “training” the network. Once we think the network is trained enough, we can then feed it new values that it hasn’t seen yet and compare its answer to the one we’re expecting.

The problems

Let’s start with a very simple example: xor.

This is a trivial and fundamental binary arithmetic operation which returns 1 if the two inputs are different and 0 if they are equal. We will train the network by feeding it all four possible combinations and telling it what the expected outcome is. With the Kotlin implementation of the Neural Network, the code looks like this:

with(NeuralNetwork(inputSize = 2, hiddenSize = 2, outputSize = 1)) {
	val trainingValues = listOf(
	    NetworkData.create(listOf(0, 0), listOf(0)),
	    NetworkData.create(listOf(0, 1), listOf(1)),
	    NetworkData.create(listOf(1, 0), listOf(1)),
	    NetworkData.create(listOf(1, 1), listOf(0)))
	// ... train the network with trainingValues, then test it (calls elided here)
}


Let’s ignore the parameters given to NeuralNetwork for now and focus on the rest. Each line of NetworkData contains the inputs (each combination of 0 and 1: (0,0), (0,1), (1,0), (1,1)) and the expected output. In this example, the output is just a single value (the result of the operation) so it’s a list of one value, but networks can return an arbitrary number of outputs.

The next step is to test the network. Since there are only four different inputs here and we used them all for training, let’s just use that same list of inputs, but this time we’ll display the output produced by the network instead of the expected one. The result of this run is as follows:

Running neural network xor()

[0.0, 0.0] -> [0.013128957]
[0.0, 1.0] -> [0.9824073]
[1.0, 0.0] -> [0.9822749]
[1.0, 1.0] -> [-2.1314621E-4]

As you can see, these values are pretty decent for such a simple network and such a small training data set and you might rightfully wonder: is this just luck? Or did the network cheat and memorize the values we fed it while we were training it?

One way to find out is to see if we can train our network to learn something else, so let’s do that.

A harder problem

This time, we are going to teach our network to determine whether a number is odd or even. Because the implementation of the graph is pretty naïve and this is just an example, we are going to train our network with binary numbers. Also, we are going to learn a first important lesson in neural networks which is to choose your training and testing data wisely.

You probably noticed in the example above that I used the same data to train and test the network. This is not a good practice but it was necessary for xor since there are so few cases. For better results, you usually want to train your network on a certain population of the data and then test it on data that your network hasn’t seen yet. This helps ensure that you are not “overfitting” your network and also that it is able to generalize what you taught it to input values that it hasn’t seen yet. Overfitting means that your network does great on the data you trained it with but poorly on new data. When this happens, you usually want to tweak your network so that it will possibly perform less well on the training data but it will return better results for new data.

For our parity test network, let’s settle on four bits (integers 0 – 15) and we’ll train our network on eleven of these numbers and test it on five that it hasn’t seen:

with(NeuralNetwork(inputSize = 4, hiddenSize = 2, outputSize = 1)) {
	val trainingValues = listOf(
	    NetworkData.create(listOf(0, 0, 0, 0), listOf(0)),
	    NetworkData.create(listOf(0, 0, 0, 1), listOf(1)),
	    NetworkData.create(listOf(0, 0, 1, 0), listOf(0)),
	    NetworkData.create(listOf(0, 1, 1, 0), listOf(0)),
	    NetworkData.create(listOf(0, 1, 1, 1), listOf(1)),
	    NetworkData.create(listOf(1, 0, 1, 0), listOf(0)),
	    NetworkData.create(listOf(1, 0, 1, 1), listOf(1)),
	    NetworkData.create(listOf(1, 1, 0, 0), listOf(0)),
	    NetworkData.create(listOf(1, 1, 0, 1), listOf(1)),
	    NetworkData.create(listOf(1, 1, 1, 0), listOf(0)),
	    NetworkData.create(listOf(1, 1, 1, 1), listOf(1)))

	val testValues = listOf(
	    NetworkData.create(listOf(0, 0, 1, 1), listOf(1)),
	    NetworkData.create(listOf(0, 1, 0, 0), listOf(0)),
	    NetworkData.create(listOf(0, 1, 0, 1), listOf(1)),
	    NetworkData.create(listOf(1, 0, 0, 0), listOf(0)),
	    NetworkData.create(listOf(1, 0, 0, 1), listOf(1)))
	// ... train with trainingValues, then run the network on testValues (calls elided here)
}

And here is the output of the test:

Running neural network isOdd()

[0.0, 0.0, 1.0, 1.0] -> [0.9948013]
[0.0, 1.0, 0.0, 0.0] -> [0.0019584869]
[0.0, 1.0, 0.0, 1.0] -> [0.9950419]
[1.0, 0.0, 0.0, 0.0] -> [0.0053276513]
[1.0, 0.0, 0.0, 1.0] -> [0.9947305]

Notice that the network is now outputting correct results for numbers that it hadn’t seen before, just because of the way it adapted itself to the training data it was initially fed. This gives us good confidence that the network has configured itself to classify numbers from any input values and not just the ones it was trained on.

Wrapping up

I hope that this brief overview will have whetted your appetite or at least piqued your curiosity. In the next installment, I’ll dive a bit deeper into the NeuralNetwork class, explain the constructor parameters and we’ll walk through the inner workings of the neural network that we created to demonstrate how it works.

The Kobalt diaries: Automatic Android SDK management

The dreaded SDK Manager

Android has always had a weird dependency mechanism. On the JVM (and therefore, Android), we have this great Maven repository system which is leveraged by several tools (Gradle and Kobalt on Android) and which allows us to add dependencies with simple additions to our build files. This is extremely powerful and it has undoubtedly been instrumental in increasing the JVM’s popularity. This system works for pretty much any type of application and dependency.

Except for Android libraries.

The Android support libraries (and I’m using “support” loosely here to include all such libraries and not just the one that Google calls “support”) are not available on any of the Maven repositories. They are not even available on Google’s own repository. Instead, you need to use a special tool to download them, and once you do that, they land on your local drive as a local Maven repository, which you then need to declare to Gradle so you can finally add the dependencies you need.

I suspect the reason why these libraries are not available in a straight Maven repo is that you need to accept licenses before you can use them, but regardless, this separate download management makes building Android applications more painful, especially for build servers (Travis, Jenkins) which need to be configured specifically for these builds.

The graphical tool used to download this repository, called “SDK Manager”, is also available as a command line tool called "android" that can be invoked without the GUI, and inspired by Jake Wharton’s sdk-manager-plugin, I set out to implement automatic SDK management for Kobalt.

Once I became more comfortable with the esoteric command line syntax used by the "android" tool, adding it to the Kobalt Android plug-in was trivial, and as a result, Kobalt is now able to completely install a full Android SDK from scratch.

In other words, all you need in your project is a simple Android configuration in your Build.kt file:

    android {
        compileSdkVersion = "23"
        buildToolsVersion = "23.0.1"
        // ...
    }

    dependencies {
        // ...
    }

The Kobalt Android plug-in will then automatically download everything you need to create an apk from this simple build file:

  • If $ANDROID_HOME is specified, use it and make sure a valid SDK is there. If that environment variable is not specified, install the SDK in a default location (~/.android-sdk).
  • If no Build Tools are installed, install them.
  • Then go through all the Google and Android dependencies for that project and install them as needed.
  • And a few other things…

A typical run on a clean machine with nothing installed will look like this:

$ ./kobaltw assemble
Android SDK not found at /home/travis/.android/android-sdk-linux, downloading it
Couldn't find /home/travis/.android/android-sdk-linux/build-tools/23.0.1, downloading it
Couldn't find /home/travis/.android/android-sdk-linux/platform-tools, downloading it
Couldn't find /home/travis/.android/android-sdk-linux/platforms/android-23, downloading it
Couldn't find Maven repository for extra-android-m2repository, downloading it
Couldn't find Maven repository for extra-google-m2repository, downloading it
          | Building androidFlavors |
------ androidFlavors:clean
------ androidFlavors:generateR
------ androidFlavors:compile
  Java compiling 4 files
------ androidFlavors:proguard
------ androidFlavors:generateDex
------ androidFlavors:signApk
Created androidFlavors/kobaltBuild/outputs/apk/androidFlavors.apk

Obviously, these downloads will not happen again unless you modify the dependencies in your build file.

I’m hopeful that Google will eventually make these support libraries available on a real remote Maven repository so we don’t have to jump through these hoops any more, but until then, Kobalt has you covered.

This feature is available in the latest kobalt-android plug-in as follows:

val p = plugins("com.beust:kobalt-android:0.81")

The Kobalt diaries: testing

Kobalt automatically detects how to run your tests based on the test dependencies that you declared:

dependenciesTest {
    // e.g. a TestNG, JUnit or Spek dependency
}

By default, Kobalt supports TestNG, JUnit and Spek. You can also configure how your tests run
with the test{} directive:

test {
    args("-excludegroups", "broken", "src/test/resources/testng.xml")
}

The full list of configuration parameters can be found in the TestConfig class.

Additionally, you can define multiple test configurations, each with a different name. Each
configuration will create an additional task named "test" followed by the name of
that configuration. For example:

test {
    args("-excludegroups", "broken", "src/test/resources/testng.xml")
}

test {
    name = "All"
    // ...
}

The first configuration has no name, so it will be launched with the task "test",
while the second one can be run with the task "testAll".

The full series of articles on Kobalt can be found here.

The Kobalt diaries: templates

The latest addition to Kobalt is templates, also known as “archetypes” in Maven.

Templates are actions performed by plug-ins that create a set of files in your project. They are typically used when you are beginning a brand new project and you want some default files to be created. Of course, nothing stops you from invoking templates even if you already have an existing project, since templates can generate pretty much any kind of file. Here is how they work in Kobalt.

You can get a list of available templates with the --listTemplates parameter:

$ kobaltw --listTemplates
Available templates
  Plug-in: Kobalt
    "java"              Generate a simple Java project
    "kotlin"            Generate a simple Kotlin project
    "kobalt-plugin"     Generate a sample Kobalt plug-in project

You invoke a template with the --init parameter. Let’s call the "kobalt-plugin" template:

$ ./kobaltw --init kobalt-plugin
Template "kobalt-plugin" installed
Build this project with `./kobaltw assemble`

$ ./kobaltw assemble
          ║ Building kobalt-line-count ║
───── kobalt-line-count:compile
───── kobalt-line-count:assemble
  Created .\kobaltBuild\libs\kobalt-line-count-0.18.jar
  Created .\kobaltBuild\libs\kobalt-line-count-0.18-sources.jar
  Wrote .\kobaltBuild\libs\kobalt-line-count-0.18.pom

The template was correctly installed, then it provided instructions on what to do next, which we followed, and now we have a fully working project. This one is a special case since it’s a Kobalt plug-in, and I’ll get back to it shortly. But before that, let’s drill a bit deeper into templates.

Templates would be pretty useless if they were limited to the default Kobalt distribution, so of course, you can invoke templates on plug-ins. Even plug-ins that you haven’t downloaded yet! Kobalt can download plug-ins from any Maven repository and run them.

To illustrate this, let’s see what templates the Kobalt Android plug-in offers:

$ kobaltw --listTemplates --plugins com.beust:kobalt-android:
Available templates
  Plug-in: Kobalt
    "java"              Generate a simple Java project
    "kotlin"            Generate a simple Kotlin project
    "kobaltPlugin"      Generate a sample Kobalt plug-in project
  Plug-in: Android
    "androidJava"       Generate a simple Android Java project
    "androidKotlin"     Generate a simple Kotlin Java project

Several things happened here. First of all, we are invoking the same --listTemplates command as earlier but this time, there is a new --plugins parameter. You pass this parameter a list of Maven ids representing the Kobalt plug-ins you want Kobalt to install. This is similar to declaring these plug-ins in your build file, except that typically, when you run a template, you don’t have a build file yet. So this is an easy way to install plug-ins without requiring a build file.

Finally, notice that the Maven id used above, "com.beust:kobalt-android:" doesn’t have a version number and instead, ends with a colon. This is how you ask Kobalt to locate the latest version of the plug-in for you.

Kobalt responded by determining that the latest version of the Kobalt Android plug-in is 0.40, downloading it, installing it and then asking it what templates it provides. The Kobalt Android plug-in provides two templates, both of them creating a full-blown Android application, one written in Kotlin and one in Java. Let’s install the Kotlin one:

$ kobaltw --plugins com.beust:kobalt-android: --init androidKotlin
Template "androidKotlin" installed
Build this project with `./kobaltw assemble`

$ find .

$ ./kobaltw assemble
          ║ Building kobalt-demo ║
───── kobalt-demo:generateR
───── kobalt-demo:compile
───── kobalt-demo:proguard
───── kobalt-demo:generateDex
───── kobalt-demo:signApk
Created kobaltBuild\outputs\apk\kobalt-demo.apk
───── kobalt-demo:assemble

We now have a complete Android application written in Kotlin.

Let’s go back to the template we built at the beginning of this article: the Kobalt plug-in called "kobalt-line-count-0.18.jar". It’s a valid Kobalt plug-in so how do we test it? We could upload it to JCenter and then invoke it with the --plugins parameter, but Kobalt provides another handy command line parameter to test such plug-ins locally: --pluginJarFiles. This parameter is similar to --plugins in that it installs a plug-in, except that it does so from a local jar file and not a remote Maven id.

Let’s install this plug-in and see which tasks are then available to us:

$ ./kobaltw --pluginJarFiles kobaltBuild/libs/kobalt-line-count-0.18.jar --tasks

List of tasks


  ═════ kobalt-line-count ═════
    dynamicTask         Dynamic task
    lineCount           Count the lines


As you can see, Kobalt has installed the kobalt-line-count plug-in, which then added its own tasks to Kobalt’s default ones. The Kobalt plug-in template appears to work fine. From this point on, you can edit it and create your own Kobalt plug-in.

Speaking of plug-in development, how hard is it to add templates to your Kobalt plug-in? Not hard at all! All you need to do is to implement the ITemplateContributor interface:

interface ITemplateContributor {
    val templates: List<ITemplate>
}

Each template provides a name, a description and a function that actually generates the files for the current project. Feel free to browse how Kobalt’s Android plug-in implements its templates.

The full series of articles on Kobalt can be found here.

Old school Apple ][ cracking

My first steps in programming probably go back about thirty-five years on the HP-38 and HP-41 but nothing will ever top the amazing times I had cracking games on the Apple ][ in the early 80s.

The Apple ][ and its DOS were extremely fertile grounds for software protection, which led to some of the most fascinatingly intricate approaches to making sure that your program could not be easily copied. I’m not going to dive very deep into technical details about the Apple ][ architecture, but the short version is that this computer let you reimplement how bytes are stored on the diskettes that you ship your software on, so needless to say, companies selling software for a living were more than happy to go ahead and do just that in an attempt (mostly futile) to curb piracy.

I did a lot of cracking back in those days, mostly for the fun of it. Actually, I enjoyed getting my hands on games more for the pleasure of cracking them than actually playing them. However, one particular game resisted my attempts: “The Blade of Blackpoole”. A pretty mediocre adventure game in the style that was very popular then.

This copy protection used a lot of tricks that I just was not able to handle at the time. Remember, this was the early 80s. There was no Internet and pretty much nobody around me with enough technical knowledge of the Apple ][ to help me out. I had to figure things out on my own.

Recently, I had the crazy idea to revisit this old skeleton of mine and see if I could do better now, given all the tools and technology that the 21st century affords me. So I grabbed an image of the protected version of the game, fired up a few emulators (I did this work both on Windows and Mac OS) and got to work.

It was slow at first, but I was spooked to realize how much I actually remembered of the Apple ][‘s internal architecture. And what I didn’t remember, the Internet happily provided for me. As it turns out, the Apple ][ cracking scene is still quite active (shout out to my inspiration for this work: a2_4am, who has been actively cracking hundreds of Apple ][ games this past year alone).

I carefully documented all my work cracking the Blade of Blackpoole in this document. I decided to store it in a separate file because it’s long and gruesome and goes into excruciating details about the Apple ][ and 6502 assembly. It’s not for the faint of heart, but I think you might find it interesting to follow even if you’re not completely familiar with all the technical details because it captures pretty accurately the timeless struggle between programmers who write copy protections and programmers who defeat them.

Fast forward to 2016. Copy protection is more alive than ever and the producer side seems to have struck a very serious blow to the cracking scene with Denuvo, a system that is proving extremely hard to crack and which, to everyone’s surprise, is actually an anti-tamper mechanism and not an anti-piracy technology. There is so much to say about this that I’ll probably save it for another post, but in the meantime, I hope you enjoy my old school cracking report.

The Kobalt Diaries: Incremental Tasks

One of the recent additions to Kobalt is incremental tasks: the ability for each build task to check whether it needs to run at all, based on whether something has changed since the previous run. Here is a quick outline of how this feature works in Kobalt.


Kobalt’s incremental task architecture is based on checksums. You implement an incremental task by giving Kobalt a way to compute an input checksum and an output checksum. When the time comes to run your task, Kobalt will ask for your input checksum and it will compare it to that of the previous run. If they are different, your task is invoked. If they are identical, Kobalt then compares the two output checksums. Again, if they are different, your task is run, otherwise it’s skipped. Finally, Kobalt updates the output checksum on successful completion of your task.

This mechanism is extremely general and straightforward to implement for plug-in developers, who remain in full control of how exhaustive their checksum should be. You could decide to stick to the default MD5 checksums of the files and directories that are of interest to your task, or if you want to be faster, only check the timestamps of your files and return a checksum reflecting whether Kobalt should run you or not. And of course, checksums don’t even have to map to files at all: if your task needs to perform a costly download, it could first check a few HTTP headers and again, return a checksum indicating whether your task should run.
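
For instance, a cheap input checksum could hash the paths and last-modified timestamps of the files instead of their contents (a sketch of the idea, not Kobalt code):

import java.io.File

fun timestampChecksum(root: File): String =
    root.walkTopDown()
        .filter { it.isFile }
        .joinToString("\n") { "${it.path}:${it.lastModified()}" }
        .hashCode()
        .toString()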

Having said that, build systems tend to run tasks that have files for inputs and outputs, so it seems logical to think about an incremental resolution that would be based not on checksums (which can be expensive to compute) but on file analyses. While a checksum can tell you “One of these N files has been modified”, it can’t tell you exactly which one, and such information can open the door to further incremental work (see below for more details).

One approach for file-based tasks could be for the build system to store the list of files along with some other data (timestamp or checksum) and then pass the relevant information to the task itself. The complication here is that file change resolution implies knowing the following three pieces of information:

  • Which files were modified.
  • Which files were added.
  • Which files were removed.

The downside is obviously that there is more bookkeeping required to preserve this information around between builds but the clear benefit is that if a task ends up being invoked, it can perform its own incremental work on just the files that need to be processed, whereas the checksum approach forces the task to perform its work on the entire set of inputs.


Incremental tasks are not very different from regular tasks. An incremental task returns an IncrementalTaskInfo instance which is defined as follows:

class IncrementalTaskInfo(
    val inputChecksum: String?,
    val outputChecksum: () -> String?,
    val task: (Project) -> TaskResult)

The last parameter is the task itself and the first two are the input and output checksums of your task. Your task simply uses the @IncrementalTask annotation instead of the regular @Task and it needs to return an instance of that class:

@IncrementalTask(name = "compile", description = "Compile the source files")
fun taskCompile(project: Project) = IncrementalTaskInfo(/* ... */)
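
Putting it together, an incremental compile task might look roughly like this (a sketch: md5Of() and doCompile() are hypothetical helpers and the directories are placeholders; only the @IncrementalTask annotation and the IncrementalTaskInfo class above come from Kobalt):

import java.io.File
import java.security.MessageDigest

// Hypothetical helper: MD5 over the contents of every file under a directory.
fun md5Of(dir: File): String? {
    if (!dir.exists()) return null
    val md = MessageDigest.getInstance("MD5")
    dir.walkTopDown().filter { it.isFile }.sortedBy { it.path }.forEach { md.update(it.readBytes()) }
    return md.digest().joinToString("") { String.format("%02x", it) }
}

@IncrementalTask(name = "compile", description = "Compile the source files")
fun taskCompile(project: Project) = IncrementalTaskInfo(
        inputChecksum = md5Of(File("src/main/kotlin")),
        outputChecksum = { md5Of(File("kobaltBuild/classes")) },
        task = { p -> doCompile(p) })   // doCompile() is hypothetical: it does the real work and returns a TaskResult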

Most of Kobalt’s own tasks are now incremental (wherever that makes sense) including the Android plug-in. Here are a few timings showing incremental builds in action:


Kobalt

Task                          First run    Second run
kobalt-wrapper:compile        627 ms       22 ms
kobalt-wrapper:assemble       9 ms         9 ms
kobalt-plugin-api:compile     10983 ms     54 ms
kobalt-plugin-api:assemble    1763 ms      154 ms
kobalt:compile                11758 ms     11 ms
kobalt:assemble               42333 ms     2130 ms
Total                         70 seconds   2 seconds

Android (u2020)

Task                            First run    Second run
u2020:generateRInternalDebug    32350 ms     1652 ms
u2020:compileInternalDebug      3629 ms      24 ms
u2020:retrolambdaInternalDebug  668 ms       473 ms
u2020:generateDexInternalDebug  6130 ms      55 ms
u2020:signApkInternalDebug      449 ms       404 ms
u2020:assembleInternalDebug     0 ms         0 ms
Total                           43 seconds   2 seconds

Wrapping up

At the moment, Kobalt only supports checksum-based incremental tasks since that approach subsumes all the other approaches but I’m not ruling out adding input-specific incremental tasks in the future if there’s interest. In the meantime, checksums are working very well and pretty efficiently, even on large directories and/or large files.

If you are curious to try it yourself, please download Kobalt and report back!

The full series of articles on Kobalt can be found here.

A close look at Kotlin’s “let”

let is a pretty useful function from the Kotlin standard library defined as follows:

fun <T, R> T.let(f: (T) -> R): R = f(this)

You can refer to a previous article I wrote if you want to understand how this function works, but in this post, I’d like to take a look at the pros and cons of using let.

let is basically a scoping function that lets you declare a variable for a given scope:

File("a.txt").let {
    // the file is now in the variable "it"

There is another subtle use of let when applied to a nullable reference. The ?. operator
lets you make sure that the code in scope is only run if the expression is not null:

findUser(id)?.let {
    // only run if findUser() returned a non null value
}

After going back and forth about whether this idiom is superior to a simple null test, I am slowly leaning toward abandoning it in favor of an if, for the following reasons:

  • This idiom is only useful if you want to do an if that doesn’t have an else branch. I tend to view such constructs as suspicious since if without an else can be a source of bugs.

  • This idiom introduces a renaming. Either you use the default lambda syntax, in which case the renamed variable is implicitly called it, or you explicitly name the argument:

    val user = findUser(id)
    user?.let { foundUser ->
        // ...
    }

    This can occasionally be useful but sometimes, I just don’t feel like being forced to rename my variable.

  • Following the previous point, if doesn’t impose a renaming, and Kotlin’s smart casting guarantees that you won’t have any surprises:

    val user = findUser(id)
    if (user != null) {
        // user is now a non null reference
    }

    Also, the fact that no new name was introduced and the variable keeps its name user the entire time improves readability in my opinion.

So for these reasons, I tend to default to a good old if these days. None of these arguments are deal breakers, it’s mostly a stylistic preference at this point. Let’s see if I’ll change my mind over the next few months.