The Kobalt diaries: profiles

When I started thinking about how profiles should work in Kobalt, I realized that the simplest approach I’d like to see in a build tool is defining a boolean variable and having if statements in my build file. So that’s exactly how Kobalt’s profiles are implemented.

You start by defining boolean values initialized to false in your build file:

  val experimental = false
  val premium = false

Then you use these variables wherever you need them in your build file:

  val p = javaProject {
      name = if (experimental) "project-exp" else "project"
      version = "1.3"

Finally, you invoke ./kobaltw with the --profiles parameter followed by the profiles you want to activate, separated by a comma:

  ./kobaltw --profiles experimental,premium assemble

Keep in mind that since your build file is a real Kotlin source file,
you can use these profile variables pretty much anywhere, e.g.:

dependencies {
    if (experimental)
        compile("com.example:experimental-lib:1.0")   // hypothetical dependency, shown for illustration
}

And that’s it.

The Kobalt diaries: it’s the little things

When I embarked on the ridiculously ambitious goal of writing a build tool, I had plans to tackle both big problems and small problems. My previous (and probably future) blog posts cover the big problems such as performance, plug-in architecture and DSL syntax, but in this post, I’m going to cover a few little things that I was quite happy to finally get from my build tool.

I’ve always found it a hassle to keep up with the latest versions of my build’s dependencies, especially since it should really be the build tool’s job to tell me. Therefore, Kobalt has a handy --checkVersions parameter that checks whether it can find any new versions of your dependencies:

$ ./kobaltw --checkVersions
New versions found:

Another convenient switch is --resolve, which looks up a dependency and gives you some information about it, such as which Maven repo it is found in and its own dependency tree. You can also use an id without a version (e.g. org.testng:testng:) to ask Kobalt to find the most recent version of that artifact:

$ ./kobaltw --resolve org.testng:testng:
org.testng:testng:
╟ junit:junit:4.10
║      ╙ org.hamcrest:hamcrest-core:1.1
╟ com.beust:jcommander:1.48
╟ org.apache.ant:ant:1.7.0
║      ╙ org.apache.ant:ant-launcher:1.7.0
╟ org.yaml:snakeyaml:1.15
╙ org.beanshell:bsh:2.0b4

Finally, I’ve always been bugged by what I consider a glaring omission of the Gradle Android plug-in: not being able to run my applications. The plug-in generates tasks for the various variants of your application (assembleDevDebug, assembleDevRelease, installDevDebug, etc…) but strikingly, no "run" task. I’m happy to report that Kobalt’s Android plug-in supports exactly that. To see it in action, clone the Kobalt example and follow the instructions at the bottom of the README:

$ ./kobaltw runFreeDebug // build, install and launch that variant
$ ./kobaltw runFreeRelease // build, install and launch that variant

I’ve made a lot of improvements to the Android plug-in lately, but that will be the topic for another post.

The Kobalt diaries: annotation processing

I recently added apt support to Kobalt, which is a requirement for a build system these days, and something interesting happened.

First of all, the feature itself in Kobalt: pretty straightforward. The apt plug-in adds a new dependency directive similar to compile:

dependencies {
    apt("com.beust:version-processor:0.2")   // illustrative artifact; any annotation processor works here
}

The processing tool can be further configured (output directory, arguments, etc…) with a separate apt directive:

apt {
    outputDir = "generated/sources/apt"

In order to test this new feature, I decided to implement a simple annotation processor project and I went for a Version class generator. As I wrote this processor, I realized that it was actually something I could definitely use in my other projects.

Of course, you can always simply hard code the version number of your application in a source file but that version number is typically something that’s useful outside of your code: you might need it in your build file, or when you generate your artifacts, or maybe other projects need to refer to it. Therefore, it often makes sense to isolate that version number in a property file and have every entity that needs it read it from that property file.

This is how version-processor was born. It’s pretty simple really: all you need to do is annotate one of your classes with @Version and a file is created, which you can then refer to. That version number can either be hardcoded or specified in a properties file. Head over to the project’s main page for the details.
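Here is, roughly, what using it might look like; this is a hypothetical sketch, and the exact annotation package, its parameters and the name of the generated class are all described on the project’s page:

// Hypothetical usage sketch: annotate any class and the processor generates
// a version class that the rest of your code can refer to.
@Version
class Build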

And of course, it’s built with Kobalt and if you are curious, here is the processor’s build file:

val processor = javaProject {
    name = "version-processor"
    group = "com.beust"
    artifactId = name
    version = "0.2"
    directory = "processor"

    assemble {
        mavenJars {}
    }

    jcenter {
        publish = true
    }
}
Happy version generating!

TensorFlow’s rough exterior

Like many others, I have paid very close attention to Google’s TensorFlow announcement, and I’m planning to invest a decent amount of time diving into it and understanding it. But while watching Jeff Dean’s video about it, I couldn’t help but take notice of one of the code samples he shows:

graph = tf.Graph()
with graph.AsDefault():
  examples = tf.constant(train_dataset)
  labels = tf.constants(train_labels)
  W = tf.Variables(tf.truncated_normal([rows*cols, num_labels]))
  b = tf.Variables(tf.zeros([num_labels]))

  logits = tf.mat_mul(examples, W) + b
  loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, labels))

What a mess…

I realize this is just one of the two front ends (Python, the other being in C++) but the syntactic conventions of the snippet above are all over the map.

I see capitalized functions (Graph()) while most of the other functions are lowercased, capitalized variables (W) and lowercase ones (b), both of which are the result of the same function, and functions using underscores next to others using capitalized camel case. There just doesn’t seem to be any rhyme or reason to the conventions.

The only style that’s not represented in this short snippet is straight camel case.

This hurts my eyes. Hopefully, spending some time with this fascinating tool will demystify it somewhat. Or maybe it will motivate me to write a front end I feel more comfortable with, say in Kotlin.

The Kobalt diaries: Android

A lot of work has gone into Kobalt since I announced its alpha.

I’m planning to post more detailed updates as things progress, but today, I’d like to briefly show a major milestone: the first Android APK generated by Kobalt.

I picked the Code path intro app as a test application. I first built it with Gradle to get a feel for it, and the apk was generated in about 27 seconds. Then I generated a Build.kt file with ./kobaltw --init and added a few Android-related directives to reach the following (complete) build file:

import com.beust.kobalt.*

val p = javaProject {
    name = "intro_android_demo"
    group = "com.example"
    artifactId = name
    version = "0.1"

    dependencies {
        // ...
    }

    sourceDirectories {
        // ...
    }

    android {
        applicationId = name
        buildToolsVersion = "21.1.2"
    }
}

Then I launched the build with ./kobaltw assemble, and…

Less than five seconds to generate and compile the sources, run aapt, generate classes.dex and, finally, create the apk. If you are curious, you can check out the full log.

Admittedly, Kobalt doesn’t yet handle build types, flavors, or manifest merging, but the example app I’m building here doesn’t use those either, so I don’t expect the build time to increase much. There is a lot more to be done before Kobalt’s Android plug-in is ready for more users, but this is a pretty encouraging result.

Exploring the Kotlin standard library (part 2)

I folded the two parts of this series into one blog post, which you can read here.

Exploring the Kotlin standard library

Standard.kt is part of the Kotlin library and it defines some essential functions. What’s really striking about this source file is that it’s less than fifty lines long and that each of the functions it defines (fewer than ten) is a one-liner. Yet each of these functions is very powerful. Here is a quick overview of the most important ones.


fun <T, R> T.let(f: (T) -> R): R = f(this)

let() is a scoping function: use it whenever you want to define a variable for a specific scope of your code but not beyond. It’s extremely useful to keep your code nicely self-contained so that you don’t have variables “leaking out”: being accessible past the point where they should be.

DbConnection.getConnection().let { connection ->
    // connection is only visible inside this block
}
// connection is no longer visible here

let() can also be used as an alternative to testing against null:

val map : Map<String, Config> = ...
val config = map[key]
// config is a "Config?"
config?.let {
    // This whole block will not be executed if "config" is null.
    // Additionally, "it" has now been cast to a "Config" (no question mark)


fun <T> T.apply(f: T.() -> Unit): T { f(); return this }

apply() defines an extension function on all types. When you invoke it, it calls the closure passed as a parameter and then returns the receiver object that closure ran on. Sounds complicated? It’s actually very simple and extremely useful. Here is an example:

File(dir).apply { mkdirs() }

This snippet turns a String into a File object, calls mkdirs() on it and then returns the file. The equivalent Java code is a bit verbose:

File makeDir(String path) {
  File result = new File(path);
  result.mkdirs();
  return result;
}

apply() turns this kind of ubiquitous code into a one-liner.


fun <T, R> with(receiver: T, f: T.() -> R): R = receiver.f()

with() is convenient when you find yourself having to call multiple different methods on the same object. Instead of repeating the variable containing this object on each line, you can instead “factor it out” with a with call:

val w = Window()
with(w) {
    setWidth(100)   // equivalent to w.setWidth(100); Window and setWidth are just illustrative names
}


fun <T, R> T.run(f: T.() -> R): R = f()

run() is another interesting one-liner from the standard library. Its definition is so simple that it looks almost useless, but it’s actually a combination of with() and let(), which reinforces what I said earlier: because all these functions from the standard library are regular functions, they can easily be combined to create more powerful expressions.
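For example, here is a small illustration (the string and the property accessed are arbitrary): inside the block, the receiver becomes this and the last expression of the block is what run() returns.

val size = "Kobalt".run {
    // "this" is the String receiver, so its members are directly in scope
    length
}
// size == 6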

Tying it all together

Of course, it’s actually possible (and encouraged) to combine these functions:

fun configurationFor(id: String) = map[id]?.let { config ->
  config.apply {
    buildType = "DEBUG"
    version = "1.2"
  }
}

This code looks up a Config object from an id and if one is found, sets a few additional properties on it and then returns it. But we can simplify this code even further. This time, I’m providing a fully self-contained snippet so you can copy and paste it directly into Try Kotlin in order to run it yourself:

class Config(var buildType: String, var version: String)

val map = hashMapOf<String, Config>()

fun configurationFor(id: String) = map[id]?.let { config ->
    config.apply {
        buildType = "DEBUG"
        version = "1.2"
    }
}

Don’t you feel that this combination of let() and apply() is a bit boilerplatey? Let’s rewrite it a bit more idiomatically:

fun configurationFor(id: String) = map[id]?.apply {
    buildType = "DEBUG"
    version = "1.2"
}

Let’s unpack this rather dense snippet:

  • Looking up a value on a hash map can be done either with get() or with the bracket notation, which is preferred.
  • Since the key might not be present in the map, we use the safe dereference operator ?., which guarantees that we will only enter apply() if the result is non-null.
  • Inside the apply() block, the this object is a Config, which lets us invoke functions on it without any prefix. In this case, we are only setting properties, but obviously, you could invoke regular functions just as well.
  • Once that code has run, the altered Config is returned.
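To see the whole thing in action, here is a small usage sketch based on the snippet above (the key and the values stored are just for illustration):

fun main(args: Array<String>) {
    map["app"] = Config("RELEASE", "1.0")

    val found = configurationFor("app")
    // found is the stored Config, now with buildType "DEBUG" and version "1.2"

    val missing = configurationFor("unknown")
    // missing is null: the key is absent, so apply() never ran
}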


fun <T : Closeable, R> T.use(block: (T) -> R): R

Another interesting function of the standard library is use(), which gives us the equivalent of Java’s try-with-resources and of C#’s using statement.

This function applies to all objects of type Closeable and it automatically closes its receiver on exit. Note that as opposed to Java and C#, Kotlin’s use() is a regular library function and not something baked directly into the language with a special syntax. This is made possible by Kotlin’s extension functions and closure syntax used together.

// Java 1.7 and above
Properties prop = new Properties();
try (FileInputStream fis = new FileInputStream("")) {
    prop.load(fis);
}
// fis automatically closed

// Kotlin
val prop = Properties()
FileInputStream("").use {
    prop.load(it)
}
// FileInputStream automatically closed
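To make that last point concrete, here is a simplified sketch of how such a function can be written as a plain extension function (the name sketchUse is mine, and the real use() also takes care of exceptions thrown while closing):

import java.io.Closeable

// Simplified sketch of a use()-like function: run the block, then always
// close the receiver, whatever happens inside the block.
inline fun <T : Closeable, R> T.sketchUse(block: (T) -> R): R {
    try {
        return block(this)
    } finally {
        close()
    }
}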

Because Kotlin’s version is just a regular function, it’s actually much more composable than Java’s. For example, did you want to return this prop object after loading it?

// Kotlin
fun readProperties() = Properties().apply {
    FileInputStream("").use { fis ->

The apply() call tells us that the type of this expression is that of the object apply() is invoked on, which is Properties. Inside this block, this is now of type Properties, which allows us to call load() on it directly. In between, we create a FileInputStream that we use to populate this property object. And once we call use() on it, that FileInputStream will be automatically closed before this function returns, saving us from the ugly try/catch/finally combo that Java requires.
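Calling it then becomes a one-liner (the property name here is just an example):

val version = readProperties().getProperty("version")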

You will find a lot of these constructs in the Kobalt build tool code; feel free to browse it.

Google Fi unboxing

I received my Google Fi order, and the package contained more than I expected.

The business-card-sized item at the bottom is the SIM card. The rest is:

  • A portable charger.
  • A case for your Nexus 6.
  • A headset.

The charger has two USB ports and a micro-USB one. Apparently, you can charge it from any of these ports (very convenient) and you can then plug in two phones at the same time (probably three if you can find a dual micro-USB cable).

Finally, the headset has something that’s hard to find in headsets in general: volume control. It also has an extra jack, so you can plug another headset into it. The only downside of this headset is that the control block dangles at your cheek instead of being located much lower on the cable. I don’t understand why such headsets are still manufactured.

I haven’t tested the service yet; I’ll report back after I’ve had a chance to use it thoroughly.

The long and arduous road to JCenter and Maven bliss

TestNG is available on both Maven Central and JCenter and I used to publish the artifact in these two repos with Maven. Recently, I took some time trying to obtain the same result with Gradle and so far, it has been a very painful and agonizing experience because there are so many moving parts to the whole process:

  • Gradle itself.
  • Using the right plugins, then configuring them properly.
  • Understanding the intricacies of JCenter/Bintray publishing.
  • Too much incorrect information out there. There are a lot of tutorials available on the Internet, way too many actually, especially since a lot of them are out of date.
  • IDEA is offering close to no help while editing your Gradle file: no auto completion, claiming your file has errors when it doesn’t, not complaining about broken files, etc…

My goal getting into this operation was simple and, so I thought, reasonable: being able to publish snapshots and releases from the command line to both JCenter and Maven Central. So far, my conclusion is that there is really no simple way to achieve this goal. There is a complicated way, which I describe below, but even that complicated way doesn’t quite achieve my goal. In the end, I’m getting close to that goal, except that I’m still going manually through the Nexus UI to close and deploy the artifact to Maven Central. I’ll post an update if I ever solve this.

The final structure of the build layout looks like this:

  • build.gradle: main build file, which at the end includes
  • gradle/publishing.gradle: sets up publishing routines and values that are shared by both Maven Central and JCenter. At the end, this script includes
  • gradle/publishing-maven.gradle and gradle/publishing-jcenter.gradle, which include the respective plugins and perform the publishing.

What follows is not a tutorial (there are so many already) but instead a series of errors that I encountered along the way and how I fixed them.

401 when uploading to bintray

Verify your credentials. One way of setting them is:

bintray {
    user = properties.getProperty("bintray.user")
    key = properties.getProperty("bintray.apikey")
}

I put these values in a separate properties file, which I load explicitly (a different location might be a better choice).


Javadocs are not being published

Maven Central will reject artifacts that don’t contain Javadocs and these are typically not included by default:

task javadocJar(type: Jar, dependsOn: javadoc) {
    classifier = 'javadoc'
    from 'build/docs/javadoc'
}

task sourcesJar(type: Jar) {
    from sourceSets.main.allSource
    classifier = 'sources'
}

artifacts {
    archives jar
    archives javadocJar
    archives sourcesJar
}

Javadocs are not being uploaded

One line to add to your bintray configuration:

bintray {
    // Without this, javadocs don't get uploaded
    configurations = ['archives']
}

“Cannot create task of type ‘Jar’ as it does not implement the Task interface.”

At some point during my tribulations, I started encountering this mystifying error. I ended up realizing that IntelliJ had sneakily added an import for a different Jar class at the top of my Gradle file, which is obviously not the class we want. Removing this import fixed the problem (and you might want to configure IDEA to exclude it from your imports to avoid this problem in the future).

Artifacts are not being signed

Add the following signing configuration:

apply plugin: 'signing'

signing {
    required { gradle.taskGraph.hasTask("bintrayUpload") }
    sign configurations.archives
}

Then in your gradle.properties (not your build file):

signing.secretKeyRingFile=(path to .gnupg/secring.gpg)

.asc files are not being generated

Another requirement from Maven Central, which you fix in the bintray configuration:

bintray {
    pkg {
        version {
            gpg {
                // Without this, .asc files don't get generated
                sign = true
            }
        }
    }
}

“Return code is: 400, ReasonPhrase: Bad Request”

In my initial attempts, ./gradlew publish would fail with this error, probably one of the most frustrating things about Sonatype: the HTTP errors are completely opaque and they don’t give you any detail on why they failed, while they easily could. Here is a list of potential reasons for this 400:

  • user credentials are wrong
  • url to server is wrong
  • user does not have access to the deployment repository
  • user does not have access to the specific repository target
  • artifact is already deployed with that version if it is a release (not -SNAPSHOT version)
  • the repository is not suitable for deployment of the respective artifact (e.g. release repo for snapshot version, proxy repo or group instead of a hosted repository)

Just to list a few. Why the server won’t give more details is beyond me, and it’s one of the main reasons why I wish I could stop dealing with Sonatype completely.

“./gradlew uploadArchives” failing with mysterious HTTP errors

Another mistake I initially made was to try to upload the artifacts directly to the Maven Central repo instead of Sonatype’s Nexus staging host. The correct configuration is:

uploadArchives {
    repositories {
        mavenDeployer {
            repository(url: "") {
                authentication(userName: System.getenv('SONATYPE_USER'), password: System.getenv('SONATYPE_PASSWORD'))
            }
            snapshotRepository(url: "") {
                authentication(userName: System.getenv('SONATYPE_USER'), password: System.getenv('SONATYPE_PASSWORD'))
            }
        }
    }
}

As I said at the top of this article, you then need to go deploy the archive manually from the Nexus UI.

I can’t find an answer to my Gradle problem!

Here is a life pro tip: whenever you do Google searches about Gradle, restrict the results to the last year only and read the StackOverflow answers first. Anything published before that is pretty much guaranteed to be out of date.

Sonatype documentation is terrible.

Yes, yes it is. For example, the first hit to learn how to deploy to Maven Central from Gradle will land you here. This article is actually an indirection to the “real” article here, which is… a 404. The next link is also a 404.

Extremely frustrating.

Next steps

My current configuration enables the following process:

  • ./gradlew bintrayUpload uploads the release to JCenter. It will fail if the version is a SNAPSHOT (intentional, since I upload snapshots to Maven’s snapshot repo; that part is straightforward and fully automated). If you want to publish snapshots to JCenter as well, you can do this by publishing to JFrog, although my attempts in that direction have never succeeded.
  • ./gradlew uploadArchives will upload the snapshot to Maven’s snapshots repo and the release to Sonatype’s staging repo. This is decided automatically based on whether the version name contains the string “SNAPSHOT”.

The build files themselves add up to more than 300 lines, which is mind-boggling for operations that should be close to standard. Gradle is certainly very far from having sensible defaults.

I’m hoping to eventually be able to fully deploy to Maven Central from the command line but I’m not sure it’s possible, so suggestions are welcome.

Easily inspect your SQLite database on Android

Here is a script I use very often for Android development: this small shell script will copy the database from your device to your file system and then launch SQLiteBrowser on it, allowing you to inspect your tables very quickly. I’ve found this script extremely useful, sometimes going as far as calling it multiple times while my code is stopped at breakpoints in my IDE.

This script takes additional steps before pulling the database, such as changing a few file permissions, since I have noticed that some devices are more strict about allowing database pulling than others. As far as I can tell, this script has worked on every single device I’ve used so far.

# pull-db
# Inspect the database from your device
# Cedric Beust

# Set these to your application's package name and database file name
# (the values below are placeholders)
PKG=com.example.app
DB=app.db

adb shell "run-as $PKG chmod 755 /data/data/$PKG/databases"
adb shell "run-as $PKG chmod 666 /data/data/$PKG/databases/$DB"
adb shell "rm /sdcard/$DB"
adb shell "cp /data/data/$PKG/databases/$DB /sdcard/$DB"

rm -f /tmp/${DB}
adb pull /sdcard/${DB} /tmp/${DB}

# Launch SQLiteBrowser on the pulled database (adjust the application path to your install)
open /Applications/ /tmp/${DB}