
The Kobalt diaries: annotation processing

I recently added apt support to Kobalt, which is a requirement for a build system these days, and something interesting happened.

First of all, the feature itself in Kobalt: pretty straightforward. The apt plug-in adds a new dependency directive similar to compile:

dependencies {
    // apt() works like compile() but declares an annotation processor;
    // shown here with the version-processor artifact described below
    apt("com.beust:version-processor:0.2")
}

The processing tool can be further configured (output directory, arguments, etc…) with a separate apt directive:

apt {
    outputDir = "generated/sources/apt"
}

In order to test this new feature, I decided to implement a simple annotation processor project and I went for a Version class generator. As I wrote this processor, I realized that it was actually something I could definitely use in my other projects.

Of course, you can always simply hard code the version number of your application in a source file but that version number is typically something that’s useful outside of your code: you might need it in your build file, or when you generate your artifacts, or maybe other projects need to refer to it. Therefore, it often makes sense to isolate that version number in a property file and have every entity that needs it read it from that property file.
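
As a rough illustration of that last idea, here is a minimal Kotlin sketch of reading such a version from a properties file (the file name and the property key are assumptions for this example):

import java.io.File
import java.util.Properties

// Minimal sketch: assumes the version lives in version.properties under a
// "version" key (both the file name and the key are assumptions).
fun readVersion(): String {
    val props = Properties()
    File("version.properties").inputStream().use { props.load(it) }
    return props.getProperty("version") ?: error("No 'version' property found")
}

fun main(args: Array<String>) {
    println("Building version ${readVersion()}")
}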

This is how version-processor was born. It’s pretty simple really: all you need to do is annotate one of your classes with @Version and a version class is generated for you, which you can then refer to. That version number can either be hardcoded or specified in a properties file. Head over to the project’s main page for the details.
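
To give a rough idea of the shape of this, here is a hypothetical sketch: the @Version annotation comes from the processor, but the generated class and field names are assumptions (see the project’s main page for the actual API), and Kotlin sources would need to run the processor through kapt.

// Hypothetical sketch: GeneratedVersion and VERSION are assumed names for the
// processor's output, not necessarily the real ones.
@Version
class Build

fun main(args: Array<String>) {
    println("Running version ${GeneratedVersion.VERSION}")
}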

And of course, it’s built with Kobalt and if you are curious, here is the processor’s build file:

val processor = javaProject {
    name = "version-processor"
    group = "com.beust"
    artifactId = name
    version = "0.2"
    directory = "processor"

    assemble {
        mavenJars {}
    }

    jcenter {
        publish = true
    }
}

Happy version generating!

TensorFlow’s rough exterior

Like many others, I have paid very close attention to Google’s TensorFlow announcement and I’m planning to invest a decent amount of time to dive into it and understand it, but watching Jeff Dean’s video about it, I couldn’t help noticing one of the code samples that he shows:

graph = tf.Graph()
with graph.AsDefault():
  examples = tf.constant(train_dataset)
  labels = tf.constants(train_labels)
  W = tf.Variables(tf.truncated_normal([rows*cols, num_labels]))
  b = tf.Variables(tf.zeros([num_labels]))

  logits = tf.mat_mul(examples, W) + b
  loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, labels))

What a mess…

I realize this is just one of the two front ends (Python, the other being in C++) but the syntactic conventions of the snippet above are all over the map.

I see capitalized functions (Graph()) when most of the functions are lowercased. Capital variables (W) and lowercase ones (b), both of which are the result of the same function. Functions using underscores and others using capitalized camel case. There just doesn’t seem to be any rhyme or reason to these conventions.

The only style that’s not represented in this short snippet is straight camel case.

This hurts my eyes. Hopefully, spending some time with this fascinating tool will demystify it somewhat. Or maybe it will motivate me to write a front end I feel more comfortable with, say in Kotlin.

The Kobalt diaries: Android

A lot of work has gone into Kobalt since I announced its alpha.

I’m planning to post more detailed updates as things progress but today, I’d like to briefly show a major milestone: the first Android APK generated by Kobalt.

I picked the CodePath intro app as a test application. I first built it with Gradle to get a feel for it, and the apk was generated in about 27 seconds. Then I generated a Build.kt file with ./kobaltw --init, added a few Android-related directives and ended up with the following (complete) build file:

import com.beust.kobalt.*

val p = javaProject {
    name = "intro_android_demo"
    group = "com.example"
    artifactId = name
    version = "0.1"

    dependencies {
        // dependencies elided
    }

    sourceDirectories {
        // source directory paths elided
    }

    android {
        applicationId = name
        buildToolsVersion = "21.1.2"
    }
}

Then I launched the build with ./kobaltw assemble, and…

Less than five seconds to generate R.java, compile it, compile the rest of the code, run aapt, generate classes.dex and, finally, generate the apk. If you are curious, you can check out the full log.

Admittedly, Kobalt doesn’t yet handle build types, flavors or manifest merging, but the example app I’m building here doesn’t use those either, so I don’t expect the build time to increase much. There is a lot more to be done before Kobalt’s Android plug-in is ready for more users, but this is a pretty encouraging result.

Google Fi unboxing

I received my Google Fi order, and the package contained more than I expected.

The business-card-sized item at the bottom is the SIM card. The rest is:

  • A portable charger.
  • A case for your Nexus 6.
  • A headset.

The charger has two USB ports and a micro-USB one. Apparently, you can charge it from any of these ports (very convenient) and you can plug in two phones at the same time (probably three if you can find a dual micro-USB cable).

Finally, the headset has something that’s hard to find in headsets in general: volume control. It also has an extra jack, so you can plug another headset into it. The only downside of this headset is that the control block dangles at your cheek instead of being located much lower on the cable. I don’t understand why such headsets are still manufactured.

I haven’t tested the service yet; I’ll report back after I’ve had a chance to use it thoroughly.

The long and arduous road to JCenter and Maven bliss

TestNG is available on both Maven Central and JCenter and I used to publish the artifact in these two repos with Maven. Recently, I took some time trying to obtain the same result with Gradle and so far, it has been a very painful and agonizing experience because there are so many moving parts to the whole process:

  • Gradle itself.
  • Using the right plugins, then configuring them properly.
  • Understanding the intricacies of JCenter/Bintray publishing.
  • Too much incorrect information out there. There are a lot of tutorials available on the Internet, way too many actually, especially since a lot of them are out of date.
  • IDEA offers close to no help while editing your Gradle file: no auto-completion, claiming your file has errors when it doesn’t, not complaining about genuinely broken files, etc…

My goal going into this operation was simple and, so I thought, reasonable: being able to publish snapshots and releases from the command line to both JCenter and Maven Central. So far, my conclusion is that there is really no simple way to achieve this goal. There is a complicated way, which I describe below, but even that complicated way doesn’t quite achieve my goal. In the end, I’m getting close to that goal, except that I’m still going manually through the Nexus UI to close and deploy the artifact to Maven Central. I’ll post an update if I ever solve this.

The final structure of the build layout looks like this:

  • build.gradle: main build file, which at the end includes
  • gradle/publishing.gradle: sets up publishing routines and values that are shared by both Maven Central and JCenter. At the end, this script includes
  • gradle/publishing-maven.gradle and gradle/publishing-jcenter.gradle, which include the respective plugins and perform the publishing.

What follows is not a tutorial (there are so many already) but instead a series of errors that I encountered along the way and how I fixed them.

401 when uploading to bintray

Verify your credentials. One way of setting them is:

bintray {
    user = properties.getProperty("bintray.user")
    key = properties.getProperty("bintray.apikey")
}

I put these values in a separate properties file, which I load explicitly (a different location might be better).


Javadocs are not being published

Maven Central will reject artifacts that don’t contain Javadocs and these are typically not included by default:

task javadocJar(type: Jar, dependsOn: javadoc) {
    classifier = 'javadoc'
    from 'build/docs/javadoc'
}

task sourcesJar(type: Jar) {
    from sourceSets.main.allSource
    classifier = 'sources'
}

artifacts {
    archives jar
    archives javadocJar
    archives sourcesJar
}

Javadocs are not being uploaded

One line to add to your bintray configuration:

bintray {
    // Without this, javadocs don't get uploaded
    configurations = ['archives']
}

“Cannot create task of type ‘Jar’ as it does not implement the Task interface.”

At some point during my tribulations, I started encountering this mystifying error. I eventually realized that IntelliJ had sneakily added an import for a different Jar class at the top of my Gradle file, which is obviously not the class we want. Removing this import fixed the problem (and you might want to configure IDEA to exclude that class from your imports to avoid this problem in the future).

Artifacts are not being signed

Add the following to your signing configuration:

apply plugin: 'signing'

signing {
    required { gradle.taskGraph.hasTask("bintrayUpload") }
    sign configurations.archives
}

Then, in your Gradle properties file:

signing.secretKeyRingFile=(path to .gnupg/secring.gpg)

.asc files are not being generated

Another requirement from Maven Central, which you fix in the bintray configuration:

bintray {
    pkg {
        version {
            gpg {
                // Without this, .asc files don't get generated
                sign = true
            }
        }
    }
}

“Return code is: 400, ReasonPhrase: Bad Request”

In my initial attempts, ./gradlew publish would fail with this error, probably one of the most frustrating things about Sonatype: the HTTP errors are completely opaque and they don’t give you any detail on why they failed, while they easily could. Here is a list of potential reasons for this 400:

  • user credentials are wrong
  • url to server is wrong
  • user does not have access to the deployment repository
  • user does not have access to the specific repository target
  • artifact is already deployed with that version if it is a release (not -SNAPSHOT version)
  • the repository is not suitable for deployment of the respective artifact (e.g. release repo for snapshot version, proxy repo or group instead of a hosted repository)

And that’s just to list a few. Why the server won’t give more details is beyond me, and it’s one of the main reasons why I wish I could stop dealing with Sonatype completely.

./gradlew uploadArchives failing with mysterious HTTP errors

Another mistake I initially made was to try to upload the artifacts directly to the Maven Central repo instead of Sonatype’s Nexus staging host. The correct configuration is:

uploadArchives {
    repositories {
        mavenDeployer {
            // The standard Sonatype OSSRH staging and snapshot URLs
            repository(url: "https://oss.sonatype.org/service/local/staging/deploy/maven2/") {
                authentication(userName: System.getenv('SONATYPE_USER'), password: System.getenv('SONATYPE_PASSWORD'))
            }
            snapshotRepository(url: "https://oss.sonatype.org/content/repositories/snapshots/") {
                authentication(userName: System.getenv('SONATYPE_USER'), password: System.getenv('SONATYPE_PASSWORD'))
            }
        }
    }
}

As I said at the top of this article, you then need to go deploy the archive manually from the Nexus UI.

I can’t find an answer to my Gradle problem!

Here is a life pro tip: whenever you do Google searches about Gradle, restrict the results to only the last year and read the StackOverflow answers first. Anything published before that is pretty much guaranteed to be out of date.

Sonatype documentation is terrible.

Yes, yes it is. For example, the first hit for learning how to deploy to Maven Central from Gradle lands you on an article that is actually an indirection to the “real” article… which is a 404. The next link is also a 404.

Extremely frustrating.

Next steps

My current configuration enables the following process:

  • ./gradlew bintrayUpload uploads the release to JCenter. It will fail if the version is a SNAPSHOT (intentional, since I upload the snapshots to Maven’s snapshot repo; that part is straightforward and fully automated). If you want to publish snapshots to JCenter as well, you can do this by publishing to JFrog, although my attempts in that direction have never succeeded.
  • ./gradlew uploadArchives will upload the snapshot to Maven’s snapshots repo and the release to Sonatype’s staging repo. This is decided automatically based on whether the version name contains the string “SNAPSHOT”.

The build files themselves add up to more than 300 lines, which is mind-boggling for operations that should be close to standard. Gradle is certainly very far from having sensible defaults.

I’m hoping to eventually be able to fully deploy to Maven Central from the command line but I’m not sure it’s possible, so suggestions are welcome.

Easily inspect your SQLite database on Android

Here is a script I use very often for Android development: this small shell script will copy the database from your device to your file system and then launch SQLiteBrowser on it, allowing you to inspect your tables very quickly. I’ve found this script extremely useful, sometimes going as far as calling it multiple times while my code is stopped at a breakpoint in my IDE.

This script takes additional steps before pulling the database, such as changing a few file permissions, since I have noticed that some devices are more strict about allowing database pulling than others. As far as I can tell, this script has worked on every single device I’ve used so far.

#!/bin/sh
# pull-db
# Inspect the database from your device
# Cedric Beust

# Adjust these two values to your application id and database file name
PKG=com.example.app
DB=app.db

adb shell "run-as $PKG chmod 755 /data/data/$PKG/databases"
adb shell "run-as $PKG chmod 666 /data/data/$PKG/databases/$DB"
adb shell "rm /sdcard/$DB"
adb shell "cp /data/data/$PKG/databases/$DB /sdcard/$DB"

rm -f /tmp/${DB}
adb pull /sdcard/${DB} /tmp/${DB}

# Adjust the path to wherever your SQLiteBrowser application lives
open -a /Applications/sqlitebrowser.app /tmp/${DB}

Game design and game implementation

“Because of a bug (it is an off-by-one error) the parley can only work if enemy and party member *do not* speak the same language.”

One of the many memorable quotes from a fascinating article about an old-school RPG, written by one of its developers. I don’t even know this particular video game, even though I played a lot of RPGs during my Apple ][ and Amiga days, but this particular quote resonates with me because it ties game design and game implementation together in a very explicit way.

The whole article is a must read for anyone who’s interested in game development or game design, or both.

The case of the buggy executor

I spent a mystifying half hour chasing down a bug recently, so I thought I would share.

Here is a simple scheduled executor:

public class Exec {
    final static Logger logger = LoggerFactory.getLogger(Exec.class);

    private static final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor(new ThreadFactory() {
                @Override
                public Thread newThread(Runnable r) {
                    Thread result = new Thread();
                    result.setName("BuggyExecutor");
                    return result;
                }
            });

    public static void main(String[] args) {
        executor.scheduleAtFixedRate(() -> logger.info("Tick"),
                1, 1, TimeUnit.SECONDS);
    }
}

I always use a ThreadFactory in my executors since seeing thread names plainly in your trace simplifies debugging threading issues considerably. Whenever I see a pool-2-thread-3 in my thread dump, I track down the lazy library that caused that monstrosity and I seriously consider replacing it with one that is written by developers more respectful of my time.

Other than that, this code is pretty straightforward and if you run it, you would expect the string “Tick” to be displayed every second:

17:37:18.780 [BuggyExecutor] INFO  com.beust.Exec - Tick
17:37:19.780 [BuggyExecutor] INFO  com.beust.Exec - Tick
17:37:20.780 [BuggyExecutor] INFO  com.beust.Exec - Tick

However, if you run the code as provided above, you will see that it does absolutely nothing.

Your code doesn’t speak for itself

I recently reviewed a commit that said “Fix the list view bug”. I reviewed it, saw that it was fixing an off-by-one error, approved it and moved on.

A few days later, another commit went by that said “Really fix the list view bug”. This fix was a bit more involved: the bug caused the first item in the list view to sometimes receive the wrong styling. I then realized that I shouldn’t have approved the first commit without asking a few more questions.

Here is another scenario. Let’s say you are asked to review the following code:

public static int compare(@Nonnull Long a, @Nonnull Long b) {
    return a.compareTo(b);
}

Seems pretty harmless, doesn’t it? No reason not to approve it.

How about this one:

/**
 * @return 0 if the two numbers are equal, 1 otherwise.
 */
public int cmp(@Nonnull Long a, @Nonnull Long b) {
    return a.compareTo(b);
}

Now we have a problem: the code and the comment do not agree. This should not be approved before asking the developer to fix this (either change the comment or change the code).

There is this prevalent notion in the software world that good code doesn’t need comments, that it stands on its own. Or that comments are a code smell.

Irritatingly, this myth just won’t die despite repeated evidence that comments are sometimes vital to code correctness. Proponents of this myth point out that it’s easy for comments to get out of sync with the code (see the example above) and decide that because this approach is not perfect, it should be avoided altogether.

This is a false dichotomy that is easily avoided by making it clear to your teammates that both code and comments need to be reviewed.

The problem with the first example that I gave is that the developer failed to disclose what the intent of his code was. The code type checks, is correct and fixes a bug, but it turns out to be doing something different from what the developer intended, and the reviewer would have caught it if the developer had explained what that intent was.

Not all code needs comments, but some code simply can’t be understood or verified without them.

Your code says “What?”, your comments say “Why?”. Sometimes, you need both in order to assess the correctness of a commit. Just make sure you review comments as seriously as you review code.

Android, Rx and Kotlin: part 2

I haven’t been quite honest with you in my previous post: the code I showed in that article doesn’t exactly produce the short video of the application shown at the top of the article.

If you run the code as is, you will notice something very irritating (and unacceptable in any application): whenever the app is pretending to make a network call, the entire user interface freezes for a second. You can’t type anything and the loading icon stops spinning. This is the classic symptom of blocking the main thread. You will remember that I am simulating network calls by simply sleeping for a little while, and obviously, if you do this on the main thread, you will freeze your UI.

By default, Rx runs everything on your current thread, which is the main thread in Android: the thread that is in charge of updating your user interface. Android is exactly like most graphical toolkits: you should only use the main thread to update your UI but anything else you do (network or file system access, computations, database updates, etc…) needs to be done on a background thread. Rx has a very good solution to this problem.


Until recently, AsyncTask was the recommended way of performing this kind of work: by creating and executing an AsyncTask, you run your code in two places, one that runs on a background thread (doInBackground()) and, once that task completes, one that runs on the main thread (onPostExecute()).
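
For readers who never used it, here is a minimal Kotlin sketch of that two-location pattern (User, findUserBlocking() and showUser() are hypothetical placeholders, not code from this post):

import android.os.AsyncTask

// Hypothetical placeholders for this sketch
data class User(val id: String, val name: String)
fun findUserBlocking(name: String): User = User("123", name)   // stand-in for a blocking network call
fun showUser(user: User) = println("User: $user")

// Minimal sketch of the two-location pattern described above
class FindUserTask : AsyncTask<String, Void, User>() {
    override fun doInBackground(vararg names: String): User {
        // Runs on a background thread: safe to block on network or disk here
        return findUserBlocking(names[0])
    }

    override fun onPostExecute(result: User) {
        // Runs back on the main thread once doInBackground() has completed
        showUser(result)
    }
}

// Usage, from the main thread: FindUserTask().execute("cedric")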

AsyncTask has a troubled past and it has evolved quite a bit over the many revisions of the Android API: first it was single threaded, then it became multithreaded and more recently, it’s running in the background on one thread in an attempt to provide both parallelism and sequencing at the same time. If you need more information about AsyncTask, this article explains how it evolved.

This is not the only issue with AsyncTask: it’s also fairly challenging to get its behavior right while going through configuration changes or the possibility of your activity being paused or destroyed while the task is still running.

Rx offers a few solutions to some of these problems, but not all.

Threading and Rx

Rx offers two methods to control your threading model: subscribeOn() and observeOn().

In a nutshell, observeOn() defines what thread your observer will run on (this is where you usually do the work) and subscribeOn() defines the thread where your operators will run (map(), filter(), etc…).

The parameter you give to these methods is a Scheduler, an Rx abstraction that encapsulates a thread. Rx defines a few standard ones:

  • Schedulers.computation(): When you are calculating something.
  • Schedulers.io(): When you are doing I/O (network, file system, database access, …).
  • And a few others I won’t get into here.

Additionally, RxAndroid defines the more Android-specific AndroidSchedulers.mainThread(), which is self explanatory.

A typical piece of code on Android is to run a few tasks in the background (network access, expensive computation, database update, etc…) and based on the result of that action, you update your UI. The way to implement this with Rx is straightforward: you subscribe on whichever background thread is more appropriate for your actions and you observe on the main thread:

trait Server {
    fun findUser(name: String) : Observable<JsonObject>
}

data class User(val id: String, val name: String)

fun p(s: String) {
    println("[${Thread.currentThread().getName()}] ${s}")
}

// "server" is an implementation of Server (e.g. a Retrofit service), elided here
Observable.just("cedric")
    .subscribeOn(Schedulers.io())
    .flatMap {
        p("Calling server.findUser")
        server.findUser(it)
    }
    .map { jo ->
        p("Mapping to a User object")
        // build a User from the JSON response
        User(jo.get("id").asString, jo.get("name").asString)
    }
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe { u -> p("User: ${u}") }

We start with a string (which could come from an EditText) and we specify that we’ll be subscribing on the I/O thread. Then we call the server with that name (still on the I/O thread), turn the JSON response into a User object and we print that object:

[IoThreadScheduler-1] Calling server.findUser
[IoThreadScheduler-1] Mapping to a User object
[Main] User: User(id=123, name=cedric)

Note that even though you can specify multiple subscribeOn() calls, all the subscriptions will happen on the first scheduler (subsequent subscribeOn() calls are ignored). I’m not sure if this is by design or just an oversight, but it’s not really a problem in practice. If you ever want to subscribe on multiple schedulers, you can always make this happen in the body of your subscription itself (for example, in the example above, if the server call were actually using Retrofit, you would see that it uses its own thread pool to make that call).
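
Here is a quick sketch of that behavior, reusing the p() helper from the example above; since the first scheduler wins and there is no observeOn(), the thread name printed comes from the I/O scheduler:

// Only the first subscribeOn() takes effect; the second one is ignored
Observable.just(1)
    .subscribeOn(Schedulers.io())
    .subscribeOn(Schedulers.computation())   // ignored
    .subscribe { p("Got ${it}") }             // prints an I/O scheduler thread name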

And that’s about all there is to get started with thread management with Rx on Android. As you can see, structuring your code this way makes the intent and thread handling extremely clear and easy to trace through, much more so than with AsyncTask.

With the growing number of Android libraries adding support for Rx, it’s becoming even more trivial to use these libraries within this framework and combine them in straightforward yet powerful ways. You can see in the examples I used in this post and the previous one how Rx makes it trivial to combine network calls and GUI updates simply by the fact that Retrofit returns Observables. You should also take a look at SQLBrite, which wraps SQLiteOpenHelper in Observables to offer you similar flexibility but for database access.

Read part 1, part 3.