Archive for category Java

Easy SQLite on Android with RxJava

Whenever I consider using an ORM library on my Android projects, I always end up abandoning the idea and rolling my own layer instead for a few reasons:

  • My database models have never reached the level of complexity that ORMs help with.
  • Every ounce of performance counts on Android and I can’t help but fear that the SQL generated will not be as optimized as it should be.

Recently, I started using a pretty simple design pattern that uses Rx to offer what I think is a fairly simple way of managing your database access with RxJava. I’m calling this the “Async Rx Read” design pattern, which is a horrible name but the best I can think of right now.

Easy reads

One of the important design principles on Android is to never perform I/O on the main thread, and this obviously applies to database access. RxJava turns out to be a great fit for this problem.

I usually create one Java class per table and these tables are then managed by my SQLiteOpenHelper. With this new approach, I decided to extend my use of the helper and make it the only point of access to anything that needs to read or write to my SQL tables.

Let’s consider a simple example: a USERS table managed by the UserTable class:

List<User> getUsers(SQLiteDatabase db, String userId) {
  // select * from users where _id = {userId}
}

The problem with this method is that if you’re not careful, you will call it on the main thread, so it’s up to the caller to make sure they are always invoking this method on a background thread (and then to post their UI update back on the main thread, if they are updating the UI). Instead of relying on managing yet another thread pool or, worse, using AsyncTask, we are going to rely on RxJava to take care of the threading model for us.
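The key idea of the fix is laziness: instead of running the query, the table method will merely describe it, leaving the decision of when and where to run it to someone else. Here is a quick stdlib-only illustration of that property (the names and the fake query are hypothetical, and no Android types are involved):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;

public class LazyReadSketch {
  static int queryCount = 0; // how many times the fake "query" has actually run

  // Stand-in for UserTable#getUsers: building the Callable performs no work at all.
  static Callable<List<String>> getUsers(final String userId) {
    return new Callable<List<String>>() {
      @Override
      public List<String> call() {
        queryCount++; // the "query" runs only now, on whatever thread invokes us
        List<String> result = new ArrayList<>();
        result.add("user-" + userId);
        return result;
      }
    };
  }

  public static void main(String[] args) throws Exception {
    Callable<List<String>> lazy = getUsers("42");
    System.out.println(queryCount);  // 0: nothing has touched the "database" yet
    System.out.println(lazy.call()); // [user-42]
    System.out.println(queryCount);  // 1
  }
}
```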

Let’s rewrite this method to return a callable instead:

Callable<List<User>> getUsers(final SQLiteDatabase db, final String userId) {
  return new Callable<List<User>>() {
    @Override
    public List<User> call() {
      // select * from users where _id = {userId}
    }
  };
}

In effect, we simply refactored our method to return a lazy result, which makes it possible for the database helper to turn this result into an Observable:

Observable<List<User>> getUsers(String userId) {
  return makeObservable(mUserTable.getUsers(getReadableDatabase(), userId))
    .subscribeOn(Schedulers.io()); // see the note below about io()
}

Notice that on top of turning the lazy result into an Observable, the helper forces the subscription to happen on a background scheduler, which guarantees that callers never have to worry about blocking the main thread. One word of caution on the choice of scheduler: Schedulers.io() is backed by an unbounded executor, so for database access you may prefer a Scheduler backed by a bounded thread pool.
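As an aside on that scheduler caveat, the difference between an unbounded and a bounded executor can be sketched with plain java.util.concurrent, without any RxJava types. The pool size and task count below are arbitrary; in real code you could hand such a pool to RxJava via Schedulers.from(executor):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedPoolSketch {
  // Submit `tasks` fake database reads to a fixed pool of `threads` threads
  // and wait for them to drain; returns how many reads actually ran.
  static int runReads(int tasks, int threads) throws InterruptedException {
    final AtomicInteger completed = new AtomicInteger();
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    for (int i = 0; i < tasks; i++) {
      pool.submit(() -> { completed.incrementAndGet(); }); // stand-in for one query
    }
    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.SECONDS);
    return completed.get();
  }

  public static void main(String[] args) throws InterruptedException {
    // Eight reads share four threads: no matter how many reads pile up, the
    // thread count stays capped, which is the property an unbounded (cached)
    // pool does not give you.
    System.out.println(runReads(8, 4)); // 8
  }
}
```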

Finally, the makeObservable method is pretty straightforward (and completely generic):

private static <T> Observable<T> makeObservable(final Callable<T> func) {
  return Observable.create(
      new Observable.OnSubscribe<T>() {
          public void call(Subscriber<? super T> subscriber) {
            try {
              subscriber.onNext(func.call());
              subscriber.onCompleted();
            } catch (Exception ex) {
              Log.e(TAG, "Error reading from the database", ex);
              subscriber.onError(ex);
            }
          }
      });
}

At this point, all our database reads have become observables that guarantee that the queries run on a background thread. Accessing the database is now pretty standard Rx code:

MySqliteOpenHelper mDbHelper;

// ...

mDbHelper.getUsers(userId)
  .observeOn(AndroidSchedulers.mainThread())
  .subscribe(new Action1<List<User>>() {
    @Override
    public void call(List<User> users) {
      // Update our UI with the users
    }
  });

And if you don’t need to update your UI with the results, just observe on a background thread.

Since your database layer is now returning observables, it’s trivial to compose and transform these results as they come in. For example, you might decide that your UserTable is a low-level class that should not know anything about your model (the User class) and that instead, it should only return low-level objects (maybe a Cursor or ContentValues). Then you can use Rx to map these low-level values into your model classes for an even cleaner separation of layers.
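To make that layering concrete, the map step is just a pure function from a low-level row to a model object. The sketch below uses plain Java and invented Row/User types to show the shape of that function; in the Rx version you would pass the same function to Observable#map:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class MapLayerSketch {
  // Hypothetical low-level row type the table layer might return.
  static class Row {
    final Map<String, String> columns = new HashMap<>();
  }

  // Hypothetical model class the rest of the app works with.
  static class User {
    final String id, name;
    User(String id, String name) { this.id = id; this.name = name; }
  }

  // Pure Row -> User function the helper can apply, keeping the table layer
  // completely ignorant of the model.
  static final Function<Row, User> TO_USER = row ->
      new User(row.columns.get("_id"), row.columns.get("name"));

  public static void main(String[] args) {
    Row row = new Row();
    row.columns.put("_id", "42");
    row.columns.put("name", "Alice");
    User u = TO_USER.apply(row);
    System.out.println(u.id + " " + u.name); // 42 Alice
  }
}
```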

Two additional remarks:

  1. Your Table Java classes should contain no public methods: only package protected methods (which are accessed exclusively by your Helper, located in the same package) and private methods. No other classes should ever access these Table classes directly.

  2. This approach is extremely compatible with dependency injection: it’s trivial to have both your database helper and your individual tables injected (additional bonus: with Dagger 2, your tables can have their own component since the database helper is the only reference needed to instantiate them).

This is a very simple design pattern that has scaled remarkably well for our projects while fully enabling the power of RxJava. I also started extending this layer to provide a flexible update notification mechanism for list view adapters (not unlike what SQLBrite offers), but this will be for a future post.

This is still a work in progress, so feedback welcome!

You’ve been implementing main() wrong all this time

Since the very early days of Java (and C-like languages overall), the canonical way to start your program has been something like this:

public class A {
  public static void main(String[] args) {
    new A().run(args);
  }

  public void run(String[] args) {
    // Your application starts here
  }
}

If you are still doing this, I’m here to tell you it’s time to stop.

Letting go of ‘new’

First, install Guice in your project:
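For Maven users, that boils down to a dependency along these lines (the version shown is only illustrative; use whatever release is current):

```xml
<dependency>
  <groupId>com.google.inject</groupId>
  <artifactId>guice</artifactId>
  <!-- illustrative version; pick the latest release -->
  <version>3.0</version>
</dependency>
```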


and then, modify your main method as follows:

public class A {
  public static void main(String[] args) {
    Guice.createInjector().getInstance(A.class).run(args);
  }
}

So, what does this buy you exactly?

You will find a lot of articles explaining the various benefits of Guice, such as being able to substitute different environments on the fly, but I’m going to use a different angle in this article.

Let’s start by assuming the existence of a Config class that contains various configuration parameters. I’ll just hardcode them for now and use fields to make the class smaller:

public class Config {
  String host = "";
  int port = 1234;
}

This class is a singleton: it is instantiated somewhere in your main class and, at the moment, not used anywhere else. One day, you realize you need this instance in another class that happens to be deep in your runtime hierarchy, which we will call Deep. For example, if you put a break point in the method where you need this config object, your debugger would show you stack frames similar to this:

com.example.B.f(int, String)
com.example.C.g(String)
com.example.Deep.h(Foo, int)

The easy and wrong way to solve this problem is to make the Config instance static on some class (probably A) and access it directly from Deep. I’m hoping I don’t need to explain why this is a bad idea: not only do you want to avoid using statics, but you also want to make sure that each object is exposed only to objects that need them, and making the Config object static would make your instance visible to your entire code base. Not a good thing.

The second thought is to pass the object down the stack, so you modify all the signatures as follows:

com.example.B.f(int, String, Config)
com.example.C.g(String, Config)
com.example.Deep.h(Foo, int, Config)

This is a bit better since you have severely restricted the exposure of the Config object, but note that you are still making it available to more methods than really need it: B#f and C#g have nothing to do with this object, and a little sting of discomfort hits you when you start writing the Javadoc:

public class C {
  /**
   * @param config This method doesn't really use this parameter,
   * it just passes it down so Deep#h can use it.
   */
  public void g(String s, Config config) {
    // ...
  }
}

Unnecessary exposure is actually not the worst part of this approach, the problem is that it changes all these signatures along the way, which is certainly undesirable in a private API and absolutely devastating in a public API. And of course, it’s absolutely not scalable: if you keep adding a parameter to your method whenever you need access to a certain object, you will soon be dealing with methods that take ten parameters, most of which they just pass down the chain.

Here is how we solve this problem with dependency injection (performed by Guice in this example, but this is applicable to any library that implements JSR 330, obviously):

public class Deep {
  @Inject
  private Config config;
}

and we’re done. That’s it. You don’t need to modify the Config class in any way, nor do you need to make any change in any of the classes that separate Deep from your main class. With this, you have also minimized the exposure of the Config object to just the class that needs it.
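To build some intuition about what the framework is doing for you: field injection amounts to the container reflectively creating and assigning the dependency. The toy injector below is a deliberately naive sketch of that idea, with its own @Inject annotation; it is nothing like Guice’s actual implementation:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;

public class ToyInjector {
  @Retention(RetentionPolicy.RUNTIME)
  @interface Inject {}

  static class Config {
    String host = "localhost";
  }

  static class Deep {
    @Inject Config config;
  }

  // Walk the fields of the instance and assign a fresh object to anything
  // marked @Inject (no scoping, no modules: a toy).
  static <T> T inject(T instance) throws Exception {
    for (Field f : instance.getClass().getDeclaredFields()) {
      if (f.isAnnotationPresent(Inject.class)) {
        f.setAccessible(true);
        f.set(instance, f.getType().getDeclaredConstructor().newInstance());
      }
    }
    return instance;
  }

  public static void main(String[] args) throws Exception {
    Deep deep = inject(new Deep());
    System.out.println(deep.config.host); // localhost
  }
}
```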

Injecting right

There are various ways you can inject objects into your class, but I’ll mention only the two that I think are the most important. I showed “field injection” in the previous paragraph, but be aware that you can also use “constructor injection”:

public class Deep {
  private final Config config;

  @Inject
  public Deep(Config config) {
    this.config = config;
  }
}

This time, you are adding a parameter to the constructor of your Deep class (which shouldn’t worry you too much since you will never invoke it directly, Guice will) and you assign the parameter to the field in the constructor. The benefit is that you can declare your field final. The downside, obviously, is that this approach is much more verbose.

Personally, I see little point in final fields since I have hardly ever encountered a bug that was due to accidentally reassigning a field, so I tend to use field injection whenever I can.

Taking it to the next level

Obviously, the kind of configuration object I used as an example is not very realistic. Typically, a configuration will not hardcode values like I did and will, instead, read them from some external source. Similarly, you will want to inject objects that can’t necessarily be instantiated so early in the lifecycle of your application, such as servlet contexts, database connections, or implementations of your own interfaces.

This topic itself would probably cover several chapters of a book dedicated to dependency injection, so I’ll just summarize it: not all objects can be injected this way, and one benefit of using a dependency injection framework in your code is that it will force you to think about what life cycle category your objects belong to. Having said that, if you want to find out how Guice can inject objects that get created at a later time in your application life cycle, look up the Javadoc for the Provider class.
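To illustrate the idea behind Provider without pulling in any library, the sketch below defines its own Provider interface with the same shape (a single get() method): the dependency is created when it is first needed, not when the object graph is wired. All names here are invented:

```java
public class ProviderSketch {
  // Same shape as javax.inject.Provider: get() is called when the object is
  // actually needed, not at injection time.
  interface Provider<T> { T get(); }

  static class DbConnection {
    static int opened = 0; // counts how many connections have been created
    DbConnection() { opened++; }
  }

  static class Repository {
    final Provider<DbConnection> connections;
    Repository(Provider<DbConnection> connections) { this.connections = connections; }
    void query() {
      DbConnection c = connections.get(); // created lazily, on first use
      // ... use c ...
    }
  }

  public static void main(String[] args) {
    Repository repo = new Repository(DbConnection::new);
    System.out.println(DbConnection.opened); // 0: nothing created at wiring time
    repo.query();
    System.out.println(DbConnection.opened); // 1: created on first use
  }
}
```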

Wrapping up

I hope this quick introduction to dependency injection piqued your interest and that you will consider using it in your project since it has so much more to offer than what I described in this post. If you want to learn more, I suggest starting with the excellent Guice documentation.

Ushering TestNG into the new year

I took advantage of the holiday break to completely revamp TestNG’s HTML reports, something I’ve been meaning to do for a very long time but never found the time to put at the top of my TODO list. I wrote the original reports pretty much with the very first version of TestNG, around 2004, and I hardly touched them since then. My intention with this rewrite was not just to revamp them visually but also technically, so that I could give myself as much freedom as possible to improve them from this point on.

Here are some notes I took during this process.

Content and appearance

First of all, here are the new reports. They show all the suites on the same page with a banner at the top, a navigator pane on the left hand side (which always stays visible) and a main panel on the right hand side, which shows all the information you requested depending on which item you clicked in the navigator. Pretty straightforward.

If you take a look at the source, you will see that it hardly contains any semantic HTML elements: it’s mostly made of <div> and <span>. I didn’t really set out to do this initially, it’s just that whenever I used richer HTML elements, I inherited undesirable CSS attributes (margins, paddings, etc…) which I ended up resetting manually, so after a while, it just seemed much easier to use divs and spans knowing that they start with a clean CSS slate.

Since there is so much generated text, I pondered using a templating library to make my life easier. The first question to settle was: client side or server side templates? In this case, there is not really a “server” side, so the question is more “Java” or “Javascript” templates. I quickly rejected the client side approach which, while well adapted to serving pages over a network, doesn’t seem to have much benefit in this particular case since the page is generated locally. As for the Java side, the two standard options for templates (Freemarker and Velocity) seemed quite overkill for what I was trying to do, so I ended up writing my own tiny implementation of mustache.js (which I’ve used in the past and liked quite a bit).
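To give an idea of how small such an implementation can be, here is a toy mustache-style substitution in Java; it only handles {{name}} variables and is not TestNG’s actual code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TinyMustache {
  private static final Pattern VAR = Pattern.compile("\\{\\{(\\w+)\\}\\}");

  // Replace every {{name}} in the template with its value from the model;
  // unknown names are replaced with the empty string.
  static String render(String template, Map<String, String> model) {
    Matcher m = VAR.matcher(template);
    StringBuffer sb = new StringBuffer();
    while (m.find()) {
      String value = model.get(m.group(1));
      m.appendReplacement(sb, Matcher.quoteReplacement(value == null ? "" : value));
    }
    m.appendTail(sb);
    return sb.toString();
  }

  public static void main(String[] args) {
    Map<String, String> model = new HashMap<>();
    model.put("name", "TestNG");
    model.put("count", "42");
    System.out.println(render("<span>{{name}}: {{count}} tests</span>", model));
    // <span>TestNG: 42 tests</span>
  }
}
```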

Once I had the templating code working, I started converting the Java code over to it but… something felt wrong. I quickly gained the impression that this was actually a step backward compared to doing the generation 100% from Java, and the reason quickly became apparent to me: the Java compiler.

The problem with using templates is that you get very little help from the tools. Some template libraries allow you to include other templates, which can reduce the repetition, but it’s still very easy to make simple typos or be forced to copy/paste a lot. In contrast, my Java class hierarchy makes it quite trivial to, say, add a new panel on the right hand side with a link to the left. Implement the right class, declare it and it will automatically work, with the right defaults (CSS selectors) and a lot of the logic validated by the compiler.

This comfort was too good to pass up so I scrapped my template idea (my mustache.js implementation is still around, though, and I might use it some time in the future) and I stuck to 100% Java generation.

Javascript and development tools

Picking jQuery was a no-brainer: it’s hard for me to imagine writing any Javascript that manipulates the DOM without it. The logic in the new reports is fairly simple, just over one hundred lines of Javascript.

Chrome’s development environment is also quite superb (I’m sure other browsers’ tools are just as good). In a nutshell, you have access to pretty much the same range of support that Eclipse or IDEA provides in Java: breakpoints, inspection and modification of variables, and even CSS debugging, which I’ve found invaluable to track down unexpected CSS behaviors (I came across quite a few).

If you haven’t kept up with what you can do with Chrome/Javascript/JQuery these days, try the following:

  • Open in Chrome.
  • Right click anywhere on the page and select "Inspect element". This will open the document explorer at the bottom.
  • With the focus in the document explorer, type ESC, which will open the REPL.
  • Type $('.navigator-suite-content').hide().
  • This will collapse the content under both suites.

You can see how easy it is to debug this kind of code.

Other libraries

If you select the "Times" link of the first test suite in the Navigator pane, you will see the following table:

Click to enlarge

I am using Google’s Chart Tools to display this table and it was quite straightforward to add. The only tricky part was how to generate the data in Javascript form and make sure that the table gets drawn only after this data has been initialized, which required using a few Javascript tricks to make sure the initialization order is correct. The Google Chart Tools are very powerful and I will probably use more of their API to display additional graphs in the TestNG reports (pie charts, etc…).


Here are my main takeaway points:

  • CSS still feels like black magic to me. The theory is trivial on paper but in reality, I often come across results that just don’t make sense. With the help of modern CSS debuggers, it’s a little bit easier to find out what is happening and why (I especially like Chrome’s “Computed style” which gives you a list of all the attributes that were derived from .css files), but there are still times when I feel completely helpless trying to find out why something is not how it should be.
  • I’m a bit concerned with the size of the reports: I was quite surprised to find out that the reports you’re looking at are one megabyte in size. Admittedly, they show over five hundred test methods, but the fact that they contain all the panels, even those the user might not be interested in, is a potential place for improvement. I might decide to put each individual pane in its own file.
  • Javascript’s rubbery type system is handy for this kind of task, but I’m still a bit unsure how well it scales to hundreds of thousands of lines of code. For example, I really enjoy being able to say if (v) regardless of what type v is and have Javascript’s truthiness rules do the right thing. Similarly, it’s nice to be able to add a parameter to a function without having to update a single call site (if you call a Javascript function with fewer parameters than it expects, the missing parameters simply receive the undefined value; obviously, you need to test for this in said function).
  • jQuery is great and it’s probably making Javascript even more popular than it already is. I’m hoping the Dart team is taking notes and making sure that a similarly powerful and elegant framework will be available in Dart.

Finally, a call for help: I welcome any feedback on how to improve the CSS of these new reports, so if you are so inclined, feel free to improve the look of these new reports and share your improvements with me. I will be very grateful.

Happy new year!

A tale of compromises in graphical user design

I recently received an alarming bug report about the TestNG Eclipse plug-in:

I’m talking about TestNG plugin for eclipse and its performance. Since some time this plugin contains new feature – searching already finished tests. (just “Search: ” with a field).

Performing this search makes that my eclipse does not response for minutes. After all I can see as a result – all historical run tests (according to my search request).

A bug that would cause Eclipse to be unresponsive for minutes is certainly unacceptable, so I tried to dig into this problem quickly. However, I was still not quite sure what this user was doing, and since I use and work on the plug-in on a regular basis, the problem must have been happening under some very specific and unusual circumstances. A few emails later, I received the following clarification:

Yes, I’m talking about the Search box and filtering.

‘historical data” means all collected test results. In case I don’t restart eclipse for weeks and start million tests (via providers) performance plays here important role.

In fact I don’t need the search box feature. Simply today I filled it up by mistake and lost half an hour… First to find, second to remove from the box…

I have to say I laughed when I read “I filled it by mistake and lost a half hour”. Okay, maybe not very funny from this user’s standpoint, but funny because I can see how having a million test instances could cause the plug-in to become unresponsive.

The problem is caused by the “Search filter”, a feature that appeared in the plug-in a few months ago:

When you type characters in this text box, the results displayed in the tree get filtered and only the nodes that match the text are shown, a functionality that is very convenient when you want to inspect the results of a specific test.

This poor user created a test run with one million test instances, accidentally typed a letter in the search box and then the plug-in diligently went through the million nodes, retaining only those that contain this letter. Obviously, deleting this character will cause the exact same thing to happen, except that the entirety of the test suite will be restored in the tree view.

This code is fairly naive and not really optimized to account for the creation of a million TreeItems, so I wasn’t really surprised to hear that doing so would cause Eclipse to become unresponsive for a while. After all, the addition of these SWT objects has to be made on the Event Dispatch Thread at some point, and whether you add them one by one or in bulk (which is what the plug-in is doing), it’s pretty much guaranteed that the dispatch thread will get severely hogged.

It’s a pretty simple problem with a few obvious solutions, the hard part is finding out the best course of action.

My first thought at trimming down the solution space was to decide that a test result featuring millions of objects was an unusual case and that optimizing this part of the code should therefore not be my priority. But just for the sake of argument, I explored several ways of doing so and I came up with various ideas, among which doing some precaching of subwords (for example, so I can match three letter words to nodes very quickly) or by virtualizing the tree. I’m sure there are plenty of other techniques available and I’m definitely interested in hearing about them.

But for now, I decided to take a lighter approach, so I made two changes:

1) When I detect that the tree contains more than a certain number of nodes, I configure the text box not to start filtering until it contains at least three characters. This addresses the problem of accidentally typing a letter in the box, and it also guarantees that when the filtering is triggered, it will match fewer nodes (since these will have to match three letters instead of just one). Obviously, the code still has to go through the million nodes.

This looks simple enough but I couldn’t help trying to devise clever ways of actually calibrating these numbers: when should this behavior be triggered? 1000 nodes? 10,000 nodes? Also, should the minimum number of characters be a function of that number? For example, require two characters for 1000 nodes but three for 10,000 nodes? How about using a log in base 1000 to create these pairs of values? Or should the base of the log be 50?
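For what it’s worth, here is one hypothetical way such a log-based calibration could look; the base and the floor are arbitrary choices for illustration, not what the plug-in actually does:

```java
public class FilterCalibration {
  // Require roughly one extra character per factor-of-`logBase` increase in
  // the node count, with a floor of one character for small trees. The small
  // epsilon guards against floating point rounding on exact powers.
  static int minCharsToFilter(int nodeCount, int logBase) {
    if (nodeCount < logBase) return 1;
    return 1 + (int) (Math.log(nodeCount) / Math.log(logBase) + 1e-9);
  }

  public static void main(String[] args) {
    System.out.println(minCharsToFilter(500, 1000));       // 1: filter immediately
    System.out.println(minCharsToFilter(10_000, 1000));    // 2
    System.out.println(minCharsToFilter(1_000_000, 1000)); // 3
  }
}
```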

2) I added a new “Clear results” icon to the toolbar:

Clicking it wipes the results displayed in all the panes, which guarantees that typing text in the search filter will do nothing (the search text box actually gets disabled).

The only concern I have with this approach is that users might get confused to see that sometimes, the search filter will activate with one character and other times, it requires three. They might even think that the search filter is broken if nothing happens after typing two characters.

You can work around this problem by providing a tool tip to the text box or, better, display a helpful text tip in a greyed out font inside the text box itself, saying something like “[Type at least three characters to search]”.

I found it interesting how such a simple problem can offer so many different avenues to solve it and how each one comes with benefits and costs that need to be carefully weighed. I’m hoping to have struck a correct balance with the current approach, one which solves the problem at hand without impacting most of the regular users too adversely.

Announcing TestNG 6.0

I’m happy to announce the release of TestNG 6.0. A lot of changes have gone into this release, which have slowly accumulated in the 5.14 line over the past few months. I’ll go over the most important features and bug fixes in this entry and I’m including the full change log at the bottom of this post.

If I had to pick the most prominent features for TestNG 6.0, they would be:

  • YAML.
  • Guice.
  • Improvements to the Eclipse plug-in.


YAML

As most of you know, TestNG uses an XML file to capture the entire description of your test suite, and while I think the schema for this file has managed to remain fairly simple over the years, the XML aspect of it can sometimes get in the way of making quick changes. I considered supporting an additional format for a while and I ended up narrowing my choices down to JSON and YAML. In the end, YAML won on tiny details such as not requiring the constant use of double quotes and the fact that JSON seems to be optimized more for computer than for human consumption. Here is a short TestNG YAML example:

name: SingleSuite
threadCount: 4
parameters: { n: 42 }

tests:
  - name: Regression2
    parameters: { count: 10 }
    excludedGroups: [ broken ]
    classes:
      - test.listeners.ResultEndMillisTest
      - test.listeners.TimeOutTest

Unfortunately, YAML and JSON don’t have as much tooling support as XML especially when it comes to editing and validation, but one thing that I liked in YAML is that it’s much more “copy/paste friendly” than XML. In XML, it’s rarely possible to just cut a line and paste it somewhere else in the file: you usually end up having to add surrounding tags or move closing tags. In YAML, you can usually just cut/paste lines right away. It’s also much easier to comment out regions than XML’s awkward “<!--” and “-->” delimiters.

I discussed my choice of YAML in this blog entry, and here is the official TestNG documentation.


Guice

From the very early days (around 2006), TestNG has made it possible for users to take control of the instantiation of their test classes thanks to the very useful IObjectFactory interface. Developers who needed to create the instances of their test classes themselves in order to prepopulate them could use this factory to perform whatever operations they needed and then return these instances back to TestNG, which would then use them to run the tests.

The release of Guice made this interface even more useful: instead of instantiating their test objects, users were now able to simply ask Guice to hand them a fully injected instance. This worked great but considering the number of discussions and requests that we received on the mailing-list on this topic, I started wondering if we couldn’t provide an even better way to support Guice directly with TestNG.

A few hours later, TestNG’s official Guice support was born:

@Guice(modules = GuiceExampleModule.class)
public class GuiceTest {

  @Inject
  ISingleton m_singleton;

  @Test
  public void singletonShouldWork() {
    m_singleton.doSomething();
  }
}


In this example, TestNG will use the GuiceExampleModule module to retrieve an instance of the GuiceTest class and then use that instance to run the tests.

You no longer need to use the object factory, you can now directly tell TestNG which Guice module(s) should be used to create the instance of a given test. Here is the original blog post that started the discussion (note that the syntax changed slightly since then) and the direct link to the documentation.

Improved Eclipse plug-in

I have been adding a lot of features to the Eclipse plug-in over the past months, among which:

  • A summary tab that allows you to browse the results very easily (shown partially above).
  • A search functionality for whenever you need to look for the result of a specific test among hundreds.
  • Vastly improved automatic conversions from JUnit 3 and JUnit 4 to TestNG.
  • A revamped display view that matches the testng.xml format more closely.

Here is a quick rundown of the latest features and the full documentation.


These are the big items; here is the full change log, along with the contributors:


Core

  • Added: @Guice(moduleFactory) and IModuleFactory
  • Added: @Guice(module)
  • Added: timeOut for configuration methods
  • Added: -randomizesuites (Nalin Makar)
  • Added: IConfigurable
  • Fixed: @Test(priority) was not being honored in parallel mode
  • Fixed: @Test(timeOut) was causing threadPoolSize to be ignored
  • Fixed: TESTNG-468: Listeners defined in suite XML file are ignored (Michael Benz)
  • Fixed: TESTNG-465: Guice modules are bound individually to an injector meaning that multiple modules can’t be effectively used (Danny Thomas)
  • Fixed: Method selectors from suites were not properly initialized (toddq)
  • Fixed: Throw an error when two data providers have the same name
  • Fixed: Better handling of classes that don’t have any TestNG annotations
  • Fixed: XmlTest#toXml wasn’t displaying the thread-count attribute
  • Fixed: TESTNG-415: Regression in 5.14.1: JUnit Test Execution no longer working
  • Fixed: TESTNG-436: Deep Map comparison for assertEquals() (Nikolay Metchev)
  • Fixed: Skipped tests were not always counted.
  • Fixed: test listeners that throw were not reporting correctly (ansgarkonermann)
  • Fixed: wasn’t working.
  • Fixed: In parallel “methods” mode, method interceptors that remove methods would cause a lock up
  • Fixed: EmailableReporter now sorts methods chronologically
  • Fixed: TESTNG-411: Throw exception on mismatch of parameter values (via DP and/or Inject) and test parameters
  • Fixed: IDEA-59073: exceptions that don’t match don’t have stack trace printed in console (Anna Kozlova)
  • Fixed: IDEA’s plug-in was not honoring ITest (fixed in TestResultMessage)
  • Fixed: Methods depending on a group they belong were skipped instead of throwing a cycle exception
  • Fixed: TESTNG-401: ClassCastException when using a listener from Maven
  • Fixed: TESTNG-186: Rename IWorkerApadter to IWorkerAdapter (Tomás Pollak)
  • Fixed: TESTNG-415: Assert.assertEquals() for sets and maps fails with ‘null’ as arguments
  • Fixed: typo -testRunFactory
  • Fixed: NPE while printing results for an empty suite (Nalin Makar)
  • Fixed: Invoke IInvokedMethodListener.afterInvocation after fixing results for tests expecting exceptions (Nalin Makar)
  • Fixed: TESTNG-441: NPE in SuiteHTMLReporter#generateMethodsChronologically caused by a race condition (Slawomir Ginter)


Eclipse plug-in

  • Added: Convert to YAML
  • Added: New global preference: JVM args
  • Added: Eclipse can now monitor a test-output/ directory and update the view when a new result is created
  • Added: Right clicking on a class/package/project now offers a menu “TestNG/Convert to TestNG”
  • Added: Excluded methods are now listed in the Summary tab
  • Added: “Description” column in the excluded methods table
  • Added: Dialog box when the plug-in can’t contact RemoteTestNG
  • Added: Double clicking on an excluded method in the Summary tab will take you to its definition
  • Added: If you select a package before invoking the “New TestNG class” wizard, the source and package text boxes will be auto-filled
  • Added: When an item is selected in a tab, the same item will be selected when switching tabs
  • Added: A new “Summary” tab that allows the user to see a summary of the tests, sort them by time, name, etc…
  • Added: It’s now possible “Run/Debug As” with a right click from pretty much any element that makes sense in the tree.
  • Added: JUnit conversion: correctly replaces assertNull and assertNotNull
  • Added: JUnit conversion: removes super.setUp() and super.tearDown()
  • Added: JUnit conversion: removes @Override
  • Added: JUnit conversion: replaces @Test(timeout) with @Test(timeOut)
  • Added: JUnit conversion: replaces @Test(expected) with @Test(expectedExceptions)
  • Added: JUnit conversion: replaces fail() with (
  • Added: JUnit conversion: replaces Assert with AssertJUnit
  • Added: The progress bar is now orange if the suite contained skipped tests and no failures
  • Added: Skipped test and suite icons are now orange (previously: blue)
  • Added: New method shortcuts: “Alt+Shift+X N”, “Alt+Shift+D N” (Sven Johansson)
  • Added: “Create TestNG class” context menu
  • Added: When generating a new class, handle overridden methods by generating mangled test method names
  • Fixed: Green nodes could override red parent nodes back to green
  • Fixed: Was trying to load the classes found in the XML template file
  • Fixed: Stack traces of skipped tests were not showing in the Exception view
  • Fixed: XML files should be run in place and not copied.
  • Fixed: NPE when you select a passed test and click on the Compare Result icon (Mohamed Mansour)
  • Fixed: When the run is over, the plug-in will no longer force the focus back to the Console view
  • Fixed: The counter in the progress bar sometimes went over the total number of test methods
  • Fixed: org.eclipse.ui.internal.ErrorViewPart cannot be cast to org.testng.eclipse.ui.TestRunnerViewPart
  • Fixed: Workspace preferences now offer the “XML template” option as well as the project specific preferences (Asiel Brumfield)
  • Fixed: TESTNG-418: Only last suite-file in testng.xml run by Eclipse plugin


  • Added: Section on Selenium (Felipe Knorr Kuhn)
  • Added: Link to an article on TestNG, Mockito and Emma in the Misc section

Upgrading to TestNG 6.0

Ant users can download TestNG 6.0 directly from the web site while Maven users only need to specify the following dependency:
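Assuming the standard Maven Central coordinates for TestNG (the snippet below is a sketch; double-check the version against the release notes):

```xml
<dependency>
  <groupId>org.testng</groupId>
  <artifactId>testng</artifactId>
  <version>6.0</version>
  <scope>test</scope>
</dependency>
```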


Don’t forget to update your Eclipse plug-in as well.

One click test conversions

If converting your tests to TestNG is one of your new year’s resolutions, you are in luck.

Introducing the improved JUnit to TestNG converter.

A couple of months ago, I gave a preview of the new features in the TestNG Eclipse plug-in and I observed that more and more people were converting their tests from JUnit 3 and JUnit 4 to TestNG. The latter was a surprise to me since I never really expected anyone would want to move away from JUnit 4 once they had migrated to it.

TestNG has long supported individual class conversions in the form of Quick Fixes:

I recently expanded this support and turned it into a full refactoring, which means that you can now apply it to entire packages, source folders or even projects.

To use it, install the latest TestNG Eclipse plug-in, open your Package Explorer and right click on either a package, a source folder or a project:

The refactoring wizard contains two pages.

The first one lets you customize the testng.xml that’s about to be generated (you can also choose not to generate one):

The next page shows you a view with all the changes that are about to be made to your source files. These changes are similar to the ones made with the Quick Fix except that they now apply to multiple files:

On this page, you can exclude certain files from being converted, and when you’re happy, press the Finish button.

Like all refactorings in Eclipse, you can undo your changes in one click if you change your mind:

In a next post, I’ll show how you can use this new functionality to help you check that your unit tests are as isolated as you think they are.

Now go ahead and convert your tests, that’s one less new year’s resolution you have to worry about!

More on multithreaded topological sorting

I received a few interesting comments on my previous entry regarding the new multithreaded topological sort I implemented in TestNG and there is one in particular from Rafael Naufal that I wanted to address:

The @Priority annotation couldn’t be adapted to say which free methods get scheduled first? BTW, the responsibility of knowing which nodes are free couldn’t be moved to the graph of test methods?

This is pushing my current algorithm even further in the sense that we not only want to schedule free nodes as they become available, we also want to schedule them in order of importance.

What makes a node more important than another? Its level of dependencies. The more a node is depended upon, the more beneficial it is to schedule it as soon as possible since this will end up freeing more nodes. Admittedly, you are still bounded by the size of your thread pool, but this is exactly what we want: increasing the pool size should lead to more parallelism and therefore better performance, but the current scheduling algorithm being fair (or random) means that we are not guaranteed to see this performance increase.

My first reaction was to modify my Executor but, as it turns out, you can actually do this with the existing implementation. The constructors of all the Executors take a BlockingQueue as a parameter, which is the queue that the Executor will use to process the workers. Unsurprisingly, there already is a priority queue called PriorityBlockingQueue.

All you need to do is to use that queue instead of the default one when you create your Executor and then make sure that the workers you pass it have a natural ordering. In this case, the weight of a worker is how many other workers depend on it, which is very easy to calculate.
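As a sketch of the idea (the worker class and its weight field are illustrative, not TestNG's actual types): the executor is built on a PriorityBlockingQueue, and workers compare themselves by how many other workers depend on them. One caveat: tasks must be handed to execute() directly, because submit() wraps them in a FutureTask, which is not Comparable and would make the priority queue throw a ClassCastException.

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical worker: its weight is how many other workers depend on it.
class MethodWorker implements Runnable, Comparable<MethodWorker> {
  final String name;
  final int dependedUponCount;

  MethodWorker(String name, int dependedUponCount) {
    this.name = name;
    this.dependedUponCount = dependedUponCount;
  }

  @Override public void run() { /* run the test method here */ }

  // Heavier workers (more dependents) sort first in the priority queue.
  @Override public int compareTo(MethodWorker other) {
    return Integer.compare(other.dependedUponCount, this.dependedUponCount);
  }
}

class PriorityExecutorFactory {
  // The priority queue replaces the executor's default FIFO work queue.
  static ThreadPoolExecutor create(int threads) {
    return new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS,
        new PriorityBlockingQueue<Runnable>());
  }
}
```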

On a related topic, I wanted to get a closer look at how the algorithm I described in my previous blog post actually works. I described the theory and I have tests that show that it seems to work as expected, but it occurred to me that I could actually “view it from the inside” with little effort.

First, I added a method called toDot which generates a Graphviz file representing the current graph. This turned out to be trivial:

/**
 * @return a .dot file (GraphViz) version of this graph.
 */
public String toDot() {
  String FREE = "[style=filled color=yellow]";
  String RUNNING = "[style=filled color=green]";
  String FINISHED = "[style=filled color=grey]";
  StringBuilder result = new StringBuilder("digraph g {\n");
  Set<T> freeNodes = getFreeNodes();
  String color;
  for (T n : m_nodesReady) {
    color = freeNodes.contains(n) ? FREE : "";
    result.append("  " + getName(n) + color + "\n");
  }
  for (T n : m_nodesRunning) {
    color = freeNodes.contains(n) ? FREE : RUNNING;
    result.append("  " + getName(n) + color + "\n");
  }
  for (T n : m_nodesFinished) {
    result.append("  " + getName(n) + FINISHED + "\n");
  }
  for (T k : m_dependingOn.getKeys()) {
    List<T> nodes = m_dependingOn.get(k);
    for (T n : nodes) {
      String dotted = m_nodesFinished.contains(k) ? "style=dotted" : "";
      result.append("  " + getName(k) + " -> " + getName(n) + " [dir=back " + dotted + "]\n");
    }
  }
  result.append("}\n");
  return result.toString();
}

Then I modified the executor to dump the graph every time a worker terminates, and finally, I wrote a shell script to convert these dot files into images and to create an HTML file. I ran a simple test case, processed the files with the shell script and here is the final result.

A yellow node is “free”, green means that the node is “ready” (to be run in the thread pool), grey is “finished” and white nodes haven’t been processed yet. Dotted arrows represent dependencies that have been satisfied.

As you can see, the execution matches very closely what you would expect based on my description of the algorithm and I confirmed that changing the size of the thread pool creates different executions.


Hard core multicore with TestNG

I recently implemented a new feature in TestNG that took me down an interesting technical path that ended up mixing graph theory with concurrency.

Here are a few notes.

The problem

TestNG allows you to declare dependencies among your test methods. Here is a simple example:

@Test
public void a1() {}

@Test(dependsOnMethods = "a1")
public void b1() {}

@Test
public void x() {}

@Test
public void y() {}
In this example, b1() will not run until a1() has completed and passed. If a1() fails, b1() will be marked as “Skipped”. For the purpose of these articles, I call both methods a1() and b1() “dependent” methods, while x() and y() are “free” methods.

Things get more interesting when you want to run these four test methods in parallel. When you specify that these methods should be run in a pool of three threads, TestNG still needs to maintain the ordering of a1() and b1(). The way it accomplishes this is by running all the dependent methods in the same thread, guaranteeing that not only will they not overlap but also that the ordering will be strictly respected.

The current algorithm is therefore simple:

  • Break all the test methods into two categories: free and dependent.
  • Free methods are thrown into the thread pool and executed by the Executor, one method per worker, which guarantees full parallelism.
  • Dependent methods are sorted and run into an executor that contains just one thread.
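In code, the old scheme amounts to something like this sketch (the class and method names here are mine, not TestNG's): free methods fan out over the pool, while the dependent methods, already sorted, all share a single-threaded executor.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the pre-5.11 scheduling scheme.
public class OldScheduler {
  public static void run(List<Runnable> freeMethods, List<Runnable> sortedDependentMethods)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(3);       // full parallelism
    ExecutorService single = Executors.newSingleThreadExecutor(); // strict ordering
    freeMethods.forEach(pool::execute);
    sortedDependentMethods.forEach(single::execute);
    pool.shutdown();
    single.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
    single.awaitTermination(10, TimeUnit.SECONDS);
  }
}
```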

This has been the scheduling algorithm for more than five years. It works great, but it’s not optimal.


Dependent methods are a very popular feature of TestNG, especially in web testing frameworks such as Selenium, where the testing of pages is very dependent on the ordering in which operations are performed on these pages. These tests are typically made of a majority of dependent methods, which means that the current scheduling algorithm makes it very hard to leverage any parallelism at all in these situations.

For example, consider the following situation:
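Concretely, four methods forming two independent dependency chains might look like the sketch below (the method bodies are my own guess, and the @Test stand-in is defined locally so the snippet compiles without TestNG):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Minimal stand-in for TestNG's @Test so this sketch is self-contained.
@Retention(RetentionPolicy.RUNTIME)
@interface Test {
  String[] dependsOnMethods() default {};
}

// Two independent chains: b1 depends on a1, b2 depends on a2.
class DependentMethods {
  @Test public void a1() {}
  @Test(dependsOnMethods = "a1") public void b1() {}
  @Test public void a2() {}
  @Test(dependsOnMethods = "a2") public void b2() {}
}
```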

Since all four methods are dependent, they will all be running in the same thread, regardless of the thread pool size. An obvious improvement would be to run a1() and b1() in one thread and a2() and b2() in a different thread.

But why not push things further and see if we can’t just throw all four of these methods into the main thread pool and still respect their ordering?

This thought led me to take a closer look at the concurrency primitives available in the JDK, and more specifically, Executors.

My first question was whether it was possible to add workers to an Executor without necessarily having them ready to run right away, but this idea soon appeared to me as going against the principle of Executors, so I abandoned it.

The other idea was to see if it was possible to start with only a few workers and then add more workers to an Executor as it’s running, which turns out to be legal (or at least, not explicitly prohibited). Looking through the existing material, it seems to me that Executors typically do not modify their own set of workers. They get initialized and then external callers can add workers to them with the execute() method.

At this point, the solution was becoming fairly clear in my mind, but before I get into details, we need to take a closer look at sorting.

Topological sort

In the example shown at the beginning, I said that TestNG was sorting the methods before executing them but I didn’t explain exactly how this was happening. As it turns out, we need a slightly different sorting algorithm than the one you are used to.

Looking back at this first example, it should be obvious that there is more than one correct way to order the methods:

  • a1() b1() x() y()
  • x() a1() b1() y()
  • y() a1() x() b1()

In short, any ordering that executes a1() before b1() is legal. What we are doing here is trying to sort a set of elements that cannot all be compared to each other. In other words, if I pick two random methods “f” and “g” and I ask you to compare them, your answer will be either “f must run before g”, “g must run before f” or “I cannot compare these methods” (for example if I give you a1() and x()).

This is called topological sorting. This link will tell you all you need to know about it, but if you are lazy, suffice it to say that there are basically two algorithms to achieve this kind of sort.
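The better-known of the two, Kahn's algorithm, repeatedly peels off the free nodes, which maps nicely onto the scheduling problem discussed here. A generic sketch (the map shape, from a node to the nodes that depend on it, is my own choice):

```java
import java.util.*;

public class TopoSort {
  /**
   * Kahn's algorithm: returns one valid topological ordering.
   * "dependents" maps a node to the nodes that depend on it.
   */
  public static List<String> sort(Map<String, List<String>> dependents, Set<String> nodes) {
    // Count how many unsatisfied dependencies each node has.
    Map<String, Integer> inDegree = new HashMap<>();
    for (String n : nodes) inDegree.put(n, 0);
    for (List<String> ds : dependents.values())
      for (String d : ds) inDegree.merge(d, 1, Integer::sum);

    // Start with the free nodes.
    Deque<String> free = new ArrayDeque<>();
    for (String n : nodes) if (inDegree.get(n) == 0) free.add(n);

    List<String> result = new ArrayList<>();
    while (!free.isEmpty()) {
      String n = free.poll();
      result.add(n);
      // Running n satisfies one dependency of each of its dependents.
      for (String d : dependents.getOrDefault(n, Collections.emptyList()))
        if (inDegree.merge(d, -1, Integer::sum) == 0) free.add(d);
    }
    return result;  // shorter than "nodes" if the graph contained a cycle
  }
}
```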

Let’s see the execution of a topological sort on a simple example.

The following graphs represent test methods and how they depend on each other. Methods in green are “free methods”: they don’t depend on any other methods. Arrows represent dependencies and dotted arrows are dependencies that have been satisfied. Finally, grey nodes are methods that have been executed.

First iteration, we have four free methods. These four methods are ready to be run.

Result so far: { a1, a2, x, y }

The four methods have been run and they “freed” two new nodes, b1 and b2, which become eligible for the next wave of executions. Note that while one of d‘s dependencies has been satisfied (a1), d still depends on b1 so it’s not free yet.

Result so far: { a1, a2, x, y, b1, b2 }

b2 and b1 have run and they free three additional methods.

The last methods have run, we’re done.

Final result: { a1, a2, x, y, b1, b2, c1, c2, d }

Again, note that this is not the only valid topological sort for this example: you can reorder certain elements as long as the dependencies are respected. For example, a result that would start with {a2, a1} would be as correct as the one above, which starts with {a1, a2}.

This is a pretty static, academic example. In the case of TestNG, things are a lot more dynamic and the entire running environment needs to be re-evaluated each time a method completes. Another important aspect of this algorithm is that all the free methods need to be added to the thread pool as soon as they are ready to run, which means that the ExecutorService will have workers added to its pool even as it is running other workers.

For example, let’s go back to the following state:

At this stage, we have two methods that get added to the thread pool and run on two different threads: b1 and b2. We can then have two different situations depending on which one completes first:

b1 finishes first and frees both c1 and d.


b2 finishes first but doesn’t free any new node.

A new kind of Executor

Since the early days, TestNG’s model has always been very dynamic: what methods to run and when is being decided as the test run progresses. One of the improvements I have had on my mind for a while was to create a “Test Plan” early on. A Test Plan means that the engine would look at all the TestNG annotations inside the classes and it would come up with a master execution plan: a representation of all the methods to run, which I can then hand over to a runner that would take care of it.

Understanding the scenario above made me realize that the idea of a “Test Plan” was doomed to fail. Considering the dynamic aspect of TestNG, it’s just plain impossible to look at all the test classes during the initialization and come up with an execution plan, because as we saw above, the order in which the methods are run will change depending on which methods finish first. A Test Plan would basically make TestNG more static, while we need the exact opposite of this: we want to make it even more dynamic than it is right now.

The only way to effectively implement this scenario is basically to reassess the entire execution every time a test method completes. Luckily, Executors allow you to receive a callback each time a worker completes, so this is the perfect place for this. My next question was to wonder whether it was legal to add workers to an Executor when it’s already running (the answer is: yes).

Here is an overview of what the new Executor looks like.

The Executor receives a graph of test methods to run in its constructor and then simply revolves around two methods:

/**
 * Create one worker per node and execute them.
 */
private void runNodes(Set<ITestNGMethod> nodes) {
  List<IMethodWorker> runnables = m_factory.createWorkers(m_xmlTest, nodes);
  for (IMethodWorker r : runnables) {
    setStatus(r, Status.RUNNING);
    try {
      execute(r);  // hand the worker to this Executor
    }
    catch(Exception ex) {
      // ...
    }
  }
}

The second part is to reassess the state of the world every time a method completes:

@Override
public void afterExecute(Runnable r, Throwable t) {
  setStatus(r, Status.FINISHED);
  synchronized(m_graph) {
    if (m_graph.getNodeCount() == m_graph.getNodeCountWithStatus(Status.FINISHED)) {
      shutdown();  // every node has run: we're done
    } else {
      Set<ITestNGMethod> freeNodes = m_graph.getFreeNodes();
      runNodes(freeNodes);
    }
  }
}
When a worker finishes, the Executor updates its status in the graph. Then it checks if we have run all the nodes, and if we haven’t, it asks the graph the new list of free nodes and schedules these nodes for running.
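To make the mechanism concrete, here is a self-contained toy version of such an executor (all the names are mine, not TestNG's): the graph state is just a map of unsatisfied dependencies, and every completed worker re-queries it for newly freed nodes and schedules them.

```java
import java.util.*;
import java.util.concurrent.*;

// Toy graph-driven executor: free nodes are scheduled as soon as their
// dependencies are satisfied, across the whole thread pool.
public class GraphExecutor extends ThreadPoolExecutor {
  private final Map<String, Set<String>> remainingDeps = new HashMap<>();
  private final Map<String, List<String>> dependents = new HashMap<>();
  private final List<String> finished = Collections.synchronizedList(new ArrayList<>());
  private final CountDownLatch done;

  public GraphExecutor(int threads, Map<String, List<String>> dependsOn, Set<String> nodes) {
    super(threads, threads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
    for (String n : nodes)
      remainingDeps.put(n, new HashSet<>(dependsOn.getOrDefault(n, Collections.emptyList())));
    for (Map.Entry<String, List<String>> e : dependsOn.entrySet())
      for (String dep : e.getValue())
        dependents.computeIfAbsent(dep, k -> new ArrayList<>()).add(e.getKey());
    done = new CountDownLatch(nodes.size());
  }

  /** Runs the whole graph and returns the completion order. */
  public List<String> run() throws InterruptedException {
    scheduleFreeNodes();
    done.await();
    shutdown();
    return finished;
  }

  // Schedule every node whose dependencies have all been satisfied.
  private synchronized void scheduleFreeNodes() {
    Iterator<Map.Entry<String, Set<String>>> it = remainingDeps.entrySet().iterator();
    while (it.hasNext()) {
      Map.Entry<String, Set<String>> e = it.next();
      if (e.getValue().isEmpty()) {
        String node = e.getKey();
        it.remove();  // schedule each node exactly once
        execute(new NamedTask(node));
      }
    }
  }

  // Reassess the state of the world every time a worker completes.
  @Override protected void afterExecute(Runnable r, Throwable t) {
    String node = ((NamedTask) r).name;
    finished.add(node);
    synchronized (this) {
      for (String d : dependents.getOrDefault(node, Collections.emptyList())) {
        Set<String> deps = remainingDeps.get(d);
        if (deps != null) deps.remove(node);  // this dependency is now satisfied
      }
      scheduleFreeNodes();
    }
    done.countDown();
  }

  static class NamedTask implements Runnable {
    final String name;
    NamedTask(String name) { this.name = name; }
    @Override public void run() { /* run the test method here */ }
  }
}
```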

Wrapping up

This is basically a description of the new TestNG scheduling engine. I tried to focus on general concepts and I glossed over a few TestNG specific features that made this implementation more complex than I just described, but overall, implementing this new engine turned out to be fairly straightforward thanks to TestNG’s layered architecture.

With this new implementation, TestNG is getting as close as possible to offering maximal parallelism for test running, and a few Selenium users have already reported tremendous gains in test execution times (from an hour down to ten minutes).

When I ran the tests with this new engine, very few tests failed, and the ones that did were exactly the ones that I wanted to fail (such as one that verified that dependent methods execute in the same thread, which is exactly what this new engine is fixing). Similarly, I added new tests to verify that dependent methods now share the thread pool with free methods, which turned out to be trivial since I already have plenty of support for this kind of testing.

This new engine will be available in TestNG 5.11, which you can beta test here.

Update: I posted a follow up here.

Should programming languages support unit testing natively?

I used to be strongly opposed to this idea but I started changing my mind recently. Here is what happened.

The bad

Production and test code can be integrated at various levels:

  1. Supported by the language.
  2. Not supported by the language but mixing production and test code in the same classes.
  3. Production and test code live in different classes but in the same directory.
  4. Production and test code live in different directories.

I have always thought that options 2) and 3) are a bad idea because they make it hard to read and review the code, they contribute to the creation of huge classes and they negatively impact your build infrastructure (which must now be able to strip out the test code when you want to create a shippable binary). We (Google) feel strongly about these points, so we are strictly enforcing option 4) (although we often put our tests in the same package as the production code).

I think this practice is the most common out there and it works very well.

With that in mind, wouldn’t a language that natively supports unit testing be the worst case scenario?

The epiphany

Not long ago, I reflected on my testing habits for the past ten years, and I made a couple of interesting observations:

  • I feel the need to write tests for the code that I write very often.
  • Just as often, that need is thwarted by environmental constraints, so I end up not writing these tests.

My experience is with large software, large teams and huge code bases. When you work in this kind of environment, it’s very common for the company to have developed its own testing infrastructure. Sure, the code remains code, but how you run it and how you deploy it will vary greatly from company to company (and sometimes even from project to project).

Typically, I code a feature, iterate over it a few times and I reach a point when I’m pretty happy with its shape: it’s looking decent, it gets the job done and while there is obviously more work to be done on it, it’s mature enough that writing tests for it at this point will not be a waste.

The code to write these tests is usually pretty obvious, so I can form it in my head pretty quickly and materialize it in code not long after that. Now I need to find a way to actually run this test and make it part of our bigger testing infrastructure, and this is where things usually get ugly. I typically find myself having to change or update my environment, invoke different tools, pull out various wiki/HTML pages to brush up on what’s required to integrate my tests to the big picture.

The worst part is that I will probably have to relearn everything from scratch when I switch to the next project or the next job. Again, I will write the test (which is pretty easy since it’s the same language I used to write the production code) and I will find myself having to learn a brand new environment to run that test.

The environmental hurdle is not easy to address, but if the language that I am coding in supported unit tests natively, I would probably be much more tempted to write these tests since 1) there is now an obvious location where they should go and 2) it’s very likely that the test infrastructure in place knows how to run these tests that I will be writing.

The main gain here is that the developer and the testing infrastructure now share a common knowledge: the developer knows how to write tests and the infrastructure knows how to access these tests. And since this mechanism is part of the language, it will remain the same regardless of the project or the company.

How do we do it?

So what would a language that natively supports unit tests look like?

I know first hand that writing a test framework is not easy, so it’s important to make sure that the test feature remains reasonably scoped and that it doesn’t impact the language complexity too much. You will notice that throughout this entire article, I make a point of saying “unit test” and not just “test”. As much as TestNG is focused on enabling the entire spectrum of testing, I think it’s important for a language to only support unit testing, or more precisely, to only make it easy to test the compilation unit that the test is found in.

Interestingly, very few modern languages support unit testing, and the only one that I found among the “recent” ones is D (I’m sure commenters will be happy to point me to more languages).

D’s approach is pretty minimal: you can declare a unittest section in your class. This keyword acts as a method and you simply write your tests inside:

// D
class Sum {
  int add(int x, int y) { return x + y; }

  unittest {
    Sum sum = new Sum;
    assert(sum.add(3, 4) == 7);
    assert(sum.add(-2, 0) == -2);
  }
}

This is as barebones as it can get. The advantage is that the impact on the language itself is minimal, but I’m wondering if I might want to be able to write different unit test methods instead of having just one that contains all my tests. And if we’re going down that path, why not make the unittest keyword be the equivalent of a class instead of just a method?

// Pseudo-Java
public class Sum {
  public int add(int x, int y) { return x + y; }

  unittest {
    public void positiveNumbers() {
      Sum sum = new Sum();
      assert(sum.add(3, 4) == 7);
    }

    public void negativeNumbers() {
      Sum sum = new Sum();
      assert(sum.add(-2, 0) == -2);
    }
  }
}
As I said earlier, I think it’s very important for this feature to remain as simple as possible, so what features from sophisticated testing frameworks should we remove and which ones should we keep?

If we stick with D’s approach, there is probably little we can add, but if we go toward a class keyword, then there are probably two features that I think would be useful:

  • Method setUp/tearDown (which would already be useful in the example above, where we create a new Sum object in both test methods).
  • Exception testing.

At this point, I’m already feeling a bit uncomfortable with the extra complexity, so maybe D’s simplistic approach is where we should draw the line.

What do you think about native support for unit testing in programming languages? And what features would you like to see?

Why I think that IDEA going open source is not a good sign

It looks like I shocked quite a few people with my recent prediction of doom for IDEA, so I thought I’d take some time to elaborate.

Here is what I said:

cbeust: JetBrains deserves the utmost respect for what they have created and pioneered, but IDEA going opensource means that it will now slowly die

cbeust: About IDEA: commercial software that goes open source never ends well, even for products that don’t suck

First of all, I’d like to make it crystal clear that I have nothing but the utmost respect for the guys at JetBrains, who possess three very rare qualities:

  • They are innovators. It’s not exactly easy to come up with new ideas, whatever your field is, but these guys have come up with a lot of concepts that are now part of every developer’s daily life.
  • They know how to write a great application. Who would have imagined that it would be possible to create not only such a snappy Swing application but also one that just seems to read your thoughts?
  • They managed to sell their product while competing against a free product that is of equally high quality (Eclipse) and funded by a very rich company (IBM).

About that last point: there is a saying that claims that if you are trying to sell software that competes against free products, you should change business. I don’t buy that, and it’s not just because I used to work for a company that was doing exactly that (BEA). A lot of companies are doing fine selling products that compete with free software, and they all have one thing in common: their product doesn’t suck. JetBrains can certainly be counted as one of them.

Having said all this, I still see the move from commercial to open source as a sign that the business is struggling. A lot of companies have gone down that path in the past and all of them have tried to pass it off as a selfless action meant to help the community, but the truth is that they were just having a harder time selling their software, and going open source is usually a last-ditch effort to regain mindshare while trying to make money somewhere else.

I can’t think of a single example where a struggling commercial product suddenly started regaining market share after going open source. Can you?

I have no insight into how well JetBrains is doing, so it’s quite possible that they are one of these rare exceptions. Maybe they were making tons of money with IDEA licenses and they really decided to suddenly give the product away out of kindness for the Java community. Even with these parameters, it still doesn’t really sound like a good idea to me, but oh well.

Whatever side of the fence you stand on, one thing is clear about this move: it means less revenue for JetBrains for the foreseeable future. And what this means is that they will have less means to compete against Eclipse and less power to add features to either of the editions (the Community one or the Ultimate one).

And this is where a lot of companies make a fatal mistake: they think that making their software open source will automatically generate a ground swell of patches and additions from the community that will float them back to the top.

And in my experience, this never happens.

Oh, patches will be sent, and I’m sure a few isolated developers will come up with very cool additions to IDEA, but without a committee of JetBrains employees at the receiving end to sort through these patches and act as a strong steward (“reject this one”, “accept this one as is”, “accept this one but it needs more work”, “accept this one but we need to integrate it with XXX”, etc…), these patches will just pile up and never get processed.

The challenge here is not just technical, it’s about product management, and open source communities are just not good at that. Hackers scratch their itch and when they’re done, they move on to the next itch with very little interest in how buggy their code is or how well it integrates with the rest of the platform. They leave that up to others.

So I’m pretty pessimistic about IDEA’s future. I think the community edition will soon start stagnating and in one year, it will have made little progress. The Ultimate edition might fare well for a little while, as long as fans help support it by paying the $249, but I’m skeptical that this revenue will be enough to keep such an ambitious product alive.

And of course, Eclipse’s apparently unstoppable momentum isn’t helping. These guys just don’t seem to rest, and the amount of features and directions that they keep expanding on is just mind-boggling.

I wish the best to IDEA. I really do. I think Eclipse wouldn’t be nearly as good as it is right now if IDEA wasn’t around and IDEA’s disappearance from the landscape would mean that Eclipse risks stagnation as well. Competition is good for users. I really hope that I’m wrong with my predictions.

Let’s meet again here in one year.