Archive for March, 2010

Wireless mice

I decided to upgrade my mice both at work and at home.

My number one priority is: no cables. I’m so tired of seeing inextricable piles of cables around my computers that I’m now actively on a mission to eradicate them all. A secondary priority is that I’d like the mouse to be fairly neutrally shaped, since I use my left hand. I’m right-handed, but more than twenty years ago, it occurred to me that I should probably spare my right hand as much as possible, and since moving the mouse is an activity that doesn’t require a lot of accuracy, I tried to switch. It took me about a week, but it’s actually a lot easier than it sounds, and yes, you can be an effective left-handed mouse user, even for games. But let’s save this for a future post.

Since I play video games at home, I wanted that mouse to have a few additional features, such as high resolution and adjustable DPI.

Finding decent wireless mice turned out to be harder than I initially thought, despite a few recommendations on Twitter and Buzz. A few people suggested the Mighty Mouse, but I quickly ruled it out for a few reasons:

  • It’s very, very small.
  • It doesn’t have adjustable DPI.
  • The touch features strike me as awkward (you have to curl your finger and make sure you don’t move the mouse while you touch it) and, worse, they look like they will greatly increase the risk of Repetitive Strain Injury.

For work, I ended up picking the Logitech Marathon M705:

Its two minor weaknesses are that it’s a bit smaller than I would like and that clicking the middle button requires some training, but overall, I am pretty happy with its feel and performance.

For home, the only mouse that met all my criteria is the Razer Mamba:

The Razer is by far the best mouse I have ever used. Ever. It’s just incredible: it’s the kind of mouse you would expect Apple to produce, except that Apple seems to be really terrible at designing mice. First of all, the Razer feels fantastic in your hand. It’s light but not too light, and the side buttons are very well positioned, even if you are using your left hand.

The Razer is very easy to recharge: no fumbling around trying to find minuscule connectors, just place the mouse on its charging stand. Another clever feature is that if you’re running out of battery but still need your mouse, you can unplug the cable from the stand and plug it into the mouse, turning your Razer into a corded mouse for as long as you need. I’ve only had to do this once, though, and only because I hadn’t placed the mouse on its stand for more than a week.

The Razer is expensive ($129.99), but considering the amount of time I spend using a mouse every day, it’s absolutely worth it.

Announcing TestNG 5.12

I’m happy to announce the release of TestNG 5.12.

The most important change is that JDK 1.4 and Javadoc annotations are no longer supported. If you are still using JDK 1.4, you will have to stick with TestNG 5.11.

The most notable new feature in this release is the @Listeners annotation, which lets you add listeners directly in Java (no need for XML). Any TestNG listener (i.e. any class that implements ITestNGListener) can be used as a value for this annotation, except IAnnotationTransformer, which needs to be known before TestNG starts parsing your annotations.

For example, suppose that one test class is failing and you want to share the full report with a coworker. You can use the EmailableReporter to generate a single HTML file that you can then email. All you need to do is add the following @Listeners annotation:

import org.testng.annotations.Listeners;
import org.testng.reporters.EmailableReporter;

@Listeners(EmailableReporter.class)
public class VerifySampleTest {
  // ...
}

and then run the class with TestNG, which will generate the following report:

I also made a few improvements to the Eclipse plug-in, such as a New File wizard:

You can download TestNG and the TestNG Eclipse plug-in at the usual places, and the new version will be available in the main Maven repository shortly. Here is the full change log for 5.12:

Core:
  • Removed: JDK 1.4 and Javadoc annotation support
  • Added: @Listeners
  • Added: IAttributes#getAttributeNames and IAttributes#removeAttribute
  • Added: testng-results.xml now includes test duration in the <suite> tag (Cosmin Marginean)
  • Added: Injection now works for data providers
  • Added: TestNG#setObjectFactory(IObjectFactory)
  • Added: Priorities: @Test(priority = -1)
  • Added: New attribute invocation-numbers in <include>
  • Added: testng-failed.xml only contains the data provider invocations that failed
  • Added: IInvokedMethodListener2 to have access to ITestContext in listeners (Karthik Krishnan)
  • Fixed: @Before* methods run from factories were not properly interleaved
  • Fixed: The TextReporter reports skipped tests as PASSED (Ankur Agrawal)

Eclipse plug-in:
  • Added: New file wizard: can now create a class with annotations, including @DataProvider
  • Added: You can now select multiple XML suites to be run in the launch dialog
  • Fixed: @Test(groups = [constant]) was using the name of the constant instead of its value.
  • Fixed: Issue 476 – NPE with Groovy Tests (Andrew Eisenberg)
  • Fixed: The custom XML file is now created in the temp directory instead of inside the project
  • Fixed: In the launch dialog, now display an error if trying to pick groups when no project is selected
  • Fixed: Was not setting the parallel attribute correctly on the temporary XML file

Better mock testing with TestNG

When you are using mocks in your tests, you usually follow the same pattern:

  1. Create your mock.
  2. Set it up.
  3. Play it.
  4. Verify it.

Of these four steps, only steps 2 and 3 are specific to your tests: creating and verifying your mock is usually the exact same code, which you should therefore try to factor out. It’s easy to create your mock in a @BeforeMethod, but the verification aspect has never been well integrated into testing frameworks.
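To make the four steps concrete, here is a minimal, hand-rolled sketch of the pattern. RecordingMock, expectCall and runScenario are names invented for this sketch; they are not TestNG’s API nor any real mocking library’s:

```java
import java.util.ArrayList;
import java.util.List;

/** A minimal hand-rolled mock: records calls so they can be verified later. */
class RecordingMock {
  private final List<String> expected = new ArrayList<>();
  private final List<String> actual = new ArrayList<>();

  // Step 2: set up -- declare the calls we expect.
  void expectCall(String call) { expected.add(call); }

  // Step 3: play -- the code under test invokes the mock.
  void call(String call) { actual.add(call); }

  // Step 4: verify -- identical for every test, which is exactly
  // the code we would like to factor out.
  boolean verify() { return expected.equals(actual); }
}

public class MockPatternSketch {
  public static boolean runScenario() {
    RecordingMock mock = new RecordingMock();   // 1. create
    mock.expectCall("save:42");                 // 2. set up
    mock.call("save:42");                       // 3. play (normally done by the code under test)
    return mock.verify();                       // 4. verify
  }

  public static void main(String[] args) {
    System.out.println(runScenario() ? "verified" : "failed");
  }
}
```

In a real test class, step 1 would typically live in a @BeforeMethod, and step 4 is the part this post is about automating.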

Until today.

Mock verification, or more generally, systematic invocation of certain test methods, is a feature that’s been requested several times by TestNG users and while I’ve often been tempted to go ahead and implement it, something has always felt “not right” about it.

As it turns out, there are three ways you can automate mock verification in TestNG: one that is incorrect, one that doesn’t work and one that actually works.

Let’s start with the incorrect one:

  @BeforeMethod
  public void init() {
    this.mock = // create mock
  }

  @Test
  public void t1() {
    // set up and play mock
  }

  @Test
  public void t2() {
    // set up and play mock
  }

  @AfterMethod
  public void verify() {
    // verify mock
  }

This approach will run the verification after each test method, but the problem is that this code is being run in a configuration method (@AfterMethod) instead of a test method (@Test). Configuration methods are handled differently by TestNG in how they fail and how they get reported (in a nutshell, a configuration method that fails typically aborts the entire run since your test environment is no longer stable).

My next thought was to use TestNG’s groups and dependencies:

  @BeforeMethod
  public void init() {
    this.mock = // create mock
  }

  @Test(groups = "mock")
  public void t1() {
    // set up and play mock
  }

  @Test(groups = "mock")
  public void t2() {
    // set up and play mock
  }

  @Test(dependsOnGroups = "mock")
  public void verify() {
    // verify mock
  }

At least this solution runs the verification in a @Test-annotated method, but because of the way dependencies are run, the order of invocation for the code above will be t1(), t2() and then verify(). This is obviously wrong, since we will only be verifying whatever mock was played last. You could consider having each test method create and store a different mock and then have verify() go through all these mocks and call verify() on them, but it’s clearly a subpar solution.

So let’s turn our attention to the correct solution, which involves the use of a method interceptor.

I already featured method interceptors in a previous entry that showed how to create a new annotation, @Priority, that allows you to order your methods. The idea is similar here, except that instead of just reordering the methods we want TestNG to run, we are going to change their number as well.

For this to work, we introduce two new annotations: @Verify which indicates that this method needs to be verified, and @Verifier, which annotates the method that performs the verification. Here is an example of how you use them:

  @Test
  @Verify
  public void t1() {
    // set up and play mock
  }

  @Test
  @Verify
  public void t2() {
    // set up and play mock
  }

  @Test
  @Verifier
  public void verify() {
    // verify mock
  }

Now we need to write a method interceptor that will go through all the @Test methods, locate the verifier and then return a list of methods where each method annotated with @Verify is followed by an invocation of the @Verifier method. The code is straightforward:

IMethodInterceptor mi = new IMethodInterceptor() {

  public List<IMethodInstance> intercept(List<IMethodInstance> methods,
      ITestContext context) {
    List<IMethodInstance> result = Lists.newArrayList();
    IMethodInstance verifier = null;

    // Doing a naive approach here: we run through the list of methods
    // twice, once to find the verifier and once more to actually create
    // the result. Obviously, this can be done with just one loop.
    for (IMethodInstance m : methods) {
      if (m.getMethod().getMethod().getAnnotation(Verifier.class) != null) {
        verifier = m;
      }
    }

    // Create the result with each @Verify method followed by a call
    // to the @Verifier method
    for (IMethodInstance m : methods) {
      if (m != verifier) {
        result.add(m);
      }

      if (m.getMethod().getMethod().getAnnotation(Verify.class) != null) {
        result.add(verifier);
      }
    }

    return result;
  }
};


Running it produces the following output:


With this simple interceptor (full source), you can now completely factor out the boilerplate logic of your mocks and focus on their business logic.

Happy mocking!


Amiga, MUI and code nostalgia

The MUI toolkit for the Amiga

A session of link hopping caused me to hunt down a program I wrote for the Amiga in 1994. I didn’t really think I would be able to locate it, but once again, I underestimated the power of the Internet.

Not only did I find it, but Aminet, the entire archive where I published it, is actually still online! The software is an accounting program called Banker, which featured a fairly complex user interface. I was using a GUI toolkit called MUI (Magic User Interface), which was extremely impressive for its time. It featured complex widgets with support for a lot of listeners, on-the-fly reloading of resources and skins, localization, etc.

The only problem with MUI is that it was a bit ahead of its time processor-wise, so user interfaces written with it tended to be a bit sluggish. But it was totally worth it.

Back to Banker: while browsing its entry on Aminet, I realized that the archive contained its source, so I suddenly became very eager to see the kind of code I was writing sixteen years ago. The archive is a .lha file, another format that was popular on the Amiga, for which I quickly found a Mac decompressor called DropUnLha.

I was bracing myself, expecting the worst, but… well, it’s actually not that bad. I uploaded the whole project for posterity, and here is one of the source files. Check out the cute ASCII art comment at the top of the file. Neat, huh?

My only regret is that I wasn’t able to dig up any screenshot of Banker, not even in this review of my program from a Dutch magazine, so I would need to run an Amiga emulator to really see what it looked like.

How about you, dear readers: what’s the oldest piece of code you’ve been able to dig up?

The IDE, reloaded

Here is a very interesting take on the concept of Integrated Development Environment.

As opposed to traditional IDEs, which work at the same level as the Java language itself (classes and packages), this IDE, called Code Bubbles, allows you to work at a much finer granularity: methods, fragments of code and whatever you need for the resolution of a specific task. All of these are linked together in a workspace, allowing you to stay focused only on what is relevant to your current task.

Of course, the concept is not new since it’s exactly what Mylyn is trying to achieve, but to be honest, every time I’ve tried to get into Mylyn (and I tried several times over the past years), I ended up giving up in frustration. This is not to say that Mylyn is a bad product, just that retrofitting such an idea on a traditional IDE, no matter how flexible, is probably impossible.

Still, I can’t shake the impression that it should be possible to mix both approaches, and considering the mindshare that Eclipse has, an intuitive and lightweight add-on that enables the kind of unit-of-work granularity that Code Bubbles offers could be very interesting.

And this thought led me to git, but I’ll need to make a digression first.

One of the strengths of git is its branching model: branches are so cheap that you find yourself branching all the time and then switching, merging and committing very often.

Another interesting aspect of source control systems (not limited to git) is that the diffs that you are creating capture the unit of work that is relevant to you. And a git branch is actually very similar to a Code Bubbles Workspace.

So how about an Eclipse perspective that would be based on git branches?

The perspective wouldn’t just show the diffs, information that is not in itself very interesting; it would be a bit smarter than that and be able to infer that if you modified a couple of lines in the method init(), then that whole method should become a bubble in the perspective. Intelligent linking between bubbles could also be provided by looking at the chronological order in which the methods were edited: git would only know that you added two lines in the method init() and that you then renamed a field in the class Foo, but the perspective would note that the two events are related since they followed each other, and it would reflect this by linking the bubbles.
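As a toy illustration of that inference step: git already records the enclosing declaration in each hunk header of a diff (the text after the second @@), so a first approximation of bubbles and their links can be scraped straight from a diff. This is a hypothetical sketch, not part of any plug-in; DiffBubbles and its sample input are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DiffBubbles {
  // git's default hunk headers carry the enclosing declaration after the
  // second "@@", e.g. "@@ -10,6 +10,8 @@ public void init()".
  private static final Pattern HUNK =
      Pattern.compile("^@@ [^@]+ @@ (.+)$", Pattern.MULTILINE);

  /** Extract one "bubble" per hunk: the declaration the change touches. */
  public static List<String> bubbles(String diff) {
    List<String> result = new ArrayList<>();
    Matcher m = HUNK.matcher(diff);
    while (m.find()) {
      result.add(m.group(1).trim());
    }
    return result;
  }

  public static void main(String[] args) {
    String diff =
        "@@ -10,6 +10,8 @@ public void init()\n"
      + "+    count = 0;\n"
      + "@@ -42,7 +44,7 @@ class Foo\n"
      + "-  private String nme;\n"
      + "+  private String name;\n";
    // Link consecutive bubbles, mirroring the chronological-editing idea.
    List<String> b = bubbles(diff);
    for (int i = 0; i + 1 < b.size(); i++) {
      System.out.println(b.get(i) + " -> " + b.get(i + 1));
    }
  }
}
```

Running main on the sample diff prints "public void init() -> class Foo", i.e. the two edits become linked bubbles.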


Gates, Jobs and the future of computing

This article describing the early days of Windows was a very interesting read, and even though I know this part of computer history pretty well, I did learn a couple of things that I’d like to share.

The first is that after Windows 2.0 came out, Apple sued Microsoft for copying the look and feel of its Macintosh:

In 1988, Apple decided to sue Microsoft over Windows 2.0’s “look and feel”, claiming it infringed on Apple’s visual copyrights. Having been a principal manager in charge during development of Windows 2.0, I was now caught up in the maelstrom and over the next year I got a thorough education on the US legal process as I briefed the Microsoft legal team, created exhibits for them, and was grilled by deposition by the other side. To me the allegation clearly had no merit as I had never intended to copy the Macintosh interface, was never given any directive to do that, and never directed my team to do that.

Interestingly, the suit ended up being dropped because:

Apple had previously granted a license to Microsoft to use any part of the interface included in its applications for the Mac.

I’m guessing a few heads must have rolled in the Apple legal department when they realized that they filed a suit that they had already signed themselves out of.

But the more interesting part comes in the next paragraph:

However, I can recall that within my first year at Microsoft, Gates had acquired a Xerox Star, and encouraged employees to try it out because he thought it exemplified the future of where the PC would be headed and this was long before Microsoft even saw a Mac or even a Lisa from Apple. Gates believed in WYSIWYG (What You See Is What You Get–i.e. fidelity between the screen and document output) and the value of a graphical user interface as far back as I can remember. And prototypes of Windows existed long before the first appearance of the Macintosh.

Intrigued about the timing, I did some digging and I found out that Gates bought that Xerox Star in 1981:

Among the developers of the Gypsy editor, Larry Tesler left Xerox to join Apple in 1980 and Charles Simonyi left to join Microsoft in 1981 (whereupon Bill Gates spent $100,000 on a Xerox Star and laser printer)

This was just a few months after Steve Jobs himself got his epiphany about graphical user interfaces and the mouse during a visit to Xerox PARC:

In the early 1980s, Jobs was among the first to see the commercial potential of the mouse-driven graphical user interface

This story about Steve Jobs is well known but the fact that just a few months later, Bill Gates himself envisioned the same future of computing is news to me.

Lua and World of Warcraft

Ultimate Craft Queue add-on

A few people asked me what I was doing with Lua, so here it is:

It’s a simple add-on called “Ultimate Craft Queue” that I wrote to help me craft glyphs. This will probably only mean something to WoW players, but in short, with some discipline, it’s possible to make a lot of money with this profession in WoW, but there are a lot of repeated operations involved in this process, so any automation you can come up with helps. This add-on is simply helping me streamline my process.

Back to Lua, I have to admit that the problems I described in my previous entry were pretty minor overall, and the hardest part of writing this add-on was the WoW Lua API itself, not the language. The API is extremely powerful, if the number of add-ons in existence is any indication, but it takes quite a bit of effort to navigate the scarce and sometimes nonexistent docs. The use of extra libraries (such as AceGUI) is pretty much mandatory, and there are quite a few holes (such as the tooltip API) that make a WoW add-on developer’s life absolutely miserable.

But once it works, it’s really rewarding…

Suffering in Lua

I have been writing a lot of Lua recently, and it hasn’t been a very pleasant experience.

The first thing that took a little while to get used to is the fact that table indices start at 1, not 0. Admittedly, it is possible to configure this, but the default is certainly confusing and I have been battling nil errors in tables that I knew couldn’t be empty.

But the worst was the following:

function f()
  print("f()")
end

f(2)

function f(n)
  print("f(n):" .. n)
end

This snippet will print "f()", indicating that when the interpreter read f(2) and couldn’t find a matching function (since it wasn’t defined yet), it decided to get rid of my parameter and call f() instead. And all this without any error or warning.

It doesn’t stop here:

function f(n)
  if n then print("f(n):" .. n)
  else print("f(nil)")
  end
end

f()

This will print… f(nil).

Here again, the interpreter couldn’t find a signature matching f(), so it decided to pick f(n) and simply pass nil as the parameter.

I am willing to suffer some discomfort when using a dynamically typed language, but this is really terrible, and I dread the fact that it’s inevitably going to cause more bug hunting in the future.

A couple more things of interest:

  • Writing in Lua has confirmed my conviction that using end as a
    closing statement is really painful. Not only is it noise when I read a source, but it also
    makes it impossible to match a closing end to its opening statement (you will also notice that
    there is not always such a thing: if requires a then, but the declaration
    of a function doesn’t require anything). If you’re going to create a language, do me
    a favor and pick symmetric opening/closing tokens (or significant spacing, like Python).

  • Lua does something clever with its comment syntax, which I haven’t seen in any other
    language: comment blocks are delimited by --[[ and --]], but if you add
    a third hyphen to the opening part, the commenting is disabled again:

      --[[
      print("This is commented out")
      --]]

      ---[[
      print("... but this is not")
      --]]

    I like the practicality of this idea.

What I find disappointing is that Lua is a more recent language than Python, so it’s hard to understand why it is so lackadaisical about signaling errors. Having said that, there is no question that Lua is alive and well and increasingly used as an embedded language, especially in video games, so it’s most likely here to stay…