Archive for February, 2006

Announcing TestNG 4.6

I am happy to announce the release of TestNG 4.6 (the Eclipse and IDEA plug-ins
have been updated as well).  There are a lot of new features in this
release, here is a quick rundown:

  • Method thread pools.  TestNG already allowed you to run all your tests
    in parallel, and you can now use this feature on individual test methods
    as well:

    @Test(invocationCount = 10, threadPoolSize = 3, timeOut = 10000)
    public void f1() { … }

  • In this example, the method f1() will be invoked ten times from a pool
    made of three threads.  If any of these invocations fails to complete
    within ten seconds, TestNG will abort the test and mark it failed.  You
    can find more information on this feature in this article and in the
    documentation.

  • A new Reporter
    API
    lets you log messages that will be reproduced in the HTML reports,
    either on each individual method or as a combined output.
  • @Test now contains a description attribute that will also be included in the
    final reports:
    @Test(description = "Verify that the server is up")
    public void serverShouldBeRunning() { … }

  • The reports have been considerably improved and now:
    • Give a list of all the methods that didn’t run.
    • Show all the methods with different colors based on their class.
    • Use both relative and absolute timings, to make it easier to
      cross-reference your tests with your logs.
    • List the parameters passed to each test method, if any.


    You can see a full report sample here.

  • Writing your own reports has never been easier with the introduction of
    the IReporter interface.  With only one method to override, it doesn’t
    get any easier than this…
  • @DataProviders can now know which test method they are
    providing data for.  If you declare the signature of your @DataProvider
    with a java.lang.reflect.Method as first parameter, TestNG will
    invoke it with the @Test method that is about to be executed. 
    This makes it easier for you to provide slightly different data based on the
    current test method.  See the
    documentation for more details.
  • Numerous bug fixes in the Eclipse plug-in (see the CHANGES file) and an
    improved view of the results (see the picture at the beginning of this
    post).
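To make the Method-aware @DataProvider concrete, here is a minimal sketch; the class, method names and data are invented for the example, and only the Method-as-first-parameter convention comes from TestNG:

```java
import java.lang.reflect.Method;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class MethodAwareProvider {

  // TestNG passes the @Test method about to be invoked as the first parameter
  @DataProvider(name = "strings")
  public Object[][] createData(Method m) {
    if ("shortStrings".equals(m.getName())) {
      return new Object[][] { { "a" }, { "b" } };
    }
    return new Object[][] { { "a-longer-string" } };
  }

  @Test(dataProvider = "strings")
  public void shortStrings(String s) {
    assert s.length() == 1;
  }

  @Test(dataProvider = "strings")
  public void longStrings(String s) {
    assert s.length() > 1;
  }
}
```

The same provider feeds both tests, but tailors the rows it returns to whichever @Test method is about to run.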

Download TestNG at http://testng.org.


Touch screen done right

Interesting
demonstration of a multi-touch screen…

Statistical testing

How would you test a random() function?

Let’s assume that it can be initialized with a seed that you supply in order
to generate different sequences of random numbers, but that the same seed will
always generate the same sequence.

The first idea that comes to mind is to pick a constant seed, write down the
numbers returned, make them the "expected" value of your test, run the test with
that same seed and compare these values one by one.  It’s a start, but it’s
a far cry from testing the actual specification of your method, which is
approximately "returns a
(pseudo)-random number between 0.0f and 1.0f".

Enter statistical testing.

First of all, you need to define exactly what you mean by "random".  One
way would be to define it in terms of average:  "For a big enough sample of
numbers, the average will be 0.5f with an error of 0.01f".

Here is a quick implementation:

@Test
public void verifyAverage() {
  float sum = 0;
  int count = 10000;
  for (int i = 0; i < count; i++) {
    sum += random();
  }
  float average = sum / count;
  float tolerance = 0.01f;
  assertTrue(0.5 - tolerance <= average && average <= 0.5 + tolerance);
}

Of course, you should extend this test in many ways, such as testing on
bigger samples, using different seeds (I didn’t use any in this example) or
using a different metric.  For example, what if your algorithm is buggy and
returns pairs of 0.1, 0.9, 0.1, 0.9, etc…?  It will pass this test but the
distribution is obviously not correct.  To address this, you might want to
measure the standard deviation of the returned values.
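Here is a sketch of that second metric in plain Java, with java.util.Random standing in for the random() under test (the names and the sample size are arbitrary):

```java
import java.util.Random;

public class StandardDeviationCheck {

  // Standard deviation of `count` samples drawn from the generator
  static double standardDeviation(Random random, int count) {
    double[] values = new double[count];
    double sum = 0;
    for (int i = 0; i < count; i++) {
      values[i] = random.nextFloat();
      sum += values[i];
    }
    double mean = sum / count;
    double squaredDiffs = 0;
    for (double v : values) {
      squaredDiffs += (v - mean) * (v - mean);
    }
    return Math.sqrt(squaredDiffs / count);
  }

  public static void main(String[] args) {
    // A uniform distribution on [0, 1] has standard deviation 1/sqrt(12), about 0.2887;
    // a buggy generator alternating 0.1, 0.9, 0.1, 0.9… would come out near 0.4 and fail
    double sd = standardDeviation(new Random(0), 100000);
    double expected = 1 / Math.sqrt(12);
    if (Math.abs(sd - expected) >= 0.01) {
      throw new AssertionError("unexpected standard deviation: " + sd);
    }
    System.out.println("standard deviation within tolerance");
  }
}
```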

Here is another
potential bug:  what if the values "bunch up" around the average, say, they
are always between 0.4 and 0.6?  Again, both verifyAverage() and
verifyStandardDeviation() will probably pass, so you might want to introduce a
third test for the distribution, say "verifyEntropy()".
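One way to sketch that third check, again with java.util.Random as a stand-in: bucket the samples and verify that each slice of [0, 1) receives roughly its share.  A true verifyEntropy() would sum -p·log(p) over these buckets, but the histogram alone already catches the "bunching" bug:

```java
import java.util.Random;

public class DistributionCheck {

  // Histogram of `count` samples over `buckets` equal slices of [0, 1)
  static int[] histogram(Random random, int count, int buckets) {
    int[] h = new int[buckets];
    for (int i = 0; i < count; i++) {
      h[(int) (random.nextFloat() * buckets)]++;
    }
    return h;
  }

  public static void main(String[] args) {
    int count = 100000;
    int buckets = 10;
    int[] h = histogram(new Random(0), count, buckets);
    // A generator stuck between 0.4 and 0.6 would leave eight of these buckets empty
    for (int i = 0; i < buckets; i++) {
      double share = (double) h[i] / count;
      if (Math.abs(share - 1.0 / buckets) >= 0.01) {
        throw new AssertionError("bucket " + i + " holds " + share + " of the samples");
      }
    }
    System.out.println("all buckets within tolerance");
  }
}
```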

Statistical testing comes in handy in many other situations.  Here are
two more examples.

How would you verify that your Web application can simultaneously create a
thousand users?  Imagine that your Web site is tremendously popular and you
have people sign up in bursts.  All these pages are going to try to
insert/update rows in the database at roughly the same time, so how do you
make sure that your transactions are correctly isolated?

Again, statistical testing to the rescue.  Simulate all these users
accessing your database simultaneously and make sure your database contains the
right values at the end (this is slightly different from load-testing, which
only makes sure that the performance of your server remains acceptable, but you
do test these two approaches similarly:  by firing a lot of simultaneous
requests to your server).
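A minimal sketch of this kind of test, with a synchronized in-memory list standing in for the database (the thread and user counts are made up for the example):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SignupBurstCheck {

  // Stand-in for the real user store; in a real test, signUp() would hit the database
  private final List<String> users = Collections.synchronizedList(new ArrayList<String>());

  void signUp(String name) {
    users.add(name);
  }

  int userCount() {
    return users.size();
  }

  public static void main(String[] args) throws InterruptedException {
    final SignupBurstCheck check = new SignupBurstCheck();
    ExecutorService pool = Executors.newFixedThreadPool(50);
    final int expected = 1000;
    // Fire a burst of simultaneous signups
    for (int i = 0; i < expected; i++) {
      final int id = i;
      pool.submit(new Runnable() {
        public void run() {
          check.signUp("user-" + id);
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(30, TimeUnit.SECONDS);
    // The verification: after the burst, the store must hold exactly the right number of users
    if (check.userCount() != expected) {
      throw new AssertionError("lost " + (expected - check.userCount()) + " signups");
    }
    System.out.println("all " + expected + " users created");
  }
}
```

Swap the synchronized list for a plain ArrayList and this check starts failing intermittently, which is exactly the kind of bug statistical testing is meant to flush out.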

And finally, I reach the point of this post:

How can you test that
your code is thread-safe?

Of course, your first reaction should be to understand the code you are trying
to test, analyze the various values that can come in contention and make sure
these values are adequately protected (typically with synchronization).  But as
soon as your code becomes complicated enough and starts calling into more
methods (some of which you might not even have the source of), this approach
very quickly becomes impractical and your analysis remains at best theoretical.

If you multiply the number of "yield points" of your code (locations where
the JVM can preempt your thread) with the number of ways the JVM can preempt
you, you quickly realize that there is no way you can be 100% sure that you
covered all the scenarios.

Again, statistical testing can help increase your confidence in your testing.

The upcoming TestNG 4.6 contains a very powerful feature that makes this kind
of testing trivial:  individual method thread pools.

Consider the following code:

@Test(threadPoolSize = 10, invocationCount = 10000)
public void verifyMethodIsThreadSafe() {
  foo();
}

@Test(dependsOnMethods = "verifyMethodIsThreadSafe")
public void verify() {
  // make sure that nothing was broken
}

invocationCount has been in TestNG for quite a few releases but threadPoolSize
is new: it instructs TestNG to create a pool of ten threads that will then be
used to invoke the test method ten thousand times.  Thanks to its dependency,
the verify() method will be invoked once all the verifyMethodIsThreadSafe()
invocations have completed, and it will double-check that the data modified by
the concurrent code is what we expect.
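As a concrete (invented) instance of this pattern, imagine the shared state is a counter: verifyMethodIsThreadSafe() exercises it from ten threads and verify() checks that no increment was lost.  With a plain `int` and `i++` instead of AtomicInteger, verify() would fail intermittently under contention:

```java
import java.util.concurrent.atomic.AtomicInteger;

import org.testng.annotations.Test;

public class ThreadSafeCounterTest {

  private final AtomicInteger counter = new AtomicInteger();

  @Test(threadPoolSize = 10, invocationCount = 10000)
  public void verifyMethodIsThreadSafe() {
    // The operation whose thread safety we want to exercise
    counter.incrementAndGet();
  }

  @Test(dependsOnMethods = "verifyMethodIsThreadSafe")
  public void verify() {
    // Every one of the ten thousand invocations must have been counted
    if (counter.get() != 10000) {
      throw new AssertionError("lost " + (10000 - counter.get()) + " increments");
    }
  }
}
```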

Here is a quick illustration of this feature where the test method sleeps a
random interval before exiting.  We call this method six times with a pool
of three threads:

private void log(String s) {
  System.out.println("[" + Thread.currentThread().getId() + "] " + s);
}
  
@Test(threadPoolSize = 3, invocationCount = 6)
public void f1() {
  log("start");
  try {
    int sleepTime = new Random().nextInt(500);
    Thread.sleep(sleepTime);
  }
  catch (Exception e) {
    log("  *** INTERRUPTED");
  }
  log("end");
}

Here is a sample
output:

[10] start
[8] start
[9] start
[10] end
[10] start
[9] end
[9] start
[8] end
[8] start
[8] end
[10] end
[9] end
PASSED: f1
PASSED: f1
PASSED: f1
PASSED: f1
PASSED: f1
PASSED: f1

As you can see, the first three runs fill the thread pool, which then blocks
until one of the threads finishes.  Thread#10 finishes first and is
reallocated to another run of the method right away, and so on.  Finally,
all the threads end and TestNG reports that all six invocations have passed.

What if one of these methods is taking too long to respond?

You can use
another feature of TestNG to make sure that your tests won’t be locked up
forever:  timeOut (this attribute already existed in older
versions of TestNG and it’s simply being reused here).

Let’s make things a bit more interesting and specify a timeOut of 500 ms, but
this time, make the method sleep a random number of milliseconds between 0 and
1000.  This means that whenever the method sleeps for less than 500 ms, it
will pass, but if it takes longer to wake up, TestNG will interrupt it and
mark it as a failure.

Here is the code:

@Test(threadPoolSize = 3, invocationCount = 6, timeOut = 500)
public void f1() {
  log("start");
  try {
    int sleepTime = new Random().nextInt(1000);
    if (sleepTime > 500) log("   should fail");
    Thread.sleep(sleepTime);
  }
  catch (Exception e) {
    log("  *** INTERRUPTED");
  }
  log("end");
}

And the output:

[11] start
[12] start
[12] should fail
[13] start
[13] should fail
[11] end
[14] start
[14] should fail
[12] *** INTERRUPTED
[12] end
[13] *** INTERRUPTED
[13] end
[15] start
[16] start
[16] end
[14] *** INTERRUPTED
[14] end
[15] end

===============================================
Test Suite
Total tests run: 6, Failures: 3, Skips: 0
===============================================

In this run, three methods came up with a sleep time greater than 500 ms and
therefore announced that they should fail.  A few seconds later, these three
methods were interrupted by TestNG and marked as failures.

Individual method thread pools will appear in TestNG 4.6, which will be
released very soon (beta versions are available if you are interested).


Update: Thanks to JB and David for pointing out that the property you want
to test about the returned values is entropy and not a Gaussian distribution.
I updated this article accordingly.

Update 2: TestNG 4.6 beta can be downloaded here.

New Eclipse 3.2M5: yummy

I am noticing quite a few interesting new features in the brand new
Eclipse 3.2M5:

  • Infer generic type.
  • Refactoring scripts.
  • New refactoring:  introduce indirection.
  • Multiple problems of the same category can be fixed at once.
  • The Eclipse formatter can be run without launching the Eclipse UI.
  • Eclipse is gaining more and more FindBugs-like functionalities (null
    reference analysis, assignment of parameters, etc…).
  • Quite a few sexy SWT improvements.

The full New and Noteworthy has all the gory details.


Speaking in Oakland

I am speaking at EBIG
in Oakland
tomorrow night (Wednesday February 15th).  The subject will,
of course, be testing.  The show starts at 6:30pm.  Stop by and say
hi!  (and bring a résumé while you’re at it :-) ).

Python going extinct?

Guido recently posted
an interesting thought about his recent decision to
reject two proposals to add lambdas to Python.

It’s a hard problem, but the reason why this particular aspect is interesting
is the grounds for refusal:  not only does Guido think that all the
proposals so far have failed to be "pythonic" enough (especially because they
force the language to switch between two modes, one where spaces are significant
and one when they’re not), but he flat out claims that there is no solution to
this problem.

In short, there will never be any lambdas in Python.  And a bunch of
other potential features will most likely never make it into the language for
that reason.

Of course, Python is Guido’s language and ultimately, what he says goes. 
I certainly respect his efforts to keep his language in line with his
philosophy.

But it makes me sad that a feature that looks so elegant (space significant
indentation) might actually spell Python’s doom.  The problems that
developers solve
every day change constantly, and languages need to evolve as well if they want
to stay relevant, but unfortunately, Python seems to be headed toward an evolutionary dead end.

In 1998, Guy Steele published a seminal paper called
Growing a
Language
in which he spelled out a few rules for making a language as amenable to
change as possible.  Of course, we can’t expect Python to have been
designed with these rules in mind since it was created in 1990, but after 1998,
it would definitely have been wise to try and incorporate some of Guy’s ideas to
make sure that Python would be able to withstand the tides of changes.

Interestingly, Matz (the author of Ruby) saw this coming.  Three years
ago, I posted an article called
Flaws in Ruby in
which I pointed out a few aspects where I thought Ruby could improve.  One
of the improvements that I suggested was space-significant indentation. 
Matz commented on this entry and said:

"The meaningful indenting" is plain wrong for a language like Ruby,
where expressions and statements are interchangeable. See the lambda problem in
Python. Guido decided to remove it in the future. But I won’t give up lambda nor
value giving blocks.

It’s quite eerie to see Matz talk about the "Lambda problem in Python" back
in 2003.  He definitely saw that one coming, and as shown by Guido’s
article, the topic is as hot today as it has been on the Python mailing-list
these past years.

From this standpoint, Ruby seems to be better fitted to face the challenges
of the future.  First because it is not limited by the space indentation
problem (an improvement I was wrong to suggest, as it turns out), and second
because it seems to have a decent amount of potential to create Domain
Specific Languages (for example, Ruby on Rails lets you use keywords such as
:one_to_many which are not recognized by Ruby but whose syntactic placement
greatly simplifies programming).

I’ll conclude by quoting two
comments
that were made by readers of Guido’s article, which summarize my point well:

Having toyed
around with Ruby and blocks, I know how nicely they work, how cool they look,
how gracefully they fit in the language. But Python’s not ruby, something that
works for ruby doesn’t necessarily work for Python, and blocks don’t. Everyone’s
time would probably be much better invested in creating useful, new, original
features (== ripping Lisp or Smalltalk ;) )

I think it
would be sad to see Python as the last kid on the block without a cool bike.

As opposed to the last person, my point is not to say that blocks are cool or
uncool:  they are just the symptom of a deeper problem in Python’s design
and implementation that will most likely prove to be a big challenge for its
adaptability in the coming years, with Ruby and Groovy setting the bar
increasingly higher.

What do you think, is Python in danger of being marginalized because it can’t
evolve any more?


Surveying the IDE landscape

Like many others, I feel some sadness hearing that Borland is getting out of
the IDE business.

It doesn’t come as a big surprise:  even though JBuilder took an early
lead in the IDE space many years ago, it was never able to keep up with the
crazy innovation pace that Eclipse and IDEA imposed.

After Basic and Assembly Language (6502 all the way, baby), Pascal was the
first language I was exposed to in computer science classes, and sure enough,
our tool of choice was Turbo Pascal (and also a compiler called "pc" on our UNIX
systems).

Turbo Pascal was so amazingly fast that it baffled even our teachers. 
One key press (what was it… F4?) and hundreds of lines got crunched into tight
8086 code in seconds.  I also remember that the screen displayed how many
lines of code per second it compiled — a nice touch that added to the speed
racing feeling of the experience.  They cut a lot of corners to have such a
fast compiler (e.g. dying at the first error), but it worked beautifully and
their tool was instrumental in bootstrapping the software revolution as we know
it today.

A page is turned.

Fast forward to the present.

What does this mean to us today?

Well…  not much.  Borland’s decision to move to Eclipse further
validates the importance of the Eclipse platform, especially since even IDEA is
making shy moves in this direction as well, as illustrated by their recent
support for the Eclipse compiler (imagine a world where IDEA would give you
instant feedback on compilation errors…  mmmh).

Despite being the only commercial IDE left, IDEA is showing more resilience
than ever, and JetBrains certainly deserves heaps of credit for being able to
sell a product in the face of such high-quality free competition.  But I have
to say I’m not optimistic about their ability (or anyone’s ability) to
maintain a business in these conditions.

I am betting that in the coming year, IDEA will move toward the Eclipse
platform more aggressively and use it as the foundation of their new efforts.  This will have
the benefit of allowing the brilliant JetBrains engineers to stop worrying about
implementing the low-level layers of their platform, benefit from the fantastic
Eclipse plug-in API and finally, let them focus on what we, developers, really
care about:  a top-notch programming environment using the concepts, the look and
feel and the user experience that have made IDEA the roaring success it is today.


Not enjoying GMail's latest features? Here's how

Do you keep reading about all the cool features that keep being added to
GMail (multiple From addresses, RSS feeds, chat, etc…) but are not seeing
them on your account?

We deploy features to GMail progressively, so it’s possible that either your
country is not supported yet, or the server which hosts your account hasn’t
received the latest features.

Without any guarantees, here are two tips that will make sure you will
receive these new features as soon as they are available (and if you’re lucky,
they will start working right away):

  • If you are using an https connection to access GMail, remove the ‘s’ (i.e. the
    address should read "http://mail.google.com").  Eventually,
    all the new services will support https, but they typically don’t
    initially.


  • Make sure your language (Settings / General) is set to "English (US").

Now you can enjoy living on the bleeding edge of email.

Distributed TestNG

And I thought na

Announcing TestNG 4.5

I’m happy to announce the immediate availability of
TestNG 4.5.  It features a lot of bug fixes, a
few new minor features (runAfter didn’t make it in this release but
will appear in the next one).  And of course, a new look for the Eclipse plug-in
(the update site has been updated as well), a new look for the documentation and
a few added sections as well.

[Interestingly, this is my 350th entry on this weblog, not counting the
two years I spent on JRoller before that]

Here is the change log.

4.5

Core:

  • Fixed: Methods were not implicitly included, only groups
  • Fixed: Failed parent @Configuration didn’t skip child
    @Configuration/@Test invocations
  • Fixed: Bug with overriding @Configuration methods (both parent and
    child were run)
  • Fixed: Bug when overriding beforeClass methods in base class (cyclic
    graph)
  • Added: Support for JAAS (see org.testng.IHookable)
  • Fixed: Problem with nested classes inside <package name="foo.*">
  • Fixed: If a group is not found, mark the method as a skip instead of
    aborting
  • Fixed: testng-failed.xml was not respecting dependencies
  • Fixed: class/include method in testng.xml didn’t work on default package
  • Fixed: DTD only allowed one <define>
  • Fixed: ArrayIndexOutOfBoundsException for jMock
  • Added: dependsOnMethods can contain methods from another class
  • Fixed: JUnitConverter required -restore, not any more (option is now a
    no-op)
  • Fixed: JUnit mode wasn’t invoking setName() on test classes
  • Added: Regular expressions for classes in <package>
  • Added: Distributed TestNG
  • Fixed: Command line parameters and testng.xml are now cumulative
  • Fixed: Reports now work for multiple suites
  • Fixed: Was ignoring abstract classes even if they have non-abstract
    instances
  • Fixed: If setUp() failed, methods were not skipped
  • Fixed: Was not clearly indicating when beforeSuite fails
  • Added: @Configuration.inheritGroups
  • Fixed: inconsistency between testng.xml and objects regarding method
    selectors

Eclipse plug-in:

  • New look for the progress view.