Archive for August, 2006

Confessions of a reluctant switcher (part 4)

I’ve been using my MacBook Pro for a couple of months now, so what’s the
verdict?

Definitely mixed.

With help from the people who posted comments on my previous posts (here
and here), I was able to configure my Mac in ways I now feel very
comfortable with:

  • Double finger tap is equivalent to a right click.
     
  • I love scrolling with two fingers (more convenient than my Thinkpad’s
    pad side scrolling).
     
  • I had to activate keyboard shortcuts for all widgets.
     
  • It feels good to be back to zsh.

Having said that, there are a few things that still bother me and probably
won’t go away at this point:

  • Eclipse is harder to use.  I blame the Command/Control key insanity. 
    It sounded simple at first:  "Use Command whenever you used to use
    Control".  "Great", I thought, "because the Command key is easier to
    reach than Control".  Except that…  there are exceptions
    all the time (H and Q come to mind), and for these, you need to revert to Control. 
    Now I find myself constantly having to remember whether I should use
    Control or Command, and it’s slowly driving me insane.
     
  • Mac OS is still not keyboard-friendly, and I find myself having to reach
    for the mouse way too often.  I find this very insulting to disabled
    and power users alike.
     
  • Mac OS X feels less snappy than Windows, even on my two-year-old
    Thinkpad.  It’s hard to describe, but in general, switching windows is
    faster on XP, widgets are more responsive (they react when the mouse
    passes over them), and overall, I feel that I navigate faster between
    different tasks on Windows than on Mac OS.  Also, the Windows command
    prompt, while underpowered compared to UNIX shells, scrolls much faster
    than any Mac OS console I’ve tried.
     
  • The user interface is inconsistent and limited in silly and irritating
    ways (I still find myself wanting to resize my windows from anywhere, or
    wanting to resize certain dialogs, or moving columns around in table
    widgets).
     
  • Mac OS is not Java-friendly.  It doesn’t support any wireless
    toolkits, so no Java ME development is possible, and the recent decision to
    drop the Cocoa bindings sends the clear message that Apple doesn’t care
    about Java.  I want to work on a Java-friendly operating system.

But the ultimate test came a few weeks ago, when I was about to fly out to
the East Coast for the weekend and found myself wondering whether I should
take my Mac or my Windows laptop.  It didn’t take long to choose Windows,
for the following reasons:

  • It’s lighter.
     
  • It doesn’t burn my thighs if I keep it in my lap for more than half an
    hour.
     
  • It has better battery life (do I really need two CPUs if all I’m doing
    is watching a DVD on the plane?).

The bottom line?

Both operating systems are good, and you won’t be disappointed whichever
you pick, but Windows remains my operating system of choice, especially for
development.  For me, it is and remains the ultimate operating system for
hackers and tinkerers.

 

The danger of mock objects

Somebody recently asked how he could test a void method that is supposed to
close a connection.

I responded:

  • Do something with the connection, make sure it works
  • Call your method
  • Do something with the connection, make sure it throws an exception

Somebody else said:

Pass it a mock connection which has an expectation of a call on to the
close() method.

The problem with this answer is that if you upgrade your JDBC driver and
close() is broken in the new version, your test will keep passing while
your application fails.

Mock objects can give you a deceptive sense of confidence, and that’s why you
should avoid them unless there is really no alternative.
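To make the three steps above concrete, here is a minimal, runnable sketch
that uses a plain file stream as a stand-in for a JDBC connection (the
class and method names are made up for illustration):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CloseTest {
    // The method under test: it is supposed to close the resource it is given.
    static void releaseResources(FileOutputStream out) throws IOException {
        out.close();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("close-test", ".tmp");
        FileOutputStream out = new FileOutputStream(tmp.toFile());

        // 1. Do something with the connection, make sure it works.
        out.write(42);

        // 2. Call your method.
        releaseResources(out);

        // 3. Do something with the connection, make sure it throws.
        boolean threw = false;
        try {
            out.write(42);
        } catch (IOException expected) {
            threw = true;
        }
        if (!threw) throw new AssertionError("stream still usable after close()");
        System.out.println("close() really took effect");
        Files.delete(tmp);
    }
}
```

If close() ever stops working, step 3 fails, which is exactly the
regression a mock with an expectation on close() would miss.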

 

How many snakes on this plane?

Snakes are laid out on a plane with the following requirements:

  • Each snake has 3 parts: head, middle and tail.
  • Each snake is touched by exactly one head, one middle, and one tail.
  • Each snake can touch another snake at most once.
  • Each part of a snake must touch a different part of another snake (no
    head-head, middle-middle, tail-tail).
  • The snakes can’t overlap or cross as they are on a plane.

What is the minimum number of snakes that you need?

 

Powerless

The magnetized connector of my MacBook Pro is a clever device, and even fun
to play with.  It’s always a pleasant feeling when you bring the cable next
to the plug and suddenly feel the magnetic attraction snap it into place. 
It has also saved the day on more than one occasion, when the cable got
yanked out hard as I was moving my laptop without realizing it was wedged
somewhere.

Except that…

This kind of thing pretty much never happened with any of my previous
laptops.  I can’t even recall the last time I tripped on the cable or
yanked it hard by accident.

I pondered that for a while. 

How come all the laptops I had before the MacBook Pro used the "old style"
of connector, which doesn’t react well to a brutal lateral pull, yet it was
never a problem; and then I get a Mac that solves this problem I never had,
and I suddenly begin to have it a lot…

Another manifestation of the Heisenberg principle?

No, much simpler, actually.

The power connector on the Mac is on the side.  Which is a very dumb
idea.

Announcing TestNG 5.1: making testing easier, one thread at a time

TestNG 5.1 is now available for download from http://testng.org.  The
change log is included below and contains a lot of bug fixes, but also a
particular feature I’d like to expand on.

TestNG makes it very easy for you to run your tests in separate threads,
which provides a very significant speed-up in a lot of cases (one user
reported that switching to multithreaded tests reduced their test run times
from forty minutes down to… four minutes!).  However, when you do this, you
can encounter various problems if the classes you are testing are not
multithread-safe (which is very often the case).
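For reference, this parallel behavior is driven by the suite definition
file; a minimal testng.xml might look like the sketch below (the suite,
test, and class names are made up, and attribute values such as
parallel="methods" follow the TestNG documentation and may differ slightly
across versions):

```xml
<suite name="MySuite" parallel="methods" thread-count="4">
  <test name="MyTest">
    <classes>
      <class name="com.example.ATest"/>
    </classes>
  </test>
</suite>
```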

For example:

public class ATest {
  private A a;

  @BeforeClass
  public void init() {
    this.a = new A();
  }

  @Test
  public void testf1() {
    // test a.f1();
  }

  @Test
  public void testf2() {
    // test a.f2();
  }
}

In this example, the two test methods testf1() and testf2()
test the methods A#f1 and A#f2 respectively, but when you ask
TestNG to run these tests in parallel mode, the two methods will be invoked
from different threads, and if they don’t properly synchronize with each
other, you will most likely end up with corrupted state.
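To see what "not multithread-safe" means in practice, here is a small,
TestNG-free sketch where a plain unsynchronized counter stands in for class
A; two threads doing read-modify-write increments will usually lose
updates:

```java
public class RaceDemo {
    // Not thread-safe: counter++ is a read-modify-write sequence.
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable bump = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) {
                    counter++;
                }
            }
        };
        Thread t1 = new Thread(bump);
        Thread t2 = new Thread(bump);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // With proper synchronization this would always be 200000;
        // without it, lost updates usually leave it lower.
        System.out.println("counter = " + counter);
    }
}
```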

One way to solve this problem is to declare that the test methods depend on
each other, thus forcing TestNG to run them in the same thread, but that’s
obviously tedious and also not semantically correct (they don’t really depend on
each other).

Another possibility is to make A#f1 and A#f2 properly
synchronized, but again, it’s a lot of work just to enable testing, and while
I’m fine with making minor modifications to my classes to make them easier to
test (making certain methods more visible, adding accessors, etc…), I think
that making my classes multithread-safe crosses the line.

Therefore, TestNG 5.1 provides a new attribute in the @Test
annotation, which can be specified at the class level:

@Test(sequential = true)
public class ATest {
  …
}

When TestNG sees this flag, it will make sure that all the test methods in
the given class are always run sequentially (right now, it uses the simplest way
to achieve this goal:  run all the test methods in the same thread, but
there are other ways to do this).

An interesting consequence of this fix (which was trivial to make, since
support for sequential runs was already in TestNG because of dependencies)
is that the workers used by TestNG can now contain either:

  • A single test method.
  • A set of ordered methods (if dependsOnGroups or dependsOnMethods is
    being used).
  • A set of unordered methods (if @Test(sequential = true) was
    used).

Obviously, the last two workers are going to be holding on to threads longer
than the first one, but I still think that the runs will be faster than if the
parallel flag is not set.

Note that the discussion thread I linked to above (here it is again) is a
good example of how new features are added to TestNG:

  • A user posts a request for a feature.
  • I ask other people to comment on the issue, and we validate the need.
  • Once the feature has been deemed useful, a few emails are exchanged on
    the proper way to add it to TestNG.
  • The feature is implemented and tested, and I upload a beta to
    http://testng.org for immediate validation by the user.

Here is the complete change log for TestNG 5.1 (big thanks to Alexandru for
all the bug fixing!).

Enjoy!

Core

  • Added: @Test(sequential = true)
  • Fixed: TESTNG-102 (Incorrect ordering of @BeforeMethod calls when a
    dependency is specified)
  • Fixed: TESTNG-101 (HTML output contains nested <P> tags and a missing <tr>
    tag)
  • Added: support for specifying test-only classpath (http://forums.opensymphony.com/thread.jspa?mesageID=78048&tstart=0)
  • Fixed: TESTNG-93 (method selectors filtering @BeforeMethod)
  • Fixed: TESTNG-81 (Assert.assertFalse() displays wrong expected, actual
    value)
  • Fixed: TESTNG-59 (multiple method selectors usage results in no tests
    run)
  • Fixed: TESTNG-56 (invocation of @Before/AfterClass methods in
    parallel/sequential scenarios)
  • Fixed: TESTNG-40 (failures suite does not contain @Before/After
    Suite/Test methods)
  • Fixed: TESTNG-37 (allow passing null parameter value from testng.xml)
  • Fixed: TESTNG-7 (display classname when hovering method)

Eclipse plug-in

  • Added: run contextual test classes with parameters from suite definition
    files
  • Added: TESTNG-100 (Show HTML reports after running tests)
  • Added: TESTNG-97 (Double click top stack to raise comparison)
  • Added: TESTNG-84 (plug-in UI for suite option should support absolute
    path)
  • Added: TESTNG-20 (copy stack trace)
  • Fixed: TESTNG-72 (display groups with non-array values)
  • Fixed: TESTNG-64 (Eclipse plug-in applies added groups to all launch
    configurations)
  • Fixed: TESTNG-28 (Cannot select groups from dependent eclipse projects)
  • Fixed: TESTNG-25 (do not display fully qualified method name when
    running contextual test class)

Improved behavior

  • TESTNG-98 (temporary files have guaranteed fixed names)
  • TESTNG-95 (Assertion failed comparison trims trailing ">")
  • TESTNG-70 (TestNG prevents eclipse from opening an older CVS version of
    a java class)
  • Display of test hierarchy information (TESTNG-29)

The feature that almost was

It started as a simple TestNG idea I had while riding my bicycle on my way
to play tennis.

TestNG allows you to specify that a certain test method is expected to throw an
exception.  If the test does, then TestNG marks it as a success.  If
it fails to throw or throws a different exception than the one expected, then
TestNG will record the test as a failure.

The current syntax uses an attribute of the @Test annotation:

@Test(expectedExceptions = NumberFormatException.class)
public void shouldThrow()
{ … }

But now, I was wondering if I couldn’t replace this with:

@Test
public void shouldThrow() throws NumberFormatException
{ … }

The idea is that whenever TestNG encounters a test method that declares a
throws clause, it will expect that test method to throw that exception,
exactly as if the developer had specified expectedExceptions.
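The detection itself only requires standard reflection; here is a
hypothetical sketch (class and method names are made up) of how a framework
could read a method’s declared throws clause:

```java
import java.lang.reflect.Method;

public class ThrowsClauseDemo {
    public void shouldThrow() throws NumberFormatException {
    }

    public static void main(String[] args) throws Exception {
        Method m = ThrowsClauseDemo.class.getMethod("shouldThrow");
        // getExceptionTypes() returns the classes named in the throws clause.
        for (Class<?> exc : m.getExceptionTypes()) {
            System.out.println(exc.getName());  // prints java.lang.NumberFormatException
        }
    }
}
```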

I ran the idea by a couple of people who didn’t see anything wrong with it, so I went ahead and implemented it. The fix took two minutes and ten lines of code.

An idea that everybody likes and that takes ten lines of code… sounds like a winner, right?

Well, it was… until I ran the TestNG regression tests and noticed that a few of them failed. I investigated and found a test like this:

@Test
public void verifyFile() throws IOException {
  // test that uses File and can throw IOException
}

Unsurprisingly, the new TestNG expected this method to throw an
IOException, while the throws clause only says that this code could throw
one.  Since the test doesn’t really need to declare it, I decided I could
get rid of the throws clause as follows:

@Test
public void verifyFile() {
  try {
    // test that uses File and can throw IOException
  }
  catch(IOException ex) {
  }
}

It works, but… is it wise?

First of all, the regression tests failed. This is always a red flag: if they failed, it means you just introduced a change that could break existing code.

But more importantly, it looks like I now need to handle exceptions in all my future test methods myself. Come to think of it, it’s pretty nice to be able to ignore all the possible exceptions that your test code can throw and know that TestNG will notice this and automatically fail the test, regardless of what
went wrong.

When I write a test to verify that something works well, that’s exactly what I want to focus on: everything must go as planned. If the slightest thing goes wrong, such as an unexpected exception, I want the testing framework to take care of it.

In the face of this mounting evidence, I decided that the change was not worth it and reverted my code.

Moral of the story: tests that break are trying to tell you something. 
Listen to them.

Untested code is the dark matter of software

Recently, somebody posted an innocent-looking question on the JUnit mailing
list, basically saying that he finds unit testing hard, confessing that he
doesn’t always do it, and asking whether his situation is normal and
whether everybody else manages to do testing 100% of the time.

I have to say, even I underestimated the virulence of the responses that
followed.  I’ll skip the messages along the lines of “I test 100% of the
time, something is wrong with you” and focus on another response, from
Robert Martin, that crystallizes an extreme attitude that is detrimental to
Java software in general.  Here are a few relevant excerpts…

Code coverage for
these tests should be very close to 100% (i.e. high 90s). If you
don’t have this, then you don’t KNOW that your code actually works.
And shipping code that you aren’t as certain as possible about is
unprofessional.

That’s a bit extreme, but not entirely untrue.  What this statement fails
to acknowledge is that there are several levels of “unprofessionalism”.  I
can think of a few that are way more critical than “shipping code that is
not covered by tests”:

  • Missing the deadline.
  • Shipping code that doesn’t implement everything that was asked of you.
  • Not shipping.

I don’t know about you, but if I have to choose between shipping code that
is not covered (or even not automatically tested at all) and one of the
three options above, I will ship the code every time.  And I would consider
anyone not doing the same to be extremely unprofessional.

If you don’t have this [code coverage], then you don't KNOW that your code actually works.

There are plenty of ways to know that your code works. Testing it is one. Having thousands of customers over several years, consecutive successful releases and very few bug reports on your core functionality is another.

Claiming that only testing or code coverage will tell you for sure that your code works is preposterous.

The argument about "TIME" is laughable. It is like saying that we
don't have time to test, but we DO have time to debug. That's an
unprofessional attitude.

This seems to imply that there are only two kinds of code:

  • Code that is tested and works.
  • Code that is not tested and doesn't work.

There is actually something in the middle: it's called "Code that is not tested but that works".

This kind of code is very common, in my experience. Almost prevalent. And this is also why I am convinced that if other circumstances warrant it, it's okay to write the code, ship it and write the tests later.

It's simple common sense, really: when faced with tough decisions, use your judgment. Your boss is paying you to decide the best course of action with your brain, not to base the company's future on one-liners pulled from a book.