
Software headaches

Every once in a while in your career, you encounter problems or concepts that blow your mind, either because they are complex or because they are just plain exotic. Here are a few I can remember from the past 30+ years that I have been in the software industry…

  • Assembly language. I encountered this for the first time on my Apple ][. After a year of learning some Applesoft Basic, I encountered a curious instruction that I had never seen before: CALL 768. When I ran it, it played some music. After weeks of research (this was in the early 80’s… no Internet, not even books), I managed to figure out that I needed to switch to the “monitor”, convert the address to hexadecimal ($300) and see the “listing” (basically opcodes). I was absolutely mystified. I couldn’t make sense of any of it, but I did manage to alter the pitch and tune by inserting values in random places. It took me years to finally get a grasp of what was going on. One thing was for certain: I was hooked.

  • fork(). I encountered this mysterious function in my early CS classes, and if you’re not familiar with UNIX, it basically lets you spawn a new process. The semantics of this function are absolutely baffling: I just can’t understand how anyone could come up with this and think it’s intuitive. But as the years go by, you just get used to it, just as you stop noticing that eyesore of a building on your way to work every morning.

  • Pointers. Aaah… pointers. Just when you think you start having a handle on this programming thing, a mean teacher throws you a curve ball and tries to explain to you how pointers work in Pascal. How inhuman. I suffered months of mental anguish trying to wrap my head around this concept, and then suddenly, it made sense (and I even managed to relate this to my earlier assembly language discoveries… now *that* was an epiphany).

  • Continuations. As opposed to the other three items described above, I can’t say that I understand continuations, even today. They just don’t make any sense to me, and even if they did, I just can’t see any practical use for them, except maybe to torture students and make sure you grade on a curve. Yuck.

These are the main painful experiences I can remember; I’m sure there are more.

How about you, readers: do you have any painful learning experiences to share in the area of computers?

How fast can you type?

The title says it all… Test your typing speed!

I recommend doing it three times in a row and taking the average speed. I seem to score between 120 and 130 words per minute.

How about you?

The language of interviews

When I interview someone, I usually let them use the language of their choice
among C, C++, C# and Java.  There are several reasons for that:

  • I want them to be comfortable.  It’s already hard enough to be in
    an interview without also being forced to use a syntax or an API they are
    not familiar with.  Of course, I don’t pay too much attention to syntactic
    details or to making sure they use the right method name, as long as the
    logic of what they write is sound.
  • It’s not that I have something against Ruby or other fourth-generation
    languages, but I have found that these languages work at a level of
    abstraction that is a little too high to give me meaningful insight into
    the candidate in a forty-five-minute interview.  There are plenty of very
    interesting problems that are hard to solve even in Ruby or Python (and
    actually, it’s quite likely this is the kind of problem that the
    candidate will be struggling with if she gets hired), but formulating the
    question and writing the solution would take several hours.

The real challenge is therefore to find a problem that is very easy to
express and whose solution in one of the languages mentioned above will give
me enough information on the candidate to formulate a verdict.

Interestingly, the choice that the candidate makes already reveals a few
things about their abilities.  I have found that, typically, C/C++ people tend
to be very comfortable with low-level algorithmic questions ("pointers and
recursion", to quote Joel) but fare very poorly as soon as we "move up the
stack" (object-oriented design, design patterns, enterprise frameworks,
etc…).  Conversely, Java/C# people are more comfortable with these concepts
but get easily stumped on "tight memory" types of questions.

Of course, great candidates excel at both, which brings me to my next point.

Good developers are born good.  Their brain is wired a certain way and
they can chew on any CS concept thrown in their direction and spit it out with a
bow tie.  Most of these developers then go to school and move from the "gem
in the rough" state to that of a "pure diamond".  School accelerates and
expands their knowledge.  Of course, there is hardly anything they learned
in school that they couldn’t have learned by themselves, but the formal process
of learning, reading books, and listening to teachers saves them years and years
of work.  It also expands their minds to concepts they would probably never
have encountered in their professional career.

With that in mind, I find Joel’s obsession with pointers and recursion quite puzzling.

There are two important facts to keep in mind about pointers and recursion:

  1. They are important concepts and any serious developer should probably be
    comfortable with them.
  2. You will hardly ever use any of these concepts for today’s typical
    programming jobs.

How’s that for a paradox?  How do you interview for this?

Well, it’s actually very easy to do a quick check on pointers and recursion,
even in Java, but it’s equally important to spend most of your interviewing
time on areas that are more relevant to the job the person will be asked to do.
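For instance, a minimal recursion check that works fine in Java might be to ask the candidate to reverse a singly linked list recursively (the problem choice here is mine, not Joel’s):

static class Node {
  int value;
  Node next;
  Node(int value, Node next) { this.value = value; this.next = next; }
}

// Classic check: reverse a singly linked list recursively.
static Node reverse(Node head) {
  // An empty or single-element list is its own reverse.
  if (head == null || head.next == null) {
    return head;
  }
  Node newHead = reverse(head.next);
  head.next.next = head; // hook the old successor back onto this node
  head.next = null;      // this node becomes the new tail
  return newHead;
}

A candidate who is comfortable with both pointers and recursion will typically produce something like this in a few minutes, and the exercise also touches on null handling and aliasing.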

One of my friends pointed out that what we are seeing today is a more
distinct separation between "system programmers" (kernel, device drivers, etc… 
which require C/C++ and pointer juggling) and "application programmers" (for
which pretty much any programming language will do, including Visual Basic). 
What’s really puzzling is that Joel’s company produces bug-tracking software,
and it’s hard to imagine why you would need an army of superstar programmers
for that.  A few select senior tech leads and designers?  Sure.  But
an entire team of them…  doubtful.

As for Joel’s reference to Paul Graham’s vastly over-hyped essay
"Beating the Averages", I am
still trying to decide which of the following two quotes is the most ridiculous:

  • His start-up had an edge over its competitors because of the
    implementation language they chose.
  • Because of this choice, they were able to implement features that their
    competitors couldn’t.

Actually, I’ll call that a tie:  both claims are equally preposterous.

Paul Graham has been a dinosaur for a long time and his disturbing elitist
stance ("if you don’t know Lisp, you’re an idiot") oozes from every paragraph of
every single programming essay he has ever authored.  So far, Joel has
managed to remain reasonably objective and interesting in his posts, but his
extremely narrow background (Microsoft technologies in C/C++ and bug-tracking
software) is beginning to take a toll on his objectivity, and I find that more
and more of his writing misses the big picture.  I hope he’ll turn
around soon and open up to modern programming topics, because frankly, I am
having as much fun using Ruby on Rails or Eclipse and EJB3 today as I had
writing Copper-list-based demos on my Amiga fifteen years ago or coding floppy
disk drivers in 6502 assembly on my beloved Apple ][ twenty years ago (gasp).



My podcast with the JavaPosse is now online.


Annotation design patterns (part 2)

In a previous entry,
I discussed an annotation design pattern called "Annotation Inheritance". 
Here is another annotation design pattern that I have found quite useful.

Class-Scoped Annotations

This design pattern is very interesting because it didn’t have any
equivalent in the pre-annotations Java world.

Imagine that you are creating a class that contains a lot of methods with a
similar annotation.  It could be @Test with
TestNG, @Remote if you are using
some kind of RMI tool, etc…

Adding these annotations to all your methods is not only tedious, it also
decreases the readability of your code, and it’s quite error prone (it’s
very easy to create a new method and forget to add the annotation).

The idea is therefore to declare this annotation at the class level:

@Test
public class DataBaseTest {
  public void verifyConnection() { … }
  public void insertOneRecord() { … }
}

In this example, the tool will first look at each individual method to see
whether it has an @Test annotation and, if it doesn’t, look up the same
annotation on the declaring class.  In the end, it will act as if @Test had
been found on both verifyConnection() and insertOneRecord().
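For illustration, here is a minimal sketch of how such a tool might perform this lookup with standard reflection (Test here stands in for whatever annotation the tool defines; it needs runtime retention and both TYPE and METHOD targets):

import java.lang.reflect.Method;

// Return the effective @Test annotation for a method: the method’s own
// annotation if it has one, otherwise the one declared on its class.
static Test effectiveTestAnnotation(Method method) {
  Test result = method.getAnnotation(Test.class);
  if (result == null) {
    result = method.getDeclaringClass().getAnnotation(Test.class);
  }
  return result;
}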

The question now is:  how will the tool determine which methods the
class annotation should apply to?

There are three strategies we can consider:

  1. Apply the annotations to all the methods (private, protected, public, etc…).
    Probably not the most intuitive way.
  2. Apply the annotations only to the public methods.
    This seems fairly intuitive to me; you just need to be careful which
    methods you declare public.
  3. Apply the annotations to a set of methods selected by some other means.
    An interesting approach, discussed further below.

Of course, we should also add another dimension to this matrix:  should
the methods under consideration be only those of the current class, or also
those inherited from superclasses?  To keep things simple, I’ll assume the
former for now, but the latter brings some interesting possibilities as well,
at the price of additional complexity.
Using visibility as a means to select the methods might be seen as a hack, a
way to hijack a Java feature for a purpose different than what it was designed
for.  Fair enough.  Then how could we tell the tool which methods the
class-level annotation should apply to?

An antiquated way of doing it is using syntactical means:  a regular
expression in the class-level annotation that identifies the names of the
methods it should apply to:

@Test(appliesToRegExp = "test.*")
public class DataBaseTest {
  public void testConnection() { … } // will receive the @Test annotation
  public void testInsert() { … } // ditto
  public void delete() { … } // but not this one
}

The reason I call this approach "antiquated" is that this is how we used
to do it in Java before JDK 5.  This approach has a few significant flaws:

  • It forces you to obey a naming convention.
  • It makes refactoring difficult (IDEs don’t know much about the
    meaning of the string "test.*").
  • It is not type safe (if the regular expression changes, you need to
    remember to rename your methods).

A cleaner, more modern way to do this is to use annotations:

@Test(appliesToMethodsTaggedWith = Tagged.class)
public class DataBaseTest {
  @Tagged
  public void verifyConnection() { … }

  @Tagged
  public void insertOneRecord() { … }
}

Of course, this solution is precisely what we wanted to avoid in the first
place:  having to annotate each method separately, so it’s not buying us
much (it’s actually more convoluted than the very first approach we started
with).
So it looks like we’re back to square one:  class-level annotations
applying to public methods seem to be the most useful and most intuitive way
to apply this pattern, and as a matter of fact, TestNG users have taken quite
a liking to it.

Can you think of a better way?

Testing asynchronous code

A user recently submitted a problem to the TestNG mailing list:  he needed to send asynchronous messages (this part hardly ever failed) and then wanted to use TestNG to make sure that the response to these messages was correctly received.

As I was considering adding asynchronous support to TestNG, it occurred to me that it was actually very easy to achieve:

private volatile boolean m_success = false; // written by the callback thread

@BeforeClass
public void sendMessage() {
  // send the message, specify the callback
}

private void callback() {
  // if we receive the correct result, set m_success to true
}

@Test(timeOut = 10000)
public void waitForAnswer() throws InterruptedException {
  while (! m_success) {
    Thread.sleep(100); // partially busy wait
  }
}
In this test, the message is sent as part of the initialization of the test with @BeforeClass, guaranteeing that this code will be executed before the test methods are invoked.

After this, TestNG will invoke the waitForAnswer() test method, which does a partially busy wait (this is just for clarity: messaging systems typically give you better ways to wait for the reception of a message).  The loop will exit as soon as the callback has received the right message, but in order not to block TestNG forever, we specify a time-out in the @Test annotation.
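As one example of such a better way, here is a sketch of the same test using JDK 5’s java.util.concurrent.CountDownLatch, which avoids polling entirely (sendMessage() and callback() are the same hypothetical code as above):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.testng.Assert;

private final CountDownLatch m_latch = new CountDownLatch(1);

@BeforeClass
public void sendMessage() {
  // send the message, specify the callback
}

private void callback() {
  // if we receive the correct result, release the waiting test
  m_latch.countDown();
}

@Test
public void waitForAnswer() throws InterruptedException {
  // block until the callback fires, failing after ten seconds
  Assert.assertTrue(m_latch.await(10, TimeUnit.SECONDS), "no answer within ten seconds");
}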

This code can be adapted to more sophisticated needs:

  • If the sending of the message can also fail and you want to test that too, you should turn sendMessage() into a @Test method as well, and in order to guarantee that it will be called before waitForAnswer(), simply have waitForAnswer() depend on sendMessage():
    @Test(groups = { "send" })
    public void sendMessage() {
      // send the message, specify the callback
    }

    @Test(timeOut = 10000, dependsOnGroups = { "send" })
    public void waitForAnswer() throws InterruptedException {
      while (! m_success) {
        Thread.sleep(100);
      }
    }

    The difference from the code above is that, since sendMessage() is now a @Test method, it will be included in the final report.

  • It is not uncommon for messaging systems to be unreliable (or more precisely, "as reliable as the underlying medium"), so your business logic should take into account the potential loss of packets.  To achieve this, you can use the "partial failure" feature of TestNG:
    @Test(timeOut = 10000, invocationCount = 1000, successPercentage = 98)
    public void waitForAnswer() throws InterruptedException {
      while (! m_success) {
        Thread.sleep(100);
      }
    }

    which instructs TestNG to invoke this method a thousand times, but to consider the overall test passed even if only 98% of them succeed (of course, in order for this test to work, you should invoke sendMessage() a thousand times as well).

“Partial failures” are a new feature of TestNG 2.1, which will be released very soon.

Getter-based injection

Alright, I’m just back from vacation (report will be posted soon) and I have
been following the reactions to my classification of getter injections with a
lot of interest.  Just as a reminder, here it is again:

  1. Force the dependency to be passed to the constructor (not always
    possible and very invasive).
  2. Have the container invoke a setter, or set a field on your class (mildly
    intrusive since it forces you to specify a setter or a field you will never
    use yourself).
  3. Have the container provide the getter (not intrusive at all, except that
    now, you need the container for your application, which makes
    out-of-container testing problematic).
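To make the first two options concrete, here is a quick sketch (MailSender and the service classes are made-up names):

interface MailSender {
  void send(String message);
}

// Option 1: constructor injection.  The dependency must be supplied up
// front, which is invasive: every caller and subclass has to provide it.
class ConstructorInjected {
  private final MailSender m_sender;

  public ConstructorInjected(MailSender sender) {
    m_sender = sender;
  }

  public void confirmOrder() {
    m_sender.send("order confirmed");
  }
}

// Option 2: setter injection.  The container calls a setter that the
// class itself never uses, which is mildly intrusive.
class SetterInjected {
  private MailSender m_sender;

  public void setMailSender(MailSender sender) {
    m_sender = sender;
  }

  public void confirmOrder() {
    m_sender.send("order confirmed");
  }
}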

It looked like we had reached a stalemate:  option 3 looks like the best
of both worlds except that it mandates the presence of the container, which is
bad news for testability and brings us back to square one.

And then, Bob Lee came up to me and said "Why does the getter need to be
abstract again?".  He proceeded to put together a quick proof of concept,
which has been discussed on TheServerSide since then.

It was one of those "doh!" moments.  Of course, the getter doesn’t need
to be abstract, but I guess my EJB bias made me overlook this fact.  Bob is
right, there is absolutely no need for this getter to be abstract, and now we
have an answer for all these Test-Driven Development weeni^H^H^H^H^H (sorry, had
a Hani moment) advocates:  not only does getter injection work well with
TDD, it makes TDD a first class citizen.

In the absence of a container, your class is testable right out of the box. 
All you need to do is provide a default implementation for your getter that will
work in your testing environment.  And if you run your code in a container,
the getter will be overridden with a different value for production or staging.
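Concretely, the idea might look like this (reusing the hypothetical MailSender from the sketch above): the getter ships with a default implementation that is good enough for tests.

class GetterInjected {
  // Not abstract: the default implementation works out of the box in tests.
  protected MailSender getMailSender() {
    return new MailSender() {
      public void send(String message) {
        // trivial test double: record or ignore the message
      }
    };
  }

  public void confirmOrder() {
    getMailSender().send("order confirmed");
  }
}

In a container, a generated (or hand-written) subclass simply overrides getMailSender() to return the production implementation.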

How is that for the best of both worlds?

Giving credit

More than one person has been wondering why the EJB3 Experts Group is not
giving credit where credit is due.  I have heard this quite a few times
since last week, although to be honest, the complaints were only coming from
people who thought they should be credited themselves, so they’re probably
not the most objective observers anyway…

First of all, I’d like to dispel the conspiracy idea (very popular these
days).  There is absolutely no conscious intent from the group or from its
members to avoid giving credit where credit is due.

Second, there is a difference between inventing a concept and popularizing
it, but there is no question that both deserve credit.

When the first suggestions of dependency injection surfaced on the Experts
Group, I didn’t immediately make the connection with existing IoC containers,
for two reasons:
  • These proposals were natural extensions of the EJB2 model (ejbCreate()
    is a pretty good example of inversion of control:  you are telling the
    container to invoke you whenever it wants to initialize your bean).
  • They were innovative in their own right (as far as I know, EJB3 is the
    first one to use annotations to enable dependency injection, an approach I
    like a lot and that I’ll blog about in the near future).

Innovation happens all the time in software and except for a few very rare
exceptions, it’s usually very hard to tell who invented what, but we certainly
know of a few names that are strongly associated with certain ideas (and Aslak is
certainly on my list in the IoC category).

So apologies if a few feelings were hurt, but there is not much we can do
about this and we certainly had no intention to offend anyone.


From classes to components

After Holub’s nonsensical article about getters and setters, it’s quite a
relief to read this interview with Anders Hejlsberg, especially since the C#
architect is basically saying the exact opposite of what Holub tried to say.

Anders emphasizes the fact that nowadays, developers need to think less in
terms of classes and more in terms of components (another debate that has
recently flared up in the Java community).

The way I see it, Components are a superset of Classes.  What exactly
differentiates components from classes is open for interpretation, but I like
Anders’ simplification:  while classes are about properties and methods
(PM), components are defined by PME:  properties, methods and events.

This observation makes both Properties and Events prime citizens of the
Component programming model, which explains why C# supports both of them
natively, while Java achieves this through interfaces (another language that has
native support for accessors but not events is Ruby).

I, for one, really wish that Java had native support at least for accessors,
so that we could finally drop the confusing "A read-write property is defined
if the Java class has two methods, getFoo() and setFoo().  In this case, the
name of the property is the name of these methods with "get" removed and the
first letter of the remaining name lowercased".  Yikes.
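For reference, here is the convention in question (Person and name are arbitrary names):

public class Person {
  private String name;

  // Under the JavaBeans convention, these two methods together define
  // a read-write property called "name".
  public String getName() { return name; }
  public void setName(String name) { this.name = name; }
}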

Another topic that Anders discusses in this interview is delegates.

Ever since Microsoft’s initial attempts to add delegates to its own Java
Virtual Machine, Java developers have had a very strong bias against this
concept.  While creating an incompatible JVM is indeed something that
should be fiercely condemned, it’s a shame that the concept of delegates was
thrown away with it, because it makes a lot of sense in a language such as Java.

Anders gives several reasons why delegates are a good idea, but to me, the
one that’s most important is that delegates allow you to keep the number of
classes and interfaces to a reasonable level.

Every Java developer who has written Swing applications (or any GUI, for that
matter) knows how quickly Action objects proliferate, making the whole
architecture hard to follow, not to mention the number of objects that get
created just so that a callback method can be invoked.

Delegates allow you to tie an Action to one single method.  No new
interface or new class is needed.
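To see the boilerplate delegates would eliminate, consider the usual Java idiom, where onSave() is a hypothetical callback: an entire anonymous class exists just to route the click to a method.

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;

public class SavePanel {
  public JButton createSaveButton() {
    JButton save = new JButton("Save");
    // A whole anonymous class just to forward the click to onSave();
    // a delegate would let us pass onSave() directly instead.
    save.addActionListener(new ActionListener() {
      public void actionPerformed(ActionEvent e) {
        onSave();
      }
    });
    return save;
  }

  private void onSave() {
    // handle the save action (hypothetical)
  }
}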

Delegates go one step further:  they don’t require type conformance, only
signature conformance.  This design choice reopens the age-old debate about
static versus dynamic binding and, more particularly, raises the following
question:  if two methods have the exact same signature but belong to two
different classes, are they semantically equivalent?

My experience is that in practice, it’s something that doesn’t really cause
problems (I usually make a similar observation about untyped Collections and the
fact that in practice, the downcast is rarely a source of ClassCastExceptions).

Anders makes some other excellent points, such as the performance gain of a
delegate versus a direct method invocation, or his interesting take on what he
calls "simplexity".  Read the interview for more details.