A humble suggestion to protect your important information

When you create a new account on a web site, it’s pretty much the norm to be asked to enter an email address. Why aren’t there more (any?) web sites that ask me for an additional email address that will be used exclusively to send me password resets?

With such a simple extra step, my account becomes extremely hard to crack. Not just because my “safe email address” will, obviously, be protected by two-step authentication or some similar mechanism, but more importantly because hackers have no idea what that email address is: it’s not used to send any email, it only receives email sporadically, and the only way to discover it is to hack into the bank’s servers (I’m pretty sure that hackers capable of that will find far more interesting accounts to empty than mine).

Another benefit is that I don’t need to burden my regular email address with two-step authentication, which would be a hassle since I log into that account so often and from so many different places.

So… What am I missing? Why isn’t this practice more widespread?

No silver bullet: reloaded

I had to read the recent article “Functional Programming Basics”, by Robert Martin, several times to make sure that I wasn’t missing any deeper point, but every time, I just became more and more concerned by the sloppiness of the arguments presented.

Here are a few thoughts.

First, it’s almost certainly true that functional programming is the next big thing.

We’ll see about that. This has been predicted for several decades and it still hasn’t happened. Some say that it will never happen, others say it’s already happened and more moderate people just observe that quite a few core concepts of functional programming have been leaking into mainstream languages for a while, so in some way, functional programming is slowly percolating through the industry.

Functional programming is programming without assignment statements.

This is a pretty silly statement since functional programming languages have assignments. Maybe the emphasis should be more on referential transparency or perhaps “variables can never be assigned more than once” would be a more accurate description of one of the important tenets of functional programming.
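For the record, referential transparency means that an expression can always be replaced by its value without changing the behavior of the program, which is exactly what single assignment buys you. A minimal illustration of my own, in Java for familiarity:

public class RefTransparency {
  // Referentially transparent: square(2) can always be replaced by 4
  static int square(int x) { return x * x; }

  static int counter = 0;
  // Not referentially transparent: each call may return a different value
  static int next() { return ++counter; }

  public static void main(String[] args) {
    System.out.println(square(2) + square(2)); // always 8
    System.out.println(next() + next());       // 3 here, 7 if evaluated again
  }
}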

Let’s look at a functional program for the squares of integers. We’ll use the language Clojure for this, though the concepts we’re going to explore work the same in any functional language.

(take 25 (squares-of (integers)))

I find the use of Clojure (or more generally, Lisp) to illustrate his points a bit puzzling. While the question of whether functional programming will play a big part in the software industry’s future is still up for debate, it seems pretty clear to me that 1) object orientation and 2) static typing are two concepts that have clearly established themselves as powerhouse ideas, with an established track record and a bright future ahead of them. Scala or Kotlin would have been a better choice, especially since they are not much more verbose on such a trivial example:

// Kotlin
ints.map { it * it }.take(25)

Read that sentence again; because I did something important there. I took the three separate definitions of the functions that I gave you in the preceding paragraph and combined them into a single sentence. That’s called: (are you ready for the buzzword?) Referential Transparency. [cue: Fanfare, glitter, ticker tape, cheering crowds]

This is where I start being a bit confused by Robert’s point, because if anything, the code he just showed is illustrating a lot more composition than referential transparency.

If you are trying to sell functional programming as a big thing in order to prop up your consulting business, you should probably skip the trivial stuff and start with the hard problems right away, such as preserving referential transparency and immutability, and demonstrating the power of composition, while accessing a database, returning JSON data from a servlet and having another servlet modify that same database. Do this with a functional programming language and with snippets of code that show clear advantages over today’s Java and C# based solutions, and you have yourself a market.

Honestly, we programmers can barely get two Java threads to cooperate.

Not really. Hundreds of thousands of Java programmers write multithreaded code every day and they are doing just fine, because the intricacies of parallelism are abstracted away from them by the frameworks they rely on to get their job done. I’m not talking just about complex Java EE or Spring-based containers but even the simple Tomcat or Jetty servers. More advanced programmers dabble with the amazing java.util.concurrent classes, but even those never really have to think much about the hard problems that heavy parallelism involves, because domain gurus such as Brian Goetz and Doug Lea have already done all the hard work for them.
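To make this concrete, here is a minimal sketch of my own (names and all) of the level at which most Java programmers encounter parallelism: submit a task, get a Future back, and let java.util.concurrent worry about threads and cores.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Squares {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    // The pool decides which thread (and which core) runs the task
    Future<Integer> square = pool.submit(new Callable<Integer>() {
      public Integer call() {
        return 7 * 7;
      }
    });
    System.out.println(square.get()); // blocks until the result is ready: 49
    pool.shutdown();
  }
}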

I continue to be wholly unconvinced by the claim that “the new era of multi cores is going to wreak havoc on our code bases unless we adopt a functional programming language”. This claim has two sides, both equally bogus.

As I mentioned above, the fact that we are now running our programs on multiple cores instead of multiple CPU’s makes very little practical difference for anyone writing multithreaded code. What processor/core/green thread/lightweight process your thread will be riding should have zero impact on how you structure your code. If your code was multithread safe on “n CPU’s / 1 core”, it will be equally multithread safe on “1 CPU / n cores” or even “n CPU’s / m cores”. The only major difference is that it now becomes possible to test for actual parallelism (not just concurrency) on inexpensive developer boxes.

The second part of the claim, that only functional programming languages can save us from the impending multicore apocalypse, is equally dubious but I’ll save this for another post, although I’ll point out that traditional OO and imperative programming has shown an amazing power of adaptation and versatility for several decades now, so whatever the next big paradigm will be, it will have to bring significant improvements to the table.

More importantly, if you could peer into the computer’s memory and look at the memory locations used by my program, you’d find that those locations would be initialized as the program first used them, but then they would retain their values, unchanged, throughout the rest of the execution of the program. In other words, no new values would be assigned to those locations.

This is a bizarre claim that shows that Bob only has a passing familiarity with the internals of how a compiler or a VM works. Just because the observable effects of a program are consistent with immutable structures does not mean that the underlying memory is not being shifted around by the OS or by the abstraction layer standing between the source code and the CPU.

Moreover, the claim that immutable structures automatically lead to easy concurrency is not just naïve but plain incorrect: plenty of algorithms are inherently sequential (computing a hash chain, where each step depends on the output of the previous one, is a simple example), and no amount of immutability will make them parallelizable.

Overall, this is a fairly unconvincing article that seems to miss most of the well-recognized benefits of functional programming (composability remains the aspect that has the highest impact on my code on a daily basis) while touting advantages that range from poorly understood concepts to plain incorrect claims.

Migrating from Ember.js to AngularJS

I recently spent some time converting a medium-sized Javascript application from Ember.js to AngularJS. Here are a few thoughts about the migration.

First of all, why migrate at all?

The Ember.js code is based on a 0.9x version and as Ember.js is approaching 1.0, I was growing increasingly nervous at the thought of porting the application to 1.0 considering the number of breaking changes that the new version contains (nothing wrong with that, it’s the risk you take when you use a pre-release version). Since there would be major work involved, I thought that taking another look at Angular wouldn’t hurt.

Overall, Angular is a much more extensive framework than Ember. On top of offering a binding/MVC framework, Angular supports:

  • Modules. You pick and choose which functionalities you want and you can create modules of your own code as well.
  • Injection and testability. This should come as no surprise since one of Angular’s main developers, Misko, is a regular poster on Google’s testing blog.
  • Support for partials and inclusion of templates. This is another thing I really missed with Ember.js, which offers no easy way to break down your HTML templates into smaller, more manageable files. Angular allows you to do this either through routes and partials or straight inclusion of files.
  • Documentation. While the Ember.js documentation is fairly large, I found it very unorganized and I often resorted to searching the page to find what I needed. In contrast, Angular’s documentation follows a clean structure where each page contains a full example of the concept being explained (a trend I wish more documentation followed: with this kind of model, it doesn’t matter much whether your documentation is well written as long as the code samples work. Also, I really like the way they present the code in these samples, with individual panels for the HTML, Javascript and any other file you might need).

There are a lot of fundamental differences in philosophy between Ember.js and Angular, and one of the most important ones is how they handle templates.

Ember.js follows the more classical way of making you write files that look like HTML but are not really since they are littered with directives that drive the template engine:

    {{#each people}}
      <li>Hello, {{name}}!</li>
    {{/each}}

In contrast, Angular template files use proprietary HTML attributes which will be processed by its compiler to modify the DOM:

    <li ng-repeat="person in people">Hello, {{person.name}}!</li>

I have found that Angular’s approach makes it easier to debug your templates because the HTML that you are inspecting is exactly what it looks like after expansion. Reading Ember.js processed templates is made harder by the fact that they are littered with magic Ember identifiers (e.g. id="metamorph-34-start").

Data binding is performed very differently in the two frameworks. Ember.js makes you wrap all the objects that are part of your model so that it can monitor them. Despite offering great theoretical power, updating Ember objects is still fraught with annoyances:

  • You need to remember to call the setter: object.set('foo', newValue). Assigning this field directly will not fail but will cause some data binding to not work.
  • Cascading updates are executed right away. While this seems like a good idea in theory, the practice is very different since there are times when I want to do batch updates to my objects before telling my UI to update itself. I ended up adding boolean flags to manually prevent my listeners from triggering until my model was ready, which seems to defeat the purpose of data binding.

Angular’s data binding is much more straightforward (you just update the objects of your model directly) and “most of the time”, it works. There are times where you will have to tell Angular to perform its updates, but these are rare. See this very interesting explanation from Misko on StackOverflow.

Ember’s data binding is also magic in a few more ways. You can bind values together (for example, defining that a “full name” is made of “first name” and “last name” by declaring dependencies between these fields), and you declare these by adding “Binding” to the end of the variable name. This takes a little while to get used to. Bindings at the template level are pretty awkward too, especially for attributes:

  
  <!-- Ember -->
  <img {{bindAttr alt="altText"}} />

  <!-- Angular -->
  <img alt="{{altText}}" />

I had a few other items to cover in this comparison but I’ll stop here, except for one last point: IDEA’s Javascript support is light years ahead of Eclipse and a joy to work with.

Overall, I have to say that I am enjoying working with Angular a lot more than I ever did with Ember. Between the credentials of the company and the engineers driving Angular, the vibrant community (the mailing-list gets about fifty emails every day), the high quality documentation and support, and the modern concepts it supports (modularity, testability, injection), it’s clear to me that Angular has a tremendous amount of momentum and is bound to become the de facto standard for writing single page Javascript applications.

Coding challenge: a sliding window map

A lot of sites offer programmatic access to their content via API’s. The main advantage for the producer of this content is being able to finely control what they export and how they export it (and also avoid being scraped), while clients receive the data they need in a structured and documented way. These API’s are usually strictly monitored to make sure that clients can’t abuse them. For example, the producer will typically limit how often clients can call a certain API, how much data they can transfer every minute, etc…

Today’s coding challenge is based on these requirements.

We want to make multiple calls to a site that only allows each key to be used 500 times every ten minutes. You are given a collection of keys (strings), a max count (e.g. 500) and a time period (e.g. ten minutes), and your task is to implement getNextKey() as follows:

public class SlidingWindowMap {
  public SlidingWindowMap(Set<String> keys, int maxCount, long periodMs) {
    // ...
  }

  /**
   * @return a key that has been used less than `maxCount` times during the
   * past `periodMs` milliseconds or null if no such key exists.
   */
  public String getNextKey() { 
    // Your brilliant solution goes here
  }

}

I’m keeping the statement of the problem as open as possible to encourage exploration, but feel free to ask for clarifications if something is not clear. Please post your solution on a pastebin/gist like site (or wherever your code will be easy to read) and link to it in your comment. All languages are welcome.
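For what it’s worth, here is one deliberately naive sketch of an approach (a timestamp queue per key, linear scan, coarse locking); consider it a baseline to beat rather than a model answer:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class SlidingWindowMap {
  private final Map<String, Deque<Long>> usages = new HashMap<String, Deque<Long>>();
  private final int maxCount;
  private final long periodMs;

  public SlidingWindowMap(Set<String> keys, int maxCount, long periodMs) {
    this.maxCount = maxCount;
    this.periodMs = periodMs;
    for (String key : keys) {
      usages.put(key, new ArrayDeque<Long>());
    }
  }

  public synchronized String getNextKey() {
    long now = System.currentTimeMillis();
    for (Map.Entry<String, Deque<Long>> e : usages.entrySet()) {
      Deque<Long> timestamps = e.getValue();
      // Evict usages that have fallen out of the sliding window
      while (!timestamps.isEmpty() && now - timestamps.peekFirst() >= periodMs) {
        timestamps.pollFirst();
      }
      if (timestamps.size() < maxCount) {
        timestamps.addLast(now); // returning the key counts as using it
        return e.getKey();
      }
    }
    return null; // every key is exhausted for this window
  }
}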

Advanced dependency injection with Guice

The more I use dependency injection (DI) in my code, the more it alters the way I see both my design and implementation. Injection is so convenient and powerful that you end up wanting to make sure you use it as often as you can. And as it turns out, you can use it in many, many places.

Let’s briefly cover the most obvious scenarios where DI, and more specifically Guice, is a good fit: objects created either at class loading time or very early in your application. These two aspects are covered either by direct injection or by providers, which allow you to start building some of your object graph before you can inject more objects. I won’t go into much detail about these two use cases since they are explained in pretty much any Guice tutorial you can find on the net.

Once the injector has created your graph of objects, you are pretty much back to normal, instantiating your “runtime objects” (the objects you create during the lifetime of your application) the normal way, most likely with “new” or factories. However, you will quickly start noticing that while you need some runtime information to create these objects, other parts of them could be injected.

Let’s take the following example: we have a GeoService interface that provides various geolocation functions, such as telling you if two addresses are close to each other:

public interface GeoService {
  /**
   * @return true if the two addresses are within {@code miles}
   * miles of each other.
   */
  boolean isNear(Address address1, Address address2, int miles);
}

Then you have a Person class which uses this service and also needs a name and an address to be instantiated:

  public class Person {
    // Fields omitted

    public Person(String name, Address address, GeoService gs) {
      this.name = name;
      this.address = address;
      this.geoService = gs;
    }

    public boolean livesNear(Person otherPerson) {
      return geoService.isNear(address, otherPerson.getAddress(),
          2 /* miles */);
    }
  }

Something odd should jump out at you right away with this class: while name and address are part of the identity of a Person object, the presence of the GeoService instance in it feels wrong. The service is a singleton that is created on start up, so it’s a perfect candidate to be injected, but how can I achieve the creation of a Person object when some of its information is supplied by Guice and the rest by myself?

Guice gives you a very elegant and flexible way to implement this scenario with “assisted injection”.

The first step is to define a factory for our objects that represents exactly how we want to create them:

public interface PersonFactory {
  Person create(String name, Address address);
}

Since only name and address participate in the identity of our Person objects, these are the only parameters we need to construct our objects. The other parameters should be supplied by Guice so we modify our Person constructor to let Guice know:

  @Inject
  public Person(@Assisted String name, @Assisted Address address,
      GeoService geoService) {
    this.name = name;
    this.address = address;
    this.geoService = geoService;
  }

In this code, I have added an @Inject annotation on the constructor and an @Assisted annotation on each parameter that I will be providing. Guice will take care of injecting the rest.

Finally, we connect the factory to its objects when creating the module:

    Module module1 = new FactoryModuleBuilder()
        .build(PersonFactory.class);
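For completeness, this module then gets handed to the injector when you bootstrap, like any other module (a quick sketch; the variable names are mine):

    // Bootstrap: the factory binding becomes available to the whole graph
    Injector injector = Guice.createInjector(module1);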

The important part here is to realize that we will never instantiate PersonFactory: Guice will. From now on, all we need to do whenever we want to instantiate a Person object is to ask Guice to hand us a factory:

@Inject
private PersonFactory personFactory;

// ...

Person p = personFactory.create("Bob", new Address("1 Ocean st"));

If you want to find out more, take a look at the main documentation for assisted injection, which explains how to support overloaded constructors and also how to create different kinds of objects within the same factory.

Wrapping up

Let’s take a look at what we did. First, we started with a suspicious looking constructor:

public Person(String name, Address address, GeoService gs) {

This constructor is suspicious because it accepts parameters that do not participate in the identity of the object (you won’t use the GeoService parameter when calculating the hash code of a Person object). Instead, we replaced this constructor with a factory that only accepts identity fields:

public interface PersonFactory {
  Person create(String name, Address address);
}

and we let Guice’s assisted injection take care of creating a fully formed object for us.
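As an aside, “identity” can be made concrete. Here is a sketch of what Person’s hashCode might look like, with only the identity fields participating:

  @Override
  public int hashCode() {
    // Only the identity fields participate; geoService deliberately does not
    return java.util.Objects.hash(name, address);
  }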

This observation leads us to the Identity Constructor rule:

If a constructor accepts parameters that are not used to define the identity of the objects, consider injecting these parameters.

Once you start looking at your objects with this rule in mind, you will be surprised to find out how many of them can benefit from assisted injection.

A note on null pointers

It’s the second time in a few weeks that I have read something along the lines of:

In a year at foursquare, I’ve seen exactly one null pointer exception.

To me, this kind of statement is similar to

Ever since I switched to BASIC, I haven’t seen a single stack trace.

and it shows a fundamental misunderstanding of what a programming error is. Similarly, hearing Tony Hoare call the introduction of null his “billion-dollar mistake” makes me really question whether he understands the fundamental idea behind crashes.

Null pointer exceptions, stack traces, core dumps, guru meditations, etc… are usually all the manifestation of a simple phenomenon: unexpected values. Somehow, your code expected a value but received another one. This is a programming error, also known as a bug.

If you are using a language that supports monadic Option/Maybe types, by definition, you will not be seeing any null pointer exceptions, but this doesn’t mean that you have solved the “unexpected value” problem.

In languages such as Java or C, null pointers translate into crashes that are hard to miss and usually easy to diagnose. What would be the equivalent of this in Scala?

The answer is easy: receiving a None when you expected a Some. Because of the way you thread monads through your functions, such a bug will not trigger any exceptions nor any crash: your None value will happily make its way through monadic transforms and will simply result in a no-op every time it’s being mapped. Of course, your program will yield an unexpected value that you will take notice of at some point, and then you will have to go through the painful process of retracing all the Options that your transformations have gone through to find out which one returned None when it should have returned a Some.
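To see this in code, here is a sketch using java.util.Optional as a stand-in for Scala’s Option (the semantics of mapping over an empty value are the same): the None quietly survives every transformation.

import java.util.Optional;

public class NoneDemo {
  public static void main(String[] args) {
    // A None sneaks in where a Some was expected...
    Optional<Integer> age = Optional.empty();

    // ...and every subsequent map is silently a no-op: no exception, no crash
    Optional<String> label = age.map(a -> a + 1)
                                .map(a -> "age next year: " + a);

    System.out.println(label.isPresent()); // false, with no clue where the None came from
  }
}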

At this point, you should probably ask yourself: if this piece of code should never return a None, why return an Option at all? In such cases, you are better off unboxing your Option and returning its raw value. Of course, the downside of this is that you might have to lift your value again if the following computations happen in the Option monad as well.

As you can see, Options come with their own trade offs and hard design decisions as well. No such thing as a free lunch.

Which leads us to the next big misconception about this class: Option doesn’t solve null pointer exceptions nor unexpected values, it just saves you from having to test against null. And actually, doing so (e.g. pattern matching your option against Some/None) is usually considered bad practice, even though it’s sometimes necessary for Java interoperability reasons.

Personally, I favor the way Groovy, Fantom and Kotlin address the problem over having to lift all my values into Option. For example, here is how Kotlin lets you ignore null values encountered along a chain of invocations:

bob?.department?.head?.name

If any of these accesses returns null, the result of the whole expression will be null. No need to put your values into monadic boxes nor to map through them, just use the dot composition you are already familiar with from any C family language.

By all means, do use Option whenever you can if you are programming in Scala (and you will probably realize that you can’t use it as much as you would like because Java libraries, and some Scala ones, are just not implemented to take Option parameters), but don’t listen to people who tell you that because you are no longer seeing any null pointer exceptions, your code is safer. It’s not. You still have to fix your bugs.

Reinventing assertions

If you have ever written a test in Java, you are undoubtedly familiar with the Assert class:

  Assert.assertEquals(result, expected);

Java 1.4 introduced the assert keyword, but because assertions aren’t enabled by default, the world of testing has continued to use the Assert class.
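For reference, here is a minimal sketch of the keyword and the reason it never caught on for tests: the check is silently skipped unless you remember to enable it.

public class AssertDemo {
  public static void main(String[] args) {
    int x = 1;
    // Silently skipped unless the JVM is started with: java -ea AssertDemo
    assert x == 2 : "x should be 2";
    System.out.println("Reached the end without -ea");
  }
}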

I’ve always been bothered by the fact that the methods on this class are static, which shouldn’t come as a surprise since getting rid of statics in my test code base was one of the motivating factors I had for creating TestNG.

Over the past years, I have received an increasing number of requests to make assertions more flexible in order to support various scenarios:

  • Logging all the assertions as they are happening, even if they succeed.
  • Accumulating the results of multiple assertions inside a test method but not failing until the end of the method.
  • Performing a certain operation whenever an assertion fails (for example, taking a screen shot of the browser, as is often required in Selenium).

I decided that the time had come to revisit assertions in order to make them more flexible so they can accommodate all these real world scenarios.

The first step is to create a new assertion class that supports exactly the same methods as Assert but without these methods being static. Here is a test that uses this new assertion class:

import org.testng.annotations.Test;
import org.testng.asserts.Assertion;

@Test
public class A {
  private Assertion m_assert = new Assertion();

  public void test1() {
    m_assert.assertTrue(true, "test1()");
  }
}

On top of making all its assertion methods instance methods instead of static ones, the class Assertion defines several life cycle methods that subclasses can override. Here is the list as it stands today:

  • onBeforeAssert()
  • executeAssert()
  • onAssertSuccess()
  • onAssertFailure()
  • onAfterAssert()

I’m not going to go into details right now, these names should be self-explanatory. All these methods receive as a parameter an instance of IAssert, which captures the assert that’s about to be run:

public interface IAssert {
  public String getMessage();
  public void doAssert();
}
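Roughly speaking, each assertion method funnels into something like the following (a sketch of the intended flow, not necessarily the exact shipped implementation):

  // Inside Assertion: every assertXXX() method wraps its check in an
  // IAssert and runs it through the life cycle methods
  protected void doAssert(IAssert assertCommand) {
    onBeforeAssert(assertCommand);
    try {
      executeAssert(assertCommand);  // default behavior: assertCommand.doAssert()
      onAssertSuccess(assertCommand);
    } catch (AssertionError ex) {
      onAssertFailure(assertCommand);
      throw ex;
    } finally {
      onAfterAssert(assertCommand);
    }
  }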

For example, here is what a class that logs each assert would look like:

import java.util.List;

import com.google.common.collect.Lists; // Guava

public class LoggingAssert extends Assertion {

  private List<String> m_messages = Lists.newArrayList();

  @Override
  public void onBeforeAssert(IAssert a) {
    m_messages.add("Test:" + a.getMessage());
  }

  public List<String> getMessages() {
    return m_messages;
  }
}

Here’s how to use this class in your test:

@Test
public class A {
  private LoggingAssert m_assert = new LoggingAssert();

  public void test1() {
    m_assert.assertTrue(true, "test1()");
  }

  public void test2() {
    m_assert.assertTrue(true, "test2()");
  }

  @AfterClass
  public void ac() {
    System.out.println("Tests run in this class:" + m_assert.getMessages());
  }
}

Now let’s take a look at the implementation of a “soft assert” class: we want each assert to just run but not throw an exception if it fails. Instead, we just want to record it and once we’ve reached the end of the class (or the suite), assert the whole result.

Here is a simple implementation of such a class. The idea is simply to override executeAssert(): if an exception occurs, catch it, store it and carry on:

import java.util.Map;

import com.google.common.collect.Maps; // Guava

public class SoftAssert extends Assertion {
  private Map<AssertionError, IAssert> m_errors = Maps.newHashMap();

  @Override
  public void executeAssert(IAssert a) {
    try {
      a.doAssert();
    } catch(AssertionError ex) {
      m_errors.put(ex, a);
    }
  }

  public void assertAll() {
    if (! m_errors.isEmpty()) {
      StringBuilder sb = new StringBuilder("The following asserts failed:\n");
      boolean first = true;
      for (Map.Entry<AssertionError, IAssert> ae : m_errors.entrySet()) {
        if (first) {
          first = false;
        } else {
          sb.append(", ");
        }
        sb.append(ae.getValue().getMessage());
      }
      throw new AssertionError(sb.toString());
    }
  }
}

How to use it:

@Test
public class A {
  private SoftAssert m_assert = new SoftAssert();

  public void multiple() {
    m_assert.assertTrue(true, "Success 1");
    m_assert.assertTrue(true, "Success 2");
    m_assert.assertTrue(false, "Failure 1");
    m_assert.assertTrue(true, "Success 3");
    m_assert.assertTrue(false, "Failure 2");
    m_assert.assertAll();
  }
}

The result:

FAILED: multiple
java.lang.AssertionError: The following asserts failed:
Failure 2, Failure 1
	at org.testng.SoftAssert.assertAll(SoftAssert.java:32)

On top of the default new assertion classes that TestNG will ship with, I expect the following techniques to become fairly common:

  • Using different assertion instance fields to mix and match the types of assertions depending on which one you need.
  • Extending the assertion classes to add new methods or override specific ones.

Of course, the older (static) Assert class will continue to work in TestNG, but starting with the next TestNG version, consider using Assertion in your test code to gain some additional flexibility. And as always, feel free to send me your feedback, especially since this implementation hasn’t officially shipped yet.

Apple maps

I can see three consequences to Apple rolling their own maps in iOS 6:

  • iOS users get turn-by-turn navigation.
  • Google has to keep innovating in that area to stay ahead (not that they ever stopped, but the more competition, the better).
  • There are even fewer reasons to buy a Garmin or a TomTom GPS.

Dark Souls

I heard about Dark Souls when it came out because it was said to be incredibly difficult. I made a mental note of trying it out one day and then forgot about it. A few weeks ago, two good friends mentioned it to me again with high praises, so I decided to give it a shot. I dusted off my Playstation 3 (I’m pretty much exclusively a PC gamer), grabbed the game ($35 on Amazon, quite a bargain) and launched it with an open mind.

Nothing could have prepared me for what happened next.

Be prepared to die. And die again. And again. And again.

While the game starts with the standard tutorial on how to use the controller to move your character about and how to fight, I wasn’t exactly ready to face a boss five minutes into the tutorial and immediately die. Then try again, and die. And die, again and again. Every time I fight it, I chip away at its health a bit more, but I realize that my sword hilt is barely damaging it at all and that, at this pace, it would take me a very, very long time to kill it, assuming I manage to stay alive that long.

I don’t want to spoil the game so I’ll just say that there is a way out, obviously, and once you figure it out, you realize that the game wasn’t being “stupidly” hard, it was just being “hard but fair”. It was giving you a small taste of what lies ahead. And that looked not just deliciously refreshing but quite enticing as well. You will die a lot in Dark Souls, but most of the time, it will be your fault and you will learn from it. And the next time you try, you’ll get a bit further.

Having said that, limiting the description of this game to this simple observation doesn’t come even remotely close to doing it justice.

Much, much deeper than it looks.

First of all, Dark Souls has the best combat system I have ever seen in more than thirty years of gaming. It’s not even close. Not only do you need to learn to hit, but also to dodge, to parry, to roll, to position yourself correctly, to backstab and to riposte. You also need to learn your opponents. Each of them (trash and boss alike) has cues that tell you what the next attack is going to be, and each of these attacks can be mitigated or completely avoided if you know what to do.


A bonfire, your best friend in the game.

Next, you have the enemy population. I’ve been constantly impressed with the level of attention that the developers have put into making the game challenging but manageable. There is only one level of difficulty in Dark Souls (with a small caveat, see below), so you don’t get to choose how hard or how numerous the enemies are. But they are planted along your way in very, very calculated ways that will make you want to be extremely mindful when you progress through new areas. This is not the game where you rush into a pack of enemies and mash your controller buttons until you are surrounded by carcasses. As a matter of fact, dealing with more than one enemy at a time will very often result in a quick death.

The leveling is also quite innovative. As opposed to most RPG/MMORPG’s, you are in complete control of how your character progresses. When you kill enemies, you earn “souls” which you can then use either to level, buy things or upgrade your existing gear. There is no single best way to spend all these souls, and the game is so well tuned that you can put all your souls into equipment upgrades and complete it at level 1.

When you die, all your souls drop on the ground and you need to run back there to retrieve them. However, if you die on your way to retrieve them, these souls are lost. This is a very simple mechanism (halfway between World of Warcraft and Everquest, which was very punishing) but one that creates a very palpable and constant tension. You don’t care too much about dying when you’re walking around with a small number of souls, but as you progress, that number grows and very soon, you start wondering if pressing ahead is not too risky and whether you shouldn’t turn around, go back to a checkpoint (bonfire) and spend these souls, which you can’t bank. Obviously, the fact that in new areas the mobs are strategically positioned to kill you if you’re not careful makes for some very tense moments (“do not die, do not die!”). It’s not very often that playing a game raises my heart rate and makes my hands clammy, but Dark Souls definitely has this effect on me on a regular basis.

Who’s the boss?

The bosses are probably the least innovative aspect of Dark Souls. Don’t get me wrong, it’s nowhere near “Deus Ex Human Revolution bosses” bad: the Dark Souls bosses are still fairly challenging, to the point where initial encounters usually result in a one-shot death within thirty seconds, but so far, I haven’t seen incredibly innovative mechanics. The bosses are still a lot of fun to figure out and I strongly recommend not reading any strategy beforehand, because killing bosses after figuring them out all by yourself always makes for an amazing feeling of elation. I do think the value of Dark Souls lies more in the fact that the attention to combat detail applies to all the mobs and not just the bosses.


The Hydra, one of the many, many bosses you will encounter.

Here is another innovative aspect of Dark Souls: when you rest at a bonfire, all the enemies respawn (except bosses and a few special enemies). I was quite baffled when I discovered this, but I’m now realizing how important this aspect is to the Dark Souls atmosphere and philosophy. I won’t dig too deep into the details, but this aspect opens up the possibility of farming, which gives you an always available option to improve your character in case you think it’s not strong enough for the area you’re trying to explore.

Tools of the trade

Do you enjoy gearing up and tuning your weapons and armor? Dark Souls has plenty to offer in that area as well, with a mind-boggling number of items to find and a lot of varied ways to enchant them the way you want them. Like many things in this game, there is no single true path here, and it’s much more important to use and tune a weapon that fits your play style than to pick the one with the highest numbers.

Dark Souls is also very open in terms of character evolution. When the game starts, you get to choose the class that you want, but this class is not much more than a set of stats that are precalculated for you. When the time comes to level, you are free to put these souls into whatever stat you feel like, and if you started with a Knight but decided that you want to cast a decent number of spells or wield daggers and move around very fast, just allocate the points in the correct categories (Faith and Dexterity) and you’ll create the character that represents your style of play. This flexibility of character build is what prompts players to restart the game many times over and experiment with various builds. However, many choose to continue on with the same character, which is another intriguing aspect of Dark Souls.

Here is how it works.

Beyond the first play through

I haven’t completed the game yet, so I’m only repeating what I’ve read, but once you beat the final boss, you can reset the entire game and start from scratch. The new game (referred to as “NG+”: New Game Plus) features tougher enemies and bosses, and obviously, better rewards in gear and souls. Quite a few people seem to enjoy this aspect of the game as well since I’ve seen many say that they were on their third, fifth or tenth (NG+10!) play through.

Two is a crowd

One final noteworthy and innovative aspect of Dark Souls is multiplayer. First of all, you can play the entire game offline without any problems, but you will miss out on some very neat features of the game. If you choose to play while signed in, a few interesting multiplayer options open up.

On top of the traditional duels against human opponents, you can invade someone else’s game or be invaded. These invasions can be either co-operative or hostile.

For co-op play, you can leave a mark letting everyone know that you are willing to be summoned. If a player chooses to summon you, you will materialize in their world and, most likely, help them out with a boss that they’re having trouble with. Being summoned this way gives you a chance to observe that boss more carefully so you are more prepared to kill it on your own. Alternatively, you can ask for help, which can come either in the form of another player or from certain NPC’s, which are made available throughout the game for specific bosses.

Invasions can also be hostile, but you need to explicitly make yourself available to them in order for PvP to occur (like flagging yourself in WoW). In Dark Souls, you do that by “regaining” humanity. By default, you are not human (the game calls this state “hollow”). Regaining humanity will net you added rewards but it will also make you available for PvP, which means that other players can then invade you. When this happens, the current area becomes sealed and the PvP phase will only end when one of the two players dies (or the phantom withdraws). I’m not much of a PvP person myself, but I have to say that seeing the sudden notice “Player xxx has invaded!” on your screen is always a bit stressful, and you start getting paranoid trying to locate that person before they find you.


In red, a phantom that has invaded your world.

A couple more interesting aspects of the online play: 1) players can leave messages for everyone else to see (“Chest ahead”, “Be wary of left”, etc…) and 2) you can see where and how other online players died by touching their bloodstain. In theory, this is supposed to give you an idea of the kind of trap or trash pull that lies ahead, but in practice, I can’t say I’ve found much value in bloodstains.

To the top with you!

I have played more than one hundred hours so far. I’m probably about two-thirds of the way through the game, and I’m still loving every minute of it. In more than thirty years of video games, I can only think of two other games that I played for more than one hundred hours: World of Warcraft and Civilization. Dark Souls has definitely earned the title of “one of the best games I have ever played” with its mix of innovative features, amazing tuning, clever mechanics and engrossing world. Both WoW and Civilization have given me incredible amounts of satisfaction and fun, but Dark Souls goes beyond this by making me stressed, tense and incredibly elated when I finally overcome a difficult part.

It’s currently available on PlayStation 3 and Xbox 360, but a PC version is in the works and scheduled to ship this summer.

You’ve been implementing main() wrong all this time

Since the very early days of Java (and C-like languages overall), the canonical way to start your program has been something like this:

public class A {
  public static void main(String[] args) {
    new A().run(args);
  }

  public void run(String[] args) {
    // Your application starts here
  }
}

If you are still doing this, I’m here to tell you it’s time to stop.

Letting go of ‘new’

First, install Guice in your project:

        <dependency>
          <groupId>com.google.inject</groupId>
          <artifactId>guice</artifactId>
          <version>3.0</version>
        </dependency>

and then, modify your main method as follows:

public class A {
  public static void main(String[] args) {
    Guice.createInjector().getInstance(A.class).run(args);
  }
}

So, what does this buy you exactly?

You will find a lot of articles explaining the various benefits of Guice, such as being able to substitute different environments on the fly, but I’m going to use a different angle in this article.

Let’s start by assuming the existence of a Config class that contains various configuration parameters. I’ll just hardcode them for now and use fields to make the class smaller:

public class Config {
  String host = "com.example.com";
  int port = 1234;
}

This class is a singleton: it is instantiated somewhere in your main class and not used anywhere else at the moment. One day, you realize you need this instance in another class, which happens to be deep in your runtime hierarchy and which we will call Deep. For example, if you put a breakpoint in the method where you need this config object, your debugger would show you stack frames similar to this:

com.example.A.main()
com.example.B.f(int, String)
com.example.C.g(String)
com.example.Deep.h(Foo, int)

The easy and wrong way to solve this problem is to make the Config instance static on some class (probably A) and access it directly from Deep. I’m hoping I don’t need to explain why this is a bad idea: not only do you want to avoid using statics, but you also want to make sure that each object is exposed only to the objects that need it, and making the Config object static would make your instance visible to your entire code base. Not a good thing.

The second thought is to pass the object down the stack, so you modify all the signatures as follows:

com.example.A.main()
com.example.B.f(int, String, Config)
com.example.C.g(String, Config)
com.example.Deep.h(Foo, int, Config)

This is a bit better since you have severely restricted the exposure of the Config object, but note that you are still making it available to more methods than really need it: B#f and C#g have really nothing to do with this object, and a little sting of discomfort hits you when you start writing the Javadoc:

public class C {
  ...
  /**
   * @param config This method doesn't really use this parameter,
   * it just passes it down so Deep#h can use it.
   */
  public void g(String s, Config config) {

Unnecessary exposure is actually not the worst part of this approach; the real problem is that it changes all these signatures along the way, which is certainly undesirable in a private API and absolutely devastating in a public API. And of course, it’s absolutely not scalable: if you keep adding a parameter to your methods whenever you need access to a certain object, you will soon be dealing with methods that take ten parameters, most of which they just pass down the chain.

Here is how we solve this problem with dependency injection (performed by Guice in this example, but this is applicable to any library that implements JSR 330, obviously):

public class Deep {
  @Inject
  private Config config;

and we’re done. That’s it. You don’t need to modify the Config class in any way, nor do you need to make any change in any of the classes that separate Deep from your main class. With this, you have also minimized the exposure of the Config object to just the class that needs it.
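One caveat worth mentioning (my addition, not covered above): with nothing else specified, Guice’s just-in-time binding will create a new Config for each injection point. Since this class is meant to be a singleton, tell Guice so; one way is a module along these lines:

import com.google.inject.AbstractModule;
import com.google.inject.Singleton;

public class AppModule extends AbstractModule {
  @Override
  protected void configure() {
    // Every injected Config in the application now refers to the same instance
    bind(Config.class).in(Singleton.class);
  }
}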

Injecting right

There are various ways you can inject objects into your classes but I’ll just mention the two that, I think, are the most important. I showed “field injection” in the previous paragraph, but you might also prefer to use “constructor injection”:

public class Deep {
  private final Config config;

  @Inject
  public Deep(Config config) {
    this.config = config;
  }

This time, you are adding a parameter to the constructor of your Deep class (which shouldn’t worry you too much since you will never invoke it directly, Guice will) and you assign the parameter to the field in the constructor. The benefit is that you can declare your field final. The downside, obviously, is that this approach is much more verbose.

Personally, I see little point in final fields since I have hardly ever encountered a bug that was due to accidentally reassigning a field, so I tend to use field injection whenever I can.

Taking it to the next level

Obviously, the kind of configuration object I used as an example is not very realistic. Typically, a configuration will not hardcode values like I did and will, instead, read them from some external source. Similarly, you will want to inject objects that can’t necessarily be instantiated so early in the lifecycle of your application, such as servlet contexts, database connections, or implementations of your own interfaces.

This topic itself would probably cover several chapters of a book dedicated to dependency injection, so I’ll just summarize it: not all objects can be injected this way, and one benefit of using a dependency injection framework in your code is that it will force you to think about what life cycle category your objects belong to. Having said that, if you want to find out how Guice can inject objects that get created at a later time in your application life cycle, look up the Javadoc for the Provider class.
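As a teaser, here is a sketch of what that looks like (Connection and Foo stand in for your own types; the Provider itself can be injected early even though the object it produces comes later):

import com.google.inject.Inject;
import com.google.inject.Provider;

public class Deep {
  // The Provider is injected at startup; the actual object is created on demand
  @Inject
  private Provider<Connection> connectionProvider;

  public void h(Foo foo, int n) {
    Connection connection = connectionProvider.get();
    // ... use the connection ...
  }
}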

Wrapping up

I hope this quick introduction to dependency injection piqued your interest and that you will consider using it in your project since it has so much more to offer than what I described in this post. If you want to learn more, I suggest starting with the excellent Guice documentation.