
How to "Go Home" on your Verizon Droid (and Android in general)

As you probably know by now, Verizon’s Droid has been officially announced and it will be available in the US on November 6th.

It’s running Android 2.0 (“Eclair”), which is by far the most advanced release we have worked on. And in case you didn’t follow, this is the third official release in about a year (there has actually been one more that was never made official).

One of the features that has received the most coverage so far is our turn-by-turn navigation application, which turns your Droid into a speaking GPS device. While most articles I have read do a good job of covering its basic features, some articles deplore the absence of a “Go Home” function.

Well, actually, this function already exists.

It’s very simple, really: all you need to do is create a Home shortcut. Here is how you do it:

  1. Long press on the Home screen, select Shortcuts and then Directions.
  2. Make sure that “Turn by turn navigation” is checked.
  3. Enter your home address (or any other), pick a name for your shortcut and press “Save”.

Your shortcut is ready: tap it to start navigating.

Happy navigation on your Droid!

Duct tape and the brittleness of agility

Duct tape, reloaded

In a recent article, Joel Spolsky discusses the concept of the “duct tape programmer”. According to Joel, a duct tape programmer is a developer who “gets things done”. They don’t spend too much time over-designing, discussing or writing tests: they just sit down at their desk and code until the feature is ready to be used by customers.

Joel uses Jamie Zawinski (jwz) as the perfect example of a duct tape programmer. For people who don’t know jwz, he made his name during the Lucid/XEmacs/Netscape era. He was known for never sleeping and for tirelessly coding. His contributions include vast portions of XEmacs (in C and Lisp) and countless features in Netscape. He retired from coding a few years back and bought a club in San Francisco, which he now uses to organize events. His web site still shows his undying geekdom, and if you’ve never read them, check out some of the pranks he came up with during his tenure at Netscape.

jwz is the perfect example of a duct tape programmer and the kind of developer that Joel would want on his team, as opposed to “software astronauts” that spend more time discussing problems than implementing solutions.

Clean Code is code that hasn’t been run enough times

Not surprisingly, Bob Martin didn’t like Joel’s article. Although he tries to be civil and compromising, Bob is pretty much at the other end of the spectrum when it comes to software development. In particular, Joel’s position that testing should come second since it doesn’t directly impact customer satisfaction rings a sour note with the entire Agile community.

The funniest part of Bob’s rebuttal article was:

Programmers who say they don’t have time to write tests are living in
the stone age. They might as well be saying that man wasn’t meant to
fly.

Well, man is indeed incapable of flying, which is why we need to use devices to achieve that goal.

Joking aside, Bob’s assertion that a software product that is not tested is necessarily buggy is pure fantasy. There are thousands of software products that we use regularly that are probably very poorly tested (or at least have very little automated testing), and yet, they work and they are fairly usable.

The crux of the disagreement can probably be found in the following statement from Bob:

I want you to ship, but I don’t want you to ship shit.

Nobody can disagree with this, but I bet you that Joel and Bob have very different ways of defining “shit” in the software world. For Joel Spolsky and Jamie Zawinski, “shit” is a product that is buggy or unusable. For Bob Martin, “shit” is software that wasn’t designed with TDD and that doesn’t have 100% test coverage.

In order to understand their standpoints, it’s important to keep in mind who these two people are and what they do. Joel is the founder of a fairly successful software company, and their flagship product, Fogbugz, a bug tracker, seems to be quite liked by its users. Bob Martin is employed by Object Mentor, a methodology consulting company.

They both have something to sell, although it seems to me that Joel probably doesn’t expect that this kind of article will help boost the sales of Fogbugz. On the other hand, it’s important for Object Mentor to make sure that Agile, XP and the other methodologies that their business is based on, keep being discussed and cited as positive technologies that help deliver products faster.

Is Agile on the way out?

Joel’s article is just the most recent example of a growing backlash that is slowly building against Agile and XP. Here is another testimony from Mike Brunt, someone who has had a terrible experience with the practice.

Even though it’s unlikely that Joel and Mike know each other or read each other’s articles before writing their own, some of Mike’s points are very close to what Joel is saying. For example:

Agile programming emphasizes coding.

This may sound like a good thing but it really is not, especially when you emphasize coding over feature #1 (shipping). Unit tests fall into that category. Unit tests are tools for the programmers. They are a convenience, one of the many ingredients that you use in the kitchen but that your customers really don’t care much about.

The extreme emphasis on developer comfort (unit tests, code coverage, TDD, etc…) over the satisfaction of your customers is something that has always deeply disturbed me in the Agile/XP movement. I have expanded on this topic in more detail in my book, so I won’t repeat everything here, but there is another point that I feel strongly about: if I have time to write either a unit test or a functional test, I will always pick the latter, because such a test exercises a feature of the product that my users will actually see, as opposed to a unit test, which is only there to make my life easier (i.e. to find bugs faster).

Agile is fragile

The comments on Mike’s article were not very friendly. It doesn’t take much to get Agilists riled up, and this post was no exception. However, most of these comments use the same old tired argument: “if you used Agile and it didn’t work for you, you were not doing Agile properly”.

This argument is very similar to the one you read whenever somebody dares to post that Linux is not working out for them: “Oh you’re using Fedora (1)? No wonder you’re having these problems, you should be using Ubuntu (2)”. Replace (1) and (2) with any of the Linux distributions and you basically have the template of the response you will see on any post that dares to criticize Linux.

This sort of answer is very similar to “Oh Agile didn’t work for you because you were doing Agile wrong”, and both these statements come from delusional people who just don’t understand that if their technology is so hard to use “right”, then it’s useless.

Brittleness in software

Linux and Agile/XP are both technologies that I call “brittle”. Brittle because you need to manipulate them very carefully or they will just explode in your hands. Brittle because you need to follow extremely precise guidelines to use them, and failure to do so dooms you to failure.

Finally, they are brittle because the amount of expertise necessary to use these technologies is simply too high.

However, Agile/XP is in much worse shape than Linux, which has quite a few success stories. Put in the right hands and used in the right conditions, Linux can do wonders, and hundreds of companies (including my employer) can attest to that. But Agile/XP doesn’t really have any track record of success to show despite many, many years of trying to become mainstream.

Ironically, the few times I have seen Agile practices succeed were when the teams using them were cherry-picking: they read all the points in the Agile manifesto, chose the practices that made sense to them and disregarded the rest. And it worked.

Agilists are not very agile

There are two paradoxes here:

  • Teams that “don’t do Agile” (i.e. they don’t follow the manifesto to the letter) can be successful.
  • The very same people who advocate Agile are actually far from being agile and open-minded. “Agile means this and no variation is allowed” never sounded very agile to me.

I know Bob Martin really believes in all these principles he advocates, but it’s really hard for me to forget the fact that he’s making a living out of them. If software methodologies become easy to apply and no longer require a five-day course to learn, his employer will go out of business.

On the other hand, while Joel Spolsky never misses an opportunity to mention Fogbugz (and I can’t really blame him, it’s what his company does), I don’t think he has much to gain from a commercial standpoint with this kind of article. I do think that he’s exaggerating his points a little bit in order to be provocative and generate indirect publicity and I’m pretty sure that Fogbugz is a lot more tested than he wants us to think.

But overall, I’m happy to see the pendulum beginning to swing the other way. Instead of advocating religious methodologies, I want to see thought leaders suggest common sense and judgment and show flexibility when it comes to recommending technologies. I think we will have reached this point when an Agile advocate comes to see me, takes a look at my team, chats with them a little bit and then tells me “I think stand-up meetings will be useful to your team but you should probably not use pair programming”.

Now, this is true agility.

ScummVM month



“Indiana Jones and the Fate of Atlantis” on Android

I installed ScummVM (Wikipedia) on my Android phone and put a couple of games on it: they work great. For people not familiar with it, ScummVM is a reimplementation of SCUMM, the virtual machine Ron Gilbert created at Lucasfilm Games (later LucasArts) to factor out, into a library, some of the tedium associated with creating adventure games. Of course, the concept of virtual machines was not new even back then (around 1987), but the idea of creating such a machine for adventure games was quite innovative at the time, and the VM was used in the following years to create about twenty different games.



ScummVM on Android

ScummVM has been ported to a dizzying array of platforms and operating systems, including, recently, Android, by Angus Lees. Once I had it running on my phone, I couldn’t resist installing it on my laptop as well. The good thing about the VM is that it doesn’t matter which platform a game originally shipped on: you can just copy the data files to your computer, point your native ScummVM at them and you’re in business.

I have a lot of fantastic memories associated with these games, back in the days when adventure games were popular, and in particular with Indiana Jones and the Fate of Atlantis, shown above on my Android phone.

My second favorite game, and one of the most difficult adventure games I have ever played, is “Zak McKracken and the Alien Mindbenders”. This game is hard, and some of the puzzles are downright in the “Are you kidding me?” category. This was an early SCUMM game and the interface didn’t really help either (for example, there was no support for hovering: you had to click on an object to find out if it was active), but it has an epic feel in the various continents it takes you to and the puzzles it puts you through, and of course, the ever-present humor that is a staple of most LucasArts adventure games.



“Zak McKracken and the Alien Mindbenders” on Mac

And of course, my number one favorite of all time is… The Secret of Monkey Island, which is coincidentally making video game news on two fronts: LucasArts just released a “remastered” edition, and a new series based on it is starting.

The remastered version is a complete rewrite of the original game. This new version is native (not a SCUMM game) and, interestingly, it lets you switch between the old and the new version on the fly by pressing F10, so you can see for yourself how the new game compares to the original one:



“The Secret of Monkey Island” (original version)



“The Secret of Monkey Island” (new edition)

If you have never played any of the Monkey Island games, I strongly recommend spending $10 and downloading this new edition on Steam (Windows only, unfortunately).

And for people who still enjoy the quirky humor and the overall relaxed, piratey atmosphere that permeates the entire series, LucasArts is releasing a brand new Monkey Island game in the form of five short stories. The first episode is available today on Steam under the name “The Launch of the Screaming Narwhal”:



“The Launch of the Screaming Narwhal” on Windows

I just started playing it but I have a feeling that the puzzles have taken a Myst-like tone that is going to be challenging but probably exhilarating to solve.

Happy ScummVM month!

Advanced parallel testing with TestNG and data providers

TestNG allows you to run your test methods in separate threads. You can configure the size of the thread pool and the time-out and TestNG takes care of the rest. For example, consider the following test class invoked with a thread pool size of 2:

import org.testng.annotations.Test;

@Test
public class A {
  // log() is not shown in the original post; this helper just prints the thread id and the call, matching the output below.
  private void log(String s) { System.out.println("Thread:" + Thread.currentThread().getId() + " " + s + "()"); }
  private void log(String s, Object arg) { System.out.println("Thread:" + Thread.currentThread().getId() + " " + s + "(" + arg + ")"); }

  public void g1() { log("g1"); }
  public void g2() { log("g2"); }
  public void g3() { log("g3"); }
  public void g4() { log("g4"); }
}

The output:

Thread:9 g4()
Thread:8 g2()
Thread:8 g3()
Thread:9 g1()

As you can see, TestNG created a pool of two threads and it dispatches the test methods to these threads as they become available. You can also configure the threading strategy (“each test method in its own thread”, “each class in its own thread”, etc…) and the time-out for each of these thread pools.
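
If you want to reproduce this run outside of your IDE, here is a minimal sketch of one way to configure it programmatically (the equivalent in testng.xml is the parallel and thread-count attributes on the suite; take the exact method names below as indicative, they have moved around a bit between versions):

import org.testng.TestNG;

public class RunParallelMethods {
  public static void main(String[] args) {
    // Sketch only: equivalent to parallel="methods" thread-count="2" in testng.xml.
    TestNG tng = new TestNG();
    tng.setTestClasses(new Class[] { A.class });
    tng.setParallel("methods"); // dispatch each test method to the thread pool
    tng.setThreadCount(2);      // size of the test method thread pool
    tng.run();
  }
}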

Another popular feature of TestNG is data providers. Let’s add two methods and two data providers to the test class above:

@DataProvider()
public Object[][] dp1() {
  return new Object[][] {
    new Object[] { 1 },
    new Object[] { 2 },
    new Object[] { 3 },
    new Object[] { 4 },
  };
}

@Test(dataProvider = "dp1")
public void f1(Integer n) {
  log("f1", n);
}

@DataProvider
public Object[][] dp2() {
  return new Object[][] {
    new Object[] { 11 },
    new Object[] { 12 },
    new Object[] { 13 },
    new Object[] { 14 },
  };
}

@Test(dataProvider = "dp2")
public void f2(Integer n) {
  log("f2", n);
}

f1() will be invoked with 1, 2, 3 and 4 while f2() will receive 11, 12, 13 and 14.

Here is the output (there are three kinds of test methods here: the four methods that don’t use any data provider, f1() and f2()):

Thread:9 g4()
Thread:8 g3()
Thread:9 g2()
Thread:8 f1(1)
Thread:9 f2(11)
Thread:9 f2(12)
Thread:9 f2(13)
Thread:8 f1(2)
Thread:9 f2(14)
Thread:8 f1(3)
Thread:9 g1()
Thread:8 f1(4)

Everything is still running on a thread pool of size 2, but you will also notice that the two methods using data providers (f1() and f2()) are invoked in sequence on the same thread. In other words, f1() is invoked on one thread and then it remains on that same thread until it has received all the values from its data provider (1, 2, 3 and then 4). Same thing for f2() and the values 11, 12, 13 and 14.

Extending multithreading to data providers has been one of the most requested features for TestNG, and I’m happy to announce that it’s now implemented and it will be part of the next release of TestNG.

In order to make a data provider run in a pool of threads, you use the new parallel attribute of the @DataProvider annotation:

@DataProvider(parallel = true)
public Object[][] dp2() {

Data Providers are run in their own thread pool, which is different from the thread pool used for test methods. Let’s run the example above again with a test thread pool size of 2 and a data provider thread pool of 3:

Thread:9 g4()
Thread:8 g3()
Thread:8 g2()
Thread:9 f1(1)
Thread:10 f2(11)
Thread:11 f2(12)
Thread:12 f2(13)
Thread:9 f1(2)
Thread:12 f2(14)
Thread:9 f1(3)
Thread:9 f1(4)
Thread:8 g1()

In this run, both the g methods and f1() are running on the test thread pool (remember that even though f1() is using a data provider, it’s not using parallel=true, so it’s using the test thread pool). The novelty here is that the four invocations of f2() are now happening on three different threads (10, 11 and 12). These three threads are part of the data provider thread pool, which was configured with a size of three.
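
In case you are wondering how that data provider pool size was specified, here is a minimal sketch of one way to do it programmatically (take the exact names as indicative, they may still change before 5.10 ships; in testng.xml, this corresponds to a data-provider-thread-count attribute on the suite):

import org.testng.TestNG;

public class RunParallelDataProviders {
  public static void main(String[] args) {
    // Sketch only: test methods on a pool of 2, parallel data providers on a pool of 3.
    TestNG tng = new TestNG();
    tng.setTestClasses(new Class[] { A.class });
    tng.setParallel("methods");
    tng.setThreadCount(2);              // test method thread pool
    tng.setDataProviderThreadCount(3);  // data provider thread pool
    tng.run();
  }
}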

Let’s now make the other data provider parallel as well:

@DataProvider(parallel = true)
public Object[][] dp1() {

The output:

Thread:9 g4()
Thread:8 g3()
Thread:8 g2()
Thread:10 f2(11)
Thread:11 f1(1)
Thread:12 f1(2)
Thread:10 f1(3)
Thread:11 f1(4)
Thread:12 f2(12)
Thread:11 f2(13)
Thread:10 f2(14)
Thread:9 g1()

This time, only the g() methods are using the test thread pool (threads 8 and 9) while the two methods using a data provider (f1() and f2()) are sharing the data provider thread pool (threads 10, 11 and 12).

With this new feature, TestNG makes it even easier to run your tests in parallel, and tests that are using data providers returning large sets of values are likely to see a significant decrease in running time.

Parallel data providers will be part of TestNG 5.10 but you can already download the beta and try it for yourself.

You can't ignore the types

The thread called “Getting Dynamic Productivity in a Static Language” on Artima has generated a lot of very interesting comments.

In particular, the following statement made me react:

So that was the history, and so I was asking him what it was that made him feel so productive in Smalltalk, and one of the things he said is he didn’t have to waste time thinking about types.

This is a fundamental mistake that a lot of dynamic enthusiasts (short for “dynamically typed language enthusiasts”) keep making over and over.

The fact is: you always have to think about types when you write code, period.

Dynamic enthusiasts are convinced that they can ignore this aspect of development altogether, but it always comes back to bite you, for example when:

  • you write tests for your code
  • you try to refactor it
  • somebody else needs to modify it
  • or simply when someone needs to use your code

Eventually, you look at an object and you have to figure out what methods or messages it responds to, and I find that this problem is much easier to solve when the answer is in the source code in a form that can be enforced by the compiler.

If you are interested in this topic, read the entire thread, it’s worth it.

Chrome gripes

As much as I want to like and to use Chrome, several problems are
still preventing me from switching, among which:

  • No plug-ins. This is probably the most glaring hole. The best way to
    ship a product with features that you know are missing is to give
    developers a chance to implement these features for you. Firefox
    has made plug-ins an integral part of the browser; it’s really
    a pity that Chrome didn’t follow in its footsteps and thought that
    v1.0 could ship without plug-ins.

  • No title bar. I don’t know what the team was thinking or if it was
    a deliberate omission, but the fact that the title bar doesn’t
    contain the title of the current HTML page makes it very hard for me
    to navigate through the multitude of browser windows that I usually
    work with. Having to go to the task bar to know which window is
    which is very lame.

  • Menu bar in a weird location. What’s up with all
    these developers who keep thinking that reinventing user interfaces
    is cool? It’s not. Putting the menu bar in the middle right area
    just to save 16 vertical pixels is dumb. Respect users’ UI muscle
    memory and put the menus and their menu items in the expected location.

  • No keyword search. In the absence of plug-ins, I could live with just
    being able to specify keyword searches. For people not familiar
    with this concept, which came from Firefox, it lets you define
    variable URL’s that you can type directly in the address bar with
    different values. For example, I have a Wikipedia keyword that I
    assigned to “w”, which lets me type “w saturn” in the address bar
    and immediately see the Wikipedia entry on the sixth planet of our
    solar system. I have defined a multitude of these keywords
    (e.g. searching for a colleague in our employee database, looking
    up a stock symbol, etc…) but Chrome is making me less productive
    by forcing me to do extra typing and clicking.

  • No “Reload all tabs”. Ok, maybe that’s just me, but I use this all
    the time in Firefox. In Chrome, I find myself having to go to each
    of my tabs and press “Reload” on every one of them. Again, the
    Chrome team didn’t have to incorporate this feature in 1.0, but a
    plug-in API would have made this a non-issue.

As it stands right now, the fact that Chrome is the fastest Web
browser on the planet is not enough to make me use it on a regular
basis as long as these features remain absent…

Disappointed in Alienware

A few weeks ago, I decided to upgrade my two-year-old PC. It was
still serving me well and it was running World of Warcraft fine, but
I thought that two years was long enough in hardware time to warrant
an upgrade. So far, I had always bought my PC’s from Dell, but this
time, I decided to go more hardcore and order one from Alienware.

My first disappointment was that it took much longer for them to ship
it than my initial reading of their web site led me to believe.
I thought I would get my new computer in about five days, but after
that time, my payment hadn’t even been processed yet. After that, it
took a few more days to assemble it, then run the tests and finally
ship it. Overall, two weeks separated the time when I confirmed my
order online and the time when it was delivered to my home.

Unfortunately, my new PC never worked properly: after one or two hours, it would completely lock up and I had to cold
reboot it. And because of that, Vista didn’t get a chance to dump any
kind of meaningful information, so viewing the Event Log produced
absolutely no clue about what was happening. This was absolutely
maddening.

I contacted Alienware’s Technical support and a long email dance
started. They started by offering me generic advice and then became
more and more technical as it turned out that none of their solutions
worked. In the latest stages, they were asking me to update my BIOS
(which voids your warranty) and other crazy things such as changing the
clock timings of my memory. The more I interacted with the technicians,
the more obvious it was to me that they had absolutely no idea what
they were doing and that all these recommendations were actually
complete shots in the dark.

Of course, none of them worked and my system continued to crash every
two hours or so. At the height of my frustration and running out of
options, I decided to start the cancellation process, something I wasn’t
exactly looking forward to since it appeared that not only would I
have to pay the return shipping costs (they had already charged me
$170 for shipping on the way in) but they also charge a 15% restocking
fee (something I would have fought tooth and nail, since I really
doubt restocking fees apply to defective units).

I also started doing more research on my side and just as I was about
to finalize the return process, I finally found the miracle fix.

All I needed to do was to go to Vista’s Power Options and check the
“High Performance” option.

Once I did this, my computer stopped crashing.

Overall, I find it absolutely unacceptable that a company whose core
business is gaming and high-end PC’s was unable to diagnose such a
simple problem, but what is even worse is that they didn’t even catch
it in the hundreds of tests they claim to subject their machines to
before shipping them to customers.

I’m going to keep this machine since it now works correctly, but this
is definitely the first and the last time that I ever buy something
from Alienware.

Coding challenge wrap-up


Click on the image to see Eric’s full comic strip

I certainly didn’t expect so many reactions to the coding challenge… More than 130 comments so far, wow.

First of all, I owe an apology to all commenters for my annoying comment system (which prohibits posting URL’s that start with “http”). I’m very sorry about that but I receive so much spam that it’s a necessity. Some people braved the odds and posted their solution anyway, others used creative ways to submit their code, such as using Google Documents or Paste Bin (which has the benefit of syntax highlighting).

Thank you all for putting up with this and participating in this fun contest anyway.

I’ve learned a lot from all these solutions and discussions, which featured the following languages: Java, C, Perl, Erlang, Javascript, C#, Groovy, Haskell, AS3, Fan, Lua, J, OCaml, Factor, Forth, Lisp, Ursala, Prolog.

Here is a quick wrap-up.

The solutions basically fall into three categories:

  1. Concise.
  2. Fast.
  3. None of the above.

Overall, languages that support closures do well on the conciseness aspect, with solutions that can fit in 1-5 lines, among which:

Ruby

(98..103).select { |x| x.to_s.size == x.to_s.split(//).uniq.size }

Python

(i for i in xrange(start, end) if len(str(i)) == len(set(str(i))))

Scala

for (i <- 1 to 100000; s = i.toString; if HashSet[Char](s:_*).size == s.size) println(i)

J

f =: [:(#;[:>./2-~/\])(#~([:*/[:~:":)"0)

For a minute, I thought that last entry was a joke or that maybe the poster got disconnected in the middle of posting his solution. But no, J is a real programming language.

Ursala

#import nat
func = ^(nleq$^+ difference*typ,length)+ ~&triK2tkZ2FlS+
iota+successor; * ^lrhPX/~& %nP

Quite an intimidating syntax here too :-)

The problem with the concise solutions is that they eliminate duplicate digits by converting the integer into a string, which results in prohibitive running times (the Ruby code takes 27 hours to complete with max = 10^10, which is the baseline I'll be using from now on).

Let's turn our attention briefly to solutions that are neither concise nor fast...

I was disappointed to see the Erlang code, to be honest, because the (only) solution that was posted is a bit frightening. I would love to see more attempts in Erlang that are either concise or fast. This problem seems to be a good candidate for sharding, since all the numbers that satisfy the requirements can be found in complete isolation from the others, so this is a good opportunity for Erlang to show what it's good at.

Also, somebody posted a Prolog solution which is shorter than Erlang's, so I don't think the declarative aspect of Erlang is the reason for the length of this solution.

Can somebody post an Erlang solution that is either concise or fast?

Similarly, Perl and Javascript didn't particularly shine in the contributions that I saw in the comments. People also posted solutions in Forth (a bit hard to read, but my Forth is rusty) and even Factor (which is reasonably concise but also seems to use a lot of libraries).

Crazy Bob was the first one to post a solution that is reasonably concise and also scarily fast. Not surprisingly, it's not brute force: it only uses primitive integers, it uses a bitmask to keep track of which digits have already been used and, to top it all off, it's recursive (it's not very often that you see recursive code that is faster than everything else, although admittedly, the recursion doesn't go very deep).

Bob's first attempt was able to calculate all numbers from 1 to 10^10 in half a second, which blew away everything that had been posted so far. Interestingly, the C version of his Java code ran at about the same speed as Java.

Quite a few people observed that my problem was similar to generating permutations, with the little twist that '0' is not allowed to appear in first position. With that in mind, I thought I would see people grab the standard implementations for permutations that you can find on the web, adapt them to the constraints of my problem and then post them here. Interestingly, the opposite happened: Bob's solution is not only the fastest, but it can probably form the basis for a canonical solution to solve permutations quickly, especially since it's only limited by the number of characters that you can represent in a bitmask (64 if you want the bitmask to remain a primitive, unbounded if you represent it with a more complex structure).
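
To make that discussion concrete, here is a minimal sketch in Java of the bitmask + recursion idea (my own reconstruction of the technique described above, not Bob's actual code): a recursive method extends a prefix one digit at a time, a bitmask records the digits already taken, and a leading zero is simply never chosen, so no invalid number is ever built.

public class DistinctDigits {
  private static long count = 0;

  // usedMask has bit d set when digit d already appears in prefix.
  private static void extend(long prefix, int usedMask, long max) {
    for (int d = 0; d <= 9; d++) {
      if ((usedMask & (1 << d)) != 0) continue; // digit already used
      if (prefix == 0 && d == 0) continue;      // no leading zero
      long next = prefix * 10 + d;
      if (next > max) continue;
      count++;                                  // next has all-distinct digits
      extend(next, usedMask | (1 << d), max);   // try to append one more digit
    }
  }

  public static void main(String[] args) {
    long max = 10000000000L; // 10^10, the baseline used in this post
    extend(0, 0, max);
    System.out.println(count + " numbers with no repeated digits up to " + max);
  }
}

The recursion never goes deeper than ten levels (you cannot have more than ten distinct digits), and the only state is a long and an int mask, which is a good part of why this style of solution is so fast.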

Then, Mauricio entered the fray with OCaml and his initial attempt took 640ms to run up to 10^10. That was quite a shocker to me. I studied quite a bit of OCaml during my PhD (which I did at INRIA, so this should come as no surprise), but Caml and OCaml quickly fell off my radar when I moved on. I was quite surprised to see a real functional language be as fast as the top contenders in a purely algorithmic contest. Mauricio's version is about as concise as Java, which makes this even more impressive.

Some time later, Mauricio wrote a more functional version of his solution, which ended up running in about the same time as his imperative approach.

And then, Bob came up with a solution that runs twice as fast.

It's not very often that I get excited by a piece of Java code, because I think that Java is boring (in a good way). Obviously, Bob's trick is not specific to Java, but I did go "wow" the first time I read it.

Bob's initial algorithm is not very complicated, but it does have a few code paths which, at first sight, would be good candidates for possible optimizations. What I found the most interesting in his approach is that he found the optimization in the most unexpected place: by getting rid of increment operations and by adding a class.

Bob introduced a class Digit that is a doubly linked list of digits. Initially, 0 points to 1 which points to both 0 and 2, which points to... You get the idea.

Calling next() on such an object gives you the next digit in the sequence without requiring an increment operation. The trick here is to rearrange the list as soon as you use a digit so that invoking next() will always return a digit that can be added to your number without violating the requirements. For example, as soon as you add 1 to your solution, the Digit list now becomes 0 <-> 2 <-> 3... There are a couple of additional tricks with regard to head tracking and backtracking, but that's pretty much the core of the optimization.

Beautiful code indeed.
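
For the curious, here is a minimal sketch of what such a Digit structure could look like (again, my own reconstruction from the description above, not Bob's code; the head tracking mentioned above is one of the tricks left out): a digit is unlinked while it is part of the number being built, so walking the list only ever yields digits that are still legal, and it is relinked when backtracking.

// Doubly linked list of the digits 0..9 (sketch, not Bob's actual code).
class Digit {
  final int value;
  Digit prev, next;

  Digit(int value) { this.value = value; }

  // Unlink this digit while it is used in the current number.
  void use() {
    if (prev != null) prev.next = next;
    if (next != null) next.prev = prev;
  }

  // Relink it when backtracking.
  void unuse() {
    if (prev != null) prev.next = this;
    if (next != null) next.prev = this;
  }

  // Build 0 <-> 1 <-> ... <-> 9 and return the head (0).
  static Digit buildList() {
    Digit head = new Digit(0), cur = head;
    for (int i = 1; i <= 9; i++) {
      Digit d = new Digit(i);
      cur.next = d;
      d.prev = cur;
      cur = d;
    }
    return head;
  }
}

Iterating over the remaining legal digits is then just a matter of following next pointers, which is where the increment operations disappear.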

Concluding remarks

My first takeaway from this little exercise is that conciseness only goes so far. I have seen people post a one-liner solution on their blog in language X and conclude "X rocks". Except that their solution will take hours to complete.

A credible language must make it possible for developers to solve problems with either conciseness or performance in mind, and ideally, allow a whole spectrum between these two extremes.

The comments also discussed the definition of "brute force", and after some initial turmoil, everybody seems to agree that Bob's solution is not brute force since it only ever considers valid digits. At this point, I challenge anyone who disagrees to come up with a "really non brute force" solution which should, in theory, be much faster than Bob's solution. Good luck with that :-)

Finally, I'd like to get a better sense of the performance of all the languages that have participated in this little contest, so I offer the following follow-up problem: port Bob's solution to your language of choice and post the code along with 1) how fast Bob's Java solution runs on your system and 2) how fast your solution runs.

So far, only Java, C/C++ and OCaml have proven to be up to the task. Can you add your own favorite language to the list?

PowerPoint feature request

I’m currently writing a brand new presentation (more on this later) and I find myself in need of a feature I had never thought of before.

I’d like to create two versions of this presentation: one that fits in forty-five minutes and one in two hours, the longer version being a superset of the shorter one: it just contains a few additional slides here and there.

Is there any other way for me to do this besides having two separate files with a lot of slides duplicated?

Ideally, I’d like to be able to put my slides in groups: some will be in the “short” group and others will be in both the “short” and “long” groups. When I start the presentation, I tell the software which group I am showing today (this idea will undoubtedly sound familiar to TestNG users :-) ).

The closest I could find in PowerPoint is by hiding/showing individual slides, which is really impractical.

Does anyone have a better solution with either PowerPoint, Keynote or even Google Presentation?

Android

Yes, this is what I have been working on. In six days, we will tell you everything, we promise!