The languages I originally started with were Ruby and Java, but I later expanded the test to include JRuby and C. Any reasonable programmer could rank these by expected performance, and for the most part they'd be right! But as a thought experiment, why not actually try it?
I've already said that we're doing the Fibonacci sequence, and we'll use recursion first, just because recursion is fun and it will really tax the system. Here's the pseudocode shared by every implementation:
```
if args.length < 1
    print "usage: use an argument for calculation"
    exit

for i in 0..args[0]
    print(fib(i))

int fib(n)
    if n < 2, return 1
    else return fib(n-1) + fib(n-2)
```
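A minimal Ruby version of that pseudocode (a sketch; the exact sources I benchmarked aren't shown here) looks like this:

```ruby
# Recursive Fibonacci, matching the pseudocode: fib(0) = fib(1) = 1.
def fib(n)
  return 1 if n < 2
  fib(n - 1) + fib(n - 2)
end

# Driver: require one argument, then print fib(0) through fib(args[0]).
def run(args)
  if args.length < 1
    puts 'usage: use an argument for calculation'
    return
  end
  (0..args[0].to_i).each { |i| puts fib(i) }
end
```

Invoked from the command line as `run(ARGV)`; the C, Java, and JRuby versions follow the same shape.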
So as you can see, this is a VERY simple program. My expectation was that the languages would rank, fastest to slowest, like so:
- C
- Java
- JRuby
- Ruby
And here are the actual times:

- C - 0m1.778s
- Java - 0m0.933s
- JRuby - 0m11.043s
- Ruby - 3m31.096s
Surprisingly, Java beat C--presumably because the JIT compiler kicked in over the course of the loop. Alright, how do we prevent that and still get a good comparison? Instead of letting Java warm up its JIT across the loop, I modified the program to get rid of the loop and just do one number at a time. Here are those times:
- C - 0m1.100s
- C (optimized) - 0m0.466s
- Java - 0m0.625s
- JRuby - 0m7.164s
- Ruby - 2m7.298s
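The modification is only to the driver: compute a single fib(n) rather than looping through the whole range, so the JIT gets no long-running loop to optimize across. A sketch in Ruby (the default of 20 is just illustrative; the same change applies in each language):

```ruby
def fib(n)
  return 1 if n < 2
  fib(n - 1) + fib(n - 2)
end

# One number at a time: no loop for the JIT to warm up on.
n = (ARGV[0] || 20).to_i  # 20 is an assumed default for illustration
puts fib(n)
```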
So what? Why am I even testing Ruby? Just to see the performance comparison, really. I've just recently started using it and figured this would be a good comparison of computation time, as well as language syntax. I never expected an interpreted language to be on par with a compiled language--at least for this test.
Ruby is great for lots of things: it's easy to learn and easy to read. But, as we see, it's not great at number crunching. At the same time, putting it on top of the JVM (via JRuby) gets us closer to a result comparable with C and Java.
If we use the iterative method for finding Fibonacci numbers, we get these numbers:
- C - 0m0.011s
- C (optimized) - 0m0.002s
- Java - 0m0.102s
- JRuby - 0m0.431s
- Ruby - 0m0.004s
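The iterative version replaces the recursion with a running pair of values, keeping the same indexing (fib(0) = fib(1) = 1). A sketch, since the original sources aren't shown:

```ruby
# Iterative Fibonacci: linear time instead of the recursive version's
# exponential blowup in repeated subcalls.
def fib_iter(n)
  a, b = 1, 1
  n.times { a, b = b, a + b }
  a
end
```

Worth noting: Ruby integers promote to arbitrary precision automatically, so `fib_iter(1000)` is exact, whereas a C `long` silently overflows long before n = 1000.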
Hmm... Does that make sense? Well, the JVM has to start up, which takes some time, so for Java and JRuby, yes. But Ruby faster than C? I looked a little more closely at my code. The prime suspect was that I was printing every number out to the screen. I removed the printing, in effect testing just the looping and some simple math, and here are the numbers for fib(1000):
- C - 0m0.002s
- C (optimized) - 0m0.002s
- Java - 0m0.099s
- JRuby - 0m0.421s
- Ruby - 0m0.004s
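To measure the computation alone, the print call just has to stay outside the timed region. One way to check that from within Ruby itself is the standard Benchmark module (an illustration; my numbers above came from the shell's `time` command):

```ruby
require 'benchmark'

def fib_iter(n)
  a, b = 1, 1
  n.times { a, b = b, a + b }
  a
end

# Time only the looping and arithmetic; printing happens after the block.
elapsed = Benchmark.realtime { fib_iter(1000) }
puts format('fib(1000) took %.6f s', elapsed)
```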
As the input grows, Ruby starts to fall behind again. Still, for this workload an interpreted language can perform nearly as well as optimized C. Interesting.
In the end, it's a fun test that drives home the point that when you're solving a problem, understanding both the problem and the capabilities and pitfalls of your language makes a difference. Wield your tools appropriately.