Oct 10

On JVM, languages, platforms, and frameworks

Today Apple announced through their Java update that their support for the JVM is now deprecated and may be removed from future OS releases. The blogosphere is flaming, mostly with Java supporters who are either pissed off at Apple, worried about the future of their investment in the Java platform, or both. I don’t think the future of the Java platform should be in question due to any Apple decision; at the end of the day, there aren’t many production Java deployments on OS X. But it is a fact that a large portion of Java developers use OS X as their primary development platform. These developers, without proper support for their environments, will move to Linux or maybe even Windows. This move alone will probably not hurt Apple in the short run, but their habit of alienating different developer groups will eventually come back to haunt them. Developers from any environment will approach cautiously, as the tamed Leopard, and eventually Lion, might bite them in the ass at the least expected moment.

It might be that Apple wants Oracle to take charge of maintaining the OS X port, and deprecating support might be a way to negotiate this without face-to-face negotiations. I’m fine with that; frankly, I couldn’t care less who provides the JVM, as long as one is provided and is relatively actively supported. Until that announcement happens, this is yet another bump in the future of the JVM. First the Oracle purchase of Sun, then the lawsuit, and now this decision by Apple definitely create unneeded distractions for this platform’s developers all over the globe.

So I started this post not to gripe any more about Apple’s decision (I’m sure there are enough posts out there flooding your RSS streams to keep you busy), but rather to question the future of our languages and industry in regards to the language/platform of the future.

Last week I attended the StrangeLoop conference in St. Louis. It was a gem of a conference, definitely the best I’ve been to in a long time. Alex, besides seeming like an overall awesome guy, has some extraordinary “brilliant people herding” abilities. How he managed to bring together a group of brilliant speakers and then convince another group of awesome developers to attend is beyond me. The conference had some great talks and panels about the latest/greatest and bleeding edge tech stuff. One of the best panels was about the future of programming languages. The panel consisted of Guy Steele, Alex Payne, Josh Bloch, Bruce Tate, and Douglas Crockford, all of whom I have great respect for. One prevailing theme in most discussions in this panel, as well as throughout the conference, was concurrency. In the multi-core/CPU world, what language/platform will allow this paradigm transition to happen seamlessly? The fact is that although there are some awesome innovations/implementations going on in this area — STM, actors, fork/join, and various others — none have yet abstracted the concurrency model away from the developer as seamlessly as memory management and garbage collection are handled in today’s runtime environments. But this is an exciting time to be in; many ideas are flowing around and something will appear on the horizon sooner or later. This something, as Guy Steele pointed out, will most likely be a model that allows divide/conquer and map/reduce operations to happen through language idioms and possibly seamless abstractions. Accumulators are evil 🙂

There are many languages/platforms out there today, but none have been as predominant and as overall polished as Java and the JVM. From the language perspective Java is getting stagnant and, to some, boring, but the fact that it has an ecosystem of wonderful libraries and products is hard to ignore. The fact that all of these are bytecode compatible is even more to rave about: with the advent of numerous great languages built on top of the JVM, it makes the transition to a different language and programming paradigm much easier. It is truly hard to think of any current platform/VM that’s more prevalent and better suited for large scale enterprise development than the JVM. .NET comes to mind, but I doubt anyone from the non-Microsoft camp will be switching :-). There are other platforms, most notably Python and Ruby, but although both are credible, the presence of the GIL in both makes using them in a concurrency model very difficult. You can architect and deploy your system as multi-process vs. multi-thread, and arguably that model has its benefits, mostly by getting rid of shared-state concurrency issues, but we (at least I do) like to have a choice. This decision shouldn’t be shoved down our throats because the language development camp doesn’t want to, doesn’t think it’s necessary, or [add your own excuse here] to produce a thread-safe non-GIL threading model.

The other issue with most of these languages/platforms, as well as the other ones I like, is the deployment options. They suck! From providing modular builds to deploying production applications, they just aren’t as polished, and in most cases not as stable/supported, as the JVM ones. Common Lisp, one of my favorites as of late, for example, is an awesome language with numerous compilers/interpreters. Lisp doesn’t have a good packaging, dependency resolution, and build story, but even if you can get past that with some of the available half-baked solutions, when it’s time to build/compile/deploy your app, you’re fucked, unless you want to build a solution yourself. I enjoy such challenges on Friday/Saturday nights, but not when time is limited and milestones are due (which is most of the time).

Ruby and Python, for example, have decent package managers, gem and easy_install/pip respectively, but two problems lurk. First, lots of modules are written in C and in many cases, in my experience, are a big pain in the ass to compile, especially with today’s heterogeneous architectures: i386, x86_64, etc. Lots of incompatibilities arise, forcing more time away from doing what I should be doing. Somehow my milestones never include the 2+ day derailments due to such issues. Maybe that’s what’s left of my optimism. The second problem only applies if you’re writing a web app, and if you are, then you know the issues. Where are those stable/supported app servers? WSGI and Rack should provide answers soon; for now, there are many options and none are without major issues. Some are a pain to install/deploy, some aren’t actively maintained. I mean, am I just being anal and asking for way too much, or am I eternally spoiled by the JVM? Is it too much to ask to bundle the application into some archive or directory structure and just drop it in somewhere, or point your server config at it? Either way, even if they ease the pain of deploying webapps, the fact that Python and Ruby are not suitable in multi-core/CPU environments where threads are needed is a show stopper for lots of apps I write. I know I can architect around the issues, but again, why should I have to program to the platform vs. the other way around? Give me the choices and trust me to make the best decision.

The next thing is native GUI development. It is true that lots of interesting apps today are developed and deployed as web apps, but that doesn’t discount the fact that there is still a need for a native GUI in lots of use cases. Swing provides a good, and in some instances really good, cross-platform GUI library which allows you to deploy your GUI across most popular platforms with 95% or more cross-platform consistency. That sounds pretty good to me.

There are other toolkits, wxWidgets, Qt, etc., which also have bindings for Python and Ruby, but again, with today’s multicore machines, it would be a shame to not be able to utilize these cores simultaneously due to the GIL. The bindings in languages that do provide a better concurrency story work great, but those languages still suffer from the other pain points I mentioned before (i.e. deployment, build, package management, etc.). It’s a Catch-22.

So maybe I’m missing something here, but I think the JVM is the best option we have at this time that allows for multiple platforms, languages, and paradigms, and comes with a great success story in the enterprise (build tools, deployment/modularity, enterprise grade servers, etc.). Languages implemented on top of the JVM benefit from this quite successful ecosystem. Ah, and might I mention that great libraries exist for just about anything you’re trying to do. This is also true of Python, but I can’t say the same for Ruby. Ruby has numerous gems for most tasks, but they all seem half-baked at best. There are frameworks like Rails and Sinatra, which are great and fully supported with active communities, as long as you don’t venture too far off the traditional path.

The JVM has its own set of issues: it was written with static languages in mind and lacks support for dynamic binding, tail call optimization, and other things that would make writing languages on top of it easier. Its future is now also in question due to the new Oracle stewardship and the legal battles Oracle chose to pursue rather than spending that time and money on the platform. Nevertheless, the ecosystem is still flourishing, kept afloat by tons of great developers and supporting companies who care about the platform and greatly benefit from it. The JVM allows us to program in different languages while staying focused on the task at hand, not peripheral issues like compiling for different architectures, battling deployment inadequacies, or not being able to utilize cores efficiently. The JVM ecosystem might not have the most ideal solutions to these problems, but they are far better than anything else out there right now. If the people who spend their time bashing the JVM would spend as much time making their own platform better, maybe we’d have other choices.

I’d love to hear others’ thoughts on this topic. What do you think about the JVM, and what’s your language/platform of choice? How do you build, deploy, and distribute your applications? What concurrency options are available on that platform, and how do they compare to others? I’m familiar with most JVM options, especially Clojure and Scala, so I’m mostly asking about anything outside of the JVM ecosystem. I hope to someday compile a list of these and present them in an objective manner; for now, all I have is my empirical opinions.

Sep 10

Agile Intifada

This blog post was inspired by a link my friend sent me titled “Agile Ruined My Life” as well as a conversation with my friends/co-workers and just wanting to clear up a few things about my previous “Agile Dilemma” post.

Some of this is taken verbatim from an email exchange between myself and a few friends I greatly respect.

I agree with and like agile in theory and practice. I agree with and like TDD in theory and practice. Anything that goes mainstream can be scathed and unfairly criticized. So now that I have that out of the way, and I will clarify it later, the rest of the post won’t be so nice for some. (WARNING: It will sound a bit harsh to some ears, with the hope of being constructive.) With that, I wish that if nothing else, the few that read this blog post will reflect upon it and critique it constructively.

I wrote the Agile Dilemma post mostly while letting off steam about computer science itself becoming extinct while a bunch of bullshit consultants push agile and TDD 90% of the time without teaching people how to actually design and write good software. I think lots of failures in our industry are due to the fact that most folks don’t know shit about programming and computer science; they just know a language or a few and eventually learn how to express some intention using those languages. That’s like comparing a native speaker, or someone who’s learned and practiced the art of a foreign language for years, with someone who’s learned how to ask for directions after listening to RosettaStone audio. From a business perspective most pointy-haired bosses don’t give a shit. It works and the business development teams can do their “magic”, but from the perspective of a computer scientist (the real software developer) it stinks. It smells of amateur manure (no matter how many tests you write to prove that the action button works).

Now, there are those that go as far as saying that non-TDD code is “stone age”, or that programmers not practicing agile, TDD, pair-programming, and all the other process bullshit are not “good” programmers or shouldn’t be programming. Well, then shut down your Linux/Unix operating systems, stop using Emacs and most other editors — as a matter of fact, stop using 90% of the stuff you’re currently using, because most of it was written by these “stone age” programmers who didn’t give a fuck about the formalities of agile or TDD, but created masterpieces because they were smart, motivated, and knew how to program with common sense.

Before I get into detail, I’d love for people to stop petitioning to turn opinions formed by so-called “experts” into commandments. Just because Martin Fowler, Robert Martin, or [name your own Agile Mythology Deity] says it is so doesn’t mean much more than the opinion of anyone else in the field with experience. They are human just like me and you, and they form their own opinions just like me and you. The only thing they are better at than the average practitioner is marketing. Yes, they are sales folks with large investments in agile, TDD, etc. in their consulting businesses, so I don’t think I go overboard by saying that they are a bit biased. Spreading FUD brings them more business. As one of the above links pointed out, Peter Norvig, Linus Torvalds, and hundreds of other brilliant programmers (each individually far more brilliant than Fowler and Martin combined) have been programming successfully for decades without formal methodologies and techniques, without following TDD, and have succeeded beyond their wildest dreams. I’ll take one Norvig over 50 Fowlers any day.

Writing tests is not innovative; it’s been around for decades. It was just never formalized. Yes, fewer tests were written, or sometimes none; programmers actually got to decide whether a test was needed or not. Programmers are sometimes overly optimistic, so yes, mistakes were made, code had bugs, stuff was rewritten. Decades later, with TDD and Agile at hand, mistakes are still made, code still has bugs, stuff still gets rewritten. Now put that on your t-shirt along with your favorite TDD blurb.

So I had an interview about two years ago where I was asked two questions that folks should immediately take off their interview questionnaires if they ever want to hire someone good. The first question wasn’t as bad: it wanted me to recite some design patterns. I’m all too familiar with those, probably more than I want to be at this point, but familiarity with patterns doesn’t in any way exclude anyone from the “good” programmer group. The second question was the worst: they wanted to know how many lines of test code I write per week. I had to pause and then ask them to repeat it, as I was puzzled. WTF does that mean? I don’t know how many lines of non-test code I write per week; you want me to count/average the amount of tests I write? I mean, I’d understand if the question was whether I practice TDD, but lines of fucking test code? Either way, just to clarify that I’m not bitter and that’s not why I’m mentioning this: the interview went very well, we just couldn’t come to terms on salary.

Agile is also not innovative. I’d like to think of agile as an abstract set of empirical ideas (patterns), with implementations left to the people/companies. Lately though, because there is really not much more one can write about the few agile principles, most literature about agile is about concrete agile practices told through authors’ experiences. These are all great reads, if only people would read and take them for what they are: experiences. We should learn from experiences, not try to recreate them. The agile approach is common sense that has been practiced for decades in different circles. Iterations are common sense; they were practiced for as long as programming has existed. Tests are also common sense. Actually, most of agile is just a set of good common sense approaches to building products. Formalizing it actually helped quite a bit, as the industry was able to reason about it and transfer the empirical knowledge to others with less experience. But then, as with anything mainstream, the bureaucrats took over, and it’s IMO been downhill ever since. Actually, some recent attempts at quantifying agile’s success have failed to show it to be any better (within a margin of error) than any other process or no process at all. That’s not to say that it hasn’t been successful; it’s been very successful for many, including me and the companies I’ve worked with. The failures that folks are seeing happen for many reasons, but partly because in most companies agile is practiced as a bureaucratic rule of thumb, without any common sense. Folks are forced to write comprehensive test suites, where “comprehensive” is something inferred and quantified by quality control tools that apply bullshit heuristics. But managers love it; it gives them numbers and pie charts. So programmers (even the ones that love programming as more than just a day job) do whatever it takes to keep the job: they write comprehensive tests and bullshit software.

At the end of the day, who gives a crap if your algorithm is exponential and mine is logarithmic; the tests pass and the sales can go on. But what about the folks that actually love what they do (computer science)?

So let’s get back to basics. Learn computer science: algorithms, data structures, language theory and practice. You’re never done learning. Go out and create your own masterpieces. Don’t let the agile Deities fool you into thinking that your software isn’t worthy if it didn’t pass a TDD heuristic or if you don’t hold daily standup meetings. No one knows you and your team better than you. DO WHAT MAKES SENSE FOR YOU AND YOUR TEAM!

DISCLAIMER: No agile enthusiasts were harmed in this experiment, including myself.

Aug 10

Powerful multi-method dispatch with Lisp

I’ve been doing quite a bit of Common Lisp lately. I really like it. It’s very powerful, combining simple syntax with quite powerful abstraction abilities. Abstraction comes in different flavors in Lisp (functional, OO, etc.), but the most powerful one is the macro system. I won’t touch more on that in this post, as I’m quite a beginner and probably don’t even grasp its full power and potential.

Unlike many languages which define the abstractions you can use, Lisp gives you all of them and also allows you to build most abstractions you want that are not part of the language. Some of these powerful features are also present in other languages, as many languages that came after took notice of some of them, but the combination of all of these, plus the ability to build your own core language abstractions, is what makes Lisp probably the most powerful language out there.

Multimethods are the ability to do runtime method dispatch based on arguments and their specializations. They are not supported in languages like Java, which has single dispatch, but some other languages support them either through libraries or similar facilities (i.e. pattern matching). Although pattern matching is different, in some contexts it can provide similar levels of abstraction/flexibility.

Here is an example of multimethods from my cl-kyoto-cabinet project, where I found quite a use for them.

Update: Thanks to Drewc for pointing out that my original example was single dispatch. I was trying to demonstrate the conciseness of polymorphic method definitions inline with defgeneric and completely overlooked the fact that the method dispatch was only occurring on one specialization. The original version is here. Below is the updated version…

Multimethod dispatch is very flexible and concise for accomplishing method-level polymorphism. The same can be accomplished in a less flexible OO language like Java using interfaces and the Strategy pattern, but that usually ends up being more verbose and ceremonious in most non-rudimentary scenarios.
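For readers without a Lisp handy, the core idea — picking an implementation based on the runtime types of more than one argument — can be sketched in Python. This is a toy dispatch table of my own invention, not CLOS and not the cl-kyoto-cabinet code; all names here are made up for illustration:

```python
# Toy multiple dispatch: the implementation is chosen by the types of
# BOTH arguments, unlike single dispatch, which consults only the receiver.
_methods = {}

def defmethod(*types):
    """Register an implementation for a combination of argument types."""
    def register(fn):
        _methods[types] = fn
        return fn
    return register

def collide(a, b):
    # Walk each argument's MRO so methods registered on base classes match too.
    for ta in type(a).__mro__:
        for tb in type(b).__mro__:
            fn = _methods.get((ta, tb))
            if fn:
                return fn(a, b)
    raise TypeError("no applicable method")

class Asteroid: pass
class Ship: pass

@defmethod(Asteroid, Asteroid)
def _(a, b): return "asteroids bounce"

@defmethod(Ship, Asteroid)
def _(a, b): return "ship takes damage"

print(collide(Ship(), Asteroid()))      # ship takes damage
print(collide(Asteroid(), Asteroid()))  # asteroids bounce
```

In CLOS the dispatch table, MRO walk, and method combination all come for free from `defgeneric`/`defmethod`; the sketch just shows what the language is doing for you.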

Scala has a similar ability through pattern matching, and so does Erlang.

Jun 10

Lazy cheap flight calculations with priority queues

There is an interesting problem of utilizing priority queues to figure out the best price combination in a set of flight legs. The problem is as follows:

We need to calculate the cheapest combination of flight legs (connections) for a flight to a particular destination. We’re given N price-ordered sets of flight leg prices and need to find the winning combination. Each combination is evaluated for eligibility and either passes or fails, so the winning combination doesn’t necessarily reflect the cheapest possible combination of prices from the legs. A black box predicate function is consulted to ensure a combination is eligible. This reflects various airline rules, like overlapping times, specials that are only available to certain people, routes, or connections.

Solution: A naive approach for, say, a two-leg flight is to construct an (n x m) price-ordered matrix and evaluate each combination through the black box predicate routine until one passes. The problem with this approach is that we needlessly construct the full matrix when, in many cases, one of the first combinations is enough to produce the cheapest “valid” price. The key is to construct a lazy data structure which prioritizes the cheapest combinations and can then be iterated to find one that’s valid. We do so at runtime while constructing matrix combinations. The solution is generalized, so the same can be used for n-leg flights.

The algorithm goes something like this…

Construct the first set of combinations which can reflect the cheapest flight. The first cheapest combination is always n1 + m1. If that doesn’t pass, the next possible cheapest combination is either n2 + m1 or n1 + m2. We then continue with n1 + ma and na + m1, where a is incremented until the end of either leg’s price list.

The worst case running time is quadratic, O(n·m), but because of the lazy data structure the algorithm runs in near-constant time in practice, depending on how lucky we are that one of the first few combinations yields a “rule valid” price combination.
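The lazy-frontier idea can be sketched independently of the full class below, using `heapq` directly on two sorted price lists. This is my own minimal illustration, not the general n-leg solution; `cheapest_valid` and its arguments are names invented for the sketch:

```python
import heapq

def cheapest_valid(legs_a, legs_b, valid):
    """Lazily enumerate two-leg combos in ascending total price.

    legs_a and legs_b are sorted price lists; valid is the black-box
    predicate. Instead of materializing the full n x m matrix, keep a
    heap frontier: popping (i, j) pushes only its neighbors (i+1, j)
    and (i, j+1), so we only ever build the combos we actually examine.
    """
    heap = [(legs_a[0] + legs_b[0], 0, 0)]
    seen = {(0, 0)}
    while heap:
        total, i, j = heapq.heappop(heap)
        if valid((legs_a[i], legs_b[j])):
            return total
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(legs_a) and nj < len(legs_b) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (legs_a[ni] + legs_b[nj], ni, nj))
    return None  # no combination satisfied the predicate

# Cheapest combo where both legs cost at least 150:
print(cheapest_valid([100, 160, 300], [120, 155, 400],
                     lambda c: all(p >= 150 for p in c)))  # 315
```

If the very first combo passes, exactly one combination is examined, which is the near-constant behavior described above.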

This problem idea came from reading The Algorithm Design Manual by Steven S. Skiena. I recommend this book for anyone wishing to delve into the world of more advanced algorithm design.

Here is the solution in Python. You’ve probably noticed I’ve been using a lot of Python: besides the fact that I like the language, Python is incredibly good at conveying algorithmic ideas in a concise but very readable way.

The only two functions that matter are cheapest_price and _pick_combo; the rest are just auxiliary functions used to support an OO structure and run a sample.

  import heapq, random, time

  class Route(object):
      """Finds the cheapest rule-valid price combination across flight legs."""
      def __init__(self):
          self.heap = []
          self.unique = dict()
          self.legs = []
          self.max_leg_len = 0
          self._counter = 0
          self._loop_counter = 0

      def add_leg(self, leg):
          leg.sort()  # legs must be price ordered
          self.legs.append(leg)
          leg_len = len(leg)
          if leg_len > self.max_leg_len:
              self.max_leg_len = leg_len

      def cheapest_price(self, pred_func=lambda x: True):
          for i in range(0, self.max_leg_len):
              combo = self._pick_combo(i, pred_func)
              if combo: return combo

      def print_stats(self):
          print("""Legs: %s
  Combos examined: %s
  Loops: %s
  """ % (len(self.legs), self._counter, self._loop_counter))

      def _pick_combo(self, curr_idx, pred_func):
          num_legs = len(self.legs)
          # The diagonal combo: the curr_idx-th cheapest price from each leg.
          price_combo = [ leg[curr_idx] for leg in self.legs if curr_idx < len(leg) ]
          self._add_combo(price_combo)
          cheapest_price = self._eval_price_combo(pred_func)
          if cheapest_price: return cheapest_price
          # Expand the frontier: bump one leg's index at a time while the
          # others stay at curr_idx.
          for idx in range(1, self.max_leg_len - curr_idx):
              for j in range(0, num_legs):
                  if len(self.legs[j]) <= (curr_idx + idx): continue
                  combo = []
                  for k in range(0, num_legs):
                      self._loop_counter += 1
                      if j == k:
                          combo.append(self.legs[k][curr_idx + idx])
                      elif curr_idx < len(self.legs[k]):
                          combo.append(self.legs[k][curr_idx])
                  self._add_combo(combo)
              cheapest_price = self._eval_price_combo(pred_func)
              if cheapest_price: return cheapest_price

      def _add_combo(self, combo):
          self._counter += 1
          if len(combo) == len(self.legs) and str(combo) not in self.unique:
              heapq.heappush(self.heap, combo)
              self.unique[str(combo)] = True

      def _eval_price_combo(self, pred_func):
          for i in range(0, len(self.heap)):
              least_combo = heapq.heappop(self.heap)
              if pred_func(least_combo):
                  print("Winning combo: %s" % [ "%.2f" % l for l in least_combo ])
                  return sum(least_combo)
          return None

  ############### Samples below ##################

  def sample_run(num_legs, pred_func):
      print(("#" * 30) + " Sample Run " + ("#" * 30))
      route = Route()
      for i in range(0, num_legs):
          route.add_leg( [ random.uniform(100, 500) for i in range(0, 100) ] )

      start = time.time()
      price = route.cheapest_price(pred_func)
      calc_time = time.time() - start

      if price:
          print("Cheapest price: %.2f" % price)
      else:
          print("No valid route found")
      route.print_stats()
      print(("#" * 72) + "\n")

  if __name__ == '__main__':
      sample_run(2, lambda x: True)
      def pred(x):
          for price in x:
              if price < 150: return False
          return True
      sample_run(3, pred)

I haven’t thoroughly tested this for correctness beyond numerous runs and some basic validation, so let me know if you see anything apparently wrong here.

Running the above yields

    ############################## Sample Run ##############################
    Winning combo: ['103.62', '106.40']
    Cheapest price: 210.03
    Legs: 2
    Combos examined: 1
    Loops: 0


    ############################## Sample Run ##############################
    Winning combo: ['150.74', '150.25', '173.95']
    Cheapest price: 474.95
    Legs: 3
    Combos examined: 2852
    Loops: 8523


For the first sample run, we use a predicate function which always yields True, so we never examine anything other than the first combo, n1 + m1. For the second sample, I add a predicate function which only accepts price combinations where all legs are above $150. (Of course this doesn’t resemble real airline rules; it’s just good enough to simulate cases where the first n combinations are rejected.) In the second sample run, we used 3 legs and examined 2852 combinations before coming up with the winning leg combination for the route. Each price within the combination is the smallest possible price above $150 for its leg.

May 10

Random points in polygon generation algorithm

I needed to generate a set of random points within a polygon, both convex and concave. The need arose in a geospatial domain where polygons are rather small (on a geo scale) and wouldn’t span more than, say, 10 miles, so the benefit of employing more complex algorithms that account for spheroid properties is negligible. Plane geometry was enough to meet this requirement. Point-in-polygon tests are rather simple and are used to test whether a point lies within a polygon. The test is performed using a ray casting algorithm, which counts the intersections of a ray cast along the x-axis starting from the point in question.

Another concept is the Minimum Bounding Rectangle (Bounding Box), which is the minimal rectangle needed to enclose a geographical object (i.e. polygon).

So, one can generate random points within a polygon by…

  1. Generating a bounding box
  2. Generating a point within the bounding box. This is a simple algorithm.
  3. Using Point-in-Polygon to test whether this point exists within the polygon.

Because of the random sampling nature and the false positives from step 2, which must be caught in step 3, the above must be performed in a loop until the point-in-polygon test passes.

This works quite well for generating test data, as there are no tight bounds on the performance characteristics of random generation. One could also use the above algorithm in production as long as the ratio of polygon area to bounding box area is rather large, which is usually the case for convex polygons. The ratio might be too small for highly concave polygons, though, causing an unacceptable number of false positives in step 2.
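To make the ratio argument concrete: the number of loop iterations is geometrically distributed, so the expected number of tries is (bounding box area) / (polygon area). A quick seeded simulation, standalone and not using geo-utils (the triangle and helper names are invented for the sketch), checks this for a triangle covering half its bounding box:

```python
import random

def in_triangle(px, py):
    # Right triangle with vertices (0,0), (1,0), (1,1): inside iff y <= x.
    return py <= px

def rejection_sample(rng):
    """Sample a point in the triangle via its unit-square bounding box,
    counting how many tries the rejection loop took."""
    tries = 0
    while True:
        tries += 1
        x, y = rng.random(), rng.random()
        if in_triangle(x, y):
            return (x, y), tries

rng = random.Random(42)  # fixed seed for reproducibility
total = sum(rejection_sample(rng)[1] for _ in range(10000))
print(total / 10000)  # averages close to 2.0: the triangle covers half the box
```

A polygon occupying only 5% of its bounding box would average 20 tries per point, which is where the approach starts to hurt in production.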

I’ve implemented this in the geo-utils python package and made it available on github. Feel free to use it and provide any feedback.

To utilize the geo-utils to generate random points within a polygon, you would do the following:

  from vtown import geo
  from vtown.geo.polygon import Polygon

  polygon = Polygon(  geo.LatLon(42.39321,-82.92114),
                      # ... the remaining LatLon vertices were elided here ...
                      )
  point = polygon.random_point()

The above polygon is constructed using lat/lon coordinates, but you can also construct one using simple x/y coordinates with geo.Point(x, y).

Here are some code snippets from the implementation. I only pasted the relevant parts. For boilerplate and relevant data structures, see the geo-utils package.

class BoundingBox(object):

    def __init__(self, *points):
        """docstring for __init__"""
        xmin = ymin = float('inf')
        xmax = ymax = float('-inf')
        for p in points:
            if p.x < xmin: xmin = p.x
            if p.y < ymin: ymin = p.y
            if p.x > xmax: xmax = p.x
            if p.y > ymax: ymax = p.y
        self.interval_x = Interval(xmin, xmax)
        self.interval_y = Interval(ymin, ymax)

    def random_point(self):
        x = self.interval_x.random_point()
        y = self.interval_y.random_point()
        return Point(x, y)

class Polygon:
    ## __init__ omitted here...

    def contains(self, point):
        seg_counter = private.SegmentCounter(point)
        for i in range(1, len(self.points)):
            line = Line(*self.points[i-1:i+1])
            if seg_counter.process_segment(line):
                return True
        return seg_counter.crossings % 2 == 1

    def random_point(self):
        bb = BoundingBox(*self.points)
        while True:
            print("GENERATING RANDOM POINT...")
            p = bb.random_point()
            if self.contains(p):
                return p

import numpy

class SegmentCounter(object):

    def __init__(self, point):
        self.point = point
        self.crossings = 0

    def process_segment(self, line):
        p, p1, p2 = self.point, line.point1, line.point2
        if p1.x < p.x and p2.x < p.x:
            return False

        if (p.x == p2.x and p.y == p2.y):
            return True

        if p1.y == p.y and p2.y == p.y:
            minx = p1.x
            maxx = p2.x
            if minx > maxx:
                minx = p2.x
                maxx = p1.x
            if p.x >= minx and p.x <= maxx:
                return True
            return False

        if ((p1.y > p.y) and (p2.y <= p.y)) \
                or ((p2.y > p.y) and (p1.y <= p.y)):
            x1 = p1.x - p.x
            y1 = p1.y - p.y
            x2 = p2.x - p.x
            y2 = p2.y - p.y

            det = numpy.linalg.det([[x1, y1], [x2, y2]])
            if det == 0.0:
                return True
            if y2 < y1:
                det = -det

            if det > 0.0:
                self.crossings += 1