Posts Tagged: python


6 Jan 14

Cyclical dependency detection in the database

We recently needed to find cyclical dependencies in our database. This turns out to be a rather straightforward graph problem. The database foreign key constraints form a directed graph, and finding a cycle in a directed graph amounts to detecting an already visited node during a DFS (a back-edge). We mark nodes as visited, and if an edge leads back to a node that is still on the current DFS path (an ancestor in the DFS tree), then a back-edge (cycle) exists.
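The back-edge check can be sketched in a few lines of plain Python. The FK graph here is a made-up dict mapping each table to the tables it references:

```python
# A minimal sketch of DFS back-edge detection. The FK graph is a
# made-up dict mapping each table to the tables it references.
def find_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current DFS path / finished
    color = {node: WHITE for node in graph}

    def dfs(node, path):
        color[node] = GRAY
        for dep in graph.get(node, ()):
            if color.get(dep, WHITE) == GRAY:   # back-edge: dep is an ancestor
                return path[path.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                found = dfs(dep, path + [dep])
                if found:
                    return found
        color[node] = BLACK
        return None

    for node in graph:
        if color[node] == WHITE:
            cycle = dfs(node, [node])
            if cycle:
                return cycle
    return None

fks = {"orders": ["customers"], "customers": ["accounts"], "accounts": ["orders"]}
print(find_cycle(fks))  # ['orders', 'customers', 'accounts', 'orders']
```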

In order to do this on our own, we'd have to read the metadata from the database for each table, construct a directed graph using the foreign keys, and then run the algorithm discussed above. Rather straightforward; most of the complexity comes from the cross-cutting concerns of database metadata munging. We can accomplish all of the above using sqlalchemy and its ability to perform a topological sort on the reflected tables. The topological sort fails if there is a cycle, and the exception thrown includes the nodes that produce the back-edge. Using this simple trick, we let sqlalchemy detect the cycles for us.

You’ll need to install sqlalchemy (and your db driver), networkx and graphviz (for visualization).
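As a sketch of the sqlalchemy trick (the schema and table names here are made up for illustration; note that recent SQLAlchemy versions emit a warning for unsortable cycles rather than raising `CircularDependencyError`, so the sketch catches both):

```python
import warnings
from sqlalchemy import MetaData, Table, Column, Integer, ForeignKey
from sqlalchemy.exc import CircularDependencyError

# Made-up schema with a deliberate FK cycle: employee -> team -> employee.
# Against a live database you would instead call meta.reflect(bind=engine).
meta = MetaData()
Table("employee", meta,
      Column("id", Integer, primary_key=True),
      Column("team_id", Integer, ForeignKey("team.id")))
Table("team", meta,
      Column("id", Integer, primary_key=True),
      Column("lead_id", Integer, ForeignKey("employee.id")))

cycle_found = False
try:
    with warnings.catch_warnings():
        # Recent SQLAlchemy versions warn on unsortable cycles instead of
        # raising; promote the warning so both behaviors are caught below.
        warnings.simplefilter("error")
        meta.sorted_tables
except (CircularDependencyError, Warning):
    cycle_found = True

print("cycle detected" if cycle_found else "no cycle")
```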


21 Jun 10

Lazy cheap flight calculations with priority queues

Here is an interesting problem of utilizing priority queues to figure out the best price combination in a set of flight legs. The problem is as follows:

We need to calculate the cheapest combination of flight legs (connections) for a flight to a particular destination. We're given N price-ordered sets of flight legs and we need to find the winning combination. Each combination is evaluated for eligibility and either passes or fails, so the winning combination doesn't necessarily reflect the cheapest possible combination of prices from the legs. A black-box predicate function is consulted to ensure the combination is eligible. This reflects various airline rules, like overlapping times, specials that are only available to certain people, routes, or connections.

Solution: A naive approach for, say, a two-leg flight is to construct an (n x m) price-ordered matrix and evaluate each combination through the black-box predicate function until one passes. The problem with this approach is that we needlessly construct the full matrix when, in many cases, one of the first few combinations is enough to produce the cheapest "valid" price. The key to reducing this is to construct a lazy data structure which prioritizes the cheapest flights and can then be iterated to find one that's valid. We construct the matrix combinations on demand at runtime. The solution is generalized, so the same can be used for n-leg flights.

The algorithm goes something like this…

Construct the first set of combinations which can reflect the cheapest flight. The first cheapest combination is always n1 + m1. If that doesn't pass, the next possible cheapest combination is either n2 + m1 or n1 + m2. We then continue with n1 + ma and na + m1, where a is incremented until the end of the route leg set for either leg.

The worst case running time is quadratic, O(n^2), but because of the lazy data structure the algorithm often runs in near-constant time, depending on how lucky we are that the first few combinations yield a "rule valid" price combination.

This problem idea came from reading The Algorithm Design Manual by Steven S. Skiena. I recommend this book for anyone wishing to delve into the world of more advanced algorithm design.

Here is the solution in Python. You've probably noticed I've been using a lot of Python. Besides the fact that I like the language, Python is an incredibly good language for conveying algorithmic ideas in a concise but very readable way.

The only two functions that matter are cheapest_price and _pick_combo; the rest are just auxiliary functions used to support an OO structure and run a sample.

  import heapq, random, time

  class Route(object):
      """docstring for TicketFinder"""
      def __init__(self):
          self.heap = []
          self.unique = dict()
          self.legs = []
          self.max_leg_len = 0
          self._counter = 0
          self._loop_counter = 0

      def add_leg(self, leg):
          leg.sort()
          self.legs.append(leg)
          leg_len = len(leg)
          if leg_len > self.max_leg_len:
              self.max_leg_len = leg_len

      def cheapest_price(self, pred_func=lambda x: True):
          for i in range(0, self.max_leg_len):
              combo = self._pick_combo(i, pred_func)
              if combo: return combo

      def print_stats(self):
          print("""Legs: %s
  Combos examined: %s
  Loops: %s
  """ % (len(self.legs), self._counter, self._loop_counter))

      def _pick_combo(self, curr_idx, pred_func):
          num_legs = len(self.legs)
          price_combo = [ leg[curr_idx] for leg in self.legs if not curr_idx >= len(leg) ]
          self._add_combo(price_combo)
          cheapest_price = self._eval_price_combo(pred_func)
          if cheapest_price: return cheapest_price
          for idx in range(1, self.max_leg_len-curr_idx):
              for j in range(0, num_legs):
                  if len(self.legs[j]) <= (curr_idx+idx): continue
                  combo = []
                  for k in range(0, num_legs):
                      self._loop_counter += 1
                      if j == k:
                          combo.append(self.legs[k][curr_idx+idx])
                      elif curr_idx < len(self.legs[k]):
                          combo.append(self.legs[k][curr_idx])
                  self._add_combo(combo)

              cheapest_price = self._eval_price_combo(pred_func)
              if cheapest_price: return cheapest_price

      def _add_combo(self, combo):
          self._counter += 1
          if len(combo) == len(self.legs) and str(combo) not in self.unique:
              heapq.heappush(self.heap, combo)
              self.unique[str(combo)] = True

      def _eval_price_combo(self, pred_func):
          for i in range(0, len(self.heap)):
              least_combo = heapq.heappop(self.heap)
              if pred_func(least_combo):
                  print("Winning combo: %s" % [ "%.2f" % l for l in least_combo ])
                  return sum(least_combo)
          return None


  ############### Samples below ##################

  def sample_run(num_legs, pred_func):
      print(("#" * 30) + " Sample Run " + ("#" * 30))
      route = Route()
      for i in range(0, num_legs):
          route.add_leg( [ random.uniform(100, 500) for _ in range(0, 100) ] )

      start = time.perf_counter()
      price = route.cheapest_price(pred_func)
      calc_time = time.perf_counter() - start

      if price:
          print("Cheapest price: %.2f" % price)
      else:
          print("No valid route found")
      route.print_stats()
      print(("#" * 72) + "\n")

  if __name__ == '__main__':
      sample_run(2, lambda x: True)
      def pred(x):
          for price in x:
              if price < 150: return False
          return True
      sample_run(3, pred)

I haven't thoroughly tested this for correctness beyond numerous runs and some basic validation, so let me know if you see anything wrong here.

Running the above yields

    ############################## Sample Run ##############################
    Winning combo: ['103.62', '106.40']
    Cheapest price: 210.03
    Legs: 2
    Combos examined: 1
    Loops: 0

    ########################################################################

    ############################## Sample Run ##############################
    Winning combo: ['150.74', '150.25', '173.95']
    Cheapest price: 474.95
    Legs: 3
    Combos examined: 2852
    Loops: 8523

    ########################################################################

For the first sample run, we use a predicate function which always yields True, so we never examine anything other than the first combo n1 + m1. For the second sample, I add a predicate function which only accepts a price combination where all legs are above $150. (Of course this doesn't resemble real airline rules; it's just good enough to simulate some sample cases where the first n combinations are rejected.) In the second sample run, we utilized 3 legs and examined 2852 combinations before coming up with the winning leg combination for the route. Each price within the combination is the smallest possible price above $150 for each leg.


27 May 10

Random points in polygon generation algorithm

I needed to generate a set of random points within a polygon, convex or concave. The need arose in a geospatial domain where polygons are rather small (on a geo scale) and wouldn't span more than, say, 10 miles, so the benefit of employing more complex algorithms to deal with spheroid properties is negligible. Plane geometry provided enough to meet this requirement. Point-in-Polygon tests are rather simple and are used to test whether a point exists in a polygon. The test is performed using a ray casting algorithm, which counts the intersections of a ray cast along the x-axis starting from the point in question.

Another concept is the Minimum Bounding Rectangle (Bounding Box), which is the minimal rectangle needed to enclose a geographical object (i.e. polygon).

So, one can generate random points within a polygon by…

  1. Generating a bounding box
  2. Generating a point within the bounding box. This is a simple algorithm.
  3. Using Point-in-Polygon to test whether this point exists within the polygon.

Because of the random sampling nature and the false positives from step 2, which must be tested in step 3, the above must be performed in a loop until the Point-in-Polygon test passes.

This works quite well for generating test data, as there are no tight bounds on the performance characteristics of random generation. One could also use the above algorithm in production as long as the ratio of polygon area to bounding box area is rather large, which is usually the case for convex polygons. The ratio might be too small for some concave polygons, though, causing an unacceptable number of false positives in step #2.
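To make the ratio argument concrete, here is a small sketch (pure plane geometry via the shoelace formula; the L-shaped polygon and its coordinates are made up for illustration) estimating the acceptance probability per sample and the expected number of loop iterations:

```python
# Shoelace formula for the area of a simple, closed polygon ring
def polygon_area(pts):
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A made-up L-shaped (concave) polygon inside its 2 x 2 bounding box
poly = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2), (0, 0)]
ratio = polygon_area(poly) / (2 * 2)   # acceptance probability per sample
expected_tries = 1 / ratio             # mean of the geometric distribution
print("acceptance ratio: %.2f, expected tries: %.2f" % (ratio, expected_tries))
# prints: acceptance ratio: 0.75, expected tries: 1.33
```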

I've implemented this in the geo-utils python package and made it available on github. Feel free to use it and provide any feedback.

To utilize the geo-utils to generate random points within a polygon, you would do the following:

  from vtown import geo
  from vtown.geo.polygon import Polygon


  polygon = Polygon(  geo.LatLon(42.39321,-82.92114),
                      geo.LatLon(42.39194,-82.91669),
                      geo.LatLon(42.39147,-82.91796),
                      geo.LatLon(42.39090,-82.91974),
                      geo.LatLon(42.39321,-82.92114))

  point = polygon.random_point()

The above polygon is built from lat/lon coordinates, but you can also build one from simple x/y coordinates with geo.Point(x, y).

Here are some code snippets from the implementation. I only pasted the relevant parts. For boilerplate and relevant data structures, see the geo-utils package.

class BoundingBox(object):

    def __init__(self, *points):
        """docstring for __init__"""
        xmin = ymin = float('inf')
        xmax = ymax = float('-inf')
        for p in points:
            if p.x < xmin: xmin = p.x
            if p.y < ymin: ymin = p.y
            if p.x > xmax: xmax = p.x
            if p.y > ymax: ymax = p.y
        self.interval_x = Interval(xmin, xmax)
        self.interval_y = Interval(ymin, ymax)

    def random_point(self):
        x = self.interval_x.random_point()
        y = self.interval_y.random_point()
        return Point(x, y)

class Polygon:
    ## __init__ omitted here...

    def contains(self, point):
        seg_counter = private.SegmentCounter(point)
        for i in range(1, len(self.points)):
            line = Line(*self.points[i-1:i+1])
            if seg_counter.process_segment(line):
                return True
        return seg_counter.crossings % 2 == 1

    def random_point(self):
        bb = BoundingBox(*self.points)
        while True:
            p = bb.random_point()
            if self.contains(p):
                return p

import numpy

class SegmentCounter(object):

    def __init__(self, point):
        self.point = point
        self.crossings = 0

    def process_segment(self, line):
        p, p1, p2 = self.point, line.point1, line.point2
        if p1.x < p.x and p2.x < p.x:
            return False

        if (p.x == p2.x and p.y == p2.y):
            return True

        if p1.y == p.y and p2.y == p.y:
            minx = p1.x
            maxx = p2.x
            if minx > maxx:
                minx = p2.x
                maxx = p1.x
            if p.x >= minx and p.x <= maxx:
                return True
            return False


        if ((p1.y > p.y) and (p2.y <= p.y)) \
                or ((p2.y > p.y) and (p1.y <= p.y)):
            x1 = p1.x - p.x
            y1 = p1.y - p.y
            x2 = p2.x - p.x
            y2 = p2.y - p.y

            det = numpy.linalg.det([[x1, y1], [x2, y2]])
            if det == 0.0:
                return True
            if y2 < y1:
                det = -det

            if det > 0.0:
                self.crossings += 1
        return False

17 May 10

Divide and conquer for exponentiation

Here is an awesome way to demonstrate divide and conquer: exponentiation. A naive exponentiation algorithm computes x^n by performing n-1 multiplications, as x * x * ... * x. This has an algorithmic complexity of O(n), which of course scales poorly for any significantly large exponent. That's not even counting the overhead of multiplying integers beyond the CPU's native word size, which is slower than staying within the CPU's integer range. Now do that n times and you have a problem.

Logarithmic performance, O(log n), is one of the best common algorithmic complexities there is (outside of constant complexity of course, which is rare). One can compute powers in logarithmic time by utilizing the halving that is clearly apparent in divide and conquer solutions.

Logarithms grow very slowly compared to the size of the input. For calculating a power of, say, n^1000000 with the naive algorithm, you'd have to perform 999,999 multiplications. With a logarithmic complexity algorithm this drops to ceil(log2(1000000)) = ceil(19.93) = 20 steps. 20 steps, with a few extra operations per step, compared to 1 million multiplications.

Here is an example of both exponentiation algorithms, the logarithmic complexity and the linear complexity (called naive), as well as Python's built-in pow() function. Our logarithmic power function and Python's built-in one perform about the same, whereas the naive linear function starts to truly deteriorate once any reasonably large number is used as the exponent.

_Note: this function is recursive, so you can run out of stack space for very large exponents (you can also easily reimplement it as iteration). Since the recursion halves the exponent at each step, on a system with a 1024 recursion limit your exponent would have to be above 2^1024, or

17976931348623159077293051907890247336179769789423065727343008 11577326758055009631327084773224075360211201138798713933576587 89768814416622492847430639474124377767893424865485276302219601 24609411945308295208500576883815068234246288147391311054082723 7163350510684586298239947245938479716304835356329624224137216

before you run out of stack space._
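The iterative reimplementation mentioned above can be sketched as square-and-multiply over the bits of the exponent (the helper name power_iter is mine, not part of the original script); it performs the same O(log e) multiplications with constant stack depth:

```python
def power_iter(b, e):
    """Iterative square-and-multiply: O(log e) multiplications, O(1) stack."""
    result = 1
    while e > 0:
        if e & 1:       # current low bit of the exponent is set
            result *= b
        b *= b          # square the base for the next bit position
        e >>= 1
    return result

print(power_iter(2, 10))   # 1024
```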

Here is a benchmarked python implementation. The relevant algorithm part is highlighted.

#!/usr/bin/env python
import math
import time
import sys

def power(b, e):
    """logarithmic divide/conquer algorithm"""
    if e == 0: return 1
    x = power(b, e // 2)
    if e % 2 == 0: return pow(x, 2)
    else: return b * pow(x, 2)

def naive_power(b, e):
    """linear power algorithm"""
    x = b
    for i in range(1, e):
        x *= b
    return x

def perform(name, base, exp, pfunc):
    print("%s: %d^%d: %d" % (name, base, exp, pfunc(base, exp)))

if __name__ == '__main__':
    if len(sys.argv) != 3:
        sys.exit("You must provide a base and an exponent.  (Usage: exp.py base exp)")
    base = int(sys.argv[1])
    exp = int(sys.argv[2])
    for func in (power, naive_power, pow):
        print("Benchmarking %s..." % func.__name__)
        bench = []
        for i in range(0,5):
            start = time.time()
            ans = func(base, exp)
            end = time.time()
            bench.append(end-start)
        print("\tCalculated in: %s" % min(bench))

Running the above to calculate 2^200000 yields

$ python exp.py 2 200000
Benchmarking power…
    Calculated in: 0.0042099952697753906
Benchmarking naive_power…
    Calculated in: 6.078423023223877
Benchmarking pow…
    Calculated in: 0.0041148662567138672

Hmmm, both pow() (Python's built-in power) and power() (logarithmic complexity) calculated the power in about 4 milliseconds (the output above is in seconds), while our naive_power() function calculates the same result in 6 seconds.

I tried running the script to calculate 2^1000000, which the logarithmic functions calculated in 25 milliseconds; I killed the naive_power() calculation after a few minutes of impatiently waiting for it to complete.

Power to the logarithms!!! 🙂


12 Apr 10

Python and mod_wsgi on Mac OS X Snow Leopard (oy vey)

I've been dabbling with Python TurboGears in the last week. TurboGears is a great framework so far. What I like most is the loose coupling, allowing you to choose the best component for the job. I'll write a more detailed blog post about the other RAD web development frameworks out there sometime this week, but my biggest dislike of, say, Rails, Django, and some others is how they tie you into using their integrated components. You can of course hackishly disable their use, but not without losing many other features of the framework. In my experience, these types of frameworks are great at getting something up quickly, but they suck when it comes to long-term scalability and growth, as you basically end up rewriting the framework to integrate third-party or your own components, to the point that it marginalizes the benefits of the framework.

I found out that building Python on a Mac (OS X Snow Leopard) is nearly impossible. You can definitely compile it, but I needed it compiled to work with mod_wsgi and various modules. I also needed to compile mod_wsgi against a particular version of apache, which required me to compile python as a universal binary supporting the i386 and x86_64 architectures. That's where it started to get painful. Snow Leopard ships with Python 2.6.1 and supports a 3-way architecture (ppc7400, i386, and x86_64), but the latest python release is 2.6.5, so I tried to compile it. I used a myriad of options…

./configure --prefix=/usr/local/python-2.6.5 \
--enable-framework=/usr/local/python-2.6.5/frameworks \
MACOSX_DEPLOYMENT_TARGET=10.6 \
--enable-universalsdk=/Developer/SDKs/MacOSX10.6.sdk \
--with-universal-archs=3-way

I also tried to use intel for --with-universal-archs. Both CCFLAGS and LDFLAGS were being set correctly in the Makefile. I even tried setting them explicitly, with no luck: Python's executable was compiling in all architectures except 64-bit. I wasn't able to find any reference to such an issue anywhere in the user forums or on the web. Every reference I saw to compiling python in 64-bit, I tried, with no luck. Evidently, the Python distributed with Snow Leopard was compiled using some special compile procedure by Apple, due to the fact that some packages lack 64-bit compatibility. I couldn't find any reference to this procedure, nor did I really want to engage in such activity. Come on Python folks, WTF??? Can you either provide compilation instructions, or distribute MacPython as a universal binary including x86_64?

I ended up resorting (unhappily) to using Snow Leopard’s distributed Python and Mac’s web-sharing apache. I compiled mod_wsgi with:

./configure --with-apxs=/usr/sbin/apxs \
--with-python=/System/Library/Frameworks/Python.framework/Versions/2.6/bin/python

and voila, we have a working mod_wsgi install. I really dislike the fact that I wasted days trying to get it to work with Python 2.6.5 and a custom apache install, but at least I have something that works now and hopefully won't slow me down any more.

I’m loving python, as I have sporadically for quite a while, but I’m really missing the JVM with its problem-free installs, jar/war archives, and things just working.