Posts Tagged: web development


12 Apr 10

Python and mod_wsgi on Mac OS X Snow Leopard (oy vey)

I’ve been dabbling with Python TurboGears in the last week. TurboGears is a great framework so far. What I like most is the loose coupling, which lets you choose the best component for the job. I’ll write a more detailed post about the other RAD web development frameworks out there sometime this week, but my biggest dislike of, say, Rails, Django, and some others, is how they tie you into their integrated components. You can of course hackishly disable their use/dependency, but not without losing many other features of the framework. In my experience, these types of frameworks are great at getting something up quickly, but they suck when it comes to long-term scalability and growth: you basically end up rewriting the framework to integrate third-party or your own components, to the point that it marginalizes the benefits of the framework.

I found out that building Python on a Mac (OS X Snow Leopard) is nearly impossible. You can definitely compile it, but I needed it to work with mod_wsgi and various modules. I also needed to compile mod_wsgi against a particular version of Apache, which required me to compile Python as a universal binary supporting the i386 and x86_64 architectures. That’s where it started to get painful. Snow Leopard is distributed with Python 2.6.1 built for a 3-way architecture (ppc7400, i386, and x86_64), but the latest Python release is 2.6.5, so I tried to compile it myself. I used a myriad of options…

./configure --prefix=/usr/local/python-2.6.5 \
--enable-framework=/usr/local/python-2.6.5/frameworks \
MACOSX_DEPLOYMENT_TARGET=10.6 \
--enable-universalsdk=/Developer/SDKs/MacOSX10.6.sdk \
--with-universal-archs=3-way

I also tried using intel for --with-universal-archs. Both CCFLAGS and LDFLAGS were being set correctly in the Makefile. I even tried setting them explicitly, with no luck: Python’s executable was compiling for all architectures except 64-bit. I wasn’t able to find any reference to such an issue anywhere in the user forums or on the web. Every reference I found to compiling Python in 64-bit, I tried, with no luck. Evidently, the Python distributed with Snow Leopard was compiled using some special compile procedure by Apple, due to the fact that some packages lack 64-bit compatibility. I couldn’t find any reference to this procedure, nor did I really want to engage in such an activity. Come on, Python folks, WTF??? Can you either provide compilation instructions, or distribute MacPython as a universal binary including x86_64?
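For anyone fighting the same battle, the stock file and lipo tools will tell you which architecture slices a binary actually contains (the paths below are just examples):

  # file lists every architecture slice in a fat binary
  file /usr/local/python-2.6.5/bin/python

  # lipo gives a terser answer
  lipo -info /usr/local/python-2.6.5/bin/python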

I ended up resorting (unhappily) to using Snow Leopard’s distributed Python and the Mac’s web-sharing Apache. I compiled mod_wsgi with:

./configure --with-apxs=/usr/sbin/apxs \
--with-python=/System/Library/Frameworks/Python.framework/Versions/2.6/bin/python

and voila, we have a working mod_wsgi install. I really dislike the fact that I wasted days trying to get it to work with Python 2.6.5 and a custom Apache install, but at least I have something that works now and hopefully won’t slow me down anymore.
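For reference, wiring it into Apache then looks roughly like this. This is a minimal sketch with illustrative paths and a hypothetical hello-world WSGI script, not my exact setup:

  # httpd.conf -- load the freshly compiled module and map a URL to a WSGI script
  LoadModule wsgi_module /usr/libexec/apache2/mod_wsgi.so
  WSGIScriptAlias /myapp /Library/WebServer/wsgi/myapp.wsgi

  # /Library/WebServer/wsgi/myapp.wsgi -- the smallest possible WSGI callable
  def application(environ, start_response):
      start_response('200 OK', [('Content-Type', 'text/plain')])
      return ['Hello from mod_wsgi']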

I’m loving Python, as I have sporadically for quite a while, but I’m really missing the JVM with its problem-free installs, jar/war archives, and things just working.


19 Feb 10

Extension-based content negotiation and nested routes with Restlet

I’ve been working with Restlet to expose a RESTful API to the data model for one of my projects. Restlet is a super flexible library, allowing one to configure and access all the properties of HTTP through a REST-oriented API.

The application bootstrapping options are also super flexible, allowing one to configure how routes are resolved and to nest routes for cascading capabilities. I ran into a small caveat when I tried to configure extension-based content negotiation. Basically, the idea of extension-based content negotiation is that instead of using “Accept” headers, one can append a mime extension to the request URI to request a particular return format. Say we have a http://localhost/api/resource URI; one can request xml or json formats by simply doing http://localhost/api/resource.xml or http://localhost/api/resource.json. Of course, your resource has to support these formats. The documentation on this type of content negotiation is non-existent. I had to scour a bunch of user group messages and javadocs before I figured it out. I figured I’d share in case someone else is interested.

My application is written in Scala, so the examples will be provided as such. I’m sure any experienced developer can easily discern the Java equivalent.

First, in your application bootstrapping, you must turn on the extensionsTunnel option. Here is my code, which also demonstrates nested routes. Then, in your resource you must conditionally infer the MediaType provided and emit the representation of this resource based on it.

import org.restlet.{Restlet, Application => RestletApplication}
import scala.xml._
//... other imports excluded

class TestApplication extends RestletApplication {
  override def createInboundRoot: Restlet = {

    val apiRouter = new Router(getContext)
    apiRouter.attach("/test", classOf[TestResource])

    val rootRouter = new Router(getContext)
    rootRouter.attach("/api/v1", apiRouter).getTemplate.setMatchingMode(Template.MODE_STARTS_WITH)

    getTunnelService.setExtensionsTunnel(true)

    return rootRouter
  }
}

class TestResource extends ServerResource {

  @Get("xml|json")
  def represent(v:Variant):String = {
    return v.getMediaType match {
          case MediaType.TEXT_XML | MediaType.APPLICATION_XML => <response><message>Hello from Restlet!</message></response>.toString
          case MediaType.APPLICATION_JSON => "{\"message\": \"Hello from Restlet\"}"
        }
  }
}

First, the root router’s matching mode must be set to Template.MODE_STARTS_WITH, otherwise it will try to match the full absolute URI path and not find any nested resources. So the matching mode is very important when you’re working with nested resources.

Second, you set the extensions tunnel property to true: getTunnelService.setExtensionsTunnel(true). This turns on the extension tunneling service and performs content negotiation based on the URI’s extension. Note: if an extension is not provided, it falls back to the first available representation supported by the resource. It can get more complicated, I believe, with other configurations, but this is what happens in the simplest scenario.
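To see that fallback in action, request the resource without an extension; given the @Get("xml|json") annotation below, I’d expect the XML variant to come back first (an illustration, not captured output):

  $ curl http://localhost:8080/api/v1/test
  <response><message>Hello from Restlet!</message></response>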

Now, with content negotiation on, the resource has to conditionally infer the MediaType requested and provide its representation for that MediaType. In Scala this is very elegantly done using the super flexible match/case construct. This construct can be used like Java’s switch statement, but it is way more powerful and allows for advanced pattern matching. As you can see, I check for both xml and json media types and provide the proper representation. The supported media types are declared through the @Get annotation. For more info, see Restlet’s annotations and Resource documentation.

Now, accessing the resources yields the following results:

  $ curl http://localhost:8080/api/v1/test.xml
  <response><message>Hello from Restlet!</message></response>

  $ curl http://localhost:8080/api/v1/test.json
  {"message": "Hello from Restlet"}

5 Feb 10

NOSQL Databases for web CRUD (CouchDB) – Shows/Views

There are many applications that lend themselves easily to the CRUD paradigm. Even if only 80% of the application’s functionality is pure CRUD, one can benefit from a simpler storage model. For so many years many (including myself) thought that storing application state for enterprise-grade applications meant we had one option: an RDBMS. Not that other models weren’t available, but their prevalence was not as high, which made one question the quality/stability and long-term health of such software. So we’ve gotten accustomed to approaching state persistence by sticking everything into one hole. If it didn’t fit, we trimmed it, cut it, squeezed it, stomped on it, but we made it fit. Then when it was time to pull it back out, ah, we repeated the procedure. ORMs are one of the most popular remedies for such procedures. But if you’ve ever developed a complex data model and actually taken the time to think about the data access strategy on both ends, application and RDBMS, you’ve quickly run into many limitations of the ORM model. I guess you can abstract away the impedance mismatch only so much, but watch out for those leaky abstractions. So if you’re still rusty on your SQL and relational theory because the great gods promised that you’ll never have to worry about it if you use an ORM, you’d better get to learning, unless you’re planning on maintaining a ToDo List application for the rest of your life.

So, with that out of the way, let’s talk about real data persistence. There are many applications (especially web applications) that don’t lend themselves very well to the relational persistence model. There are many reasons for this, but those who have ever had to beat their heads against the wall to bend the relational model to persist their data know what I mean. By the time you’re done, you’re using an RDBMS to store non-relational data and all the benefits of the relational model are moot. You might as well store your data in an Excel spreadsheet. So what are some of these reasons?

  1. Highly dynamic structure (relational schemas are rather static, if you’re doing it the relational way (no tall/skinny tables)).
  2. The data model is not very relational. That speaks for itself, but many don’t really know when and how to identify this, as we’ve spent so much time identifying relations that don’t exist or are irrelevant to the application.
  3. Your relational schema is denormalized to the point where you’re no longer benefiting from relational database features like enforcing consistency and reducing redundancy in the data.
  4. Your relational database is bending over backwards to accommodate your read/write throughput even after you’ve denormalized (which is itself a reason to look elsewhere) and optimized, forcing you to continuously scale up to allow for increased load.
  5. You continuously marshall/unmarshall data (ORM???) to persist it and then to transform it to another data format.

Touching a bit more on bullet #5: lots of software is written using good OO principles of encapsulation. Encapsulation is the heart of software abstraction and is probably the most important principle. But people tend to abuse it. Abstractions are good when they add value, but marshalling and unmarshalling one data structure into another for no apparent reason beyond not having to learn how to deal with a particular data structure is hard to justify. So many software projects use an ORM for the sake of not learning SQL, but how far can you actually get? ORM is a perfect example of a leaky abstraction. So many projects retrieve data from a web view in JSON or url-encoded format and marshall it into objects, only to validate the data and persist it using an ORM. So now you’ve unmarshalled the data from JSON into an object graph just to then marshall it again into a SQL query to send to the database. Do we really need these superfluous steps?

I’m sure there are other reasons I haven’t mentioned here; these are the ones I personally faced when making my decisions.

A rational way of deciding on data persistence is not to automatically start writing a DDL script or grab your ER diagram tool, but rather to look at what data you have, how this data would persist in a “natural” way, how the client software needs to access it, and what the performance/scalability considerations are, and then go out and look at different persistence models to find the best match.

In my latest project, I had to think about a way to persist hierarchical data. This data will be accessed through some web medium (browser, mobile client, etc…) the majority of the time. One of the web interfaces will be an Ajax-enabled web app; another will be an iPhone and/or Android app. JSON is the lingua franca of web communication. Some will argue it’s XML, but I’ll keep my XML opinions to myself at this point.

CouchDB is a document database, which one could loosely call a key/value store. It allows for storage of JSON documents that are uniquely identified by keys. Sounds interesting? Not really. There are tens of other databases with the same capabilities, so why CouchDB? Well, in one short sentence: CouchDB is built on the web and for the web. So what does that really mean? Well, besides the JSON storage structure and its innate ability to scale horizontally, they’ve built some pretty awesome features that make it very appealing for a particular type of application. The task is to decide whether the application you’re building is that application. So as not to make this post any longer than it already is with my rant, let me describe and demonstrate some of the features that I’ve used over the last few days and why they are relevant.

Please make sure you have a CouchDB 0.10.* version installed, as well as the curl command line utility. For installation instructions see http://wiki.apache.org/couchdb/Installation

Once CouchDB is installed, you can start it using the couchdb command. Depending on your setup, you can run the following command…

sudo couchdb
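A quick sanity check that the server is up is to hit the root URL (the version string will differ depending on your install):

  curl http://localhost:5984/
  # {"couchdb":"Welcome","version":"0.10.1"}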

A little bit of a background though before we get any further…

We’re going to store hierarchical data, which JSON is a natural fit for. One of the issues we have is that in our industry there are numerous data standards, and they are all defined either in XML or in some delimited rectangular format. One major use case involves performing CRUD operations on the data from a variety of sources (web app, mobile app, etc…) as well as being able to emit this data in one of the industry-standard formats for integration purposes.

CouchDB exposes a RESTful API, so it’s rather easy to use from any language which supports HTTP. Most popular languages have client libraries on top of that, which abstract away the raw HTTP. Here is a list of available clients: http://wiki.apache.org/couchdb/Basics. For our purposes we’re going to use curl, a command line utility which allows us to make HTTP requests. So let’s see how we can easily accomplish this with CouchDB.

Now that CouchDB is successfully running, let’s create a database and insert some sample data…

curl -X PUT "http://localhost:5984/sample_db"

The above line creates a database called sample_db. If the command is successful, you will see the following output: {"ok":true}. Now let’s add three documents to this database. The JSON data files we’re sending are found in the code snippets below, labeled accordingly, so make sure they are in the directory from which you run the commands.

curl -X PUT -d @rec1.json "http://localhost:5984/sample_db/record1"
curl -X PUT -d @rec2.json "http://localhost:5984/sample_db/record2"
curl -X PUT -d @rec3.json "http://localhost:5984/sample_db/record3"

Each command should yield a JSON response with "ok" set to true if the add succeeded. Here is what one would expect from the first command: {"ok":true,"id":"record1","rev":"1-7c15e9df17499c994439b5e3ab1951d2"}. The ok field set to true makes this a success response. The id field is set to the name of the record we created; you can see that names are set through the URL, as documents are just resources in the world of REST. The rev field displays the revision of this document. CouchDB’s concurrency model is based on MVCC, so it versions documents as it updates them, and each modification gets its own unique revision id. You can read more about this in CouchDB’s architecture and API documentation.

rec1.json

  {
    "name": "John Doe",
    "date": "2001-01-03T15:14:00-06:00",
    "children": [
      {"name": "Brian Doe", "age": 8, "gender": "Male"},
      {"name": "Katie Doe", "age": 15, "gender": "Female"}
    ]
  }

rec2.json

  {
    "name": "Ilya Sterin",
    "date": "2001-01-03T15:14:00-06:00",
    "children": [
      {"name": "Elijah Sterin", "age": 10, "gender": "Male"}
    ]
  }

rec3.json

  {
    "name": "Emily Smith",
    "date": "2001-01-03T15:14:00-06:00",
    "children": [
      {"name": "Mason Smith", "age": 3, "gender": "Male"},
      {"name": "Donald Smith", "age": 2, "gender": "Male"}
    ]
  }

Now that we have the data persisted, let’s talk about some strategies for getting the data out.
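Before we get to views, note that the simplest read is a key lookup: a GET on the document’s URI returns the stored JSON (output abbreviated here, with the server-assigned _rev elided):

  curl -X GET "http://localhost:5984/sample_db/record1"
  # {"_id":"record1","_rev":"1-...","name":"John Doe", ...}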

CouchDB supports views. They are used to query and report on the data stored in the database. Views can be permanent, meaning they are stored in CouchDB as named queries and are accessed through their name. Views can also be temporary, meaning they are executed and discarded. CouchDB computes and stores view indexes, so view operations are very efficient and can theoretically (and I believe practically) span remote nodes. Views are written as map/reduce operations, so they lend themselves well to distribution. Here is an example of a map function in a view. (Reduce functions are optional; you only need one if your query aggregates result sets.)

  function(doc) {
    if (doc.name == "Ilya Sterin") {
      emit(null, doc);
    }
  }
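To run this as a permanent (named) view, you store the map function in a design document and query it by name. A minimal sketch; the design document name (person_views) and view name (by_ilya) are my own:

person_views.json

  {
    "views": {
      "by_ilya": {
        "map": "function(doc) { if (doc.name == \"Ilya Sterin\") { emit(null, doc); } }"
      }
    }
  }

curl -X PUT -H "Content-Type: application/json" -d @person_views.json "http://localhost:5984/sample_db/_design/person_views"
curl -X GET "http://localhost:5984/sample_db/_design/person_views/_view/by_ilya"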

There are two other really cool features which allow more effective data filtering and transformation: shows and lists. The purpose of shows and lists is to render a JSON document in a different format. Shows transform a single document into another format. A show is similar to a view function, but it takes two parameters, function(doc, req): doc is the document instance being rendered and req is an abstraction over the CouchDB request object. Here is a simple show function…

  function(doc, req) {
    var person = <person />;
    person.@name = doc.name;
    person.@joined = doc.date;
    person.children = <children />;
    if (doc.children) {
      for each (var chldInst in doc.children) {
        var child = <child />;
        child.text()[0] = chldInst.name;
        child.@age = chldInst.age;
        child.@gender = chldInst.gender;
        person.children.appendChild(child);
      }
    }
    return {
      'body': person.toXMLString(),
      'headers': {
        'Content-Type': 'application/xml'
      }
    }
  }

The XML literals and inline expressions you see here are E4X, which adds native XML support as part of ECMAScript and is implemented in SpiderMonkey, the embedded JavaScript engine CouchDB uses.

This show function takes a particular JSON record and turns it into XML. Creating a show is pretty simple: you just encapsulate the function above in a design document and create that document through a PUT.

Here is the design document for the show above…

xml_show.json

  {
    "shows": {
      "toxml": "Here you inline the show function above.  Make sure all double quotes are escaped..."
    }
  }

Once you have the design document, create it…

curl -X PUT -H "Content-Type: application/json" -d @xml_show.json "http://localhost:5984/sample_db/_design/shows"

Note: in (…/_design/shows), shows is just the name of the design document; you can call it whatever you want.

Now let’s invoke the show

curl -X GET "http://localhost:5984/sample_db/_design/shows/_show/toxml/record1"

Here is the output

<person name="John Doe" joined="2001-01-03T15:14:00-06:00">
  <children>
    <child age="8" gender="Male">Brian Doe</child>
    <child age="15" gender="undefined">Katie Doe</child>
  </children>
</person>

So, that was super easy: we stored our document, which required no code on our behalf, and then we retrieved it with minimal effort using ECMAScript’s E4X facilities.

So how would I transform a record collection or view results into a different format? Well, this is where lists come in. Lists are similar to shows, but they are applied to the results of an existing view. Here is a sample list function.

  function(head, req) {
    start({'headers': {'Content-Type': 'application/xml'}});
    var people = <people/>;
    var row;
    while (row = getRow()) {
      var doc = row.value;
      var person = <person />;
      person.@name = doc.name;
      person.@joined = doc.date;
      person.children = <children />;
      if (doc.children) {
        for each (var chldInst in doc.children) {
          var child = <child />;
          child.text()[0] = chldInst.name;
          child.@age = chldInst.age;
          child.@gender = chldInst.gender;
          person.children.appendChild(child);
        }
      }
      people.appendChild(person);
    }
    send(people.toXMLString());
  }

Again, you encapsulate this list function into a design document, along with a simple view function…

xml_list.json

  {
    "views": {
      "all": {
        "map": "function(doc) { emit(null, doc); }"
      }
    },
    "lists": {
      "toxml": "Here you inline the show function above.  Make sure all double quotes are escaped as it must be stringified due to the fact that JSON can't store a function type."
    }
  }

Now, we create the design document

curl -X PUT -H "Content-Type: application/json" -d @xml_list.json "http://localhost:5984/sample_db/_design/lists"

Once the design document is created, we can request our xml document listing all person records

curl -X GET http://localhost:5984/sample_db/_design/lists/_list/toxml/all

And the output is

  <people>
    <person name="John Doe" joined="2001-01-03T15:14:00-06:00">
      <children>
        <child age="8" gender="Male">Brian Doe</child>
        <child age="15" gender="Female">Katie Doe</child>
      </children>
    </person>
    <person name="Ilya Sterin" joined="2001-01-03T15:14:00-06:00">
      <children>
        <child age="10" gender="Male">Elijah Sterin</child>
      </children>
    </person>
    <person name="Emily Smith" joined="2001-01-03T15:14:00-06:00">
      <children>
        <child age="3" gender="Male">Mason Smith</child>
        <child age="2" gender="Male">Donald Smith</child>
      </children>
    </person>
  </people>

So you can see how shows and lists are very useful and provide a convenient way to transform views into different formats.

As you can see, we created the database and stored data, and no code was required to make that happen: just collect the data through your application and make a CouchDB REST request. We also added some custom functionality, transforming the data for consumption by multiple clients using shows and lists. In my opinion, CouchDB is a great step towards what one could call a web/cloud-scale database. It has awesome abilities to integrate with web technologies and it can scale to support the ever-increasing web scale of data. In other words, it fits some application models like a glove.

I’ve barely scratched the tip of the iceberg of what CouchDB can do. We haven’t talked about result aggregates, which can be achieved with map/reduce, and we also haven’t discussed data validation and security. These features might be the topic of some future posts.


5 Jan 10

Annoying javax.net.ssl.SSLHandshakeException exception

This exception has to be the most annoying one I’ve faced over the years with Java. I’m not sure which morons wrote the SSL library, but did they think about providing an option to disable SSL certificate validation? I wasn’t aware it was a requirement to have a valid certificate. I mean sure, it’s nice and provides that warm fuzzy security feeling, but when I’m developing and/or testing, can you please provide some way to disable this annoying thing? Either way, I dug into this today and figured it out. It’s actually, as with anything else in the standard JDK, 100+ lines of code where they could have provided a simple boolean switch out of the box; instead you have to implement factories, interfaces, etc… WTF? Just to turn off certificate validation? Talk about over-engineering stuff.

So here is the code, which you can copy and paste into your project, instructions on how to use it are below…

import org.apache.commons.httpclient.ConnectTimeoutException;
import org.apache.commons.httpclient.HttpClientError;
import org.apache.commons.httpclient.params.HttpConnectionParams;
import org.apache.commons.httpclient.protocol.Protocol;
import org.apache.commons.httpclient.protocol.SecureProtocolSocketFactory;

import javax.net.SocketFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketAddress;
import java.net.UnknownHostException;

// A socket factory that blindly trusts every SSL certificate -- for development/testing only.
// It implements SecureProtocolSocketFactory, which is what provides the layered
// createSocket(Socket, host, port, autoClose) method HttpClient needs for https.
public class TrustAllSSLProtocolSocketFactory implements SecureProtocolSocketFactory {

    public static void initialize() {
        Protocol.registerProtocol("https", new Protocol("https", new TrustAllSSLProtocolSocketFactory(), 443));
    }

    private SSLContext sslcontext = null;

    private static TrustManager trustAllCerts =
            new X509TrustManager() {
                public java.security.cert.X509Certificate[] getAcceptedIssuers() { return null; }
                public void checkClientTrusted( java.security.cert.X509Certificate[] certs, String authType) {}
                public void checkServerTrusted(java.security.cert.X509Certificate[] certs, String authType) {}
            };

    /**
     * Constructor for TrustAllSSLProtocolSocketFactory.
     */
    private TrustAllSSLProtocolSocketFactory() {
        super();
    }

    private static SSLContext createSSLContext() {
        try {
            SSLContext context = SSLContext.getInstance("SSL");
            context.init(null, new TrustManager[]{trustAllCerts}, null);
            return context;
        } catch (Exception e) {
            throw new HttpClientError(e.toString());
        }
    }

    private SSLContext getSSLContext() {
        if (this.sslcontext == null) {
            this.sslcontext = createSSLContext();
        }
        return this.sslcontext;
    }

    public Socket createSocket(String host, int port, InetAddress clientHost, int clientPort)
            throws IOException, UnknownHostException {
        return getSSLContext().getSocketFactory().createSocket(host, port, clientHost, clientPort);
    }


    public Socket createSocket(final String host, final int port, final InetAddress localAddress,
                               final int localPort, final HttpConnectionParams params
    ) throws IOException, UnknownHostException, ConnectTimeoutException {
        if (params == null) throw new IllegalArgumentException("Parameters may not be null");
        int timeout = params.getConnectionTimeout();
        SocketFactory socketfactory = getSSLContext().getSocketFactory();
        if (timeout == 0) return socketfactory.createSocket(host, port, localAddress, localPort);
        else {
            Socket socket = socketfactory.createSocket();
            SocketAddress localaddr = new InetSocketAddress(localAddress, localPort);
            SocketAddress remoteaddr = new InetSocketAddress(host, port);
            socket.bind(localaddr);
            socket.connect(remoteaddr, timeout);
            return socket;
        }
    }

    public Socket createSocket(String host, int port) throws IOException, UnknownHostException {
        return getSSLContext().getSocketFactory().createSocket(host, port);
    }

    public Socket createSocket(Socket socket, String host, int port, boolean autoClose)
            throws IOException, UnknownHostException {
        return getSSLContext().getSocketFactory().createSocket(socket, host, port, autoClose);
    }

    public boolean equals(Object obj) {
        return ((obj != null) && obj.getClass().equals(TrustAllSSLProtocolSocketFactory.class));
    }

    public int hashCode() {
        return TrustAllSSLProtocolSocketFactory.class.hashCode();
    }
}

Now all you have to do is call TrustAllSSLProtocolSocketFactory.initialize() anywhere in your application initialization code, or right before you access any https resources through HttpClient. Note that Protocol.registerProtocol is a Commons HttpClient mechanism, so it won’t affect connections made through the plain java.net.URL class; those would need a similar trust-all SSLSocketFactory installed via HttpsURLConnection.
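For example, with Commons HttpClient 3.x (a quick sketch; the URL is obviously a placeholder):

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

public class TrustAllExample {
    public static void main(String[] args) throws Exception {
        // Register the trust-all factory for https before making any requests
        TrustAllSSLProtocolSocketFactory.initialize();

        HttpClient client = new HttpClient();
        GetMethod get = new GetMethod("https://self-signed.example.com/");
        int status = client.executeMethod(get);
        System.out.println(status + ": " + get.getResponseBodyAsString());
        get.releaseConnection();
    }
}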

Hope this helps, though it’s still a pretty ugly hack IMO.


21 Sep 09

Grails 1.2 dependency management

Grails 1.2 comes with great dependency management. You no longer have to manage dependencies manually and you don’t have to resort to Maven for this task. Grails uses Apache Ivy under the hood for transitive dependency management. Ivy allows you to set up multiple resolvers and resolution patterns, letting you adapt to Maven repositories and many other, even custom, repository schemes. Grails takes it a step further and exposes this through a pretty nice-looking DSL.

Application dependencies are configured in the BuildConfig.groovy file and look something like this…

grails.project.dependency.resolution = {
  inherits "global" // inherit Grails' default dependencies
  log "warn" // log level of Ivy resolver, either 'error', 'warn', 'info', 'debug' or 'verbose'
  repositories {
    grailsHome()
    mavenCentral()

    mavenRepo "http://download.java.net/maven/2/"
    mavenRepo "http://repository.jboss.com/maven2/
  }
  dependencies {
    // specify dependencies here under either 'build', 'compile', 'runtime', 'test' or 'provided' scopes eg.
    runtime 'com.mysql:mysql-connector-java:5.1.5'
  }
}

By default, Grails pulls dependencies from your Grails installation library. You can also enable (as you see above) the mavenCentral() repository. These are the defaults and don’t require any further configuration. You can also specify a custom Maven-compatible repo URL using mavenRepo.

There is also a flatDir resolver, allowing you to resolve dependencies from a local directory. You can do this as:

flatDir name: 'localRepo', dirs: '/some/path/to/repo'

So I naively tried using flatDir to resolve against a local Maven repo, which of course didn’t work, since it’s not defined with the Maven artifact resolution pattern. For that to work, you currently have to define your own resolver. I just submitted a patch to add a mavenLocal() resolver; it should make it to HEAD shortly.

So, for now, you can do

  // Fully qualified so no imports are needed in BuildConfig.groovy:
  // FileSystemResolver and IvySettings ship with the Ivy jars Grails uses.
  private org.apache.ivy.plugins.resolver.DependencyResolver createLocalMavenResolver() {
    def localMavenResolver = new org.apache.ivy.plugins.resolver.FileSystemResolver()
    localMavenResolver.local = true
    localMavenResolver.name = "localMavenResolver"
    localMavenResolver.m2compatible = true
    localMavenResolver.addArtifactPattern "${userHome}/.m2/repository/[organisation]/[module]/[revision]/[artifact]-[revision](-[classifier]).[ext]"
    def ivySettings = new org.apache.ivy.core.settings.IvySettings()
    ivySettings.defaultInit()
    localMavenResolver.settings = ivySettings
    return localMavenResolver
  }

Then use this method as…

resolver createLocalMavenResolver()

The resolver method basically allows you to specify your own Ivy resolver. But that seems like too much work for something as common as a local Maven repo, especially since anyone who’s going to use Maven dependency artifacts will probably install some locally.

So once the patch makes it in, you can just uncomment…

mavenLocal()

By default this resolver will resolve dependencies in home_dir/.m2/repository, but you can also specify your own path to the repository by just passing it as an argument…

mavenLocal("/opt/maven/repository")

Grails 1.2 is beginning to look very exciting.