Posts Tagged: Java

Oct 12

Handling GSM 03.38 encoding

We recently internationalized our application and our requirement was to send SMS messages in different languages. SMS supports the GSM 03.38 7-bit encoding, and you can also send messages in UTF-16 for characters that can't be represented in that pseudo-ASCII alphabet.

Our messages come in and have to be dispatched on the fly. Sometimes a message that is meant to be in ASCII contains enough data in, say, Japanese, that it has to be encoded in UTF-16 to make any sense.

The solution is pretty straightforward. Below is a Java code snippet that first checks whether the message is encodable as ISO-8859-1 and, if so, transliterates the message to GSM 03.38 and strips out any characters that didn't transliterate properly.
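The original snippet is no longer embedded here; below is a minimal sketch of the idea. The `GsmEncoder` class name is mine, and the `GSM_CHARS` table is an abridged, illustrative subset of the GSM 03.38 basic character set, not the full alphabet.

```java
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;

public class GsmEncoder {
    // Illustrative subset of the GSM 03.38 basic character set (not exhaustive)
    private static final String GSM_CHARS =
        "@£$¥èéùìòÇ\nØø\rÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
        + "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑܧ¿abcdefghijklmnopqrstuvwxyzäöñüà";

    // Step 1: is the message representable in ISO-8859-1 at all?
    public static boolean isGsmEncodable(String msg) {
        CharsetEncoder latin1 = Charset.forName("ISO-8859-1").newEncoder();
        return latin1.canEncode(msg);
    }

    // Step 2: keep only characters that have a GSM 03.38 representation,
    // silently dropping anything that didn't transliterate
    public static String toGsm(String msg) {
        StringBuilder sb = new StringBuilder();
        for (char c : msg.toCharArray()) {
            if (GSM_CHARS.indexOf(c) >= 0) sb.append(c);
        }
        return sb.toString();
    }
}
```

If the ISO-8859-1 check fails, the message falls back to the UTF-16 path described above.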

Of course, there are other things that need to happen that I didn't include, like trimming the string to 140 characters. For the UTF-16 hex string, 280 hex characters are allowed, since each character is represented by two (or, for surrogate pairs, four) bytes.
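A quick sketch of those limits as described; the 140-character and 280-hex-character caps come straight from the text, while the class and method names are mine.

```java
import java.nio.charset.StandardCharsets;

public class SmsLength {
    // Trim a GSM-encodable message to the 140-character cap described above
    static String trimGsm(String msg) {
        return msg.length() <= 140 ? msg : msg.substring(0, 140);
    }

    // Render the UTF-16 payload as hex (2 hex chars per byte), capped at 280.
    // Note: a blind cut at 280 could split a surrogate pair; a real
    // implementation would trim on a character boundary.
    static String toUtf16Hex(String msg) {
        StringBuilder sb = new StringBuilder();
        for (byte b : msg.getBytes(StandardCharsets.UTF_16BE)) {
            sb.append(String.format("%02X", b));
        }
        return sb.length() <= 280 ? sb.toString() : sb.substring(0, 280);
    }
}
```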

Nov 11

Our experience with distributed computing using Gridgain


I’ve been doing distributed computing in various forms for many years, some at the higher level, some at the lower level, and I can tell you that with all the research advances and toolkits out there, it’s getting easier, but it’s still not as straightforward as it should be. Similar issues exist in multi-[process|thread] implementations. Although abstraction toolkits exist and they definitely make it easier to perform such actions without knowing much about implementing distributed algorithms, they are still leaky abstractions that in most non-trivial cases leave you needing knowledge of the memory model, synchronization semantics, mutability, network topologies, etc… I’m not saying this is bad, I’m just saying that we haven’t yet reached the point where distributed or multi-[process|thread] computing is a cross-cutting application concern. We have to actively bake it into our applications. I’m not arguing that abstractions should make developers ignorant of the underlying mechanisms, just that they should be introduced at different levels of abstraction. It’s good to know what makes things tick (i.e. algorithms, data structures, etc…). Just look at the ORM world. The promise of not having to know SQL and just programming using OO took the OO world by storm. The naive thought was that if you didn’t know the latest/greatest in ORM, you weren’t worthy. Years later, it turned out to be just a fad. Most have now turned back to SQL or some abstraction that’s flexible enough to let you work as low-level or as high-level as needed. In some cases, people are turning away from SQL data stores completely, but that’s another story.

A Little History

About 3 years ago, I was in the process of starting a company with my partners and we had a dilemma. We needed to process large amounts of data in near real-time. We also needed to be able to scale this horizontally in a “near” linear fashion. Data throughput was not temporally predictable, if predictable at all. After doing some searching and trying to fight the urge to implement things from scratch, we came upon the tuple space programming model. It’s similar to the blackboard system, for those that have an AI background. This was exactly what we needed: some shared distributed memory model that we didn’t have to think about (it presented itself as a single unified tuple space), and a programming API that allowed us to distribute jobs to work on the data stored in that model. JavaSpaces is the Java specification for the tuple space model. At the time, the only company that implemented this spec was GigaSpaces. We took their toolkit for a spin and it worked. The model was pleasant to program to and things were great. That is, until they didn’t work. Debugging was difficult; the distributed abstraction leaked. Deployment was also not very straightforward. None of that was a limitation of the tuple space model; rather, it was the implementation. I’m not saying GigaSpaces didn’t have a good implementation. I actually think it was rather nice at the time and am sure it’s way better now. At one point, we wrote an intermediary home-brewed version of the system, so that we didn’t have to rush the main implementation and could flesh it out without harsh time constraints. In a few months, we ended up folding the plans to use GigaSpaces, not because of the software, but because the company [GigaSpaces] had financial difficulties and their software, not being open source, was in flux in our opinion, and we didn’t want to bet the success of our company on the commercial product of a company that looked like it was going to fold.
Years later, they are still in business (great for them), but I don’t particularly regret our decision, especially looking at today’s landscape.

Most of our backend software is written in Java, Scala, and Python. Our web CRUD front end is written in PHP. The front end has a model that reflects our business domain, and it encapsulates all of the business rules for our relational backend data store. We have a calculation process that utilizes these business rules to perform a bunch of operations, reading/writing to the database in the process. This process is very involved, as it crunches hundreds of thousands of data points and will go up to millions in the next few months. It was written in PHP. We want to rewrite it using the model of distributing the data and computation using data affinity (collocating data and computations together). We’ve done it before and it works, so we’re happy to do it again, but we want to do this right and that might take some time. In the meantime, we wanted to take the intermediary step of distributing the workload amongst multiple servers (brought up and down as needed). I’ve been looking at numerous distributed toolkits for a while, from Hadoop to Akka to Gridgain. One that has always stood out from the crowd is Gridgain, and in the last 6 months I’ve tried to find some place where it would pay its dividends. This project was it. We had a distributed scheduling service running on EC2 within a week; not bad, considering I had to learn the APIs and the various EC2 deployment ins and outs.


Our implementation has a scheduler that decides what computations need to be performed. We then schedule these computations by pushing a job to a distributed worker node. Because our job is run as a shell script (invoking PHP) and outputs statistics after it successfully runs, we launch it using Java’s ProcessBuilder class. We run the process and capture its output (in JSON). The output is then returned to the scheduling node, evaluated, and logged. The scheduler then knows that this job can run again (we need to ensure the job is a singleton when it comes to running in the grid).
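A rough Java sketch of that job body, minus the Gridgain wiring; the command and working directory here are illustrative, not our actual script.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;

public class ScriptRunner {
    // Runs a command in the given working directory and returns its stdout,
    // which in our case would be the JSON statistics the PHP script emits
    public static String run(String[] command, File workDir)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.directory(workDir);
        pb.redirectErrorStream(true); // fold stderr into stdout
        Process p = pb.start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) out.append(line).append('\n');
        }
        int exit = p.waitFor();
        if (exit != 0) throw new IOException("script exited with " + exit);
        return out.toString();
    }
}
```

The returned string is what gets parsed as JSON and shipped back to the scheduling node.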

Our implementation is in Scala. Gridgain has a rather nice Scala DSL. We used it as much as we could, but in some cases resorted to the Java API for reasons I’ll explain later.

First, here is our simple task that the scheduler (once it figures out a job needs to run) pushes to remote nodes…

The above is pretty self-explanatory. I kept a bunch of peripheral things around, like inferring the return value of the process and parsing/returning JSON.

Our scheduler is more complex, so I won’t show it, but it’s all custom business logic, nothing that has to do with scheduling a job on the grid. To schedule the job, all you have to do is…

val retVal = grid.remoteProjection().call(
      new GridTask(scriptCommand, scriptEnv, scriptDir))

retVal is now the JValue instance returned from the remote job. If you get rid of the custom business logic in the GridTask implementation, the whole thing is a few lines of code. Talk about an “abstraction”! Also, don’t let the simplicity of their examples fool you. Their API is full-blown and gives you the level of granularity you need; just ask, and ye shall have. For example, grid.remoteProjection() returns a projection of all remote nodes (not including the current node, which is local). This is important for us because we didn’t want the local node (the scheduler) doing any computations, as it’s running on a box not able to support them.


One great thing about Gridgain is that it works the same on a single node as it does on multiple nodes (on the same physical box) as it does on multiple physical nodes. You can also start multiple nodes within a single JVM. When I first heard this, I thought to myself: sounds great, but why? Nikita mentioned debugging, and then a light came on. I remembered debugging GigaSpaces stuff, where what worked on a single node sometimes didn’t on multiple nodes. Mind you, it was almost always my mistake, but debugging it was not very easy.

Our infrastructure runs on EC2. Gridgain provides EC2 images, but besides the fact that they run CentOS (I believe), which I’ve grown to dislike, I’m also a control freak when it comes to my servers. I want them clean and mean :-). I prefer debian/ubuntu boxes, so I opted to create my own AMI. Installing Gridgain was easy, and configuring it is a 2-minute task once you know what you’re doing. It took me a few hours to figure out, but with the help of the forum, the final configuration was a few lines of XML. We’re using the community edition, which comes with a rudimentary IP discovery SPI. They have much more robust discovery SPIs available in their enterprise edition. The one which I think makes the most sense on EC2 is S3 discovery. Basically, it uses S3 to write node information, and all nodes communicate using an S3 bucket. Makes sense. We weren’t ready to dish out any money for the enterprise version yet, so I had to settle for IP discovery. In our case, it wasn’t hard. Basically, the scheduler is a single node that runs behind an elastic IP address that never changes. That means the other boxes only have to know the IP address of the scheduler to make the initial communication. Once a node can connect to one other node, it joins the full grid. Because we have a single scheduler, if the scheduler goes down, the workers are no longer a part of the grid until the scheduler comes back up. This is OK for us: due to some domain details, we can only have a single scheduler at this time, and we’re OK with that single point of failure, especially since we can bring it back up in no time and the worker nodes patiently retry the join operation at an interval, rejoining the grid once the scheduler is back up. This is our topology. Gridgain supports pretty much whatever you want, including P2P topologies with no single point of failure. Below are the relevant configurations for our stuff…


The first value is the local IP address of the scheduler box. The IP is repeated in the “addresses” section, telling Gridgain that this can be the sole server and it doesn’t have to join a grid topology before it goes live. Also, shared=”true” is important, as it tells Gridgain to share configurations amongst the boxes in the grid. Without it, you’ll have an “order of operations” issue, where a master must be started before the workers. With it, that issue is moot and you can start/stop things as you please. I wish they would make this the default.

Right now, Gridgain cannot bind to a wildcard address, so you have to specify the private IP address. If it changes (e.g. the box reboots), you have to change the config too. They promised a solution in their next release, which will allow nodes to listen on the private IP and communicate over the public IP. This will help in other NAT topologies. Being able to listen on a wildcard will also help in having a config you never have to change. But even with this caveat, it’s quite a breeze.

The worker config is similar, except it only needs to know about the scheduler and does not need to operate until it has joined a topology…


There are small caveats I found, none of which created much of a hurdle.

First is serialization. In my case, I was using logback for logging; Gridgain uses log4j. We both use slf4j, which binds to the first implementation in the classpath. If you’re going to distribute a job that references something that clashes with the classpath, you have to do some classpath mangling. Removing log4j from Gridgain’s lib directory would fix the issue, but I didn’t want to customize the install. I was originally using a Scala closure as the job unit, which had no references to the log object. In theory, if that’s the unit that gets serialized and sent over the wire, the other end should not have to worry about any logback references, since they aren’t part of the serialized closure’s references. In my case, that didn’t work. Somehow the serialization decided to serialize logback-related stuff, because the top-level class where the closure was created used the logger. I’m not sure if this is a problem with the serialization or a leaky abstraction of the JVM and the fact that functions aren’t first-class citizens there; the lowest level of serialization is the class level. So I extracted the job into its own class and implemented a GridTask instead. Because the GridTask subclass didn’t reference any logger object, it was serialized as needed and sent over without causing a classpath conflict. I haven’t had the time yet to figure out whether it’s the fault of Gridgain’s optimized serializer or a side effect of the JVM (as I mentioned above). I’ll try to find some time to test this later.
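For illustration, here is a small, hypothetical Java sketch of the capture problem: an anonymous class created inside a method silently holds a reference to its enclosing instance, so serializing it drags the enclosing class (think: the class holding the logger) along, while a standalone class does not. The class names are mine, not from our codebase.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class CaptureDemo {
    interface SerializableRunnable extends Runnable, Serializable {}

    // Not Serializable; stands in for a class holding a non-serializable logger
    static class Scheduler {
        Runnable innerJob() {
            // The anonymous class keeps an implicit this$0 reference to Scheduler
            return new SerializableRunnable() {
                public void run() { System.out.println("work"); }
            };
        }
    }

    // A standalone class carries no hidden reference to any enclosing instance
    static class StandaloneJob implements SerializableRunnable {
        public void run() { System.out.println("work"); }
    }

    static boolean serializes(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (IOException e) {
            return false; // e.g. NotSerializableException: CaptureDemo$Scheduler
        }
    }
}
```

Extracting the job into its own top-level class, as described above, is exactly the `StandaloneJob` case.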

Second, Gridgain Community Edition has discovery that works great in a homogeneous topology, but for EC2 (NAT, ephemeral private IP addresses, etc…), the configuration is brittle: if any of the things I listed changes, the config must change too. This can be remedied by startup scripts, but Nikita said they’ll add better support for it in the next version (public IP communication would be a good first step; binding to wildcard interfaces would be a great second).


Overall we had an awesome experience with Gridgain. The grid application ran flawlessly during our busiest time of the year (Thanksgiving weekend). It ran so flawlessly that this morning I forgot which boxes it was physically running on.

I plan on using Gridgain in the future and hopefully utilize their data grid to rewrite our computation system to utilize in memory data/compute collocation (data affinity).

Nikita, thanks for all the help getting things sorted out in the first few days.

Sep 11

jconsole/jvisualvm rmi on ec2

I finally figured this out, thanks to Google of course. No single post or piece of documentation solved the issue, but after a 2-hour battle with various options, I finally have it working.

If you are running a Java app on EC2 and want to connect to it remotely using jconsole or jvisualvm, you need to start your Java app with a few options. Here is my configuration. Note that disabling authentication opens this up to everyone. Not good, so don’t do this in production or on a box that matters. Also, this doesn’t work with a restrictive firewall: since the RMI port is chosen randomly, you must have a rather loose firewall policy. There are ways around that with SSH tunneling, I believe, but this post won’t cover it. I might do it at some point later.

First you need a policy file. Again, this can be fine-tuned; the example below shows a dangerously loose one…

grant {
  permission java.security.AllPermission;
};

Place this file in some directory. In my example it’s sitting in my home dir and is named .java.policy

java \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9001 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.security.policy=$HOME/.java.policy \
  -Djava.rmi.server.hostname=<ec2-public-hostname> \
  -jar test.jar Runner

This starts the app with a JMX service (backed by RMI) listening on port 9001.

In jvisualvm, you can now connect to the host on port 9001. You can tune your parameters as needed, but two are crucial in my experience: com.sun.management.jmxremote.port and java.rmi.server.hostname. I had to specify these in order to make things work. Your mileage may vary.

Oct 10

State machine with Clojure macros and runtime argument inference

A few years ago, before I delved into functional programming, I had a small stint with Flex/ActionScript. ActionScript is an imperative language very similar to Java. At the time, I needed a very simple state machine with a single path of execution (basically a chain of commands). The design included a chain object, which joined command objects and executed them sequentially as long as no exceptions were thrown. Because these chains were also used for transformations and had dependencies (one command might compute something that is needed by another command), the commands had to keep state, so a global context object was used to store/retrieve it. I’m sure there are other ways of designing such a system, but it turned out to be pretty maintainable and rather clean. One thing that bothered me at the time was the implicit dependencies amongst the command objects, which relied on certain context information being there in the form of map keys: if a command changed how it stored a particular result, its dependents would have to be modified as well. Because of the lack of static typing and runtime inference (unless done at each command object level), there was no way to ensure that something wasn’t silently failing. The problem was due to the use of map structures for context storage, which, besides not having any static typing abilities, also didn’t allow the chain invocations to perform runtime inference of argument matching. The implementation was very functional and wasn’t too badly designed, but definitely not very pretty.

I don’t have access to the exact code at this time, but below is a simple example that demonstrates a similar issue in Java.
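I no longer have the original, but a minimal reconstruction along those lines might look like this: a shared map context, with the second command depending on the key the first one wrote (the class and key names are illustrative).

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Chain {
    interface Command {
        void execute(Map<String, Object> context);
    }

    // Execute each command in order, threading the shared context through
    static Map<String, Object> run(List<Command> chain) {
        Map<String, Object> context = new HashMap<>();
        for (Command c : chain) c.execute(context);
        return context;
    }

    public static void main(String[] args) {
        Map<String, Object> ctx = run(List.of(
            c -> c.put("result1", 1234),                             // first command stores a result
            c -> c.put("result2", (Integer) c.get("result1") * 2))); // second depends on the "result1" key
        System.out.println("result1: " + ctx.get("result1"));
        System.out.println("result2: " + ctx.get("result2"));
    }
}
```

Note the fragility: if the first command renamed its key, the second would silently read null and blow up at runtime, which is exactly the implicit-dependency problem described above.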

Running the above yields:

    result1: 1234
    result2: 2468

Besides the mandatory Java ceremony, it’s also not apparent to me that this can be accomplished any better without the use of reflection, which of course would add yet more boilerplate.

Macros to the rescue. If any of you aren’t familiar with what makes Lisp (besides its simple syntax allure) so powerful, you should familiarize yourself with macros. The example I give below doesn’t even make a dent in the possibilities of macros.

With a simple macro

The above can now be utilized with the following API…

Yielding (in SLIME REPL):

Executing first
Executing second: first arg
Executing third: second arg random-arg

Running a similar script with an added parameter in the third command that doesn’t exist in the chain

Throws an exception (in SLIME REPL):

'does-not-exist' argument is required!
  [Thrown class java.lang.Exception]

Of course the above can be much improved. One off-the-top-of-my-head improvement is to allow default values for arguments that don’t exist, and possibly to specify which arguments’ absence should throw an exception, while in other cases binding nil or a default value. But this concise example demonstrates abilities similar to my Java program’s, with added runtime argument name matching. I’d love to see if the same can be accomplished in a statically typed language, with compile-time vs. runtime argument checking. Going to investigate this with Haskell and Scala this week.

Oct 10

On JVM, languages, platforms, and frameworks

Today Apple announced through their Java update that their support for the JVM is now deprecated and will possibly be removed from future OS releases. The blogosphere is flaming, mostly with Java supporters who are either pissed off at Apple, worried about the future of their investment in the Java platform, or both. I don’t think the future of the Java platform should be in question due to any Apple decision; at the end of the day, there aren’t many production Java deployments on OS X, but it is a fact that a large portion of Java developers use OS X as their primary development platform. These developers, without proper support for their environments, will move to Linux and maybe even Windows. This move alone will probably not hurt Apple in the short run, but their habit of alienating different developer groups will eventually come back on them. Developers from any environment will approach cautiously, as the tamed Leopard and eventually Lion might bite them in the ass at the least expected moment.

It might be that Apple wants Oracle to take charge of maintaining the OS X port, and deprecating support might be a way to negotiate this without face-to-face negotiations. I’m fine with that; frankly, I couldn’t care less who provides the JVM, as long as one is provided and is relatively actively supported. Until that announcement happens, this is yet another bump in the future of the JVM. First the Oracle purchase of Sun, then the lawsuit, and now the decision by Apple: it definitely creates unneeded distractions for this platform’s developers all over the globe.

So I didn’t start this to gripe any more about Apple’s decision; I’m sure there are enough posts out there flooding your RSS streams to keep you busy. Rather, I wanted to question the future of languages and the industry in regards to the language/platform of the future.

Last week I attended the StrangeLoop conference in St. Louis. It was a gem of a conference, definitely the best I’ve been to in a long time. Alex, besides seeming like an overall awesome guy, has some extraordinary “brilliant people herding” abilities. How he managed to bring together a group of brilliant speakers and then convince another group of awesome developers to attend is beyond me. The conference had some great talks and panels about the latest/greatest and bleeding-edge tech stuff. One of the best panels was about the future of programming languages. The panel consisted of Guy Steele, Alex Payne, Josh Bloch, Bruce Tate, and Douglas Crockford, all of whom I have great respect for. One prevailing factor in most discussions in this panel, as well as throughout the conference, has been concurrency: in the multi-core/cpu world, what language/platform will allow this paradigm transition to happen seamlessly? The fact is that although there are some awesome innovations/implementations going on in this area (STM, actors, fork/join, and various others), none have yet abstracted the concurrency model away from the developer as seamlessly as memory management and garbage collection are handled in today’s runtime environments. But this is an exciting time to be in; many ideas are flowing around and something will appear on the horizon sooner or later. This something, as Guy Steele pointed out, will most likely be a model that allows divide/conquer, map/reduce operations to happen through language idioms and possibly seamless abstractions. Accumulators are evil 🙂

There are many languages/platforms out there today, but none have been as predominant and as overall polished as Java and the JVM. From the language perspective, Java is getting stagnant and, to some, boring, but the fact that it has an ecosystem of wonderful libraries and products is hard to ignore. The fact that all of these are bytecode compatible is even more to rave about, as with the advent of numerous great languages built on top of the JVM, it makes the transition to a different language and programming paradigm much easier. It is truly hard to think of any current platform/VM that’s more prevalent and better suited for large-scale enterprise development than the JVM. .NET comes to mind, but I doubt anyone from the non-Microsoft camp will be switching :-). There are other platforms, most notably Python and Ruby, but although both are credible, the presence of a GIL in both makes using them in a concurrent model very difficult. You can architect and deploy your system as multi-process vs. multi-thread, and arguably that model has its benefits, mostly by getting rid of shared-state concurrency issues, but we (at least I) like to have a choice. This decision shouldn’t be shoved down our throats because the language development camp doesn’t want to, doesn’t think one is necessary, or [add your own excuse here] to produce a thread-safe non-GIL threading model.

The other issue with most of these languages/platforms, as well as the other ones I like, is the deployment options. They suck! From providing modular builds to deploying production applications, they just aren’t as polished and in most cases as stable/supported as the JVM ones. Common Lisp, one of my favorites as of late, for example, is an awesome language with numerous compilers/interpreters. But Lisp doesn’t have a good packaging, dependency resolution, and build story, and even if you can get past that with some of the available half-baked solutions, when it’s time to build/compile/deploy your app, you’re fucked, unless you want to build one yourself. I enjoy such challenges on Friday/Saturday nights, but not when time is limited and milestones are due (which is most of the time).

Ruby and Python, for example, have decent package managers, gem and easy_install/pip respectively, but two problems lurk. First, lots of modules are written in C and in many cases, in my experience, are a big pain in the ass to compile, especially with today’s heterogeneous architectures (i386, x86_64, etc…). Lots of incompatibilities arise, forcing more time away from doing what I should be doing. Somehow my milestones never include the 2+ day derailments due to such issues. Maybe that’s what’s left of my optimism. The second problem only applies if you’re writing a web app, and if you are, then you know the issues. Where are those stable/supported app servers? WSGI and Rack should provide answers soon; for now, there are many options and none are without major issues. Some are a pain to install/deploy, some aren’t actively maintained. I mean, am I just being anal and asking for way too much, or am I eternally spoiled by the JVM? Is it too much to ask to bundle the application into some archive or directory structure and just drop it in somewhere or point your server config towards it? Either way, even if they ease the pain of deploying webapps, the fact that Python and Ruby are not suitable in multi-core/cpu environments where threads are needed is a show stopper for lots of apps I write. I know I can architect around the issues, but again, why should I have to program to the platform vs. the other way around? Give me the choices and trust me to make the best decision.

The next thing is native GUI development. It is true that lots of interesting apps today are developed and deployed as web apps, but that doesn’t discount the fact that there is still a need for a native GUI in lots of use cases. Swing provides a good (and in some instances really good) cross-platform GUI library which allows you to deploy your GUI across the most popular platforms with 95% or more cross-platform consistency. That sounds pretty good to me.

There are other toolkits (wxWidgets, Qt, etc…) which also have bindings to Python and Ruby, but again, with today’s multicore machines, it would be a shame not to be able to utilize those cores simultaneously due to the GIL. The bindings in languages that do provide a better concurrency story work great, but those languages still suffer from the other pain points I mentioned before (i.e. deployment, build, package management, etc…). It’s a Catch-22.

So maybe I’m missing something here, but I think the JVM is the best option we have at this time that allows for multiple platforms, languages, and paradigms, and comes with a great success story in the enterprise (build tools, deployment/modularity, enterprise-grade servers, etc…). Languages implemented on top of the JVM benefit from this quite successful ecosystem. Ah, and might I mention that great libraries exist for about anything you’re trying to do. This is also true of Python, but I can’t say the same for Ruby. Ruby has numerous gems for most tasks, but they all seem half-baked at best. There are frameworks like Rails and Sinatra, which are great and fully supported with active communities, as long as you don’t venture too far off the traditional path.

The JVM has its own set of issues: it was written with static languages in mind and lacks support for dynamic bindings, tail call optimization, and other things that would make writing languages on top of it easier. Its future is now also in question due to the new Oracle stewardship and the legal obstacles Oracle chose to pursue rather than spending that time and money on the platform. Nevertheless, the ecosystem is still flourishing, kept afloat by tons of great developers and supporting companies who care about the platform and greatly benefit from it. The JVM allows us to program in different languages while concentrating on the task at hand, not peripheral issues like compiling for different architectures, battling deployment inadequacies, not being able to utilize cores efficiently, and a variety of other issues. The JVM ecosystem might not have the most ideal solutions to these problems, but they are far better than anything else out there right now. If the people who spend their time bashing the JVM platform would spend as much time making their own platforms better, maybe we’d have other choices.

I’d love to hear others’ thoughts on this topic. What do you think about the JVM, and what’s your language/platform of choice? How do you build, deploy, and distribute your applications? What concurrency options are available on that platform, and how do they compare to others? I’m familiar with most JVM options, especially Clojure and Scala, so I’m mostly asking about anything outside of the JVM ecosystem. I hope to someday compile a list of these and present them in an objective manner; for now, all I have is my empirical opinions.