mere technology … it's all just ones and zeros

28Nov/11

Reflections on LJCConf 2011

This last weekend I had the pleasure of getting along to the London Java Community's annual unconference, held in the swish IBM South Bank offices.

With a new baby in the house I don't get much chance to get out and get my geek on, so it was a great opportunity to catch up with folks and listen to some top-notch tech talks delivered for and by members of the LJC.

It's been a pretty big year for the Java ecosystem, as Martijn and Ben's keynote detailed - not least the eventual delivery of Java 7 and the rejuvenation of an Oracle-led JCP, in which the LJC is seeking to play its own part.

I have to say I was pretty impressed by the calibre of the sessions that I attended, and even the lightning talks had some useful takeaways for me.

For the most part, I found myself attending sessions a little outside my regular interests. First, Peter Lawrey gave some detail on achieving impressive performance benchmarks over socket communication.

Next up I caught Jim Gough and Richard Warburton trying to sell us on joining the Adopt a JSR programme for JSR-310. ThreeTen, as it is otherwise known, is the much-needed effort to standardise a long-awaited replacement for the Java date and time API. It looks very much like the logical progression of the Joda-Time project, and Jim and Richard are engaged in fleshing out an effective TCK.
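
For a flavour of what ThreeTen is aiming at, here is a minimal sketch of the kind of fluent, immutable API being proposed. Note this is my own illustration, using the class names as they eventually settled in the JSR; the API was still in flux at the time of the talk:

import java.time.LocalDate;
import java.time.Month;
import java.time.Period;

public class ThreeTenTaste {
    public static void main(String[] args) {
        // Immutable value types replace the mutable java.util.Date/Calendar pair
        LocalDate conference = LocalDate.of(2011, Month.NOVEMBER, 26);
        LocalDate reminder = conference.minusWeeks(1);

        // Periods model date-based amounts of time explicitly
        Period gap = Period.between(reminder, conference);
        System.out.println(reminder + " is " + gap.getDays() + " days before " + conference);
    }
}

Anyone who has used Joda-Time will recognise the style immediately, which is no accident given the project's lineage.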

Later I caught Ged Byrne giving a bit of a breakdown on byte code generation - or more specifically, on how the assembly of Java byte code is actually pretty straightforward when you get right down to it. Ged's particular take was that while getting down to this level and generating byte code doesn't seem to get much attention these days, it could be a rich solution space for some of the trickier JVM problems whose solutions in high-level languages aren't necessarily that desirable. To back this up he encouraged us to go back to the future with some old computer science texts relating to the Forth programming language and the fundamentals of stack-based computing. Great talk, with some really original angles.
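
To make that concrete, here is a minimal sketch of byte code assembly using the ASM library (the library choice is mine, not necessarily Ged's). It generates a Hello class whose main method prints a line - note how directly the instructions mirror a stack machine:

import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.MethodVisitor;
import static org.objectweb.asm.Opcodes.*;

public class HelloAssembler {
    public static byte[] assemble() {
        ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES);
        cw.visit(V1_6, ACC_PUBLIC, "Hello", null, "java/lang/Object", null);

        // public static void main(String[] args)
        MethodVisitor mv = cw.visitMethod(ACC_PUBLIC + ACC_STATIC, "main",
                "([Ljava/lang/String;)V", null, null);
        mv.visitCode();
        // Push System.out, push a string constant, invoke println - pure stack work
        mv.visitFieldInsn(GETSTATIC, "java/lang/System", "out", "Ljava/io/PrintStream;");
        mv.visitLdcInsn("Hello from assembled byte code");
        mv.visitMethodInsn(INVOKEVIRTUAL, "java/io/PrintStream", "println",
                "(Ljava/lang/String;)V");
        mv.visitInsn(RETURN);
        mv.visitMaxs(0, 0); // recomputed automatically with COMPUTE_FRAMES
        mv.visitEnd();
        cw.visitEnd();
        return cw.toByteArray();
    }
}

Feed the resulting bytes to a class loader and you have a runnable class. The stack-machine flavour of the instruction set is exactly where the Forth references come in.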

In the afternoon I caught @sleepyfox talking about the state of Agile and what could be the next 'big thing' in Agile, or a post-Agile world. This talk really was a bit of a gem for me, even if I found some of the implications a little depressing. Fox started out by arguing that Agile could now be considered past the peak of the adoption curve, and that its usage has become thoroughly debased from the meaning intended when the term was coined ten years ago. He went on to describe the conceptual lifecycle of a paradigm such as Agile in terms similar to those of a religious movement, which eventually breaks down as it gets further from the founders' vision. The solution, if there is one, could be the rediscovery of the original values and principles that those founding agilists put forth.

For the last talk of the day I got to hear Zoe Slattery give some tongue-in-cheek advice about managing developers. This prompted some discussion about just how far off traditional management thinking is when it comes to managing developers.

So on reflection, a couple of overriding themes seemed to shine through for me during the day.

  • Getting back to basics. From Ged's exploration of what we could do with byte code, to Fox's encouragement to rediscover the Agile foundations, it seems that despite the myriad of new technologies and toys, some of the greatest insights for today can be found by looking with new eyes at what we already have. Indeed, it seems there is pretty much nothing new in computer science these days that cannot, with a little digging, be found to have been discovered in the 1970s.
  • Inertia in the enterprise. On a more downbeat note, I couldn't help coming away feeling that despite the seeming advances of hip new JVM languages like Scala, Groovy and Clojure, the barriers to these, or any alternative JVM language, ever becoming mainstream in the enterprise are as big now as ever. The forces that make large corporate enterprises what they are seem to be the same forces that drive highly conservative technology choices. Similarly, I couldn't help feeling that the methodological advances discovered by Agile, whilst they can with some luck and dedication produce a successful outcome in an enterprise environment, are so at odds with general corporate culture that they are unlikely ever to become self-sustaining.

But all in all, a great day out, and congratulations to Barry and the team for a superb unconference. Looking forward to next year!

24May/11

Not a hobbit but a pirate

I often cast my eye over InfoQ for technical news and other bits and pieces. When I saw the thumbnail for Mike Lee's session at a recent QCon, I thought I was looking at a guy dressed as a hobbit presenting at a technical conference. It turns out on closer inspection that he is in fact dressed as a pirate, not a hobbit.

Quite a good talk, actually. He makes an interesting point about the distinction between user interface and user experience: the former sets the expectation before you actually use something, while the latter is the actual experience.

14May/11

Thoughts on the Typesafe stack

Some interesting news yesterday with the announcement of Typesafe, a new company founded by the coming together of Martin Odersky (father of Scala) and Jonas Bonér (founder of the Akka project). While there had already been some moves by Odersky towards founding some sort of corporate entity around the Scala language, this move has taken in not just Odersky and friends from the academic world, but also the central Akka players (Bonér and Klang), as well as Mark Harrah, creator of the elegant Simple Build Tool (sbt). There seems to be some significant funding too, which I think is great news for Scala developers, as the company has taken on the advancement of the Eclipse plugin, Scala IDE.

Beyond the announcement of the commercial formation, Typesafe has made available a 1.0 release of the 'Typesafe Stack', reported to be "A 100% open source, integrated distribution offering Scala, Akka, sbt, and the Scala plugin for Eclipse". However, having downloaded it, I couldn't find sbt or the Eclipse plugin - just a cobbling together of Scala 2.9 with Akka 1.1. The sense in which these technologies form a stack is, in my opinion, a bit loose, but I guess if you are looking to support them commercially it makes sense to pin down fixed versions of the underlying technologies and slap a global version on the bundle, as they have done with 1.0. Perhaps notably missing from this bundling (depending on your definition of stack) is a cut of Lift to represent the front end, but I guess that is outside the commercial scope.

Working through the getting started guide, you are taken through a simple actor-based implementation of the CPU-intensive calculation of Pi (thankfully not another Fibonacci!). In the Eclipse variant you are walked through installing the Scala IDE plugin, and optionally configuring your project to also build with sbt. Unfortunately the source code is presented via Git, which could be an immediate barrier to entry for some, but as the example is trivially small it's no problem to enter it manually. Entering a few simple case classes to represent the messages and creating a few actors has you up and running with Scala and Akka pretty quickly, and I'm happy to report the Scala IDE seems pretty stable, at least for this limited workout.
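
The tutorial itself does this with Akka actors, and rather than misquote its API here is the same decomposition sketched with nothing but the JDK - my own plain java.util.concurrent equivalent, not the Typesafe code. The idea is identical: split the series into chunks, farm the chunks out to workers, and fold the partial sums back together:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Pi {

    // Sum one chunk of the Gregory-Leibniz series: pi = 4/1 - 4/3 + 4/5 - ...
    static double chunk(int start, int length) {
        double acc = 0.0;
        for (int i = start; i < start + length; i++) {
            acc += 4.0 * (1 - (i % 2) * 2) / (2 * i + 1);
        }
        return acc;
    }

    public static void main(String[] args) throws Exception {
        final int chunkSize = 10000;
        final int chunks = 10000;
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        // Farm each chunk out to the pool, collecting a Future per partial sum
        List<Future<Double>> partials = new ArrayList<Future<Double>>();
        for (int c = 0; c < chunks; c++) {
            final int start = c * chunkSize;
            partials.add(pool.submit(new Callable<Double>() {
                public Double call() {
                    return chunk(start, chunkSize);
                }
            }));
        }

        // Gather and sum the partial results
        double pi = 0.0;
        for (Future<Double> partial : partials) {
            pi += partial.get();
        }
        pool.shutdown();
        System.out.println("pi ~= " + pi);
    }
}

The actor version replaces the thread pool and futures with message sends between workers and a master, which is where the case classes come in.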

Looking at the makeup of the company, as well as the composition of the stack, I'm left wondering how this will go forward. Will Typesafe be the vehicle that takes Akka forward as the new middleware, with Scala on its coat tails? Is Scala picking Akka as its designated 'killer app' in this sense?

Overall, I think the big news here is that the Scala IDE is getting some sound commercial support. Improving the tooling for Scala, as for any new language, is probably the single biggest thing that will encourage its adoption. It's probably also good to see the Scala and Akka guys coming together around a single actor model; reading between the lines, I'm guessing this means redundancy in some fashion for the actor implementation built into Scala itself. Lastly, I'm not sure the stack represents any great technical leap forward. Anyone interested in addressing the concurrency problem using actors on the JVM has probably already surmounted any installation issues with Scala, Akka, and probably even sbt. Whether it presents the start of a more commercially supportable concept remains to be seen.

Update: So it seems that Play, rather than Lift, has been given the nod to fill the web framework hole in the 'stack'. Interesting times.

10Jul/10

The Pragmatic Programmer

Only 10 years late, but I finally got around to reading Andy Hunt and Dave Thomas' modern classic The Pragmatic Programmer.

This great book - subtitled From Journeyman to Master - is written with a clear intent to impart as much practical wisdom, values and insight as a master might to a student.

To get this across, they introduce you one by one to 70 numbered, bite-sized tips that you can take away and apply to yourself, your approach, your project, or your environment.

Tip 1, Care About Your Craft, is really the mega-theme for the whole book. It's the one tip that doesn't get any direct explanation; the rest of the book does that very well. Nevertheless, the scene is set that this is about you and your attitude to your craft. Craft? Was that a new idea?

The rest of the book, and the other 69 tips, flesh out the philosophy of pragmatism, approaches to take, tools that can help, and ultimately how to scale the pragmatism up to whole teams.

Another theme that jumps out as I read is that this is a book about the long haul. As professional programmers we need to plan to learn, and to keep learning as long as we are programming; several tips deal directly with how we learn and continue to invest in ourselves. As project contributors we need to write code and contribute in ways that are beneficial for a project's long-term health, not detrimental - there are tips that deal with fixing problems when you see them, refactoring, making reversible decisions, and more. Many technology books read like a sales pitch from a fly-by-night salesman who you know will be long gone once you are sold and using their product. But because The Pragmatic Programmer takes such a long-haul look at ourselves and our work, you feel like Dave and Andy are sitting in the seat next to you when you are digging for requirements or firing your tracer bullets.

You probably need to be a programmer to appreciate what this is all about, and it would probably help to have at least some practical experience, but beyond that, Dave and Andy's writing is very accessible. I got through most of it with what little attention span I could muster, squeezed between the elbows and armpits of my fellow Londoners on the Underground this last month, so that tells me it's pretty readable.

This is an old book, though. Ten years have passed since it was written (an age in software), so I was pretty interested to see how much of its content is still applicable today.

Of that applicable content, how much still needs to be explicitly taught today, rather than simply assumed as universal industry knowledge?

To my surprise, almost the entirety of the book is both applicable and still in need of teaching. There are a few odd areas where it shows its age (such as page 148, which holds up EJB as a good example of a distributed system leading the way into a new world!), but these are mostly around specific tools, which were probably always going to date. The discussions around the Law of Demeter, orthogonality and the DRY principle are some of the best I have read anywhere on these subjects, and these are still areas not well understood by the majority of developers.
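
To pick one of those: here is a minimal, entirely hypothetical Java illustration of the Law of Demeter - talk to your immediate collaborators rather than reaching through them:

public class DemeterExample {
    static class Wallet {
        private final double balance;
        Wallet(double balance) { this.balance = balance; }
        double getBalance() { return balance; }
    }

    static class Customer {
        private final Wallet wallet = new Wallet(42.0);
        Wallet getWallet() { return wallet; }
    }

    static class Order {
        private final Customer customer = new Customer();
        Customer getCustomer() { return customer; }

        // Demeter-friendly: callers ask the Order, the Order asks its own
        // collaborator, and the Wallet stays an internal detail
        double customerBalance() { return customer.getWallet().getBalance(); }
    }

    public static void main(String[] args) {
        Order order = new Order();
        // Violation: the caller navigates Order -> Customer -> Wallet
        double bad = order.getCustomer().getWallet().getBalance();
        // Better: one message to an immediate collaborator
        double good = order.customerBalance();
        System.out.println(bad + " == " + good);
    }
}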

Another reason to read this classic is to help gain an understanding of the direction and speed at which our industry is travelling. Many of the ideas it contains weren't even new at the time of writing, but it doesn't take much to see in them the DNA of many of today's hip ideas, such as Test Driven Development and Domain Driven Design, as well as much of what we now consider Agile. It really goes to show how little the fundamentals have changed - or, more generally, how slowly real change happens even in a nascent field like software.

It's definitely worth reading this book. I think it will make you a better programmer.

27Jun/10

Taking charge of frameworks

Who is in control?

[Seattle Library Framework - photograph by Jan Tik - Creative Commons License]

Modern software projects are largely built on the shoulders of third-party frameworks and tools. But the tools can sometimes become more hindrance than help. How does this happen? What can we do to make sure that we drive the frameworks, and they don't drive us?

End of the affair

I was coding on a recent project when it dawned on me that I had fallen out of love with the build tool Maven. Once upon a time, I have to admit, I did love Maven. As a build tool it had made my life easier: I could do many common things quickly and easily, and I was appreciative - at least initially. Then the day came when I wanted to do something a little different with my build. Maven, rather than being there to help me, suddenly became an obstacle to that new but in no way onerous task. For the first time I had to register both the costs and the benefits of using the tool, and question whether it was still worth using.

How does it happen?

Let's take a look at how we come to be using a framework. We have some problem; we see that framework X makes it easier. We take a punt (meaning, of course, exhaustive research and due diligence) and decide to adopt it. We then have a honeymoon period as we apply our shiny new framework acquisition to the many low-hanging fruit around our project that fall well within the framework's domain of expertise. So far it's benefits all the way. Every step seems to further validate the decision to bring the tool on board. (Note: this is probably the best time to ask your boss for that pay rise you were hoping for.)
But slowly something new creeps in. Things get a little harder. The tasks that we now want to accomplish don't fall so easily within framework X's reach. It's not that they are in any way difficult tasks - before we introduced X, they would have been easily accomplished. But with X involved, we find that in order to get the job done we either have to bend X to allow it to do the new task, or take a big step around X to do it ourselves. What has happened here? Our project became so enamoured with the perceived benefits of X that we were quite happy to put it front and centre while we were picking off the easy stuff. It was only once we got to some of the harder stuff that we discovered there might be some cost involved. If only we had known about the costs of using X up front, back when we were making the decision to use it a year ago!

This is not to say that we should never have used X in the first place. For my recent project the tool was Maven, and although we discovered a lot more cost down the road than I for one ever anticipated, I still feel it was the right choice to make at the time. Even now, on the balance of the benefits and the costs, I believe we come out well on top. But this does little to soothe my disappointment and disillusionment with the tool.

What can we do?

So what could I have done differently, way back then, when the decision was being made? Is there anything I could have done to more accurately gauge the costs and benefits of applying this tool to my project? Well, almost certainly, yes.

A little more homework

The more you can learn about a framework's costs and benefits, the better. But this is not always easy. To fully appreciate both sides of the story, you need to look a lot further than the framework's own website and documentation. Invariably, its own literature will go to town on its strengths, showing you how easily you can begin picking all that low-hanging fruit for yourself. To get the other side of the story we need to rely on outsiders' experience in lieu of our own. What competing tools are out there that we could use? Who has used them? Why did they choose them over X? Does X have a user forum or mailing list? What sort of questions are being asked? Are there any that don't seem to be getting satisfactory answers? Perhaps it is difficult to find much of anything written about X that you can take on board - does this suggest that you would be a very early adopter (with an increased risk profile)? Or perhaps X is a technical outlier for a very good reason?

A more demanding test drive

Even with as much of other people's experience as we can take on board, we have to remember that every project is different. The costs and benefits of using X on one project will differ from those experienced on the next. Understanding our own project's requirements now, and having some sense of where they may lead in the future, is vital for establishing what X applied may look like. We should put this to the test in a practical sense and try to test drive X within our project (or a branch of it) on a range of different use cases - not just the low-hanging fruit. This is much like the Tracer Bullet idea that the Pragmatic Programmers speak about, or the Walking Skeleton that Pryce and Freeman describe, except that we may be talking about an existing project already at some level of maturity. Regardless, the feedback we get from trying X on some of our own project's more demanding use cases will give us a good idea of how easy things will be, and where we are likely to find pain.

Going forward

So our research comes up OK, and our test drives are looking good. We decide that we want to use X in anger. What can we do to protect ourselves from the decision we have just made backfiring? There are a couple of things.

Remember what it's there for

It seems like a trivial idea, but we must remember why we considered using X in the first place. What problem were we trying to solve? It was that component Y couldn't service 1000 concurrent requests without falling over, right? We should write this down somewhere it won't get lost - somewhere permanent, where we have a record of the decision we are making. That way, if we ever need to revisit it, we can see why and how the decision got made.

The first reason we need to remember this is that if the problem goes away (i.e. Y becomes redundant, or no longer needs to scale to 1000 concurrent requests), then we no longer have the original basis for using X. We may have decided in the meantime to use X to solve some problem that component Z has, but that is a separate concern. In other words, if the requirement disappears, so should the solution.

Scope its use

X may have what we need to solve Y's scalability problems, but it may also come with lots of other whizzy bells and features that look great on the glossy marketing sheet but have nothing to do with Y's requirement. Now that we have onboarded X, it can be awfully tempting to start using some of these bonus features elsewhere in our project. The trouble is that these features have bypassed our technical due diligence. They may be good for us, or more likely they may be seriously compromising. (We've all seen the web frameworks that, despite being MVC-oriented, 'just happen' to support SQL right out of the view layer!) I'm not pretending it's easy, particularly on larger projects with many developers, but without scoping the usage of X to just the problems we decided to use it for, we may find ourselves committed to the wrong tool for the job, even after we no longer need it for its original purpose.

Having a backout plan

If there is one thing certain in software, it is that nothing can be taken as certain. Today's critical requirement could be negated by tomorrow's business change. As Dave Thomas and Andy Hunt (of Pragmatic Programmers fame) put it: "There Are No Final Decisions". Retaining the reversibility of the technical decisions we make is not always easy, but every aspect of our software needs to be ready to adapt to change. This includes any frameworks we choose to use. We should always be ready and able to back away from any frameworks or infrastructural code that we use to support our domain-specific development work. At the end of the day it is our domain-specific code that is most important, not the frameworks or infrastructure that support it.

There's probably no getting away from the reality that modern programming is to a large degree about stitching frameworks together (http://reprog.wordpress.com/2010/03/03/whatever-happened-to-programming/). We may not like it, but we owe it to ourselves to be discriminating in the frameworks we use. We need to make sure that we control the frameworks and tell them what to do, rather than letting them tell us what we cannot.


27Apr/10

Enterprise OSGi and Apache Aries in London

I don't often manage to get along to LJC events, but I really enjoyed last night's talks by Neil Bartlett (on enterprise OSGi) and Zoe Slattery (on Apache Aries).

OSGi seems to have been on a slow but steady ascendancy for a really long time, but there do now seem to be quite a few more areas of growth beyond the usual Eclipse project and set-top box builders. With most of the app server vendors and middleware players investing in some substantive way in modularisation via OSGi, the technology is encroaching much further into the traditional enterprise Java space.

Neil first ran through the basics of OSGi before unpacking the recently released OSGi 4.2 Blueprint specification. Many of the traditional areas of the enterprise Java stack (JDBC, JTA, web containers) are addressed in some way by this release, but a number of key areas (such as JMS and JCA) have not made it in this time round.
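
For the uninitiated, Blueprint is essentially dependency injection in OSGi clothing. A minimal descriptor might look something like the following sketch - the bean and package names are hypothetical, though the namespace and elements are from the 4.2 spec:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <!-- An ordinary bean, wired up by the Blueprint container -->
  <bean id="greeter" class="org.example.internal.GreeterImpl">
    <property name="log" ref="logService"/>
  </bean>

  <!-- Publish the bean into the OSGi service registry -->
  <service ref="greeter" interface="org.example.Greeter"/>

  <!-- Consume a service published by some other bundle -->
  <reference id="logService" interface="org.osgi.service.log.LogService"/>

</blueprint>

Anyone familiar with Spring XML configuration will feel immediately at home, which is unsurprising given Blueprint's Spring Dynamic Modules heritage.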

Most of the OSGi fundamentals weren't new to me, but one thing I did take away from Neil's explanation was the notion of component adaptability. When considering Component Oriented Programming, Neil likened a component to a biological unit in relationship with its environment. A bio unit can be placed (read: reused) in different environments. In an ideal environment it will thrive; in a suboptimal one it will try to adapt. The key here is that adaptability promotes reusability, particularly in environments that change dynamically at runtime.

Zoe Slattery, who works for IBM and contributes to the Apache Aries project, spoke about the short history of Aries and gave a quick demonstration of one of the simpler sample projects that she looks after as part of her Apache project role. It was good to see this practical demonstration backing up Neil's more theoretical talk. Interestingly, Zoe mentioned how the recent inception of the Eclipse Gemini project had come as quite a surprise to many in the Apache/Geronimo space from which Aries has emerged.

I haven't played with Aries yet, but the talk was enough to convince me to give it a go sometime soon (watch this space). It's still really early days for Aries (it's in the Apache Incubator), but it will be interesting to see how it grows, and particularly the areas that complement or cross over with Eclipse Gemini.

For me, the 4.2 spec looks like a great step forward, making available in the OSGi space many technologies that one really cannot work without in the enterprise. But the omissions (notably JMS and JCA) also seem pretty big, and at the tectonic pace these specifications move, it will likely be some time before the gaps are filled.

8Apr/10

Java resolving libraries on Windows

Doing a bit of work with JNI earlier today, I came across an interesting 'feature' of Java running on Windows when trying to load dependent libraries.

If you are working with JNI, chances are that at some point in your code you will need to load a native library. On Windows, this is likely to be a DLL file. Generally you would do it like this:

System.loadLibrary("myLibrary"); // maps to myLibrary.dll on Windows

This mechanism relies on the library specified by "myLibrary" being present in one of the directories listed in the java.library.path system property. If your library is indeed present there, and it has no further dependencies, then it should load fine.

But what if you are unfortunate enough to see the following exception thrown when you try to load myLibrary:


Can't find dependent libraries
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
...

Huh? What dependent libraries? This is one of those times when a Java stack trace really could be a little more helpful and tell us what is missing. Let's say, however, that we happen to know that myLibrary has a further dependency on, let's say, myDependentLibrary.

If myDependentLibrary were then also present in java.library.path, it should be found too, right?

Actually no. At least on Windows you are still faced with:

Can't find dependent libraries
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
...

Now we really do have a dilemma. Because the exception message isn't more helpful, we cannot tell whether:

  • myDependentLibrary still couldn't be found
  • myDependentLibrary was found, but contains some further dependency
  • myDependentLibrary was found, but myLibrary contains some further dependency

So what to do? First, we need to get some understanding of the entire dependency tree. A great tool for this is Dependency Walker, which can show an expanded view of what depends on what, and highlight anything missing.

Let's say that using Dependency Walker we discover that the only unresolved dependency in the system is myLibrary's dependency on myDependentLibrary. So why didn't this resolve if it is present on java.library.path?

It turns out that Java on Windows has a nasty gotcha in this area. Under Windows, the only place searched for myDependentLibrary, or any other transitively dependent libraries, is the Windows PATH - which may not be ideal, depending on what you are doing.

One way around this is to explicitly preload a library's dependencies into the Java runtime beforehand. In other words:


// Preload leaf dependencies first, so they are already resolved
// by the time myLibrary itself is loaded
System.loadLibrary("myDependentLibrary");
System.loadLibrary("myLibrary");

Of course, to do this you need to know the exhaustive list of all your dependencies, which is rather unfortunate tight coupling, but short of putting everything on the PATH, it may be the only option.

3Jan/10

"The maven, osgi, & spring combo, is about to happen"

... taking a quick quote from Jilles van Gurp that I couldn't help but agree with.

The combination of dependency injection (Spring), runtime modularisation (OSGi) and compile/package-time modularisation (Maven 2) seems powerful, interrelated, and somewhat inevitable. The degree of crossover between these tools, however, suggests that some of the architectural design flaws in Java itself with respect to packaging and reuse are now bubbling to the surface in several places at once.

With the lack of language support for modularisation, it will be interesting to see which (if any) of these three technologies becomes the central tool in a developer's toolkit. Considering the degree to which the Spring guys have embraced OSGi, my money would be on Spring.

18Dec/09

Spring 3 goes GA

So Spring 3 was finally released this week. It's been a fairly long while coming, but I guess the guys at SpringSource wanted to make sure they had dotted their i's and crossed their t's.

I expect there will be fairly brisk uptake of this release, given it's been in the pipeline so long and the feature set is fairly well known.

I'm still interested to see how the MVC-styled REST implementation competes in the wild with JAX-RS alternatives such as Jersey. In many respects Spring is quite late to market with a RESTful offering (if you consider the GA release), given REST's recent progress up the hype curve.
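
For contrast, the Spring 3 shape looks roughly like this - a minimal sketch with hypothetical names, where JAX-RS would express the same resource with @Path and @GET:

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class BookController {

    // GET /books/{id} mapped straight onto a handler method
    @RequestMapping(value = "/books/{id}", method = RequestMethod.GET)
    @ResponseBody
    public String book(@PathVariable("id") String id) {
        // A real handler would return a domain object for a message
        // converter to render as XML or JSON
        return "book:" + id;
    }
}

The appeal for existing Spring MVC shops is obvious: the REST support is the same controller programming model they already know.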

8Oct/09

Steve Freeman – Test Driven Development book draft freely available

True to his name, Steve Freeman has been good enough to make the draft text freely available for his soon-to-be-released book on Test Driven Development.

http://www.mockobjects.com/book/

Looks like it has a lot of good content on motivation, plus examples using the awesome JMock2 library.

Hat tip: Jason @ http://www.alittlemadness.com

Update: So it turns out that the final outworking of this effort was the book Growing Object-Oriented Software, Guided by Tests, which Steve Freeman co-authored with Nat Pryce (of JMock and Hamcrest fame). This book - currently sitting half-read on my desk - is a pretty good distillation of object-oriented lessons, testing techniques, and general wisdom built from a lot of experience in real-world domains.

Structurally, the book breaks down into three parts:

  1. The first 70-odd pages contain some of the best power-to-weight-ratio text on test driven development that I have read since Kent Beck's somewhat definitive work, now pushing 10 years old. One of the great bits for me was the explanation of the inner and outer feedback loops you create when driving development from both acceptance and unit tests.

  2. The middle section, and bulk of the book, contains a long, thorough worked example in which the authors, following a robust TDD approach, attempt to walk us through the thoughts and decision-making process at each stage as they grow out a hypothetical application. I have to say this section is tough going, and it is where I have currently run somewhat out of momentum.
    The problem is not the quality of the writing, nor the validity of the approach, but the sheer difficulty of effectively serialising this kind of project progression down to book form. Perhaps it would be easier to read in one sitting, but for me, coming back to it several times in a week, I lose track fast of where we are. As the code is grown out and the changes to different files are shown, it's hard to recognise quickly which file we are looking at, or its relationship to what we are doing.

  3. The final section - which, in the interests of full disclosure, I have yet to read - contains several chapters on advanced topics, including some dealing with areas that are typically problematic to test, e.g. persistence, concurrency and asynchronous code. I hope I get time to read this properly, but simply knowing these chapters are there for future reference has value in itself.

The only other thing I would like to point out is that this book is also quietly masquerading as the missing manual for the JMock and Hamcrest libraries. If you have used either or (more likely) both of these libraries, the example usage it contains is pretty handy.
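
By way of illustration, here is the canonical JMock2 shape - the listener interface is hypothetical, and note that oneOf arrived as a synonym for one in later 2.x releases:

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

public class AuctionNotifierTest {

    // Hypothetical collaborator we want to mock out
    public interface AuctionEventListener {
        void auctionClosed();
    }

    private final Mockery context = new Mockery();

    @Test
    public void notifiesListenerWhenAuctionCloses() {
        final AuctionEventListener listener = context.mock(AuctionEventListener.class);

        // The expectation reads almost like prose - and this is where
        // Hamcrest matchers slot in for argument matching
        context.checking(new Expectations() {{
            oneOf(listener).auctionClosed();
        }});

        listener.auctionClosed(); // stands in for exercising the object under test
        context.assertIsSatisfied();
    }
}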

So not much of a review here unfortunately, but if I finish the book, and get a chance, maybe I will post one.
