Java Code Geeks

Friday, June 4, 2010

It's all about the Neo - the singleton

Singleton is one of the most commonly used design patterns. However, a great deal of attention is required to get it right.
The invariant of a singleton is that exactly one instance of the object exists at all times. This sounds simple: make the constructor private,
keep a private static instance member, and you seem to have a singleton. But what if the object needs to be Serializable? The moment someone makes the singleton serializable, deserialization hands you another instance of it. To fix that, the class needs to provide a readResolve() method (declared to throw ObjectStreamException) that returns the existing instance.
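A minimal sketch of that guard, with a class name chosen here purely for illustration:

import java.io.ObjectStreamException;
import java.io.Serializable;

public class SerializableSingleton implements Serializable {

    private static final long serialVersionUID = 1L;
    private static final SerializableSingleton INSTANCE = new SerializableSingleton();

    private SerializableSingleton() {
    }

    public static SerializableSingleton getInstance() {
        return INSTANCE;
    }

    // Called by the deserialization machinery; returning the existing instance
    // prevents deserialization from manufacturing a second copy.
    private Object readResolve() throws ObjectStreamException {
        return INSTANCE;
    }
}
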
The simple implementation of a singleton is:

public class SingleObject {

    private static final SingleObject INSTANCE = new SingleObject();

    private SingleObject() {
    }

    public static SingleObject getInstance() {
        return INSTANCE;
    }
}

This is correct, but because INSTANCE is a static field, the instance is created as soon as the class is loaded, whether or not it is ever used.
The other approach is lazy initialization (and anything lazy deserves special attention to threading):

public class SingleObject {

    private static SingleObject INSTANCE = null;

    private SingleObject() {
    }

    public static SingleObject getInstance() {
        if (null == INSTANCE) {
            INSTANCE = new SingleObject();
        }
        return INSTANCE;
    }
}

This looks okay, but it is broken in a multi-threaded system: if two threads call getInstance at the same time, the check-and-create sequence can produce two instances.

One simple fix is -

public static synchronized SingleObject getInstance() {
    if (null == INSTANCE) {
        INSTANCE = new SingleObject();
    }
    return INSTANCE;
}

But now the cost of calling getInstance is high: every call must synchronize, whether or not the instance already exists, and that slows the application down. Ideally, synchronization is needed only for the one call that actually creates the instance. Because the whole getInstance method is synchronized here, the performance penalty is paid on every call,
whether the object is already created or not.
The fix is a little tricky, and this is where singleton coding gets interesting -

public class SingleObject {

    private static volatile SingleObject INSTANCE = null;

    private SingleObject() {
    }

    public static SingleObject getInstance() {
        if (null == INSTANCE) {
            synchronized (SingleObject.class) {
                if (null == INSTANCE) {
                    INSTANCE = new SingleObject();
                }
            }
        }
        return INSTANCE;
    }
}


As we can see, there is now plenty to worry about: the volatile variable and the double null check are both needed to make multiple threads work properly.

If we do not want to use synchronized at all and yet stay thread safe, we can use the "lazy initialization holder class" idiom described by Joshua Bloch.


public class SingleObject {

    private SingleObject() {
        System.out.println("It got created");
    }

    private static class ClassHolder {
        static final SingleObject instance = new SingleObject();
    }

    public static SingleObject getInstance() {
        return ClassHolder.instance;
    }

    public static void someOtherFunc() {
        System.out.println("Some other func can also be static ?");
    }
}

This achieves thread safety because the first call to getInstance is what triggers loading of the ClassHolder class, and the JVM guarantees that class initialization (which initializes the instance field) happens safely, exactly once.
This is a powerful idiom and can be very useful for lazily initializing any property of a class, not just singletons.

However, since JDK 1.5 there is an even easier way to create a singleton: the single-element enum. People who follow Joshua Bloch and Effective Java know what I am talking about.


public enum SingleObjectEnum {

    INSTANCE;

    public void otherMethod() {
        // other instance methods go here as usual
    }
}

Now SingleObjectEnum.INSTANCE is definitely a singleton, and multi-threading, serialization, and so on are not the developer's headache, because the JVM takes care of them.

Thursday, May 13, 2010

Think of context before Design

Anyone working in software makes design decisions of one kind or another every day. Sometimes we know we are designing; sometimes we make those decisions naturally, almost unknowingly, while coding. The point is, any constructive work requires choosing among various options to create or design something new, and software is no exception.
I have realized over the years that the choices we make cannot be bound by hard and fast rules, because they tend to change with context. And this is why it is so interesting.

For example, given a list of numbers to sort, we might say we shall use quicksort. Now suppose those numbers are stored in a large 4 GB file: can we read all the numbers into memory and quicksort them? We probably need to split the large file into several smaller files and perform something like a merge sort, as sketched below. So the context really changed the decision.
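A rough sketch of that split-sort-merge idea, assuming one number per line in the input file; the class name, chunk size, and temp-file handling are made up for illustration:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class ExternalSortSketch {

    // Phase 1: read chunks that fit in memory, sort each chunk, spill it to a temp file.
    static List<Path> splitAndSort(Path input, int chunkSize) throws IOException {
        List<Path> chunks = new ArrayList<>();
        List<Long> buffer = new ArrayList<>(chunkSize);
        try (BufferedReader reader = Files.newBufferedReader(input)) {
            String line;
            while ((line = reader.readLine()) != null) {
                buffer.add(Long.parseLong(line.trim()));
                if (buffer.size() == chunkSize) {
                    chunks.add(writeSortedChunk(buffer));
                    buffer.clear();
                }
            }
        }
        if (!buffer.isEmpty()) {
            chunks.add(writeSortedChunk(buffer));
        }
        return chunks;
    }

    static Path writeSortedChunk(List<Long> buffer) throws IOException {
        Collections.sort(buffer);
        Path chunk = Files.createTempFile("sorted-chunk", ".txt");
        try (BufferedWriter writer = Files.newBufferedWriter(chunk)) {
            for (Long n : buffer) {
                writer.write(n.toString());
                writer.newLine();
            }
        }
        return chunk;
    }

    // Phase 2: k-way merge of the sorted chunk files using a priority queue of {value, chunk index}.
    static void merge(List<Path> chunks, Path output) throws IOException {
        List<BufferedReader> readers = new ArrayList<>();
        PriorityQueue<long[]> heap = new PriorityQueue<>(Comparator.comparingLong((long[] e) -> e[0]));
        try (BufferedWriter writer = Files.newBufferedWriter(output)) {
            for (int i = 0; i < chunks.size(); i++) {
                BufferedReader reader = Files.newBufferedReader(chunks.get(i));
                readers.add(reader);
                String line = reader.readLine();
                if (line != null) {
                    heap.add(new long[] { Long.parseLong(line.trim()), i });
                }
            }
            while (!heap.isEmpty()) {
                long[] smallest = heap.poll();
                writer.write(Long.toString(smallest[0]));
                writer.newLine();
                String next = readers.get((int) smallest[1]).readLine();
                if (next != null) {
                    heap.add(new long[] { Long.parseLong(next.trim()), smallest[1] });
                }
            }
        } finally {
            for (BufferedReader reader : readers) {
                reader.close();
            }
        }
    }
}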

I remember one time we wrote a Perl function that read all the lines in a file and performed some operations between lines. Let's say one line has the cost of a journey from A to B and another line has the cost of the journey from B to A. The program combined the two and produced one line (out of the two) by adding the costs in both directions. It worked everywhere, then suddenly one day it stopped at some sites. The problem: when the file grew large, the program ran out of memory.
How did the file grow large? The program is supposed to run every 15 minutes, but at some sites the customer did not run it for a week, and when they re-enabled it, it failed....
Again, context broke the design decision we took.

Around a year back I was reading a site that described how one of the top architects at Microsoft conducts interviews. It was very interesting to see that he asks "design a house" type questions. The moment the candidate draws a square box on the board and specifies a living room 16 feet long, 13 feet wide, and 12 feet high, he is almost out...
Why?
Because the candidate did not ask about the "context". The house was for a giraffe, not for the interviewer, so a 12-foot-high ceiling is far too low for the animal.
A good developer will always ask these questions before writing a single line of code. If he assumes something, he will also write down the assumptions.

Agile methodology tries to solve this problem. At first, it produces a basic (you can read that as crap) version of the code with loads of assumptions. Show this to the user and give them a shock: oh my god, these assumptions are not going to work in my environment! Now we are talking: we know it is not going to work in your environment, so let's note down the assumptions we shall work on for the next six months, and give us more money :-)

Over the years I have been trying to train myself on this: design something, and also think about breaking that design, because sooner or later it will be broken by someone in support, in the field, or at a customer site. Rather than letting them find the problems, it is better for the creators to find them first.

Wednesday, March 17, 2010

JSF and PopUP

By default JSF has no support for pop-ups; unless you are using a custom JSF component library like Tomahawk (http://myfaces.apache.org/tomahawk/index.html), it is tough to achieve pop-ups in JSF.
Let's say we want to create a web page that shows a link, and on clicking that link a pop-up should appear with the details of the clicked element. This is a very standard requirement: on clicking a "username", the pop-up window should display the details of that user. In JSF this becomes tough, because on the click event we can fire JavaScript, but that JavaScript is not in sync with the backing action bean.
There is, however, an easy solution to this problem. The solution is to do the following:
1. Create one controller and bind it to the request scope
2. On click, form a URL which will look like this "newpage.jsp?param1=val"
3. The controller should read the params passed in the URL and process them accordingly; in this case the param would be the name of the user, and the controller should contact LDAP or the backend database server to fetch the details.
The controller can get the parameters passed along with the URL by calling FacesContext.getCurrentInstance().getExternalContext().getRequestParameterValuesMap() - this returns a map keyed by the parameter names passed in the URL, as sketched below.
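A minimal sketch of such a request-scoped controller, assuming a hypothetical bean name and the parameter name "param1" from the URL above; the LDAP/database lookup is only hinted at in a comment:

import java.util.Map;

import javax.faces.context.FacesContext;

public class UserDetailsController {

    private String userName;

    public UserDetailsController() {
        // Parameter name -> values map, taken from the pop-up URL (e.g. newpage.jsp?param1=val).
        Map<String, String[]> params = FacesContext.getCurrentInstance()
                .getExternalContext()
                .getRequestParameterValuesMap();
        String[] values = params.get("param1");
        if (values != null && values.length > 0) {
            userName = values[0];
            // Here the controller would contact LDAP or the backend database using
            // userName and populate whatever fields newpage.jsp displays.
        }
    }

    public String getUserName() {
        return userName;
    }
}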

Friday, March 12, 2010

My Status update

Wow, just by looking at my blog I can figure out how busy I was over the last couple of months. Well, 2010 is here with lots of activity for me. The first major change was breaking up with HP. It was a long relationship - around 7 years - and I found myself becoming very comfortable, so I decided to leave and come to the US as a consultant.
As a consultant, I have been doing some pretty good work over the last three months or so. I worked on JSF and on web services - Axis2 (and SwA).. it's cool. I want to write about JSF 1.1 and pop-ups, and will post about it as soon as I get some free time.

Monday, December 7, 2009

JAX-RPC and JAX-WS - how does the SOAP look?

Let's consider a method like the following:

public void purchaseOrder(String item, int quantity, String description)

When this method is serialized into a SOAP packet by a JAX-RPC engine, the packet looks roughly like the one below (the namespaces and values are illustrative):

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
   <soapenv:Body>
      <purchaseOrder>
         <item xsi:type="xsd:string">pencil</item>
         <quantity xsi:type="xsd:int">10</quantity>
         <description xsi:type="xsd:string">pencils for the office</description>
      </purchaseOrder>
   </soapenv:Body>
</soapenv:Envelope>

This is called SOAP encoding. The method name is the root element under the Body tag of the SOAP packet. Each parameter carries a declared type and a value, and the type is used by the SOAP brokers/engines to deserialize the values into String or int.

To be serialized into a similar SOAP packet by a JAX-WS document/literal engine, the method needs to be rewritten as "public void someMethod(PurchaseOrder po)".
With JAX-WS document/literal wrapped (wrapped means the parameters are wrapped inside an element named after the method), the method can be left unchanged.
A sample SOAP packet for JAX-WS (document/literal wrapped) is shown below, again with illustrative values:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <purchaseOrder xmlns="http://example.com/orders">
         <item>pencil</item>
         <quantity>10</quantity>
         <description>pencils for the office</description>
      </purchaseOrder>
   </soapenv:Body>
</soapenv:Envelope>

In the case of JAX-WS, the entire XML inside the Body is deserialized into a JavaBean using JAXB. The WSDL will contain the XSD for the PurchaseOrder bean.
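To make that concrete, the PurchaseOrder bean could look roughly like the sketch below; the field names simply mirror the original method parameters, and the annotation usage is only an illustration:

import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "purchaseOrder")
public class PurchaseOrder {

    private String item;
    private int quantity;
    private String description;

    // JAXB uses these getters/setters when converting between the SOAP Body and the bean.
    public String getItem() { return item; }
    public void setItem(String item) { this.item = item; }

    public int getQuantity() { return quantity; }
    public void setQuantity(int quantity) { this.quantity = quantity; }

    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }
}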

Monday, August 24, 2009

Anti Patterns

I just finished reading this book on anti-patterns. It is a very nice book that describes how we (developers as well as architects and managers) fail to deliver a software project properly even with all our good intentions. Sometimes we follow the wrong architecture; sometimes it is the deadly sins of sloth, narrow-mindedness, apathy towards technology, and so on.
I quite like the anecdotal evidence the authors have come up with for each of the well-known anti-patterns. While reading the book, I sometimes jumped out of my chair and shouted "oh god, I have experienced such problems before". I am sure that as a developer, lead, or budding architect I will be much more careful from now on to look for anti-patterns while designing as well as while coding or reviewing.

Sunday, May 31, 2009

Futuristic Build Process

Build – by definition – is to compile all the source code of a project and create the deliverables. For example, a build compiles all the Java code using Ant or Maven and then creates the executable jar, war, or ear files.

An effective build system goes a long way toward helping the project. Nightly, or at least weekly, development builds are essential for checking the status of the project. Let's classify build systems by their effectiveness.

Step 0 – The build system catches syntax errors. This is really basic and a must for any build system: if there is a syntax error in any code, the build should fail. It usually does, since the compiler stops on a syntax error.

Step 1 – The build fails if a unit test fails. This is where most current projects are. The build runs the compilation, followed by the creation of the deployable jars/wars, and as the last step fires the JUnit test cases. If a JUnit test case fails, the build breaks. This ensures the build output is sane and no unit tests have failed against it.

Step 2 – Now let's consider a build system that breaks if a code checkin has higher complexity than the standard allows, or if the code coverage of the checkin is too low. Several software metrics are available now, like "code coverage", "cyclomatic complexity", and the "CRAP metric (Change Risk Analyzer and Predictor)", and plugins for the CRAP and cyclomatic metrics are already available.

If the build system is integrated with the calculation of these metrics, that would guarantee uniform code complexity, and whenever the complexity numbers from the code analysis tools come out high, the build should break. This is surely a cool thing to do, considering that nowadays much of the code is auto-generated. For example, you have a web service that used framework version 1 (like Axis 1) and compiled the WSDL file to generate the stub code. You move to version 2 and the build breaks because the auto-generated code is more complex and harder to maintain.

Whether to accept higher code complexity should be a collective decision of the project team.


Step 2 is where I think the next generation of build systems will go.