Java Code Geeks

Wednesday, July 23, 2014

Discussion around TDD

Recently there have been quite a few discussions on TDD, its usefulness, and whether or not it helps with design. It started with a blog post by David Heinemeier Hansson (creator of Ruby on Rails) titled "TDD is dead". It became even more interesting when David, Martin Fowler, and Kent Beck (creator of JUnit) held a series of discussions on Google Plus. The videos for this series can be found here.
Whether you are practicing TDD or not, I think this is a very good discussion to listen to.

Sunday, August 19, 2012

When to use Java reflection

Recently I was faced with this question: when should we use Java reflection? Many frameworks use it, but we still keep saying not to use it in day-to-day work. What's the real deal with it?
There are some real problems with using reflection. From the compiler's perspective, Java reflection is a hack: it takes away all of Java's compile-time type checking and makes the language behave more like a dynamically typed one. The compiler cannot catch errors at compile time; instead, they are thrown at runtime. No IDE can safely refactor code written with reflection. If you refactor and change the signature of a method, the build will be fine, but the code can fail at runtime because some other code was using reflection to invoke the method with its old signature. These are the real reasons why we try to stay away from reflection. On top of all this, there are performance issues as well. A direct method call may be two to fifty times faster than its reflection counterpart; this is very use-case specific, but in general, code with reflection runs slower on the JVM.
But reflection is a powerful technique and it has a very clear use case. If, at compile time, component A has no knowledge of component B, but at runtime A wants to use classes of B, then A needs to use reflection. For example, the JUnit framework has no idea what test classes we are going to write, so it needs reflection to discover the test class and its methods. I hope this clears up any doubts about when to use reflection and when not to.
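The JUnit case can be sketched in a few lines. Below is a minimal, hypothetical runner that discovers and invokes methods whose names start with "test" at runtime; the class SampleTest and the naming convention are illustrative assumptions, not JUnit's actual implementation.

```java
import java.lang.reflect.Method;

// Hypothetical "test class" that the runner knows nothing about at compile time
class SampleTest {
    public void testAddition() { System.out.println("testAddition ran"); }
    public void testSubtraction() { System.out.println("testSubtraction ran"); }
    public void helperMethod() { /* not a test, should be skipped */ }
}

public class MiniRunner {
    public static void main(String[] args) throws Exception {
        // Load the class by name and inspect it at runtime, JUnit 3 style
        Class<?> clazz = Class.forName("SampleTest");
        Object instance = clazz.getDeclaredConstructor().newInstance();
        for (Method m : clazz.getMethods()) {
            if (m.getName().startsWith("test") && m.getParameterCount() == 0) {
                m.setAccessible(true); // needed because SampleTest is package-private here
                m.invoke(instance);    // compile-time type checking is bypassed at this point
            }
        }
    }
}
```

Notice that renaming testAddition would not break the build, only the runtime behavior, which is exactly the refactoring hazard described above.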

Tuesday, July 10, 2012

Client Side rendering - the new MVC

Client-side rendering is getting popular nowadays. Let me give a little bit of background and share my view that client-side rendering is a good thing.
In prehistoric days, when MVC was not very popular, people used to write plain JavaScript to render an HTML page. Managing a page in this form is hard: if you want to run a simple experiment, like giving 10% of your users a new look and feel to test whether people like the new UI, you end up putting if-then-else statements all over the JavaScript. As the requirements get complex, the UI gets complex and the JavaScript becomes unmanageable.
Then came the MVC world, where the view layer is clearly a separate layer managed on the server. Some use JSP as the view layer, others use template engines like Velocity. With a Velocity-style engine, the UI developer writes the skeleton of the HTML page, the server sends the model (data) and the template to the template engine, and the template engine renders the final HTML by merging the template with the data. Spring provides integrations with such template engines to resolve views. This was a good strategy until Ajax came into the picture. With Ajax, the basic web page loads fast, and then lots of small fragmented requests go to the server, which sends back smaller data sets. Usually the server sends back small JSON objects, and there needs to be some unit on the client side that reads the JSON and renders the HTML. This pushed the concept of template engines to the client side.
Client-side template engines like dust.js or mustache.js follow the same concept as Velocity templates: they take a template and data as input and render the HTML. The only difference is that this happens on the front end rather than the back end. One can push the concept a little further and, instead of waiting for Ajax, use client-side rendering from the very first request. The flow goes something like this -

  1. The user requests the web page in the browser
  2. The browser sends the request to the server
  3. The server sends back a basic HTML page that references the client-side template JS and the precompiled JS files
  4. The server does not close the connection, but keeps flushing JSON data to the same client connection as and when it is ready
  5. On the client side, the template engine takes the template name and the JSON data and renders them
Some useful things are happening here -
  1. Load on the server is reduced, because it is the client-side browsers, not the server, that merge the template with the data. The solution scales better as the server load goes down
  2. Since JavaScript can be cached by the browser, once the page is loaded, subsequent operations on the page become faster, as the template engine and the precompiled JavaScript templates are already in browser memory
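To make the template-plus-data idea concrete, here is a minimal sketch of the merge step. Real engines like Velocity, dust.js or mustache.js do far more (loops, partials, escaping); the {{name}} placeholder syntax and the class name TinyTemplate are illustrative assumptions, written in Java only to keep one language across this post.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of what a template engine does: merge a template with data
public class TinyTemplate {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\{\\{(\\w+)\\}\\}");

    static String render(String template, Map<String, String> model) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // Replace each {{key}} with the corresponding model value (empty if missing)
            String value = model.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String template = "<h1>Hello {{name}}, you have {{count}} items</h1>";
        Map<String, String> model = Map.of("name", "Alice", "count", "3");
        System.out.println(render(template, model));
        // -> <h1>Hello Alice, you have 3 items</h1>
    }
}
```

The whole point of client-side rendering is that this render step runs in the browser against JSON flushed from the server, instead of on the server against the model.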

Monday, July 9, 2012

Why business logic should not be in database stored procedures

A couple of years back I worked on a project that was database intensive. It was so database intensive that most of the business logic was written in database stored procedures, and the Java code was a thin wrapper that called these procedures. There is a lot that goes wrong with this model of application development. All these years I had it in mind that it is bad to build software whose entire knowledge lives in stored procedures, but I could not itemize exactly what is wrong with the approach. Fortunately, Pramod Sadalage from ThoughtWorks has spelled out the details (http://www.sadalage.com/) - here are the reasons why:
  • Writing stored procedure code is fraught with danger, as there are no modern IDEs that support refactoring or flag code smells like "variable not used" or "variable out of scope".
  • Finding usages of a given stored procedure or function usually means doing a text search of the whole code base for its name. Refactoring to change a name is therefore painful, which means names that do not make any sense get propagated, causing pain and loss of developer productivity.
  • To compile stored procedure code, you need a database. That usually means a large database install on your desktop or laptop, the other option being to connect to a central database server. Either way, developers have to carry a lot of dependent systems just to compile their code. This could be solved by database vendors providing a way to compile the code outside of the database.
  • Code complexity tools and checkers of the PMD or Checkstyle kind are very rare for stored procedures, making it almost impossible, or at least very hard, to visualize metrics around stored procedure code.
  • Unit testing stored procedures with the *Unit frameworks out there, like pl/sql unit, ounit or tsql unit, is hard, since these frameworks need to run inside the database, and integrating them with Continuous Integration exacerbates the problems further.
  • The order of creation of stored procedures becomes important once you have lots of them and they become interdependent. When creating them in a brand new database, false notifications are thrown about missing stored procedures. To get around this, I have seen teams maintain a master list of stored procedures ordered for creation, or simply recompile everything after creation ("ALTER ... RECOMPILE" was built for this). Both solutions have their own overhead.
  • When running CPU-intensive stored procedures, the database engine is the only runtime (like a JVM) available to the code. If you want to start more processes to handle more requests, that is not possible without another database engine, so the only solution left is to get a bigger box (vertical scaling).

Sunday, March 6, 2011

Spring Session Scoped Bean

Somehow my experience with Spring was limited so far in my career; it's only in the last year that I have been using Spring extensively. I recently used Spring's session-scoped beans. Most of the time we use singleton or prototype beans, but session-scoped beans are very useful when you need to create an object per HttpSession. For example, a shopping cart object that holds the list of goods a customer is buying. We could of course do this manually by creating a new Cart object per user session, but what if the lifecycle of such objects could be managed by an IoC container like Spring? Yes, Spring's session- and request-scoped beans are exactly for that: creating an object and binding it to each unique session or request.
Now the question is how to test such a bean in JUnit test cases. One way is to test just the functionality of the bean, but I was more interested in verifying that Spring really does create one object per session and can manage the lifecycle of such beans. So I created test cases that build mock HttpSession and HttpServletRequest objects and fake one HTTP user session.

Here are the steps to get session-scoped beans in JUnit test cases:
1) Create a session-aware application context, which is an XmlWebApplicationContext
2) Create a mock session and a mock HTTP request
3) Get the session-scoped beans from the context and write the necessary test cases
4) Tear down the HTTP session and request.




import junit.framework.TestCase;
import org.springframework.mock.web.MockHttpServletRequest;
import org.springframework.mock.web.MockHttpSession;
import org.springframework.mock.web.MockServletContext;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
import org.springframework.web.context.support.XmlWebApplicationContext;

public class AbstractSessionTest extends TestCase {
    protected XmlWebApplicationContext ctx;
    protected MockHttpSession session;
    protected MockHttpServletRequest request;

    protected String[] getConfigLocations() {
        return new String[] { "file:src/test/resources/test-context.xml" };
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        ctx = new XmlWebApplicationContext();
        ctx.setConfigLocations(getConfigLocations());
        ctx.setServletContext(new MockServletContext(""));
        ctx.refresh(); // refreshing a web context also registers the session/request scopes
        createSession();
        createRequest();
        // set any session attributes that the bean under test might look for
        session.setAttribute("varname", "value");
    }

    protected void createSession() {
        session = new MockHttpSession(ctx.getServletContext());
    }

    protected void endSession() {
        session.clearAttributes();
        session = null;
    }

    protected void createRequest() {
        request = new MockHttpServletRequest();
        request.setSession(session);
        request.setMethod("GET");
        // expose the mock request to Spring, so session-scoped beans can be resolved
        RequestContextHolder.setRequestAttributes(new ServletRequestAttributes(request));
    }

    protected void endRequest() {
        ((ServletRequestAttributes) RequestContextHolder.getRequestAttributes()).requestCompleted();
        RequestContextHolder.resetRequestAttributes();
        request = null;
    }

    @Override
    protected void tearDown() throws Exception {
        endRequest();
        endSession();
    }
}


One more useful tip: how do you access session variables from within a session-scoped bean? The friend here is RequestContextHolder. Using it, we can get hold of the HttpSession inside the bean and access any variable bound to the session by an HTTP filter or any upper layer that puts user-specific data into session scope.


ServletRequestAttributes attr = (ServletRequestAttributes) RequestContextHolder.currentRequestAttributes();
HttpSession session = attr.getRequest().getSession(false); // false == do not create a new session if none exists
String value = (String) session.getAttribute("varname");
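For completeness, a session-scoped bean might be declared in test-context.xml roughly like this. The bean name and class are hypothetical, and the snippet assumes the aop XML namespace is declared in the file's root element; <aop:scoped-proxy/> is only needed when injecting the session-scoped bean into a longer-lived singleton.

```xml
<!-- Hypothetical session-scoped bean: one instance per HttpSession -->
<bean id="shoppingCart" class="com.example.ShoppingCart" scope="session">
    <!-- proxy so the session-scoped cart can be injected into singleton beans -->
    <aop:scoped-proxy/>
</bean>
```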

Saturday, September 4, 2010

My experience with some of the famous anti patterns in Software

There are two anti patterns that most of the software companies suffer from. These two anti patterns are so deep in the lifecycle of a software company that at times it becomes impossible to get rid of them.
If you ever work with a DBA who has more than 10 years of experience, you will notice one thing: they usually think anything and everything can be solved with a relational database system. I am not against relational databases; they are among the best things that have ever happened to software. The concepts behind relational databases are solid, but this does not mean we should always reach for a database to solve every problem. If the data that is going to be stored is not relational in nature, why force it into a relational database? This is exactly what Google found, and they came up with BigTable, which is essentially a key/value store (or hashtable). This anti-pattern is known as the "Golden Hammer". I bet many of us suffer from it. For example, in the next team meeting when someone describes a problem, watch your response: are you offering the same solution to many different problems? One of my ex-colleagues knew Hibernate very well; mention any problem to him, and there was a good chance he would apply Hibernate to solve it!! Or the core Java developer who parses log files in Java instead of 10 lines of Perl? Or the JEE developer who uses EJB whether it is required or not? One of my friends once told me that he always uses the same handful of "design patterns" even though he knows them all!!

One can be free from this anti-pattern only by knowing many languages and many technologies, and more importantly by following Richard Feynman's philosophy of leading life with an open palm. Only if we do not lock ourselves into favorites, and make room for new ideas, can we be free from this anti-pattern.

There is one more anti-pattern, found at the team or organization level rather than in individuals. It is called "vendor lock-in". One shop might be completely invested in the Oracle database, with business logic written in Oracle PL/SQL procedures. I once worked in a team where everyone, from the director down to the junior developers, was comfortable writing business logic in PL/SQL instead of at the application layer in Java or .NET. I mentioned to the director that this could become very costly: if tomorrow the organization decides to stop doing business with Oracle, what happens to all that code? It would have to be ported to Sybase or some other database vendor; better to keep the logic at the application layer. He looked at me as if I were an alien who knew nothing about how software works.

This is one of the reasons I like Google. It is a company that tries to keep an "open palm" strategy and comes up with its own systems, like BigTable for its storage.
To be a better software developer or architect, we should learn not to create favorites and keep an open mind so that we can welcome new ideas.

Sunday, July 25, 2010

Java Lazy initialization and Double Check Locking

According to a post at JavaWorld, double-checked locking is broken. I agree with the author; let's first look at what the double-checked locking idiom is.
The snippet below is commonly used to initialize a private member variable lazily.



class SomeClass {
   private Resource resource = null;
   public Resource getResource() {
     if (resource == null)
       resource = new Resource();
     return resource;
   }
}

This is a very common use case: most of the time your code might not need resource at all, so why initialize an expensive variable eagerly? Instead, initialize it only when necessary.
This, however, will not work for multi-threaded applications, as there is a potential race condition: two threads can both see that resource is null, and one thread can overwrite the resource initialized by the other. The simple solution is synchronized access, like

class SomeClass {
   private Resource resource = null;
   public synchronized Resource getResource() {
     if (resource == null)
       resource = new Resource();
     return resource;
   }
}

But this is now slow, because whether resource has already been initialized or not, every call to getResource synchronizes across all threads.
The smarter-looking solution is the double-checked locking idiom, something like this -



class SomeClass {
   private volatile Resource resource = null;
   public Resource getResource() {
     if (resource == null) {
       synchronized (this) {
         if (resource == null)
           resource = new Resource();
       }
     }
     return resource;
   }
}

However, the Java memory model makes this pattern treacherous. Without the volatile keyword it is definitely broken: another thread may observe a non-null reference to a partially constructed Resource. Under the original memory model it was unreliable even with volatile; since Java 5 (JSR-133), declaring the field volatile does make double-checked locking correct, but the idiom remains easy to get subtly wrong.
So what is the solution? The author at JavaWorld suggests avoiding lazy initialization altogether and writing the following code -



class SomeClass {
   public static final Resource resource = new Resource();
}


But the problem with this code is that resource is always initialized when the class loads, whether you need it or not.

In my view, a better way to do lazy initialization is the "holder class" idiom, as explained in Effective Java by Joshua Bloch. It works like this -


class SomeClass {
   private static class FieldHolder {
     static final Resource field = new Resource();
   }
   public Resource getResource() {
     return FieldHolder.field;
   }
}


There is no synchronization here, so no added cost. The first call to getResource triggers initialization of the FieldHolder class, and the JVM's class-loading guarantees ensure that this happens exactly once, even with multiple threads.
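A small demo makes the laziness visible. The Resource class below is a hypothetical stand-in that prints from its constructor; nothing is constructed until the first call to getResource, and every call returns the same instance.

```java
// Hypothetical Resource that announces its construction, to make laziness visible
class Resource {
    Resource() { System.out.println("Resource constructed"); }
}

class SomeClass {
    private static class FieldHolder {
        static final Resource field = new Resource();
    }
    public Resource getResource() {
        return FieldHolder.field;
    }
}

public class HolderDemo {
    public static void main(String[] args) {
        SomeClass a = new SomeClass();
        System.out.println("SomeClass instantiated, Resource not yet constructed");
        Resource r1 = a.getResource(); // first call triggers FieldHolder class init
        Resource r2 = new SomeClass().getResource();
        System.out.println("Same instance: " + (r1 == r2));
    }
}
```

Note that instantiating SomeClass does not construct the Resource; only touching FieldHolder.field does, which is exactly the class-loading guarantee the idiom relies on.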