JAX-WS comes with JAXB. The raw SOAP packet never reaches the web service consumer; instead, its payload gets unmarshalled into Java objects by the JAX-WS runtime.
A lot of the time, however, it is better not to convert the payload of the SOAP packet (i.e. the XML) into Java objects. Think of a situation where we receive one XML document and pass it on to another web service. This happens often in an SOA environment, where we call one web service, get some data, and forward it to a second web service. If all we are doing is converting the received XML into another XML, we should try to avoid the JAXB layer.
Let's consider the steps involved if we keep the JAXB layer -
Response SOAP --> Response object --> transformed into another object --> sent to another web service
Instead, if we remove the XML-to-object layer, it becomes -
Response SOAP --> XSLT transform into another XML format --> sent to another web service. This is faster; the only pain point is that we need to work with raw XML, for example using the StAX API, instead of using JAXB objects.
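To make this concrete, here is a rough sketch of how a consumer can skip JAXB altogether using the JAX-WS Dispatch API in PAYLOAD mode: the response body arrives as a javax.xml.transform.Source, which we push straight through an XSLT stylesheet and forward as XML. The namespace, port/service names, endpoint URL, request payload and stylesheet name below are made-up placeholders, not from any real service.

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;
import javax.xml.ws.soap.SOAPBinding;

public class XmlPassThroughClient {

    public static void main(String[] args) throws Exception {
        // Hypothetical service/port names and endpoint - replace with the real WSDL values.
        QName serviceName = new QName("http://mycom.com/orders", "OrderService");
        QName portName = new QName("http://mycom.com/orders", "OrderPort");

        Service service = Service.create(serviceName);
        service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING, "http://localhost:8080/orders");

        // PAYLOAD mode: we deal only with the body of the SOAP envelope, as raw XML.
        Dispatch<Source> dispatch =
                service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

        // Request payload as raw XML - no JAXB classes involved.
        Source request = new StreamSource(new StringReader(
                "<getOrder xmlns='http://mycom.com/orders'><id>42</id></getOrder>"));
        Source response = dispatch.invoke(request);

        // Transform the response XML directly into the format the next service expects.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("order-to-invoice.xsl")); // hypothetical stylesheet
        StringWriter out = new StringWriter();
        t.transform(response, new StreamResult(out));
        System.out.println(out); // this XML would be forwarded to the second web service
    }
}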
Monday, December 29, 2008
Friday, December 26, 2008
DAO layer in J2EE Apps
Phew... it's been a crazy time. There was a product release and things were pretty hectic, so I did not get time to write any posts. Now that the Christmas vacation is here, I need to spend time with family too :-)
Anyway, I thought of writing this post on the Data Access Object (DAO) layer in J2EE. How many times do we write DAO layers and still not get them right the first time? I was reviewing some code recently and found that people do not understand why they are writing a DAO; they write it only because it is mentioned somewhere as a good practice.
Okay, so as we all know, we introduce a DAO layer to abstract data access. For example, if your application has to run on MySQL, SQL Server, Sybase and Oracle, the DAO developer introduces an abstraction layer that hides all of these database-related differences. The client code of the DAO - which is often a Session Facade or a Value List Handler - does not have to worry about loading the JDBC driver or creating DB-specific SQL.
In doing so, the DAO layer needs to open the connection to the database, prepare the statement, create the result set, retrieve the data, and close the result set and the connection. Closing the connection and result set is important; otherwise you will leak cursors at the database layer. It is also important that the DAO layer catches all the JDBC/SQL-specific exceptions and converts them into application-specific exceptions. A lot of DAO developers do not handle the exceptions inside the DAO layer, which defeats the purpose of having the extra layer. The idea is that any code which uses the DAO should not have to touch the java.sql.* or javax.sql.* packages at all.
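Here is a minimal sketch of a DAO method along these lines. The CustomerDao class, the DataAccessException wrapper and the table/column names are made up for illustration; the point is the try/finally cleanup of the result set, statement and connection, and the translation of SQLException into an application-specific exception so callers never see java.sql.*.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CustomerDao {

    private final DataSource dataSource; // injected or looked up from JNDI

    public CustomerDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public String findCustomerName(long customerId) {
        Connection con = null;
        PreparedStatement ps = null;
        ResultSet rs = null;
        try {
            con = dataSource.getConnection();
            ps = con.prepareStatement("SELECT NAME FROM CUSTOMER WHERE ID = ?");
            ps.setLong(1, customerId);
            rs = ps.executeQuery();
            return rs.next() ? rs.getString("NAME") : null;
        } catch (SQLException e) {
            // Translate the JDBC exception so callers never depend on java.sql.*
            throw new DataAccessException("Could not load customer " + customerId, e);
        } finally {
            // Close in reverse order; leaked result sets and connections leak DB cursors.
            try { if (rs != null) rs.close(); } catch (SQLException ignored) { }
            try { if (ps != null) ps.close(); } catch (SQLException ignored) { }
            try { if (con != null) con.close(); } catch (SQLException ignored) { }
        }
    }
}

// Hypothetical application-specific exception wrapping any persistence failure.
class DataAccessException extends RuntimeException {
    DataAccessException(String message, Throwable cause) {
        super(message, cause);
    }
}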
There are many ways to create a DAO layer; for reference, see Sun's J2EE blueprints site -
http://java.sun.com/blueprints/corej2eepatterns/Patterns/DataAccessObject.html
Friday, December 5, 2008
Evolution from Inheritance to Composition to Dependency Injection (Inversion of Control)
Inheritance and composition are the most basic concepts of object-oriented design. If there is an is-a relationship between objects, we tend to use inheritance; if there is a has-a relationship, we tend to use composition. For example, a Manager is an Employee, so Manager will be a sub-class of Employee. On the other hand, a Machine has disks, so the Machine object can contain many Disk objects through composition.
The usual advice is to favour composition over inheritance. The reason is that composition allows looser coupling than inheritance. With inheritance, any change in the super-class methods can break client code that uses the super-class or any of its sub-classes. With composition, a change in the back-end class (the class that is encapsulated within another class) usually does not force changes on the client code that uses the front-end class (the class that encapsulates the back-end class).
Hence, when we develop enterprise applications, we build complex object graphs where one class holds references to many other classes. The problem is that the front-end class needs to explicitly get hold of the back-end class. For example, class A has-a class B. In that case, we often find code like this -
class A {
    private B b;

    A() {
        // A obtains its own dependency - tightly coupled to B
        this.b = new B();
    }
    // ...
}
Now it sometimes becomes a problem for class A to obtain the references to its composed classes. Think of class A as an EJB bean that is going to call another EJB bean B. In that case, A needs to look up B's home interface from JNDI. Or suppose B's reference is available in JNDI (say B is a JDBC DataSource); in that case, too, A needs to look up the reference to B from JNDI.
Instead of class A getting the reference to class B explicitly, life becomes much easier if some framework can inject or set the reference to class B into class A. As a result, class A can concentrate on its own responsibilities. This is called "Dependency Injection" (a specific form of Inversion of Control). The Spring framework makes heavy use of this concept. Even EJB 3 uses dependency injection to inject EJB references or Persistence Contexts. The dependency injection these frameworks provide can really help developers write cleaner code.
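Here is a small sketch of the same class A with the dependency handed in from outside instead of being created or looked up inside A. The wiring in main() is done by hand only to keep the example self-contained; in practice a Spring or EJB 3 container would do that part for us.

// A no longer knows how B is created or located; it only declares that it needs one.
class A {
    private final B b;

    A(B b) {            // the container (or the caller) injects the dependency
        this.b = b;
    }

    void doWork() {
        b.serve();
    }
}

class B {
    void serve() {
        System.out.println("B is serving A");
    }
}

// Without a container, the "injection" happens by hand at the composition root.
class Main {
    public static void main(String[] args) {
        A a = new A(new B()); // a Spring or EJB 3 container would do this wiring for us
        a.doWork();
    }
}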
Wednesday, November 26, 2008
Sequence Diagram - when and why
Until recently, I was not a fan of sequence diagrams. I always thought that if I have a proper class diagram and a fair idea of the use case, I do not really need to draw a sequence diagram.
It is only recently that I discovered that drawing a sequence diagram of a system often makes it very clear which interfaces are the "chatty" ones. Once the interactions between components are laid out, you can see exactly how many times a request goes to the database or to a web service over the network. That might sound obvious - surely we know how often we hit the database or a web service - but in a complex application it is not always straightforward. A sequence diagram makes the number of network round-trips easy to count, which in turn helps us improve the design, for example by introducing caching to cut down those round-trips.
Saturday, November 22, 2008
Cache vs Pool, when and why?
Caching and pooling are closely related topics in the J2EE world. We use both techniques to improve performance. For example, we pool JDBC connections because opening a database connection is costly. Similarly, we cache data as objects in memory to reduce round-trip time across the network.
Note that we used pooling in one place and caching in the other. The question is: when do we pool and when do we cache? We pool objects that carry no state we care about; we cache objects that do. For example, any JDBC connection object will do for the application to talk to the database - it does not matter which one serves you - hence we pool them. But when we read, say, customer information from the database and keep it in memory as objects, we do not want just any Customer object; we want the specific customer whose id is X or whose name is 'XYZ Inc'. So the easy way to remember it is: stateless (interchangeable) objects can be pooled, stateful (identifiable) objects need to be cached.
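A tiny sketch of the distinction (the class and method names are made up): with the pool we ask for any connection, with the cache we ask for the customer with a particular id. In a real application the pool would be the container's DataSource and the cache would usually be a proper caching library.

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;

public class PoolVsCache {

    private final DataSource pooledDataSource;                     // the pool: interchangeable connections
    private final Map<Long, Customer> customerCache = new ConcurrentHashMap<Long, Customer>(); // the cache: keyed, stateful objects

    public PoolVsCache(DataSource pooledDataSource) {
        this.pooledDataSource = pooledDataSource;
    }

    public Connection anyConnection() throws SQLException {
        // Pooling: any connection will do; we never ask for a particular one.
        return pooledDataSource.getConnection();
    }

    public Customer customerById(long id) {
        // Caching: we want *this* customer, identified by its key, not just any Customer.
        Customer c = customerCache.get(id);
        if (c == null) {
            c = loadFromDatabase(id);   // cache miss: load once and remember it
            customerCache.put(id, c);
        }
        return c;
    }

    private Customer loadFromDatabase(long id) {
        // Placeholder for a real DAO call.
        return new Customer(id, "XYZ Inc");
    }

    static class Customer {
        final long id;
        final String name;
        Customer(long id, String name) { this.id = id; this.name = name; }
    }
}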
Friday, November 21, 2008
Command Design Pattern in J2EE
Command is one of the oldest design patterns, going back to the days of Smalltalk. It is popular for its simplicity and yet its power. The heart of the pattern is a command interface with simple methods such as init() and execute(); concrete command objects implement this interface. The pattern takes a request and passes it to the proper object for processing without the client knowing the details - the client code does not need to know which concrete object is going to process the request.
In the J2EE world, there are several design patterns for wrapping the business logic (the model) while adhering to MVC. For example, EJB uses the Facade pattern to hide the model's complexity from the controller, and the Struts framework uses something called "actions". If you do not want to use EJB or a Struts-type framework, yet want to stick to MVC, the Command pattern works very well. Here is a short description of how -
Develop command beans, each with a set of setter methods to supply the parameters, getter methods to read the results after processing, one init method to initialize the environment before processing starts (for example, opening a JDBC connection), and finally one execute method that performs the real processing.
Develop one command interface with the init and execute methods, so that the controller servlet can use this interface to dispatch the processing request without knowing which concrete command object will handle it.
A lot of the time we can use Class.forName to instantiate the concrete command object. For example, in the controller servlet we can have code like this -
DBCommandBean command = (DBCommandBean) Class.forName("com.mycom." + requestType + "CommandBean").newInstance();
command.set(hashtable);   // the hashtable carries the parameter name/value pairs used for processing
command.execute();        // performs the real processing
Hashtable results = command.get();   // returns a hashtable of key/value pairs holding the results of the processing
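Putting the pieces together, here is a minimal sketch of the interface, one concrete command bean, and the reflective dispatch. The names (Command, QueryCustomerCommandBean, the com.mycom package) are made up for illustration.

package com.mycom;

import java.util.Hashtable;

// The command interface the controller servlet depends on.
interface Command {
    void set(Hashtable<String, Object> params);   // input parameters
    void init() throws Exception;                  // e.g. open a JDBC connection
    void execute() throws Exception;               // the real processing
    Hashtable<String, Object> get();               // results after processing
}

// One concrete command bean (hypothetical).
class QueryCustomerCommandBean implements Command {
    private Hashtable<String, Object> params;
    private final Hashtable<String, Object> results = new Hashtable<String, Object>();

    public void set(Hashtable<String, Object> params) { this.params = params; }

    public void init() { /* open JDBC connection, etc. */ }

    public void execute() {
        // Real processing would query the database; here we just fake a lookup.
        results.put("customerId", params.get("customerId"));
        results.put("name", "XYZ Inc");
    }

    public Hashtable<String, Object> get() { return results; }
}

// Stand-in for the controller servlet's dispatch logic.
public class CommandDispatcherDemo {
    public static void main(String[] args) throws Exception {
        String requestType = "QueryCustomer";      // would normally come from the HTTP request

        Hashtable<String, Object> params = new Hashtable<String, Object>();
        params.put("customerId", Long.valueOf(42));

        // The controller never names the concrete class; it is resolved by naming convention.
        Command command = (Command) Class
                .forName("com.mycom." + requestType + "CommandBean")
                .newInstance();
        command.set(params);
        command.init();
        command.execute();
        System.out.println(command.get());
    }
}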
Sunday, November 16, 2008
Distributed Transaction in J2EE framework
Distributed transactions are a fascinating topic. Think about developing an application that needs to manage multiple databases. We usually develop software that talks to only one database within a single transaction; this is called a local transaction. If one transaction spans multiple databases, it is known as a distributed transaction. Distributed transactions are made possible through JTA - the Java Transaction API.
Before we go further, let's get the definitions of a few terms right -
Resource - think of this as a database or a JMS queue.
Resource Manager - the component, such as the JDBC driver, that manages the resource (e.g. the database).
Transaction Manager - coordinates the transaction across resource managers.
Transaction Originator - the client code that starts the transaction.
Here the transaction originator is typically the EJB bean that kicks off the transaction. EJB containers support distributed transactions through a protocol called "two-phase commit". In two-phase commit, the transaction manager first sends a prepare request to all the resource managers before actually committing the transaction. If every resource manager answers OK, the transaction is committed in the second phase; otherwise it is rolled back.
The beauty of distributed transactions is that the transaction manager propagates the transaction context from one resource manager to another, and all of this happens in the background at the EJB container level. Of course, your container needs to support this feature; WebLogic Server, for example, supports distributed transactions and the two-phase commit protocol for enterprise applications.
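To show what this looks like from the application side, here is a minimal sketch of a bean-managed JTA transaction spanning two databases. The JNDI names, table names and SQL are assumptions; both DataSources must be XA-capable, and the container (WebLogic, for example) supplies the transaction manager that runs the two-phase commit behind ut.commit().

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class TransferService {

    public void transfer(long accountId, double amount) throws Exception {
        InitialContext ctx = new InitialContext();

        // Assumed JNDI names; both DataSources must be XA-enabled for two-phase commit.
        DataSource ordersDb  = (DataSource) ctx.lookup("jdbc/OrdersXA");
        DataSource billingDb = (DataSource) ctx.lookup("jdbc/BillingXA");
        UserTransaction ut   = (UserTransaction) ctx.lookup("java:comp/UserTransaction");

        ut.begin();   // one JTA transaction spanning both databases
        try {
            try (Connection c1 = ordersDb.getConnection();
                 Connection c2 = billingDb.getConnection()) {

                try (PreparedStatement ps = c1.prepareStatement(
                        "UPDATE ORDERS SET STATUS = 'BILLED' WHERE ACCOUNT_ID = ?")) {
                    ps.setLong(1, accountId);
                    ps.executeUpdate();
                }
                try (PreparedStatement ps = c2.prepareStatement(
                        "INSERT INTO INVOICE (ACCOUNT_ID, AMOUNT) VALUES (?, ?)")) {
                    ps.setLong(1, accountId);
                    ps.setDouble(2, amount);
                    ps.executeUpdate();
                }
            }
            ut.commit();   // the transaction manager runs two-phase commit across both resources
        } catch (Exception e) {
            ut.rollback(); // any failure rolls back the work on both databases
            throw e;
        }
    }
}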