Monday, March 1, 2010

Deployment architecture checklist 1

A piece of code in a dev environment is in a nascent state; it needs a production environment to generate cash flow for the company. This is when architecting the deployment becomes a crucial task.

1. Never underestimate the importance of a staging environment. Dev and UAT should resemble each other closely, and Staging and Production should resemble each other even more closely. Every deployment model, script, scalability, failover, and high-availability solution that needs to be in production needs first to be tested and optimized on the staging server. Once it stabilizes there, it can be safely deployed to production.

2. Answers to the following questions should be clear to you, because they will determine what your deployment diagram looks like.


 What are my hardware / software limitations?

 How critical is the system?

 How many physical servers/boxes do I have?
   Minimum two if you want failover.

 Do I have virtualization technology?

 What does my software license limit me to?
   Is it a per-CPU license, a per-virtual-machine license, or a per-server license?

 How scalable should my system be?
   If it supports x systems now, might it need to support z more in y year/s?

 How much load should my system handle, and how much CAN it actually handle?

 How robust is my system?
   If some day the transactions per second tripled, could it still work smoothly?

 How will my system have a clean high availability / failover strategy?
   Without losing or duplicating data if one server goes down.

 What existing systems will yours need to interact/integrate with?

 How do you intend to monitor the system once it has been deployed?

 How secure does the system need to be?

 Identify the distribution architecture:
   Will your application have two tiers, three tiers, or more?


Here is a stepwise guide to building a small deployment diagram for an integration system.

1. Identify the scope and make separate models:
Are you planning to deploy a version of a single application, or to scale out to the deployment of the whole system within your organization?

2. Identify the nodes and their connections.

Think about the separate modules of your application and which servers they will be deployed on.
E.g., make a diagram of all the different kinds of servers your system might need: web servers, application servers, database servers, JMS server, mail server, etc. You may represent them as boxes. In this step you need to make platform decisions, such as the hardware and operating systems to be deployed, including how the various nodes will be connected.

Add to the diagram which module will be hosted in which environment.

Connect these with the kind of interaction these modules will have (e.g. messages, acks, RMI).

Distribute software to nodes.

What you have here is the basic deployment diagram.

In my next post I will discuss how to factor further considerations like sizing, resource optimization, and redundancy into this basic diagram, so that you can reach the next level, i.e., getting your deployment architecture ready for the enterprise.



Friday, February 19, 2010

Using Mule with Xstream, tweaking the Map converter

In the Mule enterprise edition, we have the option of using JDBC transports, and oh boy, does that make our lives easier! But as with every technology, you have to jump through some hoops to make technologies talk to each other. Here is a little help for those who want to use XStream to convert a map generated by the JDBC transport into XML.


XStream itself provides various flavours of converters, a list of which is available at http://xstream.codehaus.org/converters.html

The Map converter that comes by default does a great job of generating XML out of maps, but there are a couple of constraints:
1. It will only convert maps of the types HashMap, Hashtable, java.util.LinkedHashMap, and sun.font.AttributeMap (used by java.awt.Font in JDK 6).
This causes a problem, because the Mule JDBC transport uses a customized map which doesn't conform to any of the types that the XStream Map converter supports.
 
2. The output XML will look something like this (the original snippet was lost to the blog platform; this is illustrative, using the fname/lname data from the test class below):

<map>
  <entry>
    <string>fname</string>
    <string>Rupa</string>
  </entry>
  ...
</map>

However, we would like to get the output as something like:

<customer>
  <fname>Rupa</fname>
  <lname>Majumdar</lname>
  ...
</customer>

mainly because Mule data integrator likes the input that way.

To address the above, I created a custom converter which takes a map and iterates through it, generating tag names from the entries' keys and node values from the entries' values. This solution is pretty easily available on the internet and looks like this:

import java.util.*;
import com.thoughtworks.xstream.converters.*;
import com.thoughtworks.xstream.io.*;

public class CustomMapConverter implements Converter {

    public boolean canConvert(Class type) {
        return type.equals(HashMap.class)
                || type.equals(Hashtable.class)
                || type.getName().equals("java.util.LinkedHashMap")
                || type.getName().equals("sun.font.AttributeMap"); // used by java.awt.Font in JDK 6
    }

    public void marshal(Object source, HierarchicalStreamWriter writer, MarshallingContext context) {
        Map map = (Map) source;
        for (Iterator iterator = map.entrySet().iterator(); iterator.hasNext();) {
            Map.Entry entry = (Map.Entry) iterator.next();
            writer.startNode(entry.getKey().toString());   // the key becomes the tag name
            writer.setValue(entry.getValue().toString());  // the value becomes the node text
            writer.endNode();
        }
    }

    public Object unmarshal(HierarchicalStreamReader reader, UnmarshallingContext context) {
        // No implementation; not needed in my code
        return new HashMap();
    }
}

My standalone test class looked like:

public class XStreamStandalone {

    public static void main(String[] args) {
        Map map = new HashMap();
        map.put("fname", "Rupa");
        map.put("lname", "Majumdar");
        map.put("age", "90");
        map.put("addr", "345");
        map.put("phn", "567");

        XStream xs = new XStream();
        xs.alias("customer", Map.class);  // the root tag becomes <customer>
        xs.registerConverter(new CustomMapConverter());
        String xml = xs.toXML(map);

        System.out.println("Result of tweaked XStream toXML()");
        System.out.println(xml);
    }
}
While this worked fine in the standalone test program, the moment I started sending my converter the map generated by the Mule JDBC transport (after a select query), the converter didn't get called at all.
The two reasons why the above customization doesn't work on the maps from a Mule JDBC transport select query are:
1. The map is of type org.apache.commons.dbutils.BasicRowProcessor$CaseInsensitiveHashMap.
2. The actual payload is nested.

To address the two issues, I had to modify my custom converter as follows:

public class CustomMapConverter implements Converter {

    public boolean canConvert(Class type) {
        // Accept any AbstractMap subclass, which covers
        // org.apache.commons.dbutils.BasicRowProcessor$CaseInsensitiveHashMap
        return AbstractMap.class.isAssignableFrom(type);
    }

    public void marshal(Object source, HierarchicalStreamWriter writer, MarshallingContext context) {
        Map map = (Map) source;
        for (Iterator itr = map.entrySet().iterator(); itr.hasNext();) {
            Map.Entry entry = (Map.Entry) itr.next();
            writer.startNode(String.valueOf(entry.getKey()));
            writer.setValue(String.valueOf(entry.getValue()));
            writer.endNode();
        }
    }

    public Object unmarshal(HierarchicalStreamReader reader, UnmarshallingContext context) {
        // No implementation; not needed in my code
        return new HashMap();
    }
}
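To see why the first version of canConvert was never invoked for the transport's map, here is a minimal standalone sketch: the strict equals() checks miss HashMap subclasses, while the relaxed AbstractMap check catches them. The CaseInsensitiveHashMap below is a stand-in I wrote for this sketch (just a HashMap subclass with a different class name), not the real commons-dbutils class.

```java
import java.util.AbstractMap;
import java.util.HashMap;
import java.util.Hashtable;

public class CanConvertCheck {

    // Stand-in for org.apache.commons.dbutils.BasicRowProcessor$CaseInsensitiveHashMap:
    // a HashMap subclass whose Class object is not HashMap.class itself.
    static class CaseInsensitiveHashMap extends HashMap<String, Object> {}

    // The original check: exact class matches only.
    static boolean strictCanConvert(Class<?> type) {
        return type.equals(HashMap.class)
                || type.equals(Hashtable.class)
                || type.getName().equals("java.util.LinkedHashMap");
    }

    // The relaxed check: any AbstractMap subclass qualifies.
    static boolean relaxedCanConvert(Class<?> type) {
        return AbstractMap.class.isAssignableFrom(type);
    }

    public static void main(String[] args) {
        Class<?> t = CaseInsensitiveHashMap.class;
        System.out.println("strict: " + strictCanConvert(t));   // converter never called
        System.out.println("relaxed: " + relaxedCanConvert(t)); // converter fires
    }
}
```

Running this prints false for the strict check and true for the relaxed one, which matches the behaviour I saw with the Mule-generated map.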

And now it works ...

Hope this helps ...




Thursday, February 11, 2010

Using the Mule Data Integrator

I started using MDI last week, and stumbled over glitches again and again. Very little documentation and a brand-new product do not make a good couple. I managed to successfully do an object-to-object mapping, and on the way solved quite a few errors.
Here are some pointers. Other technologies in the mix are Mule ESB, Mule MQ, Eclipse, MySQL and Oracle.

1. Download and install the Mule Data Integrator. After the installation is complete, a data-int folder should be created inside your Mule home folder (the MULE_HOME variable definitely needs to be set).


2. Assuming that you are using Eclipse with Mule, go to Help -> Software Updates -> Install to install the MDI IDE. All these steps are in the installation guide, so I am going to skip to the part where the MDI Examples (read-only) directory shows up in your Eclipse project explorer.

3. As the first step, copy the example folder and paste it under a different name in your workspace. I call it "mymapping" here. I have noticed that when you try creating an actual mapping file, the creation wizard finds only this project.
4. Create a Mule project and include the examples from MDI (there is a little checkbox at the bottom). This creates a Mule project for you, with some example data-int configuration files under the conf folder.
5. Have your Java beans ready. Implement them as Serializable; otherwise, sending them over Mule MQ will generate errors.
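A minimal sketch of such a bean (the class and field names here are illustrative, not from MDI), with a quick serialization round-trip in main to confirm it survives being sent over the wire:

```java
import java.io.*;

public class Customer implements Serializable {
    private static final long serialVersionUID = 1L;

    private String fname;
    private String lname;

    public Customer() {}  // no-arg constructor, handy for mapping frameworks

    public String getFname() { return fname; }
    public void setFname(String fname) { this.fname = fname; }
    public String getLname() { return lname; }
    public void setLname(String lname) { this.lname = lname; }

    // Quick check: serialize the bean to bytes and read it back.
    public static void main(String[] args) throws Exception {
        Customer c = new Customer();
        c.setFname("Rupa");

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(c);

        Customer back = (Customer) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println("round-trip fname: " + back.getFname());
    }
}
```

If the bean is not Serializable, the writeObject call above fails with a NotSerializableException, which is essentially the failure you hit when Mule MQ tries to transport the object.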

6. Do not forget to include the MDI jar in your build path; otherwise the data-int namespace will not be recognized. This was my error number 1.
7. In the "mymapping" MDI project, import your Java bean classes under the Structures folder. Copying and pasting them usually doesn't work. This is probably because they get converted into XML structures while MDI imports them; if you go to the actual folder on disk, you will see XML representations of the jars/classes you imported. This was my second error: I copy-pasted my class files under the Structures folder, and they started giving an error when I tried to include them in the actual mapping.


8. In your Mule configuration, add the data-int transformer declaration and use this transformer in the transformer-ref.
I faced some very peculiar problems while using this transformer:
a> ERROR: Make sure the document is in XStream format.
This error occurs if you have Serializable on the bean but you haven't specified the sourceType/resultType parameters in your transformer mapping. Your Mule transformer mapping should have sourceType="JAVA" and resultType="JAVA" if you are doing a Java bean to Java bean mapping.
b> data-int:project should be specified and be the name of the MDI project, in my case "mymapping". Make a zip of this folder once you are done creating your mapping, and place it under the mule_home -> data-int -> project folder. The data-int:project tag has an attribute called archive where you have to specify the path to the zip of your MDI project. If your MULE_HOME is set, it looks under the above path by default.
c> When I applied the transformer on my inbound endpoint, I repeatedly got an error saying the transformer needs a valid endpoint. The DataIntegratorTransformer calls the ObjectToJmsMessageTransformer, and endpoints like STDIO and JMS generate this error. If the transformer is applied at the outbound endpoint instead, the error goes away.
I am not very sure if there is another workaround, but the above strategy worked for me.
Here is an example of my mule config
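The config snippet did not survive the blog platform. Purely as a sketch, assuming the data-int namespace and only the attributes mentioned above (archive, sourceType, resultType, transformer-ref); any other element or attribute names here, such as projectName or the endpoint addresses, are guesses:

```xml
<!-- Sketch only: names beyond archive, sourceType, resultType and
     transformer-ref are assumptions, not verified MDI syntax. -->
<data-int:project name="mymapping"
                  archive="mymapping.zip"/>

<data-int:transformer name="beanToBean"
                      projectName="mymapping"
                      sourceType="JAVA"
                      resultType="JAVA"/>

<outbound-endpoint address="jms://customer.queue"
                   transformer-refs="beanToBean"/>
```

Note that, as described in point c> above, the transformer sits on the outbound endpoint, not the inbound one.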

Hope this helps.