Thursday, December 17, 2009

EJB 3 SessionBeans vs. Spring

SessionBeans provide the following:
  • Instance pooling
  • Automated session state maintenance
  • Passivation/activation
  • Annotations to escape the XML hell that was EJB 2.x.
  • Native integration with JSPs, Servlets, JTA transactions, JMS providers, JAAS, etc., by virtue of being part of the J2EE stack.
Spring doesn't provide the above out of the box, which is why those points were highlighted. But Spring is irreplaceable if you need DI into plain Java classes; JdbcTemplate and JmsTemplate are a couple of very important tools in this respect. One area where Spring scores above EJB3 is AOP, which is more feature-rich than EJB interceptors.

Monday, September 7, 2009

Java logging "message duplication" phenomenon

I was stuck with this problem for a while and googled a lot before the resolution struck me. But first, what was the problem?

Problem Definition: There are times in multi-threaded java programming when you see duplicate messages in your log files.

What was I doing wrong: I had set up the logging infrastructure in a superclass that all logging classes would extend. Every time I instantiated a class that would log, it initialized the logging infrastructure for itself, and every time it did so it added another Appender (log4j) / Handler (JUL) to the Logger.

What is the resolution: Do this in the log setup code(specific for log4j):

Logger logger = Logger.getLogger("");
Enumeration appenders = logger.getAllAppenders();
if (!appenders.hasMoreElements()) {
    // initialize an Appender
    // add the appender object to the logger
}

When I was doing it wrong, I did not have the if block. Hope you benefit from this piece of wisdom :)
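The same guard applies to java.util.logging, which ships with the JDK. Below is a minimal sketch of the idea (the class and method names are mine, not from the post): only attach a handler if the logger doesn't already have one.

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Logger;

public class SafeLogSetup {
    // Guard against attaching a duplicate handler every time a class
    // initializes its logging: check getHandlers() first.
    public static Logger setup(String name) {
        Logger logger = Logger.getLogger(name);
        if (logger.getHandlers().length == 0) {     // attach only once
            logger.addHandler(new ConsoleHandler());
            logger.setUseParentHandlers(false);     // avoid double output via the root logger
        }
        return logger;
    }
}
```

Calling setup() any number of times now leaves exactly one handler on the logger, so each message appears once.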

Monday, August 17, 2009

Spring 2: some learnings while reading a book

I was reading the book "Spring in Action" from Manning to catch up with what's new in Spring 2.0. Agreed, it's kind of late to do this when Spring 3 is already in the market, but better late than never.
Personally, I feel that they've overloaded the bean creation with lots of convenience methods like these:
  • Autowiring: Use "byName", "byType", "constructor" or "autodetect" to autowire a bean by name, by type, or via its constructor. Something like this:
<bean id="kenny"
      class="com.springinaction.springidol.Instrumentalist"
      autowire="byName">
    <property name="song" value="Jingle Bells" />
</bean>
  • You can tell all the beans defined in the context file to autowire themselves with something like this:
<beans default-autowire="byName">
  • Autowiring, though convenient, might lead to wrong wirings, so I personally might resort to old-fashioned manual wiring.
  • Bean scoping: By default, beans are created as singletons. Spring therefore provides the following "scope" options when you create a bean: singleton, prototype, request & session (only valid in Spring MVC), and global-session (only valid in a portlet context).
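A prototype-scoped bean, for example, would be declared like this (bean id and class are illustrative, not from the book):

```xml
<!-- a new instance is created on every getBean() call -->
<bean id="songPicker" class="com.example.SongPicker" scope="prototype" />
```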
  • factory-method: If you already have a Java implementation that returns a singleton anyway, you can use it as a bean by declaring the method Spring should call to obtain the instance, via the factory-method bean attribute.
  • Now, this is pampering users to no end: you now have a way to tell Spring to run init and cleanup methods when a bean is created and destroyed. You do this with the init-method and destroy-method bean attributes, naming the methods of that bean to execute at the respective stages. If you use the same init and cleanup method names across all your beans, you can specify them once via the default-init-method and default-destroy-method attributes of the beans element.
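For example (bean id, class and method names are made up for illustration):

```xml
<!-- connect() runs after creation, release() runs when the container destroys the bean -->
<bean id="connectionPool" class="com.example.ConnectionPool"
      init-method="connect" destroy-method="release" />
```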
  • Parent-child concept in Spring! This is inheritance, the Spring way. If you have a bean definition that will be reused multiple times in the context file, you can use it like this:
<bean id="baseSaxophonist"
      class="com.springinaction.springidol.Instrumentalist"
      abstract="true">
    <property name="instrument" ref="saxophone" />
    <property name="song" value="Jingle Bells" />
</bean>

<bean id="kenny" parent="baseSaxophonist" />
<bean id="david" parent="baseSaxophonist" />
  • You can also abstract out the common properties into a parent-child relationship like this:
<bean id="basePerformer" abstract="true">
    <property name="song" value="Somewhere Over the Rainbow" />
</bean>

<bean id="taylor" class="com.springinaction.springidol.Vocalist" parent="basePerformer" />

<bean id="stevie" class="com.springinaction.springidol.Instrumentalist" parent="basePerformer">
    <property name="instrument" ref="guitar" />
</bean>
  • Method injection: another bit of sorcery provided by Spring 2 is method injection, whereby you can replace one method with another at runtime. This is done in two ways:
  1. Method replacement: implement the interface org.springframework.beans.factory.support.MethodReplacer:

    public class TigerReplacer implements MethodReplacer {
        public Object reimplement(Object target, Method method,
                                  Object[] args) throws Throwable {
            ...
        }
    }

    Then do something like this in the context file:

    <bean id="magicBox" class="com.springinaction.springidol.MagicBoxImpl">
        <replaced-method name="getContents" replacer="tigerReplacer" />
    </bean>

    <bean id="tigerReplacer" class="com.springinaction.springidol.TigerReplacer" />
  2. Getter injection: here you leave the getter method to be injected abstract, and then use lookup-method in the context file in the following way:

    <bean id="stevie"
          class="com.springinaction.springidol.Instrumentalist">
        <lookup-method name="getInstrument" bean="guitar" />
        <property name="song" value="Greensleeves" />
    </bean>

Wednesday, July 15, 2009

Java dependency walker tool

How many times have you been stuck with a problem like this: you have numerous jars that came with the external APIs you are using in your Java project. When it is time to bundle the application, you do not know which jar files are absolutely needed for it to run successfully. Most of the time, to save time, you bundle all the jars with the application, with the unsettling feeling at the back of your mind that this is going to be an inefficient application. Moreover, you bear the consequence of high upload times when pushing such a bloated application to the server. In this situation, haven't you wished for a tool that would tell you which jar files are absolutely necessary to bundle?

Here it is: I've made this tool, which is only one Java class. I am also bundling the Ant build file so that you do not have to spend too much time figuring out how to run it.

How to run this tool:
1. Pick one folder as the base folder for this tool. Make the folder structure src/com/gp. Copy the source code of JavaDependencyWalker.java into a file of the same name in the gp folder.
2. Now, copy the code of build.xml into a file of the same name in the base folder.
3. If you do not already have an Ant install, please download it from http://ant.apache.org and install it.
4. Edit the build.xml file and modify the three properties mentioned at the top, with the following guidelines:
  • scolon.separated.folders.ofjars = For the value of this property, put semicolon-separated absolute paths where the program can find all the jar files required for the class to run. For example, if the main class is a web service client that uses Axis2 APIs, put the path to the lib folder of the Axis2 install here. Also, do not forget to include the paths to the jars of the custom classes you have built; in particular, you must put the path to the jar that contains the main class to run!
  • main.class = The main class to run, for which you are finding the exact jar requirements.
  • args.to.class = All the arguments required for the main class to run.
5. Run the Ant command "ant run" in the base folder. If everything goes fine, you will have, at the end of the run, the list of jar files you need!

Below is the source code:
JavaDependencyWalker:
package com.gp;

import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;

import java.util.Enumeration;
import java.util.Hashtable;
import java.util.StringTokenizer;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JavaDependencyWalker {
private Hashtable m_jarsHash = new Hashtable();

public JavaDependencyWalker(String jarPaths) {
StringTokenizer stTok = new StringTokenizer(jarPaths, ";");
while (stTok.hasMoreTokens()) {
File f = new File(stTok.nextToken());
if (f.exists() && f.isDirectory()) {
addAllJars(f);
} else {
System.err.println(f.getName() +
" :: Cannot find this folder.");
}
}
}

private void addAllJars(File f) {
if (f.isDirectory()) {
File[] files = f.listFiles();
for (int i = 0; i < files.length; i++) {
addAllJars(files[i]);
}
} else {
if (f.getName().toLowerCase().endsWith(".jar") ||
f.getName().toLowerCase().endsWith(".zip")) {
try {
JarFile jf = new JarFile(f);
Enumeration entries = jf.entries();
while (entries.hasMoreElements()) {
String entryName =
((JarEntry)entries.nextElement()).getName();
if (entryName.endsWith(".class")) {
entryName =
entryName.substring(0, entryName.indexOf("."));
StringTokenizer stTok =
new StringTokenizer(entryName, "/");
String str = "";
while (stTok.hasMoreTokens()) {
str += stTok.nextToken() + ".";
}
str = str.substring(0, str.length() - 1);
m_jarsHash.put(str, f.getCanonicalPath());
} else {
// skip it
}
}
jf.close();
} catch (IOException e) {
System.err.println(f.getName() + " :: " + e.getMessage());
}
} else {
// skip this file
}
}
}

public String getJarFilepath(String classname) {
return (String)m_jarsHash.get(classname);
}

public static void usage() {
System.out.println("java -Ddeps=<semi-colon separated paths where all jars/zips may be found> -DclassToRun=<fully qualified classname to run> JavaDependencyWalker <all arguments to give to the class to run>");
}

public static void main(String[] args) {
String dependencies = System.getProperty("deps");
String classname = System.getProperty("classToRun");
if (dependencies == null || classname == null) {
usage();
System.exit(1);
}

JavaDependencyWalker javaDependencyWalker =
new JavaDependencyWalker(dependencies);

try {
String cp = "";
boolean noFound = true;
do {
Runtime rt = Runtime.getRuntime();

String cmdArr[] = null;
if (cp.equals("")) {
cmdArr = new String[args.length+2];
cmdArr[0] = "java";
cmdArr[1] = classname;
for (int i=0; i < args.length; i++) {
cmdArr[i+2] = args[i];
}
// cmd =
//"java " + args[1] + " ";
}
else {
cmdArr = new String[args.length+4];
cmdArr[0] = "java";
cmdArr[1] = "-classpath";
cmdArr[2] = cp;
cmdArr[3] = classname;
for (int i=0; i < args.length; i++) {
cmdArr[i+4] = args[i];
}
}
System.out.println("Command: " + java.util.Arrays.toString(cmdArr));
Process proc = rt.exec(cmdArr);
DataInputStream in =
new DataInputStream(proc.getErrorStream());
String line;
noFound = false;
while ((line = in.readLine()) != null) {
//System.out.println(line);
if (line.indexOf("ClassNotFoundException") != -1) {
String classNF =
line.substring(line.lastIndexOf(":") + 1).trim();
String cnfPackage = classNF.replace('/', '.');
if (javaDependencyWalker.getJarFilepath(cnfPackage) ==
null)
throw new Exception("No jar found for class " +
cnfPackage);
System.out.println("cnf: " + classNF + " found in " +
javaDependencyWalker.getJarFilepath(cnfPackage));
noFound = true;
cp += javaDependencyWalker.getJarFilepath(cnfPackage) + ";";
} else if (line.indexOf("NoClassDef") != -1) {
String classNF =
line.substring(line.lastIndexOf(":") + 1).trim();
String ncdfPackage = classNF.replace('/', '.');
if (javaDependencyWalker.getJarFilepath(ncdfPackage) ==
null)
throw new Exception("No jar found for class " +
classNF);
System.out.println("NCDF: " + classNF + " found in " +
javaDependencyWalker.getJarFilepath(ncdfPackage) +
" " + ncdfPackage);
noFound = true;
cp += javaDependencyWalker.getJarFilepath(ncdfPackage) + ";";
}
}
} while (noFound);
StringTokenizer st = new StringTokenizer(cp, ";");
while (st.hasMoreTokens()) {
System.out.println(st.nextToken());
}
} catch (Exception ex) {
ex.printStackTrace();
} finally {
}

}
}


ant build.xml:
<project name="JavaDependencyWalker" default="compile">
<property name="scolon.separated.folders.ofjars" value="c:/FUPv5/lib;c:/FUPv5/DeleteFile/classes;c:/FUPv5/WSClient/classes;c:/FUPv5/common/classes"/>
<property name="main.class" value="com.oracle.orion.fupv5.ws.client.SRWSClient"/>
<property name="args.to.class" value="-sr sdfds34 -user sldkfjd -password sdlfkjd -endpoint http://wd2088.us.oracle.com:7778/gateway/services/SID0003321 -type ddf -comm sdfd -stat Done"/>

<path id="classpath">
<pathelement location="classes"/>
</path>

<target name="compile">
<mkdir dir="classes"/>
<javac destdir="classes" classpathref="classpath" source="1.5" target="1.5" >
<src path="src"/>
</javac>
</target>

<target name="run" depends="compile">
<java classname="com.gp.JavaDependencyWalker" classpathref="classpath" fork="yes">
<jvmarg line="-Ddeps=${scolon.separated.folders.ofjars} -DclassToRun=${main.class}"/>
<arg line="${args.to.class}"/>
</java>
</target>
</project>

Monday, June 1, 2009

Obtaining a write lock on a file in java over multiple JVMs

Why didn't we use the FileChannel & FileLock classes? That approach locks the file for read access too, which is kind of ugly.
I made a singleton class that handles the file needing this sort of control; in our case it was the dirlist files. Why a singleton? So that there is only one instance of the handler in the whole JVM, and multiple threads in the same JVM get synchronized access to the file. How does it obtain the write lock? By writing a .lck file in the same folder as the file. Below is the code we wrote for the handler:
package com.oracle.orion.fupv5.common;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

/**
* This class must be the only means of access to the dirlist files, for reading
* or for writing.
*/
public class DirlistFileHandler {
private static DirlistFileHandler dfHandler;
private String dirlistFilepath;
private boolean locked = false;
private final String dirlistLockFile="FUPDirlistLock.lck";

private DirlistFileHandler(String filepath) {
dirlistFilepath = filepath;
}

/**
* This is the only method to get an instance of this class.
*
* @param filepath The absolute filepath to the dirlist file.
*/
public static synchronized DirlistFileHandler getInstance(String filepath) {
if (dfHandler == null) {
dfHandler = new DirlistFileHandler(filepath);
}
return dfHandler;
}

/**
* This method will return a BufferedReader instance to the dirlist file.
* One can then use this reader to access every line of the code by using
* the method readLine() of the BufferedReader class. Please close this
* reader instance after you are done with it.
*/
public BufferedReader getReader() throws FileNotFoundException {
BufferedReader in = new BufferedReader(new FileReader(dirlistFilepath));
return in;
}

/**
* This method will return a FileWriter instance to the dirlist file.
* One can then use this writer to write a new line at the end of the file.
* Please close this writer instance after you are done with it, and
* call the releaseWriter() method of this class when you close the
* writer. When the caller successfully gets a writer to the dirlist
* file, the dirlist file is write-locked until the releaseWriter() method is
* called.
*/
public FileWriter getWriter() throws FUPException, IOException {
if (isFileLocked()) {
throw new FUPException(ErrorConstants.DIRLIST_FILE_LOCKED);
}
synchronized(this) {
writeLockFile();
return new FileWriter(dirlistFilepath);
}
}

/**
* Use this method to release the write lock held over the dirlist file.
*/
public void releaseWriter() {
synchronized(this) {
removeLockFile();
}
}

/**
* It is advised to use this method to ascertain that there is no write
* lock over the dirlist file before writing a new line to it, rather than
* trying to get the writer directly, because then you would need to handle
* an exception if the file is locked. Instead, use this method to check for a lock
* and, if a lock is present, sleep for some time and check again.
*/
public boolean isFileLocked() {
File f = new File(new File(dirlistFilepath).getParentFile(), dirlistLockFile);
return f.exists();
}

/**
* This method creates a new lock file in the same folder as the dirlist file.
*/
protected void writeLockFile() throws FileNotFoundException, IOException {
File df = new File(dirlistFilepath);
FileOutputStream fos = new FileOutputStream(df.getParentFile().getAbsolutePath()+"/"+dirlistLockFile);
fos.write((System.currentTimeMillis()+"").getBytes());
fos.flush();
fos.close();
}

/**
* This method removes the lock file.
*/
protected boolean removeLockFile() {
File f = new File(new File(dirlistFilepath).getParentFile(), dirlistLockFile);
return f.delete();
}
}
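The lock-file idea itself can be sketched with nothing but the JDK. In the sketch below (class and method names are mine, not from the handler above), File.createNewFile() atomically creates the lock file and returns false if it already exists, so it can serve as a cross-JVM write lock without locking the data file for readers:

```java
import java.io.File;
import java.io.IOException;

// Minimal sketch of a lock-file based write lock: the .lck file lives next to
// the data file, and creating it atomically claims the lock across JVMs.
public class LockFileDemo {
    private final File lockFile;

    public LockFileDemo(File dataFile) {
        // place the .lck file in the same folder as the file being guarded
        this.lockFile = new File(dataFile.getParentFile(), dataFile.getName() + ".lck");
    }

    public boolean tryLock() throws IOException {
        return lockFile.createNewFile(); // atomic: only one JVM wins
    }

    public boolean unlock() {
        return lockFile.delete();
    }
}
```

A second tryLock() call (from any JVM) returns false until unlock() removes the lock file, which matches the check-sleep-retry usage described above.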

How to do an "around" logging Spring AOP Advice on all methods of a class?

Define a LoggingInterceptor in this way :

package ro.vodafone.search.admin.common;

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

import org.apache.log4j.Logger;

public class LogInterceptor implements MethodInterceptor{

public Object invoke(MethodInvocation methodInvocation) throws Throwable {
Object result = null;
Logger logger = Logger.getLogger(methodInvocation.getMethod().getDeclaringClass());
try {

logger.info(methodInvocation.getMethod().getDeclaringClass() + "." + methodInvocation.getMethod().getName() + " entered with parameters: " + java.util.Arrays.toString(methodInvocation.getArguments()));
result = methodInvocation.proceed();
if (methodInvocation.getMethod().getReturnType() != void.class && result != null)
logger.info(methodInvocation.getMethod().getDeclaringClass() + "." + methodInvocation.getMethod().getName() + " exiting with result: " + result);
else
logger.info(methodInvocation.getMethod().getDeclaringClass() + "." + methodInvocation.getMethod().getName() + " exiting");
} catch (Throwable ex) {
logger.error("Error while executing the method:"+methodInvocation.getMethod().getName(), ex);
throw ex;
}

return result;
}
}


Then in the spring app context file define the interceptors over the target in this way:

<bean id="logInterceptor" class="ro.vodafone.search.admin.common.LogInterceptor"/>
<bean id="searchAdminService" class="org.springframework.aop.framework.ProxyFactoryBean">
<property name="target">
<ref local="service"/>
</property>
<property name="interceptorNames">
<list>
<value>logInterceptor</value>
</list>
</property>
</bean>

Sunday, April 19, 2009

Shell/Perl scripting versus Java programming

Here's a little write-up I wrote for a team I am consulting for. I am helping this team migrate their existing C-driven CGI scripting to Java. The team wanted everything they have, i.e. C code and scripts, migrated to Java. They had exposed CGIs developed in C so that other systems could call them; they rightfully wanted these migrated to Java web services. They also have scripts that work in silos, which either generate reports or manipulate files/folders and execute some command-line executables. They want to migrate these to Java too. This write-up is meant as a heads-up that migrating to Java blindly is not a good option: they can reduce their costs by leaving some things as they are, without much impact on their overall goal.

What are shell scripts most suited for?

Typical operations performed by shell scripts include file manipulation, program execution, and printing text. In their most basic form, shell scripts allow several commands that would be entered "by hand" at a command line interface to be executed automatically and rapidly. Most shells implement basic pattern matching capabilities like this, which allow them to perform commands on groups of items with similar names and sometimes parse simple strings of text.

What are Java programs most suited for?

Like other full-fledged programming languages, Java is widely embraced by enterprise architects, for the following reasons:

  • Make programming easier by being simple, object-oriented and familiar.
  • Leave less ambiguity by being a strongly-typed language.
  • Ability to do any programmatic task like socket programming, file manipulation, streaming bytes, providing security, database operations, providing user interfaces among other things
  • Ability to interact with other totally disparate systems with the use of Web Services by providing APIs.
  • Ability to provide robust, fault-tolerant, fail-safe and scalable systems with the help of industrial-strength application servers and frameworks that promote distributed applications.
  • The ability to code quickly and easily, with multiple IDE options like JDeveloper, Eclipse, etc.

Why are most tasks in the FUP module of Orion being ported to Java?

Following are the reasons, IMO, why FUP is being ported to Java:

  • Other systems in Orion need to call actions in FUP which require FUP to expose this functionality in a universally accepted way i.e. by exposing Web Services.
  • FUP calls functionality in other modules of the Orion system. This requires FUP to be able to call the Web Services exposed by the other modules in a universally accepted way.
  • To conform to the standards being adhered to by the other systems in Orion i.e. Java & JEE architecture.
  • To become easy to extend and maintain. Java & JEE are already used by other teams other than FUP. Use of Java/JEE by FUP will enable other teams to understand and extend it better.

Why must script files not be migrated to java?

Most of the existing FUP commandline scripts, which include shell and perl scripts, are performing the following operations:

  • Creating new folders.
  • Moving/Copying files from one folder to another.
  • Executing certain command-line commands.
  • Outputting some debug statements while executing.

Following are the reasons why a few FUP scripts must not be migrated to java:

  • These scripts are not called by other modules of Orion, but they do call web services exposed by other modules. A Java command line may be provided wherever an external web service is being called.
  • File manipulations like moving & copying files and creating & removing folders are a strength of shell scripts.
  • Calling command-line executables is also a strength of shell scripts.

Therefore, as long as a piece of code is not being accessed by other modules in the Orion system one may keep using the same scripts. Places where other modules are being called in these scripts may be modified to use java functionality.

What are the scripts that do not qualify for porting to java?

  1. Daily/Weekly/Monthly reporting cronjob.
  2. ADR cronjob.

Wednesday, March 25, 2009

Cruisecontrol usage

Cruisecontrol dashboard can be accessed at the location: http://10.177.135.55:8080/dashboard/tab/dashboard
A reference for creating the config.xml file, which you will need soon, is available at http://cruisecontrol.sourceforge.net/main/configxml.html

You can add a project for automated builds into CC by doing the following steps:

1. Add a project tag to the C:\CruiseControl\config.xml file. There are two types of builds that you may configure:
  a. An interval build, e.g. an hourly build. This usually checks that the code checked in by the various developers compiles.
  b. A nightly build that takes place once every day at night. This usually checks that the code in source control compiles and deploys on the appserver. You may also run the tests in this build and build the project site, which Maven does very well.
2. Check out the project into the C:\CruiseControl\projects folder of that machine. Yes, you need to manually check out the project into the projects folder the first time; later, CC does the updates. But first, check step 3.
3. It is a good idea to have two folders, C:\CruiseControl\projects\hourly and C:\CruiseControl\projects\nightly, into which you check out the projects.
4. For the project tag of the interval (hourly) build, use the following template (the strings between ##, including the hashes, are to be replaced with suitable values):
<project name="#project-name#">
<listeners>
<currentbuildstatuslistener file="logs/#project-name#/status.txt"/>
</listeners>

<bootstrappers>
<!-- This is a sample for subversion source control. This command is different for different source controls. Add as many paths as you want updated.-->
<svnbootstrapper localWorkingCopy="projects/hourly/#the path to the checked out project where svn update needs to take place#" username="#username#" password="#password#"/>
</bootstrappers>

<!--This tag is to specify the paths that you want CC to track changes made in. i.e. If there is any change of a file in these paths a build will be triggered.-->
<modificationset quietperiod="30" >
<!--Again, this is for subversion source control. Add as many paths as you want to track.-->
<svn localWorkingCopy="projects/hourly/#the path to the checked out project where you want CC to detect changes for a build to take place#" username="#username#" password="#password#"/>
</modificationset>

<!--This tag specifies that changes must be checked every hour. If changes are detected, the commands specified are run; here it is a Maven command.-->
<schedule interval="3600">
<!--The maven command to run if a build must take place. This could be an ant command too. Check the CC reference for the right syntax.-->
<maven2 mvnhome="${mvn.home}" pomfile="projects/hourly/#the path to the build file in the checked out project.#" goal="clean install" flags="-e"/>
</schedule>

<publishers>
<currentbuildstatuspublisher file="logs/#project-name#/status.txt"/>
<email mailhost="mail.oracle.com" returnaddress="#build-manager#@oracle.com" buildresultsurl="http://10.177.135.55:8080/buildresults/#project-name#" reportsuccess="never">
<failure address="#dev-team-email-id(s)#@oracle.com"/>

</email>
</publishers>
</project>

5. For the project tag of the nightly build, use the following template (the strings between ##, including the hashes, are to be replaced with suitable values):
<project name="#project-name#">
<listeners>
<currentbuildstatuslistener file="logs/#project-name#/status.txt"/>
</listeners>

<bootstrappers>
<!-- This is a sample for subversion source control. This command is different for different source controls. Add as many paths as you want updated.-->
<svnbootstrapper localWorkingCopy="projects/nightly/#the path to the checked out project where svn update needs to take place#" username="#username#" password="#password#"/>
</bootstrappers>

<!--This tag is to specify the paths that you want CC to track changes made in. i.e. If there is any change of a file in these paths a build will be triggered.-->
<modificationset>
<!--Again, this is for subversion source control. Add as many paths as you want to track.-->
<svn localWorkingCopy="projects/nightly/#the path to the checked out project where you want CC to detect changes for a build to take place#" username="#username#" password="#password#"/>
</modificationset>

<!--This tag specifies that there is no interval schedule, so the time to build is taken from the command. Here, it is 8pm every day.-->
<schedule>
<!--The maven command to run if a build must take place. This could be an ant command too. Check the CC reference for the right syntax.-->
<maven2 mvnhome="${mvn.home}" time="2000" pomfile="projects/nightly/#the path to the build file in the checked out project.#" goal="clean install" flags="-e"/>
</schedule>

<publishers>
<currentbuildstatuspublisher file="logs/#project-name#/status.txt"/>
<email mailhost="mail.oracle.com" returnaddress="#build-manager#@oracle.com" buildresultsurl="http://10.177.135.55:8080/buildresults/#project-name#" reportsuccess="never">
<failure address="#dev-team-email-id(s)#@oracle.com"/>

</email>
</publishers>
</project>
6. After this step, one can go to the CC dashboard and monitor the project builds, as well as trigger builds manually.

Wednesday, March 18, 2009

Oracle Coherence learnings, so far...Part 2

1. If you want to listen to events, such as when a new object is inserted into the Coherence cache (CC), when an object is deleted from it, or when an object in it is updated, do it like this, folks:
NamedCache cache = CacheFactory.getCache("person");
cache.addMapListener(new MapListener() {

public void entryInserted(MapEvent mapEvent) {
//Code to run when a new object is inserted into the cache
}

public void entryUpdated(MapEvent mapEvent) {
Object key = mapEvent.getKey();
Object oldVal = mapEvent.getOldValue();
Object val = mapEvent.getNewValue();
System.out.println("Old value:"+oldVal+", New value being added:"+val);
}

public void entryDeleted(MapEvent mapEvent) {
//Code to run when an object is deleted from the cache
}
});


2. Up until now we have just done put and get of data in the cache. I am sure, by now, someone is craving to ask: what about concurrency? If we want to control concurrent access to the data, we have the option to lock and unlock keys. But there is a better way: Entry Processors are agents that perform processing against entries, and carry it out directly where the data is held. The processing may change the data, e.g. create, update or remove it, or may just perform calculations on it. Entry processors that work against the same key are logically queued, which means you can achieve lock-free (high performance) processing. This is called in-line processing of data. A custom entry processor may be written in the following way:
class RaiseSalary extends AbstractProcessor {
...
public Object process(Entry entry) {
Employee emp = (Employee)entry.getValue();
emp.setSalary(emp.getSalary() * 1.10);
entry.setValue(emp);
return null;
}
}

To invoke this you then do the following:
empCache.invokeAll(AlwaysFilter.INSTANCE, new RaiseSalary());

3. Up until now, we've seen simple keys. What if we want a composite key, like an id plus a version number? Then it is a good idea to wrap the member elements that form the composite key into an inner class declared as "public static class Key implements ExternalizableLite". A sample key implementation may look like this:
public static class Key implements ExternalizableLite {

// lets define a key of id and version
private int id;
private int version;

public Key() {
// no-arg constructor required for deserialization
}

public Key(int id, int version) {
this.id = id;
this.version = version;
}

public Key(Person p) {
this.id = p.getId();
this.version = 1; // default to version 1
}

public void writeExternal(DataOutput dataOutput) throws IOException {
ExternalizableHelper.writeInt(dataOutput, this.id);
ExternalizableHelper.writeInt(dataOutput, this.version);
}

public void readExternal(DataInput dataInput) throws IOException {
this.id = ExternalizableHelper.readInt(dataInput);
this.version = ExternalizableHelper.readInt(dataInput);
}

@Override
public boolean equals(Object object) {
...
}

@Override
public int hashCode() {
final int PRIME = 37;
int result = 1;
result = PRIME * result + id;
result = PRIME * result + version;
return result;
}
}

As a next step, we can aim for partitioning efficiencies. That is, in the cluster of caches, we can arrange for related objects to be in the same cache instance. For this, the inner class can implement KeyAssociation and override the method getAssociatedKey(). With this, for instance, if the method returns the surname, then all persons with the same surname are placed in the same cache instance.
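The routing idea behind key association can be sketched without Coherence (class and method names below are illustrative): the partition is derived from the associated key rather than the full key, so any two entries reporting the same associated key land in the same partition.

```java
public class AssociationDemo {
    // Sketch of key-association routing: derive the partition from the
    // associated key (e.g. a surname), so all entries sharing that surname
    // map to the same partition regardless of their full composite keys.
    public static int partitionFor(String associatedKey, int partitionCount) {
        return Math.floorMod(associatedKey.hashCode(), partitionCount);
    }
}
```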

Oracle Coherence learnings, so far...Part 1

I've been going through a lab that introduces a set of new functions in each chapter. Here are my learnings so far:

1.Definition: Oracle Coherence is an in-memory data grid solution that enables organizations to predictably scale mission-critical applications by providing fast access to frequently used data.

2. The standard way of putting and getting data into the cache:
NamedCache cache = CacheFactory.getCache("person");
Person p1 = new Person(2, "Jade", "Goody", "London, 38", 36, Person.FEMALE);
cache.put(p1.getId(), p1);
Person p2 = (Person) cache.get(p1.getId());


3. The custom objects that you put into the Coherence cache must be at least Serializable. If you want a more efficient alternative, implement com.tangosol.io.ExternalizableLite. This also requires you to implement two methods (readExternal and writeExternal), but marshalling and unmarshalling of objects becomes very efficient.
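To illustrate why field-by-field externalization is compact, here is a plain java.io sketch of the same round trip. No Coherence types are used; write and read are hypothetical helpers standing in for writeExternal/readExternal, and DataOutputStream/DataInputStream stand in for ExternalizableHelper.

```java
import java.io.*;

public class ExternalizableSketch {
    // Writes fields one by one, the way ExternalizableLite's
    // writeExternal does via ExternalizableHelper.
    static byte[] write(int id, String name) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeInt(id);   // 4 bytes, no class metadata
            out.writeUTF(name); // 2-byte length prefix + UTF-8 bytes
            out.flush();
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // The readExternal-style counterpart: fields are read back in
    // exactly the order they were written.
    static Object[] read(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            return new Object[] { in.readInt(), in.readUTF() };
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Unlike default Java serialization, no class descriptors or field names go over the wire, which is where the efficiency comes from.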

4. To do bulk upload of data into the in-memory cache, you would want to do it efficiently like this:
public static void bulkLoad(NamedCache cache, Connection conn)
{
    Statement s;
    ResultSet rs;
    Map buffer = new HashMap();

    try
    {
        int count = 0;
        s = conn.createStatement();
        rs = s.executeQuery("select key, value from table");
        while (rs.next())
        {
            Integer key = new Integer(rs.getInt(1));
            String value = rs.getString(2);
            buffer.put(key, value);

            // flush 1000 items at a time into the cache
            if ((++count % 1000) == 0)
            {
                cache.putAll(buffer);
                buffer.clear();
            }
        }
        // load the final partial batch, if any
        if (!buffer.isEmpty())
        {
            cache.putAll(buffer);
        }
        ...
    }
    catch (SQLException e)
    {...}
}
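The batch-flushing logic above can be exercised on its own without JDBC or Coherence. In this sketch a HashMap stands in for the NamedCache and load is a hypothetical helper, shown only to demonstrate the flush-every-N-items pattern:

```java
import java.util.*;

public class BatchLoader {
    // Flushes the buffer into the target map every `batchSize` items,
    // mirroring the putAll batching of bulkLoad. Returns the number of
    // putAll-style flushes performed.
    static int load(Map<Integer, String> target, List<String> rows, int batchSize) {
        Map<Integer, String> buffer = new HashMap<>();
        int flushes = 0;
        for (int i = 0; i < rows.size(); i++) {
            buffer.put(i, rows.get(i));
            if (buffer.size() >= batchSize) {
                target.putAll(buffer); // one bulk write per full batch
                buffer.clear();
                flushes++;
            }
        }
        if (!buffer.isEmpty()) {       // final partial batch
            target.putAll(buffer);
            flushes++;
        }
        return flushes;
    }
}
```

With 2500 rows and a batch size of 1000, this performs exactly three bulk writes (1000 + 1000 + 500) instead of 2500 individual puts, which is the whole point of buffering.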
5. To carry out efficient processing of filtered results, you may want to do the following instead of the regular iterator approach:
public static void performQuery()
{
    NamedCache c = CacheFactory.getCache("test");

    // Search for entries that start with 'c'
    Filter query = new LikeFilter(IdentityExtractor.INSTANCE, "c%", '\\', true);

    // Perform the query, returning the keys of entries that match
    Set keys = c.keySet(query);

    // The number of objects to process at a time
    final int BUFFER_SIZE = 100;

    // Key buffer
    Set buffer = new HashSet(BUFFER_SIZE);

    for (Iterator i = keys.iterator(); i.hasNext();)
    {
        buffer.add(i.next());

        if (buffer.size() >= BUFFER_SIZE)
        {
            // Bulk-load BUFFER_SIZE objects from the cache
            Map entries = c.getAll(buffer);

            // Process each entry
            process(entries);

            // Done processing these keys; clear the buffer
            buffer.clear();
        }
    }
    // Handle the last partial chunk (if any)
    if (!buffer.isEmpty())
    {
        process(c.getAll(buffer));
    }
}
6. This is how filtering works:
Set malesOver35 = cache.entrySet(
    new AndFilter(new EqualsFilter("getGender", Person.MALE),
                  new GreaterEqualsFilter("getAge", 35)));
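For comparison, the same males-over-35 filter can be sketched with plain java.util.function.Predicate composition, no Coherence types involved; the Person class here is a minimal stand-in invented for the example:

```java
import java.util.*;
import java.util.function.Predicate;

public class FilterSketch {
    static class Person {
        final String name; final char gender; final int age;
        Person(String name, char gender, int age) {
            this.name = name; this.gender = gender; this.age = age;
        }
    }

    // Equivalent of AndFilter(EqualsFilter("getGender", MALE),
    //                         GreaterEqualsFilter("getAge", 35)):
    // two predicates combined with and().
    static List<Person> malesOver35(List<Person> people) {
        Predicate<Person> male = p -> p.gender == 'M';
        Predicate<Person> atLeast35 = p -> p.age >= 35;
        Predicate<Person> combined = male.and(atLeast35);
        List<Person> out = new ArrayList<>();
        for (Person p : people) {
            if (combined.test(p)) out.add(p);
        }
        return out;
    }
}
```

The key difference is where the work happens: Coherence evaluates its filters on the cache members holding the data, whereas this sketch filters locally.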

7. This is how aggregation works:
Double avgAgeMales =
    (Double) cache.aggregate(new EqualsFilter("getGender", Person.MALE),
                             new DoubleAverage("getAge"));


8. Entry processors are agents that perform processing against entries, and carry it out directly where the data is held. The processing may change the data (create, update or remove entries) or may just perform calculations on it. Entry processors that work against the same key are logically queued, which means you can achieve lock-free (high-performance) processing. A small example is as follows:
class RaiseSalary extends AbstractProcessor {
    ...
    public Object process(Entry entry) {
        Employee emp = (Employee) entry.getValue();
        emp.setSalary(emp.getSalary() * 1.10);
        entry.setValue(emp);
        return null;
    }
}
To invoke this you then do the following:
empCache.invokeAll(AlwaysFilter.INSTANCE, new RaiseSalary());
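Setting the clustering and queueing semantics aside, the invokeAll pattern boils down to applying a processor to every matching entry. This plain-Java sketch (Processor and invokeAll here are hypothetical stand-ins, not Coherence API) shows that shape:

```java
import java.util.*;

public class ProcessorSketch {
    // Stand-in for AbstractProcessor.process(Entry): receives a live
    // entry and may mutate it via setValue.
    interface Processor {
        void process(Map.Entry<String, Double> entry);
    }

    // Stand-in for invokeAll(AlwaysFilter.INSTANCE, processor): applies
    // the processor to every entry of a plain map.
    static void invokeAll(Map<String, Double> cache, Processor p) {
        for (Map.Entry<String, Double> e : cache.entrySet()) {
            p.process(e);
        }
    }
}
```

A RaiseSalary-style usage, treating the map values as salaries, is then just:

ProcessorSketch.invokeAll(cache, e -> e.setValue(e.getValue() * 1.10));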

Wednesday, February 25, 2009

Maven practice

- Unzip the Maven installer and open settings.xml. Set the repository folder to the repo in the Maven training area. Also uncomment the proxy section and change its settings.

- Then run the create archetype command.
mvn archetype:create -DartifactId=test -DgroupId=sample.oracle.toplink

- Display the generated folder. Run the "mvn install" command to show how building, packaging and installing are done.

- Copy the resulting pom to the SampleApp folder, change the artifactId, set packaging to pom, and add modules like this:
<modules>
<module>Model</module>
<module>ViewController</module>
</modules>

- Copy the pom to Model and ViewController and make similar changes. Apply the parent tag to these pom files:
<parent>
<groupId>sample.oracle.toplink</groupId>
<artifactId>TestApp</artifactId>
<version>1.0-SNAPSHOT</version>
<relativePath>..</relativePath>
</parent>
Change the artifact id, name, packaging and remove the dependencies for now.
- In the pom.xml of Model & ViewController, put in the build tag with the proper source directory, because the project is a JDeveloper project.
<build>
<sourceDirectory>src</sourceDirectory>
</build>
- Now compile with the command "mvn compile" and you should get compile errors, firstly because the wrong JDK is being used. Correct it by putting the following in both pom files:
<build>
<sourceDirectory>src</sourceDirectory>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.5</source>
<target>1.5</target>
</configuration>
</plugin>
</plugins>
</build>
- Now, to the pom.xml of Model & ViewController, add the resource folder and inclusion details in the build tag; the result may look like this:
<resources>
<resource>
<directory>src</directory>
<includes>
<include>**/*.gif</include>
<include>**/*.jpg</include>
<include>**/*.jpeg</include>
<include>**/*.png</include>
<include>**/*.properties</include>
<include>**/*.xml</include>
<include>**/*-apf.xml</include>
<include>**/*.ejx</include>
<include>**/*.xcfg</include>
<include>**/*.cpx</include>
<include>**/*.dcx</include>
<include>**/*.wsdl</include>
<include>**/*.ini</include>
<include>**/*.tld</include>
<include>**/*.tag</include>
<include>**/*.jpx</include>
</includes>
</resource>
</resources>

- Now install the dependent Oracle jars into the local repo.
<use utility>
- Add dependencies to the pom file.
- Do the same for the web project, but it requires different treatment. Firstly, add the Model project as a dependency of this project, like this:
<dependency>
<groupId>sample.oracle.toplink</groupId>
<artifactId>Model</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
Then add the following plugin so that the packaging that takes place is of war type:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<configuration>
<webappDirectory>public_html</webappDirectory>
<archive>
<manifest>
<addClasspath>true</addClasspath>
</manifest>
</archive>
</configuration>
</plugin>
- Now, to create an ear file, create a new folder called ear in the parent folder that contains the Model and ViewController projects. This will be like a new Maven project, but one dealing only with creation of the ear file. Copy the pom file created in the first stages to this folder and make the following changes:
- Add the parent tag like before. Change the artifactId, name and packaging (it must be ear now).
- Add the war project as a dependency. It should look like this:
<dependencies>
<dependency>
<groupId>sample.oracle.toplink</groupId>
<artifactId>UI</artifactId>
<version>10.1.3.3</version>
<type>war</type>
</dependency>
<dependency>
<groupId>sample.oracle.toplink</groupId>
<artifactId>Model</artifactId>
<version>10.1.3.3</version>
<type>ejb</type>
</dependency>
</dependencies>
- Add a build tag like this to be able to build an ear file:
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-ear-plugin</artifactId>
<configuration>
<displayName>TestApp</displayName>
<description>Test Application</description>
<modules>
<ejbModule>
<groupId>sample.oracle.toplink</groupId>
<artifactId>Model</artifactId>
</ejbModule>
<webModule>
<groupId>sample.oracle.toplink</groupId>
<artifactId>UI</artifactId>
<contextRoot>/test</contextRoot>
</webModule>
</modules>
<earSourceExcludes>*.jar</earSourceExcludes>
</configuration>
</plugin>
</plugins>
</build>
- This should create an ear file. But when you try to deploy the ear file on the OC4J server, you will start getting errors about a few classes not being found. These are the jars that got left out during installation of jars into the local repo, because the xml file didn't have mappings for a few libraries. (I still haven't been able to find a file that contains mappings for all libraries.) These jars you will have to install manually and declare as dependencies in the pom. For now, the libraries that aren't available are open source ones, which are the easiest to deal with: just declare them as dependencies and they will be installed into the local repo automatically.
- Make the folders src/main/resources/META-INF and put the orion-application.xml file into it; you get this file by building an ear file from JDeveloper. (It is always advisable to make a war deployment profile in JDeveloper before going ahead with this exercise; also try deploying it on a standalone OC4J instance and see it running.) You will also need to add the following to the pom.xml so that this file is dropped in the right place before packaging (note the resources tag only).
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<targetPath>../${project.artifactId}-${project.version}</targetPath>
</resource>
</resources>
<plugins>

Tuesday, February 17, 2009

Developing an ADF/Faces-Toplink application

In this post I'd like to address a few common issues that developers face while creating an ADF/Faces and Toplink application. For such an application, usually this is the flow of steps:

  1. This is assuming the usage of JDeveloper for development.
  2. Create an application with two projects. One for model(which will contain toplink objects) and one for the UserInterface.
  3. First, in the model project, start by creating "Java objects from tables". Make the TopLink POJOs Serializable. Create the named TopLink query (find, update and insert) methods. Create the facade session bean to front the named queries. Then create a DataControl out of the session bean. Mind you, this DataControl is good only as long as things work by dragging and dropping into the JSF pages.
  4. Now, create the JSF pages in the UI project. The best way to do this is by going to the Overview window of the faces-config.xml file. Here you drop the jsf pages and navigations. Then you double-click on the jsf pages and create the actual jsf jsp pages.
  5. Sometimes you'd like to call the facade session bean methods on your own, and then you have the job of looking the bean up. But a few pointers first, from what I have noticed: always make the facade session bean implement the Remote interface (implementing only Local is logical but doesn't work :( ). The following is how the session bean looks:

  6. @Stateless(name="SessionEJB1")
    public class SessionEJB1Bean implements SessionEJB1, SessionEJB1Local {


    and the following is the code to look up the session bean in JDev 10.1.3.2/3:

    InitialContext ctx=new InitialContext();
    Object o = ctx.lookup("SessionEJB1");
    SessionEJB1 ejb = (SessionEJB1)PortableRemoteObject.narrow(o, SessionEJB1.class);
    return ejb;


  7. One big challenge in such applications is the creation of new objects, in other words new rows in a table: how do you handle the new ID to be assigned? If you are using an Oracle database (I have no experience with other databases), the best way to handle assignment of new IDs is native sequencing. In the top level of the TopLink mapping, set the sequencing to use "Native sequencing". Obviously, you will need sequences created in the database for the tables in which new rows are going to be created. In the TopLink mapping window, go to the details of a table and set the sequencing information using the sequence name you created for that table in the database. That's it! You will not have to set the ID on the new Java object before persisting it. The method to use is the persistEntity(Object) method in the facade session bean; it returns the persisted object, which contains the newly assigned ID. Cool!

Monday, February 16, 2009

More JSF

Inverting the bean value requires the ! (or not) operator:
<h:inputText rendered="#{!bean.hide}"/>

You can concatenate plain strings and value binding expressions, simply by placing them next to each other. Consider, for example,
<h:outputText value="#{messages.greeting}, #{user.name}!"/>

Here is a typical use of a method binding expression.
<h:commandButton action="#{user.checkPassword}"/>


The h:dataTable tag iterates over data to create an HTML table. Here's how you use it:
<h:dataTable value="#{items}" var="item">
The value attribute represents the data over which h:dataTable iterates; that data must be one of the following:
  • an array

  • an instance of java.util.List

  • an instance of java.sql.ResultSet

  • an instance of javax.servlet.jsp.jstl.sql.Result

  • an instance of javax.faces.model.DataModel
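The simplest of these is a java.util.List. A minimal backing bean that h:dataTable could iterate over might look like this (the bean name, property name and item values are made up for illustration):

```java
import java.util.*;

public class ItemsBean {
    // getItems() would be bound in the page as value="#{items}" (via a
    // managed-bean declaration); h:dataTable accepts a List like this.
    private final List<String> items = Arrays.asList("apples", "pears", "plums");

    public List<String> getItems() {
        return items;
    }
}
```

Inside the table, each element is exposed under the name given by the var attribute, so with var="item" a column could render #{item} directly.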

Java Server Faces - obtaining value binding in java

FacesContext context = FacesContext.getCurrentInstance();

ValueBinding binding = context.getApplication().createValueBinding("#{user.name}");

String name = (String) binding.getValue(context);