Tuesday, January 18, 2011

Using TOP more effectively (Reprint)

Disclaimer: this article was written by Mulyadi Santosa; I found it at http://www.linuxforums.org/articles/using-top-more-efficiently_89.html

For desktop users, monitoring resource usage is an important task. By doing it, we can locate system bottlenecks, plan what to do to optimize the system, identify memory leaks, and so on. The problem is which software to use and how to use it according to our needs.
Among the many monitoring tools available, most people use "top" (part of the procps package). Top provides almost everything we need to monitor our system's resource usage in a single shot. In this article, all the information is based on procps 3.2.5 running on top of Linux kernel 2.6.x.

Here, we assume that the procps package is already installed and runs well on your Linux system. No previous experience with top is needed, but a brief prior try would be an advantage.

Here are some tricks:

A. Interactive or batch mode?
By default, top is invoked in interactive mode. In this mode, top runs indefinitely and accepts keypresses to redefine how it works. But sometimes you need to post-process top's output, and that is hard to achieve in this mode. The solution? Use batch mode.

$ top -b

You will get output like the following:

top - 15:22:45 up 4:19, 5 users, load average: 0.00, 0.03, 0.00
Tasks: 60 total, 1 running, 59 sleeping, 0 stopped, 0 zombie
Cpu(s): 3.8% us, 2.9% sy, 0.0% ni, 89.6% id, 3.3% wa, 0.4% hi, 0.0% si
Mem: 515896k total, 495572k used, 20324k free, 13936k buffers
Swap: 909676k total, 4k used, 909672k free, 377608k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 16 0 1544 476 404 S 0.0 0.1 0:01.35 init
2 root 34 19 0 0 0 S 0.0 0.0 0:00.02 ksoftirqd/0
3 root 10 -5 0 0 0 S 0.0 0.0 0:00.11 events/0
Uh, wait, it runs repeatedly, just like interactive mode does. Don't worry: limit the number of iterations with -n. So, if you just want a single shot, type:

$ top -b -n 1
The real advantage of this mode is that you can easily combine it with at or cron. Together, they let top snapshot resource usage at a certain time, unattended. For example, using at, we can schedule top to run one minute from now:

$ cat ./test.at
TERM=linux top -b -n 1 >/tmp/top-report.txt
$ at -f ./test.at now+1minutes
A careful reader might ask: "why do I need to set the TERM environment variable before invoking top when creating a new at job?" The answer is that top needs this variable set, but unfortunately at doesn't retain it from the time of invocation. Simply set it as above and top will work smoothly.
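The same trick works with cron. Add an entry like the following via "crontab -e" (the log path here is just an example) and top will append a snapshot every five minutes, unattended; TERM is set for the same reason as above:

*/5 * * * * TERM=linux top -b -n 1 >> /tmp/top-report.log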

B. How to monitor certain processes only?
Sometimes, we are interested in only a few processes, maybe just 4 or 5 out of all the existing ones. For example, if you want to monitor the processes with identifiers (PIDs) 4360 and 4358, you type:

$ top -p 4360,4358
OR
$ top -p 4360 -p 4358
Seems easy: just use -p and list all the PIDs you need, separated by commas, or simply use -p multiple times, each coupled with a target PID.

Another possibility is to monitor only processes with a certain user identifier (UID). For this, you can use the -u or -U option. Assuming user "johndoe" has UID 500, you can type:

$ top -u johndoe
OR
$ top -u 500
OR
$ top -U johndoe
The conclusion is that you can use either the plain user name or the numeric UID. "-u, -U? Are those two different?" Yes. Like almost any other GNU tool, top treats options as case sensitive. -U means top will match the effective, real, saved, and filesystem UIDs, while -u matches only the effective UID. As a reminder, every *nix process runs under an effective UID, and it isn't always equal to the real UID. Most likely, one is interested in the effective UID, since filesystem permissions and operating system capabilities are checked against it, not the real UID.
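If you don't remember a user's numeric UID, the id utility can show it:

$ id -u johndoe
500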

While -p is a command-line option only, both -U and -u can also be used inside interactive mode. As you might guess, press 'U' or 'u' to filter the processes by user name. The same rule applies: 'u' for the effective UID and 'U' for the real/effective/saved/filesystem user names. You will be asked to enter the user name or the numeric UID.
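If you know a process name but not its PIDs, pgrep (shipped alongside top in procps) can fill them in for -p. For example, to watch every process named "firefox" (the name here is just an illustration):

$ top -p $(pgrep -d',' firefox)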

C. Fast or slow update?
Before we answer this question, let's take a short look at how top really works. strace is your friend here:

$ strace -o /tmp/trace.txt top -b -n 1
Load /tmp/trace.txt in your favourite text editor. What do you think? A lot of work for a single invocation, that is what I think, and maybe you'll agree. One of the jobs top must do on every iteration is opening many files and parsing their contents, as shown by this count:

$ grep 'open(' /tmp/trace.txt | wc -l
Just for illustration, on my Linux system it yields 304. A closer look reveals that top iterates over the /proc directory to gather process information. /proc itself is a pseudo filesystem: it doesn't exist on a real disk but is created on the fly by the Linux kernel and lives in RAM. Within a directory such as /proc/2097 (2097 being a PID), the Linux kernel exports information about the related process, and this is where top gathers process information along with resource usage.
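You can see for yourself what top reads there. Using the PID above (output abridged):

$ ls /proc/2097
cmdline  cwd  environ  exe  fd  maps  mem  root  stat  statm  status ...

Files such as stat, statm and status hold the per-process counters that top parses.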

Also try this:

$ time top -b -n 1
This gives you an idea of how fast top works in a single round. On my system, it takes around 0.5-0.6 seconds. Look at the "real" field, not the "user" or "system" fields, because "real" reflects the total time top needed to do its work.

So, realizing this, it is wise to use a moderate update interval; browsing a RAM-based filesystem takes time too. As a rule of thumb, an interval of 1 to 3 seconds is enough for most users. Use the -d command-line option or press 's' inside interactive mode to set it. The interval may be a fractional number, e.g. 2.5, 4.1, and so on.
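For example, the following takes ten snapshots, one every 2.5 seconds, into a file for later analysis:

$ top -b -d 2.5 -n 10 > /tmp/top-samples.txt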

When should we go faster than 1 second? Consider it when:

You need more samples over a period of time. For this, better use batch mode and redirect standard output to a file so you can analyze it later.
You don't mind the extra CPU load carried by top. Yes, it is small, but it is still load. If your Linux system is relatively idle, feel free to use a short interval; if not, better preserve your CPU time for more important tasks.
One way to reduce top's work is to monitor only certain PIDs. That way, top won't need to traverse all the /proc sub-directories. How about user name filtering? It won't do any good; it actually brings extra work for top, so combining it with a very short interval will increase CPU load.

Of course, whenever you need to force an update, just press Space and top will refresh the statistics right away.

D. Fields that we need
By default, top starts by showing the following task properties:

Field : Description
PID : Process ID
USER : Effective user ID
PR : Dynamic priority
NI : Nice value, also known as base priority
VIRT : Virtual size of the task. This includes the size of the process's executable binary, the data area, and all loaded shared libraries.
RES : The size of RAM currently consumed by the task. The swapped-out portion of the task is not included.
SHR : Some memory areas could be shared between two or more tasks; this field reflects those shared areas. Examples of shared areas are shared libraries and SysV shared memory.
S : Task status
%CPU : The percentage of CPU time the task has used since the last screen update.
%MEM : The percentage of RAM currently consumed by the task.
TIME+ : The total CPU time the task has used since it started. The "+" sign means it is displayed with hundredth-of-a-second granularity. By default, TIME/TIME+ doesn't include the CPU time used by the task's dead children.
COMMAND : The program name.
But there are more. Here, I will explain only the fields that might interest you:

nFLT (key 'u')
Number of major page faults since the process started. Technically, a page fault happens when the task accesses a non-existent page in its address space. A page fault is called "major" if the kernel needs to access the disk to make the page available. In contrast, a minor ("soft") page fault means the kernel only needs to allocate pages in RAM, without reading anything from disk.

For illustration, suppose the size of program ABC is 8 kB and the page size is 4 kB. When the program is fully loaded into RAM, there will be 2 major page faults (2 * 4 kB). If the program then allocates another 8 kB for temporary data storage in RAM, there will be 2 minor page faults.

A high number of nFLT could mean:

The task is aggressively loading portions of its executable or libraries from the disk.
The task is accessing pages that have been swapped out.

It is normal to see a high number of major page faults when a program is run for the first time. On later invocations the buffer cache is utilized, so you will likely see "0" or a low nFLT number. But if a program continuously triggers major page faults, chances are it needs more RAM than is currently installed.
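By the way, top isn't the only way to watch these counters. If GNU time is installed (the external /usr/bin/time, not the shell builtin), its verbose mode reports the major and minor page fault counts of a single run:

$ /usr/bin/time -v ls > /dev/null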

nDRT (key 'v')
The number of pages that have been modified (dirtied) since they were last written back to disk.

Maybe you wonder: what is a dirty page? First, a little background. As you know, Linux employs a caching mechanism, so everything read from disk is also cached in RAM. The advantage is that subsequent reads of the same disk block can be served from RAM, so reading completes faster.

But it also costs something. If the buffer's content is modified, it needs to be synchronized: sooner or later, the modified buffer (the dirty page) must be written back to disk. Failure to synchronize might cause data inconsistency on the related disk.

On a mostly idle to fairly loaded system, nDRT is usually below 10 (just a rough estimate) or mostly zero. If it is constantly bigger than that, either:

1) the task is aggressively writing to file(s), so frequently that disk I/O can't keep up with it, or
2) the disk is suffering I/O congestion, so even though the task modifies only a small portion of its file(s), it must wait a bit longer to be synchronized. Congestion happens when many processes access the disk at the same time but the cache hit rate is low.

These days, (1) rarely happens because I/O is getting faster and less CPU demanding (thanks to DMA), so (2) has the bigger probability.

Note: On 2.6.x, this field is always zero, for reasons unknown.

P (key 'j')
Last used CPU. This field is only meaningful in an SMP environment; SMP here covers Hyper-Threading, multi-core, and true multi-processor systems. If you have just one processor (non multi-core, non-HT), this field will always show '0'.

On an SMP system, don't be surprised if this field changes from time to time. It means the Linux kernel has tried to move your task to another CPU that it considers less loaded.

CODE (key 'r') and DATA (key 's')
CODE simply reflects the size of your application's code, while DATA reflects the size of the data segment (stack, heap, and variables, but not shared libraries). Both are measured in kilobytes.

DATA is useful for showing how much memory your application allocates. Sometimes, it can also reveal memory leaks; of course, if DATA climbs continuously, you need a better tool such as valgrind to differentiate repetitive memory allocation from a growing memory leak.

Note: DATA, CODE, SHR, SWAP, VIRT, and RES are all measured in pages (4 kB on the Intel architecture). Read-only data sections are also counted as CODE, so CODE is sometimes larger than the actual text (executable) segment.

SWAP (key 'p')
The size of the swapped-out portion of the task's virtual memory image. This field is sometimes confusing; here is why.

Logically, you would expect this field to show whether, and by how much, your program is partially swapped out. But reality shows otherwise. Even when the "Swap used" summary field shows 0, you may be surprised to find the SWAP field of individual tasks showing numbers greater than zero. So, what's wrong?

This comes from the fact that top uses this formula:


VIRT = SWAP + RES

or, equivalently:

SWAP = VIRT - RES
As explained previously, VIRT includes everything inside the task's address space, whether it is in RAM, swapped out, or not yet loaded from disk, while RES represents the total RAM actually consumed by the task. So SWAP here represents the total amount of data that is swapped out OR not yet loaded from disk. For example, a task with a VIRT of 10000k and a RES of 6000k will show 4000k in SWAP even on a system whose swap usage is zero; those 4000k may simply be pages not yet demand-loaded from the executable and its libraries. Don't be fooled by the name: it doesn't represent only swapped-out data.

To display the above fields, press 'f' inside interactive mode, then press the related key (shown above in parentheses). These keys toggle the related fields: press once to show a field, press again to hide it. To find out whether a field is currently displayed, watch the series of letters on the first line (to the right of "Current Fields"): an upper-case letter means the field is shown, lower case means it is hidden. Press Enter when you are satisfied with the selection.

Sorting works in a similar way. Press 'O' (upper case) followed by a key representing the field. Don't worry if you don't remember the key map; top will show it. The new sort key will be marked with an asterisk and its letter will change to upper case, so you can spot it easily. Press Enter when you are finished.

E. Are multiple views better than one?
In different situations, we want to monitor different system properties. For example, at one time you want to monitor %CPU and the CPU time spent by all tasks; at another time, the resident size and total page faults of all tasks. Rapidly pressing 'f' and changing the visible fields? I don't think that is a smart choice.

Why not use the multiple-windows mode? Press 'A' (upper case) to switch to the multi-window view. By default, you will see 4 different sets of field groups. Each field group has a default label/name:

1st field group: Def
2nd field group: Job
3rd field group: Mem
4th field group: Usr

The 1st field group is the usual group you see in the single-window view, while the rest are hidden. Inside multi-window mode, press 'a' or 'w' to cycle through all the available windows. Pay attention: switching to another window also changes the active window (also known as the current window). If you are not sure which one is currently active, just look at the first line of top's display (to the left of the current time field). Another way to change the active window is to press 'G' followed by a window number (1 to 4).

The active window is the one that reacts to user input, so make sure to select your preferred window before doing anything. After that, you can do everything exactly as in single-window mode. Usually, what you want to do here is customize the field display, so just press 'f' and start customizing.

If you think 4 windows are too many, just switch to a field group and press '-' to hide it. Note that hiding the current field group doesn't change the active group. Press '-' again to make the current group visible again.

When you are done with multi-window mode, press 'A' again. This also makes the active group the new field group of the single-window mode.

F. "How come there is only so few free memory on my Linux PC?"
Have you come to the same question? No matter how much RAM you put in your motherboard, you quickly notice that free RAM shrinks very fast. A free RAM miscalculation? No!

Before answering this, first check the memory summary located at the top of top's display (you may need to press 'm' to unhide it). There you will find two fields: buffers and cached. "Buffers" represents how much RAM is dedicated to caching disk blocks. "Cached" is similar to "Buffers", only this time it caches pages from file reads. For a thorough understanding of these terms, refer to a Linux kernel book such as Linux Kernel Development by Robert M. Love.

It is enough to understand that both "Buffers" and "Cached" represent the size of the system cache. They dynamically grow or shrink as requested by the Linux kernel's internal mechanisms.

Besides being consumed by the cache, RAM is also occupied by application data and code. So, to conclude, free RAM here means the RAM area occupied neither by the cache nor by application data/code. Generally, you can consider the cache area as additional "free" RAM, since it will be shrunk gradually if applications demand more memory.
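The free utility (also part of procps) summarizes the same information; its "-/+ buffers/cache" line shows how much RAM would effectively be available once the cache shrinks:

$ free -k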

From the task's point of view, you might wonder which field truly represents memory consumption. The VIRT field? Certainly not! Recall that this field represents everything inside the task's address space, including the related shared libraries. After reading the top source code and proc.txt (inside the Documentation/filesystems folder of the kernel source tree), I conclude that the RES (resident set size) field best describes a task's memory consumption. I say "best" because you should consider it an approximation that isn't 100% accurate all the time.
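For example, to check whether a suspect task's resident size keeps climbing, sample it in batch mode for a minute (reusing PID 4360 from earlier as an illustration) and then inspect the RES column of the collected lines:

$ top -b -d 5 -n 12 -p 4360 | grep '^ *4360' > /tmp/res-history.txt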

G. Working with many saved configurations
Want to keep several different top configurations so you can easily switch between preconfigured displays? Just create a symbolic link to the top binary (name it anything you like):

# ln -s /usr/bin/top /usr/bin/top-a
Then run the new "top-a". Do your tweaks and press 'W' to save the configuration. It will be saved as ~/.top-arc (the format is your top alias name + "rc").

Now run the original top to load your first display alternative, top-a for the second one, and so on.
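Repeat the pattern for as many profiles as you like; the alias names are arbitrary:

# ln -s /usr/bin/top /usr/bin/top-b

Then run top-b, tweak the display, and press 'W' to create ~/.top-brc.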

H. Conclusion
There are numerous tricks for using top more efficiently. The key is knowing what you really need, plus a little understanding of Linux's low-level mechanisms. The statistics aren't always correct, but at least they help as an overall measurement. All these numbers are gathered from /proc, so make sure it is mounted first!

Reference:

Understanding The Linux Kernel, 2nd edition.
Documentation/filesystems/proc.txt inside kernel source tree.
Linux kernel source.

Thursday, January 13, 2011

Dive into Spring test framework - Part1

I don't want to repeat how important unit testing is to a developer, but in my experience the developers around me seldom write unit tests, to say nothing of using the Spring test framework.
I would like to share my knowledge of how to perform tests based on the Spring test framework; it is also a reminder to myself to keep walking.

Spring framework
Nowadays, almost all running or in-development systems follow the MVC architectural style; in the Java world, we typically separate a system into about 4 layers: 'controller', 'service', 'domain', and 'DAO'.
Let me ask you a question: what is the biggest challenge when testing a 'DAO'? It is keeping the database in the same state it was in before the test ran. If the state changes from that initial state, the tests will influence each other, and running a single test again and again will give different results; we absolutely should avoid that.
By adopting the Spring test framework, we can easily roll back the transaction when a test finishes. Actually, Spring will roll it back automatically (of course, you can ask Spring to commit it instead), and each test case forms the boundary of one transaction.
One more question: do we need mocks? It depends. When we test a service, do we need to mock a DAO instance so that we don't access a real database? No; I prefer to always run tests against a real database. OK, maybe you will say that's an integration test, not a unit test... the difference isn't important here; after all, if the integration test passes, the unit test (with a mocked DAO) will of course pass too. Let's start with the base class all my transactional tests extend:
import java.math.BigDecimal;
import java.sql.Connection;
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.Date;
import java.util.List;
import java.util.UUID;

import javax.sql.DataSource;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.dbunit.database.DatabaseConfig;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.ext.oracle.OracleDataTypeFactory;
import org.springframework.test.AbstractTransactionalDataSourceSpringContextTests;

import com.mpos.lottery.te.common.encrypt.RsaCipher;
import com.mpos.lottery.te.config.MLotteryContext;
import com.mpos.lottery.te.config.dao.OperationParameterDao;
import com.mpos.lottery.te.config.dao.SysConfigurationDao;
import com.mpos.lottery.te.config.domain.LottoOperationParameter;
import com.mpos.lottery.te.config.domain.SysConfiguration;
import com.mpos.lottery.te.config.domain.logic.GameTypeBeanFactory;
import com.mpos.lottery.te.draw.dao.FunTypeDao;
import com.mpos.lottery.te.draw.dao.GameDrawDao;
import com.mpos.lottery.te.draw.domain.Game;
import com.mpos.lottery.te.draw.domain.GameDraw;
import com.mpos.lottery.te.draw.domain.LottoFunType;
import com.mpos.lottery.te.hasplicense.HASPManage;
import com.mpos.lottery.te.settlement.domain.SettlementReport;
import com.mpos.lottery.te.test.unittest.BaseUnitTest;
import com.mpos.lottery.te.ticket.domain.LottoEntry;
import com.mpos.lottery.te.ticket.domain.Ticket;
import com.mpos.lottery.te.ticket.domain.logic.lotto.BankerStrategy;
import com.mpos.lottery.te.ticket.domain.logic.lotto.BetOptionStrategy;
import com.mpos.lottery.te.ticket.domain.logic.lotto.MultipleStrategy;
import com.mpos.lottery.te.ticket.domain.logic.lotto.RollStrategy;
import com.mpos.lottery.te.ticket.domain.logic.lotto.SelectedNumber;
import com.mpos.lottery.te.ticket.domain.logic.lotto.SingleStrategy;
import com.mpos.lottery.te.trans.domain.Transaction;
import com.mpos.lottery.te.workingkey.domain.WorkingKey;

/**
 * It looks like the Spring test framework for JPA only supports JUnit 3.8;
 * it is not ready for JUnit 4.x yet.
 */
public class BaseTransactionTest extends AbstractTransactionalDataSourceSpringContextTests {
 protected Log logger = LogFactory.getLog(BaseTransactionTest.class);
 public static final String DATE_FORMAT = "yyyyMMddHHmmss";

 /**
  * About the onXXX methods inherited from AbstractTransactionalSpringContextTests:
  * for onSetUp() and onTearDown(), AbstractTransactionalSpringContextTests
  * implements logic that adopts the template method pattern to invoke
  * onSetUpXXX() and onTearDownXXX(). This means that if you override
  * onSetUp() or onTearDown(), super.onSetUp() and super.onTearDown() must be
  * invoked, otherwise Spring won't invoke onSetUpXXX()/onTearDownXXX(). You
  * can completely override onSetUpXXX() and onTearDownXXX(); they are
  * template methods.
  * @see org.springframework.test.AbstractTransactionalSpringContextTests#onSetUp
  * @see org.springframework.test.AbstractTransactionalSpringContextTests#onSetUpBeforeTransaction
  * @see org.springframework.test.AbstractTransactionalSpringContextTests#onSetUpInTransaction
  * @see org.springframework.test.AbstractTransactionalSpringContextTests#onTearDownInTransaction
  * @see org.springframework.test.AbstractTransactionalSpringContextTests#onTearDownAfterTransaction
  * @see org.springframework.test.AbstractTransactionalSpringContextTests#onTearDown
  */
 public void onSetUp() throws Exception {
  logger.info("------------------- onSetUp -------------------");
  HASPManage.isChecked = false;
  logger.info("Disable HASP key...");
  this.initializeMLotteryContext();
  // must invoke super.onSetUp(): the new transaction is created there, and
  // the template methods are invoked in order
  super.onSetUp();
 }

 public void onTearDown() throws Exception {
  logger.info("------------------- onTearDown -------------------");
  // must invoke super.onTearDown(): the transaction is rolled back there,
  // and the template methods are invoked in order
  super.onTearDown();
 }

 public void onSetUpBeforeTransaction() throws Exception {
  logger.info("------------------- onSetUpBeforeTransaction -------------------");
  // Oops, I found that using a SQL file to load data into the database is
  // simpler than DBUnit, as we can query the database directly. With DBUnit,
  // we can query the database only after committing the transaction.

  // use DBUnit to cleanup data first, or we can use
  // this.deleteFromTables(String[] tableNames);
  // IDatabaseConnection conn = this.getDataBaseConnection();
  // try {
  // // when delete, DBUnit will execute from last table to first table,
  // // be
  // // opposed to INSERT.
  // DatabaseOperation.DELETE.execute(conn, new
  // FlatXmlDataSetBuilder().build(this
  // .getClass().getResourceAsStream("/testdata/oracle_test_union.xml")));
  // DatabaseOperation.DELETE.execute(conn, new FlatXmlDataSetBuilder()
  // .build(new InputSource(this.getClass().getResourceAsStream(
  // "/testdata/oracle_test_common.xml"))));
  // DatabaseOperation.INSERT.execute(conn, new FlatXmlDataSetBuilder()
  // .build(new InputSource(this.getClass().getResourceAsStream(
  // "/testdata/oracle_test_common.xml"))));
  // DatabaseOperation.INSERT.execute(conn, new
  // FlatXmlDataSetBuilder().build(this
  // .getClass().getResourceAsStream("/testdata/oracle_test_union.xml")));
  // } finally {
  // // return the connection to pool
  // DataSourceUtils.releaseConnection(conn.getConnection(),
  // this.getJdbcTemplate()
  // .getDataSource());
  // }
 }

 public void onTearDownAfterTransaction() throws Exception {
  this.logger.info("------------------- onTearDownAfterTransaction -------------------");
  // // use DBUnit to cleanup data, or we can use
  // // this.deleteFromTables(String[] tableNames);
  // IDatabaseConnection conn = this.getDataBaseConnection();
  // try {
  // // when delete, DBUnit will execute from last table to first table,
  // // be
  // // opposed to INSERT.
  // DatabaseOperation.DELETE.execute(conn, new
  // FlatXmlDataSetBuilder().build(this
  // .getClass().getResourceAsStream("/testdata/oracle_test_union.xml")));
  // DatabaseOperation.DELETE.execute(conn, new FlatXmlDataSetBuilder()
  // .build(new InputSource(this.getClass().getResourceAsStream(
  // "/testdata/oracle_test_common.xml"))));
  // } finally {
  // // return the connection to pool
  // DataSourceUtils.releaseConnection(conn.getConnection(),
  // this.getJdbcTemplate()
  // .getDataSource());
  // }
  logger.info("*** Finished cleanup test data ***");
 }

 public void onSetUpInTransaction() throws Exception {
  logger.info("------------------- onSetUpInTransaction -------------------");
  // this.executeSqlScript("testdata/oracle_masterdata.sql", false);
  // this.executeSqlScript("/testdata/oracle_testdata.sql", false);
  // this.executeSqlScript("/testdata/oracle_testdata_union.sql", false);

  /**
   * NOTE: In the original implementation, I invoked DBUnit.INSERT in
   * onSetUpInTransaction() and DBUnit.DELETE in onTearDownInTransaction(),
   * which means DBUnit.INSERT and DBUnit.DELETE were executed within the
   * lifecycle of the test transaction managed by Spring; there is then a
   * chance that the Spring test transaction conflicts with the DBUnit
   * transaction. Here is a case: 1) Spring creates a new transaction for the
   * test case. 2) DBUnit.INSERT loads test data in onSetUpInTransaction() (a
   * new auto-commit transaction). 3) "this.getJdbcTemplate().execute('update
   * GPE_KEY...')" updates GPE_KEY inside the test transaction. 4) The test
   * case runs. 5) DBUnit.DELETE removes test data in onTearDownInTransaction()
   * (a new auto-commit transaction). When DBUnit tries to delete from
   * GPE_KEY, it blocks forever, as the test transaction holds the exclusive
   * lock on the GPE_KEY rows and will only release it after DBUnit.DELETE
   * completes. My conclusion: if we plan to use DBUnit in a separate
   * transaction, it is better to invoke DBUnit in onSetUpBeforeTransaction()
   * and onTearDownAfterTransaction(); this way, the transactions won't
   * influence one another. The other solution, when using DBUnit in
   * onTearDownInTransaction(), is to invoke "this.endTransaction()" first,
   * which will roll back/commit the test transaction.
   */
  SimpleDateFormat sdf = new SimpleDateFormat(WorkingKey.DATE_PATTERN);
  this.getJdbcTemplate().execute(
          "update GPE_KEY set create_time=sysdate,update_time=sysdate,create_date='"
                  + sdf.format(new Date()) + "'");

  logger.info("*** Finished preparing test data ***");
 }

 public void onTearDownInTransaction() throws Exception {
  logger.info("------------------- onTearDownInTransaction -------------------");
 }

 protected IDatabaseConnection getDataBaseConnection() throws Exception {
  DataSource ds = this.getJdbcTemplate().getDataSource();

  /**
   * DataSourceUtils.getConnection() would retrieve the connection bound to
   * the current transaction context; but since TE queries te_sequence in a
   * new connection while DBUnit hasn't committed its Spring-managed
   * transaction yet, TE would fail to find the sequence record. So DBUnit
   * uses a new connection, different from the connection associated with the
   * Spring transaction context, to manipulate data. The disadvantage is that
   * you have to delete all that data yourself when a test case finishes, and
   * close the connection manually.
   */
  // Connection connection = DataSourceUtils.getConnection(ds);
  Connection connection = ds.getConnection(); // an auto-commit connection

  // must set schema if the database user is DBA.
  // Refer to com.mpos.lottery.te.test.util.DBUnitUtils
  IDatabaseConnection conn = new DatabaseConnection(connection, "RAMONAL");
  DatabaseConfig dbConfig = conn.getConfig();
  dbConfig.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new OracleDataTypeFactory());

  return conn;
 }

 @Override
 protected String[] getConfigLocations() {
  /**
   * If there are beans with the same name in different configuration files,
   * the last bean definition overwrites the previous one. For integration
   * testing, this feature is a good facility: by defining a separate test
   * Spring configuration file, we can build a test environment without
   * modifying the normal Spring configuration files that manage the
   * production environment.
   */
  return new String[] { "spring-service.xml", "spring-dao.xml", "spring-eig.xml",
          "spring-raffle.xml" };
 }

 /**
  * Convert a java.util.Date to a string, then compare the string forms of
  * the dates, because the long value of an in-memory java.util.Date differs
  * from the long value of a java.util.Date retrieved from the database.
  */
 protected String date2String(Date date) {
  assert date != null : "Argument 'date' can not be null.";
  SimpleDateFormat sdf = new SimpleDateFormat(DATE_FORMAT);
  return sdf.format(date);
 }

 protected String uuid() {
  UUID uuid = UUID.randomUUID();
  String uuidStr = uuid.toString();
  return uuidStr.replace("-", "");
 }

 protected SysConfiguration getSysConfiguration() {
  SysConfigurationDao dao = this.getBean(SysConfigurationDao.class, "sysConfigurationDao");
  return dao.getSysConfiguration();
 }

 protected String encryptSerialNo(String serialNo) {
  String tmp = serialNo;
  if (this.getSysConfiguration().isEncryptSerialNo()) {
   tmp = RsaCipher.encrypt(BaseUnitTest.RSA_PUBLIC_KEY, serialNo);
  }
  return tmp;
 }

 protected void printMethod() {
  StringBuffer lineBuffer = new StringBuffer("+");
  for (int i = 0; i < 80; i++) {
   lineBuffer.append("-");
  }
  lineBuffer.append("+");
  String line = lineBuffer.toString();

  // Get the test method from the stack trace. Index 0 is this method
  // itself; index 1 is the calling test method.
  StackTraceElement eles[] = new Exception().getStackTrace();
  // StackTraceElement eles[] = new Exception().getStackTrace();
  // for (StackTraceElement ele : eles){
  // System.out.println("class:" + ele.getClassName());
  // System.out.println("method:" + ele.getMethodName());
  // }
  String className = eles[1].getClassName();
  int index = className.lastIndexOf(".");
  className = className.substring((index == -1 ? 0 : (index + 1)));

  String method = className + "." + eles[1].getMethodName();
  StringBuffer padding = new StringBuffer();
  for (int i = 0; i < line.length(); i++) {
   padding.append(" ");
  }
  System.out.println(line);
  String methodSig = (method + padding.toString()).substring(0, line.length() - 3);
  System.out.println("| " + methodSig + "|");
  System.out.println(line);
 }

 @SuppressWarnings("unchecked")
 protected <T> T getBean(Class<T> c, String beanName) {
  return (T) this.getApplicationContext().getBean(beanName, c);
 }

 protected void initializeMLotteryContext() {
  MLotteryContext.getInstance().setBeanFactory(this.getApplicationContext());
 }

 protected GameDraw getGameInstance(String drawNo, String gameId) {
  GameDrawDao drawDao = this.getBean(GameDrawDao.class, "gameDrawDao");
  GameDraw draw = drawDao.getByNumberAndGame(drawNo, gameId);
  FunTypeDao funTypeDao = this.getBean(FunTypeDao.class, "lottoFunTypeDao");
  LottoFunType funType = (LottoFunType) funTypeDao.getById(draw.getGame().getFunTypeId());
  draw.getGame().setFunType(funType);
  return draw;
 }

 protected BigDecimal calculateTicketAmount(Ticket ticket) throws Exception {
  LottoFunType funType = (LottoFunType) ticket.getGameDraw().getGame().getFunType();
  List<LottoEntry> entries = ticket.getEntries();
  long totalBets = 0;
  for (LottoEntry entry : entries) {
   int betOption = entry.getBetOption();
   String numberFormat = MLotteryContext.getInstance().getLottoNumberFormat(betOption);

   BetOptionStrategy strategy = null;
   if (betOption == LottoEntry.BETOPTION_SINGLE) {
    strategy = new SingleStrategy(numberFormat, funType);
   } else if (betOption == LottoEntry.BETOPTION_MULTIPLE) {
    strategy = new MultipleStrategy(ticket, numberFormat, funType);
   } else if (betOption == LottoEntry.BETOPTION_BANKER) {
    strategy = new BankerStrategy(numberFormat, funType);
   } else if (betOption == LottoEntry.BETOPTION_ROLL) {
    strategy = new RollStrategy(numberFormat, funType);
   }

   SelectedNumber sNumber = new SelectedNumber();
   String numberParts[] = entry.getSelectNumber().split(SelectedNumber.DELEMETER_BASE);
   sNumber.setBaseNumber(numberParts[0]);
   if (numberParts.length == 2) {
    sNumber.setSpecialNumber(numberParts[1]);
   }
   sNumber.setBaseNumbers(parseNumberPart(sNumber.getBaseNumber()));
   sNumber.setSpecialNumbers(parseNumberPart(sNumber.getSpecialNumber()));
   totalBets += strategy.getTotalBets(sNumber);
  }
  // get base amount
  Game game = ticket.getGameDraw().getGame();
  OperationParameterDao opDao = GameTypeBeanFactory.getOperatorParameterDao(game.getType());
  LottoOperationParameter lop = (LottoOperationParameter) opDao.getById(game
          .getOperatorParameterId());
  return lop.getBaseAmount().multiply(new BigDecimal(totalBets));
 }

 protected int[] parseNumberPart(String numberPart) {
  if (numberPart == null)
   return null;
  String strNumbers[] = numberPart.split(SelectedNumber.DELEMETER_NUMBER);
  int numbers[] = new int[strNumbers.length];
  for (int i = 0; i < numbers.length; i++) {
   numbers[i] = Integer.parseInt(strNumbers[i]);
  }
  // Sorts the specified array of integers into ascending numerical order.
  Arrays.sort(numbers);
  return numbers;
 }
}
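With this base class in place, a concrete transactional test stays short. Below is a minimal hypothetical sketch (the test class and its assertions are illustrative, not from a real project file); note there is no cleanup code, because Spring rolls the transaction back in onTearDown():

public class SysConfigurationDaoTest extends BaseTransactionTest {
 public void testGetSysConfiguration() {
  printMethod();
  // runs inside the transaction Spring opened for this test method
  SysConfiguration sysConf = this.getSysConfiguration();
  assertNotNull(sysConf);
  // even direct JDBC updates join the same test transaction and are
  // rolled back automatically when the test finishes
  this.getJdbcTemplate().execute("update GPE_KEY set update_time=sysdate");
 }
}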

As we know, in a web application the Spring context is stored in the servlet context, so we override the createApplicationContext() method to meet our requirement; please check that method's comment.
Now it is time to show an example. Let's say there is a servlet named HttpDispatchServlet, and we will write a test case for it. Look at spring-service.xml first: since our test case will extend BaseServletTest, which extends BaseTransactionTest, we must define a DataSource-typed bean in the Spring context (Spring will inject the DataSource instance into BaseTransactionTest automatically).

<beans ...>
        <bean id="bookService" class="net.mpos.lottery.httprmi.DefaultBookService"/>
        <bean id="dataSourceTarget" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
                <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
                <property name="url" value="jdbc:oracle:thin:@192.168.2.9:1521/orcl"/>
                <property name="username" value="ramonal"/>
                <property name="password" value="ramonal"/>
        </bean>
        <bean id="dataSource" class="net.mpos.lottery.spring.MyDelegatingDataSource">
                <property name="targetDataSource">
                        <ref local="dataSourceTarget"/>
                </property>
        </bean>
        <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
                <property name="dataSource">
                        <ref bean="dataSource"/>
                </property>
        </bean>
</beans>

When initializing the Spring context, you will get an exception: two instances of DataSource type... Here I override the setDataSource() method in the BaseTransactionTest class to fix it:

/**
 * As there are two DataSource-typed instances in the Spring context, we must
 * override the parent method to set the qualifier.
 */
public void setDataSource(@Qualifier("dataSource") DataSource dataSource) {
    super.setDataSource(dataSource);
}
Now let's look at HttpDispatchServletTest and how to implement it:
public class HttpDispatchServletTest extends BaseServletTest {
    private HttpDispatchServlet servlet;
    
    public void mySetUp() throws Exception{
        super.mySetUp();
        servlet = new HttpDispatchServlet();
        servlet.init(config);
    }

    @Test
    public void testDoPost_BookService_add_Encryption() throws Exception {
        printMethod();
        GSonUtils gson = new GSonUtils();
        String reqContent = gson.toJson(DomainMocker.mockBook());
        request.addHeader(HttpDispatchServlet.HEADER_RMI_TAG, "bookService.add");
        request.addHeader(EncryptionHttpPackInterceptor.HEADER_MAC,
                HMacMd5Cipher.doDigest(reqContent, HMacMd5CipherTest.MAC_KEY));
        reqContent = TriperDESCipher.encrypt(TriperDesCipherTest.DES_KEY, reqContent);
        request.setContent(reqContent.getBytes());
        servlet.doPost(request, response);
        // assert response
        int status = response.getStatus();
        assertEquals(200, status);
    } 

    public void myTearDown(){
        super.myTearDown();
        servlet = null;
    }
}
OK, now you can run it from Eclipse via 'Run As > JUnit Test'. No Tomcat needed.