Saturday, December 12, 2009

Location-based DB schema in MySQL

http://dev.mysql.com/doc/refman/4.1/en/spatial-extensions.html
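
As a quick, hedged sketch of what such a schema can look like in practice (assuming a MySQL server with the spatial extensions available, a MyISAM table so the SPATIAL INDEX works, and the Connector/J driver on the classpath; the table name, coordinates, and connection details are my own illustrative assumptions, not anything from the linked manual page):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GeoSchemaDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver"); // Connector/J
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "root", "");
        Statement st = con.createStatement();
        // A POINT column plus a SPATIAL INDEX (MyISAM-only in 4.1/5.0)
        st.executeUpdate("CREATE TABLE places ("
                + " id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,"
                + " name VARCHAR(100),"
                + " loc POINT NOT NULL,"
                + " SPATIAL INDEX(loc)"
                + ") ENGINE=MyISAM");
        st.executeUpdate("INSERT INTO places (name, loc) VALUES"
                + " ('office', GeomFromText('POINT(34.78 32.08)'))");
        // Bounding-box lookup that can use the spatial index
        ResultSet rs = st.executeQuery("SELECT name FROM places"
                + " WHERE MBRContains(GeomFromText("
                + "'POLYGON((34 32, 35 32, 35 33, 34 33, 34 32))'), loc)");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        con.close();
    }
}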

Sunday, November 15, 2009

OpenInviter

  • Easy access to your visitors' address books in all major email providers and social networks around the world.

  • A completely painless and easy way of integrating into your website. It takes no more than 5 minutes to have your own OpenInviterTM up and running on your site.

  • Constant updates, so you can sit back and relax and always have access to the latest ways to get your visitors' address books.

  • WGET-ready! Yes, you read right! OpenInviterTM is the only contacts importer supporting both WGET and cURL as methods of handling requests (since version 1.2) so now you can use it on ANY server you want without the hassle of installing libcurl!

  • Real time access to the service statuses so you can know if there is an email provider that is not working right with OpenInviterTM.

http://openinviter.com/faq.php

Monday, October 26, 2009

Upgrading MySQL from 5.0 to 5.1 on CentOS 5

My first attempt was to use yum:

[root@linux /] yum info mysql
Loading "fastestmirror" plugin
Loading mirror speeds from cached hostfile
* base: mirror.sanctuaryhost.com
* updates: mirror.fdcservers.net
* addons: mirror.steadfast.net
* extras: mirror.trouble-free.net
Installed Packages
Name : mysql
Arch : i386
Version: 5.0.45
Release: 7.el5
Size : 7.3 M
Repo : installed
Summary: MySQL client programs and shared libraries.
Description:
MySQL is a multi-user, multi-threaded SQL database server. MySQL is a
client/server implementation consisting of a server daemon (mysqld)
and many different client programs and libraries. The base package
contains the MySQL client programs, the client shared libraries, and
generic MySQL files.


Available Packages
Name : mysql
Arch : i386
Version: 5.0.77
Release: 3.el5
Size : 4.8 M
Repo : base
Summary: MySQL client programs and shared libraries
Description:
MySQL is a multi-user, multi-threaded SQL database server. MySQL is a
client/server implementation consisting of a server daemon (mysqld)
and many different client programs and libraries. The base package
contains the MySQL client programs, the client shared libraries, and
generic MySQL files.


Version 5.0.77 is fine, but it is not the latest release, which as of today is 5.1.40. That release contains an important MySQL feature named
partitioning: it enables distributing portions of individual tables across a file system, according to rules which can be set when the table is created. In effect, different portions of a table are stored as separate tables in different locations, but from the user's point of view the partitioned table is still a single table.
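
As a small, hedged illustration of the feature (this needs a 5.1 server with partitioning compiled in; the table, columns, and connection details below are my own assumptions), a RANGE-partitioned table can be created like this from JDBC:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PartitionDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver"); // Connector/J
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "root", "");
        Statement st = con.createStatement();
        // Rows are routed to p0..p3 by YEAR(hired); each partition is
        // stored separately, but queries still see one logical table.
        st.executeUpdate("CREATE TABLE employees ("
                + " id INT NOT NULL,"
                + " hired DATE NOT NULL"
                + ") PARTITION BY RANGE (YEAR(hired)) ("
                + " PARTITION p0 VALUES LESS THAN (1995),"
                + " PARTITION p1 VALUES LESS THAN (2000),"
                + " PARTITION p2 VALUES LESS THAN (2005),"
                + " PARTITION p3 VALUES LESS THAN MAXVALUE)");
        st.close();
        con.close();
    }
}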

Basically yum won't do the upgrade, since the package vendor changed from MySQL to Sun Microsystems. As a result I have to do a complete uninstall and manual re-install. First back up your data, then stop the mysql service and uninstall the current installation (5.0.45 in my case):

[root@16 /]service mysqld stop
[root@16 /]rpm -qa | grep -i '^mysql-'
mysql-server-5.0.45-7.el5
mysql-5.0.45-7.el5
mysql-devel-5.0.45-7.el5
[root@16 /]rpm -e mysql-server-5.0.45-7.el5
[root@16 /]rpm -e mysql-5.0.45-7.el5
[root@16 /]rpm -e mysql-devel-5.0.45-7.el5

Now download all the current MySQL packages you need and install them all with rpm -i (links from the MySQL site):

[root@linux /usr/local/bin] mkdir mysql_5.1.40
[root@linux /usr/local/bin] cd mysql_5.1.40
wget http://dev.mysql.com/get/Downloads/MySQL-5.1/MySQL-server-community-5.1.40-0.rhel5.i386.rpm/from/http://mirror.mirimar.net/mysql/
wget http://dev.mysql.com/get/Downloads/MySQL-5.1/MySQL-client-community-5.1.40-0.rhel5.i386.rpm/from/http://mirror.mirimar.net/mysql/
wget http://dev.mysql.com/get/Downloads/MySQL-5.1/MySQL-shared-community-5.1.40-0.rhel5.i386.rpm/from/http://mirror.mirimar.net/mysql/
wget http://dev.mysql.com/get/Downloads/MySQL-5.1/MySQL-shared-compat-5.1.40-0.rhel5.i386.rpm/from/http://mirror.mirimar.net/mysql/
wget http://dev.mysql.com/get/Downloads/MySQL-5.1/MySQL-devel-community-5.1.40-0.rhel5.i386.rpm/from/http://mirror.mirimar.net/mysql/
wget http://dev.mysql.com/get/Downloads/MySQL-5.1/MySQL-embedded-community-5.1.40-0.rhel5.i386.rpm/from/http://mirror.mirimar.net/mysql/
wget http://dev.mysql.com/get/Downloads/MySQL-5.1/MySQL-test-community-5.1.40-0.rhel5.i386.rpm/from/http://mirror.mirimar.net/mysql/
[root@linux /mysql_5.1.40] rpm -i MySQL-shared-community-5.1.40-0.rhel5.i386.rpm
[root@linux /mysql_5.1.40] rpm -i MySQL-embedded-community-5.1.40-0.rhel5.i386.rpm
[root@linux /mysql_5.1.40] rpm -i MySQL-server-community-5.1.40-0.rhel5.i386.rpm
[root@linux /mysql_5.1.40] rpm -i MySQL-client-community-5.1.40-0.rhel5.i386.rpm
[root@linux /mysql_5.1.40] rpm -i MySQL-test-community-5.1.40-0.rhel5.i386.rpm
[root@linux /mysql_5.1.40] rpm -i MySQL-devel-community-5.1.40-0.rhel5.i386.rpm
[root@linux /mysql_5.1.40] mysql_upgrade
[root@linux /mysql_5.1.40] service mysql start
[root@linux /mysql_5.1.40] mysqladmin -V
mysqladmin Ver 8.42 Distrib 5.1.40, for pc-linux-gnu on i686

Monday, October 19, 2009

The Cloud Dilemma for Developers (from a Tikal Community Event)

Introduction to Cloud Computing by Yanai Franchi



Google App Engine Intro By Andrew Skiba



Amazon AWS Case Study by Dudi Landau, CTO of ClearForest (Thomson Reuters)

Wednesday, August 19, 2009

Benchmarks in Java

The first one deals with all the nasty and important details one has to consider before benchmarking (compiler optimizations, JVM specifics, etc.) and gives an example of how they can bite you in the behind. The second one is a real-life example of an attempt at performance analysis that at first glance appears reasonable; after a closer look, it turns out to have tons of problems and flaws that basically obscure the intended measurements.
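
To make the first article's point concrete, here is a minimal sketch of my own (not code from either article) showing two of the classic traps: timing before the JIT has warmed up, and letting dead-code elimination remove the very work being measured:

public class BenchSketch {

    // The workload we pretend to measure.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (long) i * i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long sink = 0;
        // Warm-up: let HotSpot compile work() before timing it.
        for (int i = 0; i < 10000; i++) {
            sink += work(1000);
        }
        long start = System.nanoTime();
        for (int i = 0; i < 10000; i++) {
            sink += work(1000);
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("avg ns/call: " + elapsed / 10000.0);
        // Print the sink so the JIT can't prove the loops are dead code.
        System.out.println("(ignore) sink = " + sink);
    }
}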

Saturday, July 25, 2009

WEB4J. Because simple is beautiful.

The important things about WEB4J are:

faint - The Face Annotation Interface


This project is a flexible Java framework for face detection and face recognition technologies, based on different plugin and filter types. A suitable graphical interface can be used to set up pipelines for detection and recognition by combining these plugins and filters. Moreover, an integrated photo browser allows users to apply the face detection and recognition process to personal images.

  1. Project Details
  2. Download and Launch
  3. Demo Videos
  4. Developer Guide

nVidia CUDA

NVIDIA® CUDA™ technology leverages the massively parallel processing power of NVIDIA GPUs. The CUDA architecture is a revolutionary parallel computing architecture that delivers the performance of NVIDIA’s world-renowned graphics processor technology to general purpose GPU Computing. Applications that run on the CUDA architecture can take advantage of an installed base of over one hundred million CUDA-enabled GPUs in desktop and notebook computers, professional workstations, and supercomputer clusters.
With the CUDA architecture and tools, developers are achieving dramatic speedups in fields such as medical imaging and natural resource exploration, and creating breakthrough applications in areas such as image recognition and real-time HD video playback and encoding. CUDA enables this unprecedented performance via standard APIs such as the soon to be released OpenCL™ and DirectX® Compute, and high level programming languages such as C/C++, Fortran, Java, Python, and the Microsoft .NET Framework.

The CUDA Developer SDK provides examples with source code to help you get started with CUDA. Examples include:
  • Parallel bitonic sort
  • Matrix multiplication
  • Matrix transpose
  • Performance profiling using timers
  • Parallel prefix sum (scan) of large arrays
  • Image convolution
  • 1D DWT using Haar wavelet
  • OpenGL and Direct3D graphics interoperation examples
  • CUDA BLAS and FFT library usage examples
  • CPU-GPU C- and C++-code integration
  • Binomial Option Pricing
  • Black-Scholes Option Pricing
  • Monte-Carlo Option Pricing
  • Parallel Mersenne Twister (random number generation)
  • Parallel Histogram
  • Image Denoising
  • Sobel Edge Detection Filter

Thursday, July 9, 2009

Tweaking hard disk on Linux

hdparm -Tt /dev/hda

/dev/hda:
Timing buffer-cache reads: 128 MB in 1.34 seconds = 95.52 MB/sec
Timing buffered disk reads: 64 MB in 17.86 seconds = 3.58 MB/sec

hdparm /dev/hda

/dev/hda:
multcount = 0 (off)
I/O support = 0 (default 16-bit)
unmaskirq = 0 (off)
using_dma = 0 (off)
keepsettings = 0 (off)
nowerr = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 1870/255/63, sectors = 30043440, start = 0

  1. multcount: Short for multiple sector count. This controls how many sectors are fetched from the disk in a single I/O interrupt. Almost all modern IDE drives support this. The man page claims: when this feature is enabled, it typically reduces operating system overhead for disk I/O by 30-50%. On many systems, it also provides increased data throughput of anywhere from 5% to 50%.
  2. I/O support: This is a big one. This flag controls how data is passed from the PCI bus to the controller. Almost all modern controller chipsets support mode 3, or 32-bit mode w/sync. Some even support 32-bit async. Turning this on will almost certainly double your throughput (see below.)
  3. unmaskirq: Turning this on will allow Linux to unmask other interrupts while processing a disk interrupt. What does that mean? It lets Linux attend to other interrupt-related tasks (e.g., network traffic) while waiting for your disk to return with the data it asked for. It should improve overall system response time, but be warned: not all hardware configurations will be able to handle it. See the manpage.
  4. using_dma: DMA can be a tricky business. If you can get your controller and drive using a DMA mode, do it. But I have seen more than one machine hang while playing with this option.

hdparm -X66 -d1 -u1 -m16 -c3 /dev/hda

/dev/hda:
setting 32-bit I/O support flag to 3
setting multcount to 16
setting unmaskirq to 1 (on)
setting using_dma to 1 (on)
setting xfermode to 66 (UltraDMA mode2)
multcount = 16 (on)
I/O support = 3 (32-bit w/sync)
unmaskirq = 1 (on)
using_dma = 1 (on)

hdparm -tT /dev/hda

/dev/hda:
Timing buffer-cache reads: 128 MB in 1.43 seconds = 89.51 MB/sec
Timing buffered disk reads: 64 MB in 3.18 seconds = 20.13 MB/sec

Saturday, July 4, 2009

Eclipse 3.5 Galileo is here!


What's new in the latest version of the open source multipurpose IDE and application platform

Galileo is the simultaneous release of 33 major Eclipse projects. However, some of those are subprojects that are rolled up into larger projects, and not all of them are highlighted in the Eclipse Foundation's marketing push. Regardless, Galileo represents the largest single release of new Eclipse technology to date.

The important thing to remember about Galileo in particular and Eclipse release trains in general is that even though it's a simultaneous release, it doesn't mean these projects are unified. Each project is a separate open source project, operating with its own project leadership, its own committers, and its own development plan. The release train concept is designed to provide a transparent and predictable development cycle.

Get Galileo

There are two main ways to get Galileo. The first — and recommended — way is to just grab a package relevant to you. The other way to get Galileo is to use an update site.

Packages

Go to the Eclipse Galileo Packages site. The packages site contains nine pre-bundled versions of Galileo, each tailored to specific needs.

Galileo update site

To get Galileo using an update site, download the Eclipse V3.5 SDK. Once this is done, you can launch Eclipse and access the software-update mechanism via Help > Software Updates (see Figure 2). Enter the proper Galileo update site information, if it isn't already available as the Galileo Discovery Site. Once you are connected to the Galileo update site, you should see the list of available features that are part of the Galileo release train. It's as simple as that. Once you're connected, you can simply choose what features to install into your Eclipse.


Figure 2. Software updates


The projects

The Eclipse ecosystem is a large and sometimes intimidating place. About 100 projects are being overseen by the Eclipse Foundation, and the Galileo release only represents a snapshot of that. The Galileo release train showcases Eclipse technology and helps adopters integrate Eclipse technology into their products. For more information about the Galileo projects, see the links below.


Project | Synopsis | Web site
Accessibility Tools Framework (ACTF) | Build applications and content for people with disabilities | http://www.eclipse.org/actf/
Business Intelligence and Reporting Tools (BIRT) | Generate reports | http://www.eclipse.org/birt
C/C++ Development Tooling (CDT) | Code C/C++ | http://www.eclipse.org/cdt
Data Tools Platform (DTP) | Extensible frameworks and tools | http://www.eclipse.org/datatools/
Eclipse Modeling Framework (EMF) | Modeling framework and code generation facility | http://www.eclipse.org/modeling/emf/
Eclipse Packaging Project | Create, download, and install packages | http://www.eclipse.org/epp/
Eclipse Platform | Core frameworks and services | http://www.eclipse.org/platform/
Equinox | Implementation of the OSGi R4 core framework spec | http://www.eclipse.org/equinox/
Graphical Editor Framework (GEF) | Develop graphical applications | http://www.eclipse.org/gef/
Graphical Modeling Framework (GMF) | Develop graphical editors | http://www.eclipse.org/gmf/
Java™ Workflow Tooling (JWT) | Toolset for workflows and processes from design to monitoring | http://www.eclipse.org/jwt/
Java Development Tools (JDT) | Develop Java applications | http://www.eclipse.org/jdt/
Java Emitter Templates (M2T JET) | Generate textual artifacts from models | http://www.eclipse.org/modeling/m2t/
Memory Analyzer | Find memory leaks and reduce memory consumption | http://www.eclipse.org/mat/
Mobile Tools for Java (MTJ) | Extend Eclipse frameworks to support mobile device Java application development | http://www.eclipse.org/dsdp/mtj/
Mylyn | Monitors your work to make the GUI relevant to what you're doing | http://www.eclipse.org/mylyn/
PHP Development Tools (PDT) | Code PHP | http://www.eclipse.org/pdt/
Rich Ajax Platform (RAP) | Code Ajax | http://www.eclipse.org/rap/
SCA Tools | Tools for the Service Component Architecture standard | http://www.eclipse.org/stp/sca/
SOA Tools | Code Service-Oriented Architecture apps | http://www.eclipse.org/stp/
Swordfish | Extensible SOA framework | http://www.eclipse.org/swordfish/
Target Management | Configure and manage remote systems | http://www.eclipse.org/dsdp/tm/
Test and Performance Tools Platform Project (TPTP) | Tooling for profiling and testing applications | http://www.eclipse.org/tptp/
Textual Modeling Framework (Xtext) | Code external textual DSLs | http://www.eclipse.org/modeling/tmf/
Tools for mobile Linux (TmL) | Code mobile applications | http://www.eclipse.org/dsdp/tml/
Web Tools Platform (WTP) | Code Web and Java EE applications | http://www.eclipse.org/webtools/

Google Developer Days Brazil 2009 - Keynote

Tuesday, June 30, 2009

Neo4j, a graph database

Neo4j is a graph database. It is an embedded, disk-based, fully transactional Java persistence engine that stores data structured in graphs rather than in tables. A graph (mathematical lingo for a network) is a flexible data structure that allows a more agile and rapid style of development.

According to Emil Eifrem, the Neo4j database outperforms relational backends by more than 1000x for many increasingly important use cases.
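
For flavor, here is a small sketch of embedded use with the Neo4j Java API roughly as it looked around the 1.0 release (class and package names shifted between early versions, and the store path and property names are my own assumptions):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.RelationshipType;
import org.neo4j.graphdb.Transaction;
import org.neo4j.kernel.EmbeddedGraphDatabase;

public class NeoSketch {
    // Relationship types are just an enum you define yourself.
    enum RelTypes implements RelationshipType { KNOWS }

    public static void main(String[] args) {
        GraphDatabaseService db = new EmbeddedGraphDatabase("var/graphdb");
        Transaction tx = db.beginTx();
        try {
            // All writes happen inside a transaction.
            Node alice = db.createNode();
            alice.setProperty("name", "Alice");
            Node bob = db.createNode();
            bob.setProperty("name", "Bob");
            alice.createRelationshipTo(bob, RelTypes.KNOWS);
            tx.success();
        } finally {
            tx.finish();
        }
        db.shutdown();
    }
}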

Sunday, June 28, 2009

How to Exploit Multiple Cores

How to Exploit Multiple Cores for Better Performance and Scalability (by Todd Hoff)

InfoQ has this excellent talk by Brian Goetz on the new features being added to Java SE 7 that will allow programmers to fully exploit our massively multi-processor future. While the talk is about Java, it's really more general than that, and there's a lot to learn here for everyone.

Brian starts with a short, coherent, and compelling explanation of why programmers can't expect to be saved by ever faster CPUs and why we must learn to exploit the strengths of multiple core computers to make our software go faster.

Some techniques for exploiting multiple cores are given in an equally short, coherent, and compelling explanation covering divide and conquer as the secret to multi-core bliss, fork-join, how the Java approach differs from map-reduce, and lots of other juicy topics.

The multi-core "problem" is only going to get worse. Tilera founder Anant Agarwal estimates by 2017 embedded processors could have 4,096 cores, server CPUs might have 512 cores and desktop chips could use 128 cores. Some disagree saying this is too optimistic, but Agarwal maintains the number of cores will double every 18 months.

An abstract of the talk follows though I would highly recommend watching the whole thing. Brian does a great job.

Why is Parallelism More Important Now?

  • Coarse-grained concurrency was all the rage for Java 5. The hardware reality has changed: the number of cores is increasing, so applications must now search for fine-grained parallelism (fork-join).
  • As hardware becomes more parallel, with more and more cores, software has to look for techniques to find more and more parallelism to keep the hardware busy.
  • Clock rates increased exponentially over the last 30 years or so. That allowed programmers to be lazy, because a faster processor would be released that saved your butt; there wasn't a need to tune programs.
  • That wait-for-a-faster-processor game is up. Around 2003 clock rates stopped increasing; we hit the power wall. Faster processors require more power, which requires thinner chip conductor lines, and the thinner lines can't dissipate the increased power without overheating, which affects the resistance characteristics of the conductors. So you can't keep increasing the clock rate.
  • The fastest Intel CPU 4 or 5 years ago was 3.2 GHz. Today it's about the same or even slower.
  • It's easier to build 2.6 GHz or 2.8 GHz chips. Moore's law wasn't repealed, so we can still cram more transistors onto each wafer; more processing power can be put on a chip, which leads to putting more and more processing cores on a chip. This is multicore.
  • Multicore systems are the trend. The number of cores will grow at an exponential rate for the next 10 years: 4 cores at the low end, with 256-core (Sun) and 800-core (Azul) systems at the high end.
  • More cores per chip instead of faster chips. Moore's law has been redirected to multicore.
  • The problem is that it's harder to make a program go faster on a multicore system. A faster chip will run your program faster; with 100 cores, your program won't go faster unless you explicitly design it to take advantage of those cores.
  • No free lunch anymore. You must now be able to partition your program so it can run faster by running on multiple cores, and you must be able to keep doing that as the number of cores keeps growing.
  • We need a way to specify programs so they can be made parallel as topologies change by adding more cores.
  • As hardware evolves, platforms must evolve to take advantage of the new hardware. We started off with coarse-grained tasks, which was sufficient given the number of cores; this approach won't work as the number of cores increases.
  • We must find finer-grained parallelism. Example: sorting and searching data. The opportunities are around the data: the data for sorting can be chunked, the chunks sorted, and the results brought together with a merge sort. Searching can be done in parallel by searching subregions of the data and merging the results.
  • Parallel solutions use more CPU in aggregate because of the coordination needed and because the data is handled more than once (the merge). But the result is faster because it's done in parallel. This adds business value: faster is better for humans.

What has Java 7 Added to Support Parallelism?

  • The example problem is to find the max number in a list.
  • The coarse-grained threading approach is to use a thread pool, divide up the numbers, and let the task pool compute the subproblems. A shared task pool gets slow as the number of tasks increases, which forces the work to be more coarse-grained. There's no way to load balance, the code is ugly, and it doesn't match the problem well. The runtime is dominated by how long the longest subtask takes, and you had to decide up front how many pieces to divide the problem into.
  • The solution is divide and conquer: divide the set into pieces recursively until the problem is so small that the sequential solution is more efficient, solve the pieces, and merge the results. O(n log n), but the problem is parallelizable; it scales well and can keep many CPUs busy.
  • Divide and conquer uses fork-join to fork off subtasks, wait for them to complete, and then join the results (a sketch follows this list). A typical thread pool solution is not efficient: it creates too many threads, and creating threads is expensive and uses a lot of memory.
  • This approach is portable because it's abstract: it doesn't know how many processors are available, and it's independent of the topology.
  • The fork-join pool is optimized for fine-grained operations, whereas the thread pool is optimized for coarse-grained operations. It's best used for problems without IO: just computations using CPU that tend to fork off subproblems. It allows data to be shared read-only and used across different computations without copying.
  • This approach scales nearly linearly with the number of hardware threads.
  • The goals for fork-join: avoid context switches; have as many threads as hardware threads and keep them all busy; minimize queue lock contention for data structures; avoid a common task queue.
  • The implementation uses work-stealing. Each thread has a work queue that is a double-ended queue. Each thread pulls work from the head of its queue and processes it; when there's nothing to do, it steals work from the tail of another queue. There's no contention for the head because only one thread accesses it, and contention on the tail is rare because stealing is infrequent: the stolen work is large, so it takes time to process. The process starts with one task, which breaks up the work; other threads steal work and start the same process. It load-balances without central coordination, with few context switches and little coordination.
  • The same approach also works for graph traversal, matrix operations, linear algebra, modeling, and generating moves and evaluating the results. Latent parallelism can be found in a lot of places once you start looking.
  • Higher-level operations like ParallelArray are supported, with filtering, transformation, and aggregation options. It's not a generalized in-memory database, but it has a very transparent cost model: it's clear how many parallel operations are happening, so you can look at the code and quickly know what's a parallel operation and what it costs.
  • It looks like map-reduce, except this scales across a multicore system in a single JVM, whereas map-reduce scales across a cluster. The strategy is the same: divide and conquer.
  • The idea is to make specifying parallel operations so easy you wouldn't even think of the serial approach.
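
Here is a hedged sketch of that divide-and-conquer style, written against the fork-join classes that ended up in JDK 7's java.util.concurrent (the threshold and the data are illustrative assumptions):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class MaxTask extends RecursiveTask<Integer> {
    private static final int THRESHOLD = 1000; // below this, go sequential
    private final int[] data;
    private final int lo, hi;

    MaxTask(int[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Integer compute() {
        if (hi - lo <= THRESHOLD) {
            // Small enough: solve sequentially.
            int max = Integer.MIN_VALUE;
            for (int i = lo; i < hi; i++) {
                max = Math.max(max, data[i]);
            }
            return max;
        }
        int mid = (lo + hi) >>> 1;
        MaxTask left = new MaxTask(data, lo, mid);
        MaxTask right = new MaxTask(data, mid, hi);
        left.fork();                    // queue the left half for stealing
        int rightMax = right.compute(); // work on the right half ourselves
        return Math.max(rightMax, left.join());
    }

    public static void main(String[] args) {
        int[] data = new int[1000000];
        for (int i = 0; i < data.length; i++) {
            data[i] = (int) (Math.random() * Integer.MAX_VALUE);
        }
        int max = new ForkJoinPool().invoke(new MaxTask(data, 0, data.length));
        System.out.println("max = " + max);
    }
}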
Tuesday, June 16, 2009

How to control services in Linux with chkconfig

Linux / Unix Command: chkconfig

chkconfig - updates and queries runlevel information for system services:

chkconfig --list [name]
chkconfig --add name
chkconfig --del name
chkconfig [--level levels] name <on|off|reset>
chkconfig [--level levels] name

DESCRIPTION

chkconfig provides a simple command-line tool for maintaining the /etc/rc[0-6].d directory hierarchy by relieving system administrators of the task of directly manipulating the numerous symbolic links in those directories.

This implementation of chkconfig was inspired by the chkconfig command present in the IRIX operating system. Rather than maintaining configuration information outside of the /etc/rc[0-6].d hierarchy, however, this version directly manages the symlinks in /etc/rc[0-6].d. This leaves all of the configuration information regarding what services init starts in a single location.

chkconfig has five distinct functions: adding new services for management, removing services from management, listing the current startup information for services, changing the startup information for services, and checking the startup state of a particular service. When chkconfig is run without any options, it displays usage information. If only a service name is given, it checks to see if the service is configured to be started in the current runlevel. If it is, chkconfig returns true; otherwise it returns false. The --level option may be used to have chkconfig query an alternative runlevel rather than the current one. If one of on, off, or reset is specified after the service name, chkconfig changes the startup information for the specified service. The on and off flags cause the service to be started or stopped, respectively, in the runlevels being changed. The reset flag resets the startup information for the service to whatever is specified in the init script in question.

By default, the on and off options affect only runlevels 2, 3, 4, and 5, while reset affects all of the runlevels. The --level option may be used to specify which runlevels are affected.

Note that for every service, each runlevel has either a start script or a stop script. When switching runlevels, init will not re-start an already-started service, and will not re-stop a service that is not running.

OPTIONS

--level levels

Specifies the run levels an operation should pertain to. It is given as a string of numbers from 0 to 7. For example, --level 35 specifies runlevels 3 and 5.

--add name

This option adds a new service for management by chkconfig. When a new service is added, chkconfig ensures that the service has either a start or a kill entry in every runlevel. If any runlevel is missing such an entry, chkconfig creates the appropriate entry as specified by the default values in the init script. Note that default entries in LSB-delimited 'INIT INFO' sections take precedence over the default runlevels in the initscript.

--del name

The service is removed from chkconfig management, and any symbolic links in /etc/rc[0-6].d which pertain to it are removed.

--list name

This option lists all of the services which chkconfig knows about, and whether they are stopped or started in each runlevel. If name is specified, information is displayed only about service name.

RUNLEVEL FILES

Each service which should be manageable by chkconfig needs two or more commented lines added to its init.d script. The first line tells chkconfig what runlevels the service should be started in by default, as well as the start and stop priority levels. If the service should not, by default, be started in any runlevels, a - should be used in place of the runlevels list. The second line contains a description for the service, and may be extended across multiple lines with backslash continuation.

For example, random.init has these three lines:

# chkconfig: 2345 20 80
# description: Saves and restores system entropy pool for \
# higher quality random number generation.

This says that the random script should be started in levels 2, 3, 4, and 5, that its start priority should be 20, and that its stop priority should be 80. You should be able to figure out what the description says; the \ causes the line to be continued. The extra space in front of the line is ignored.

For instance, take a look at a service configuration to run Tomcat:

#startup script for Jakarta Tomcat
#
# chkconfig: 345 84 16
# description: Jakarta Tomcat Java Servlet/JSP Container

TOMCAT_HOME=/usr/local/bin/apache-tomcat-6.0.16
TOMCAT_START=/usr/local/bin/apache-tomcat-6.0.16/bin/startup.sh
TOMCAT_STOP=/usr/local/bin/apache-tomcat-6.0.16/bin/shutdown.sh
TOMCAT_RUN=/usr/local/bin/apache-tomcat-6.0.16/bin/catalina.sh
#Necessary environment variables
export CATALINA_HOME=/usr/local/bin/apache-tomcat-6.0.16

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ ${NETWORKING} = "no" ] && exit 0

# Check for the tomcat script
if [ ! -f $TOMCAT_HOME/bin/catalina.sh ]
then
    echo "Tomcat not available..."
    exit
fi

start() {
    echo -n "Starting Tomcat: "
    su - root -c $TOMCAT_START
    echo
    touch /var/lock/subsys/tomcatd
    # We may need to sleep here so it will be up for apache
    # sleep 5
    # Instead we should check to see if apache is up by looking for http.pid
}

run() {
    echo -n "Starting Tomcat: "
    su - root -c $TOMCAT_START
    echo
    touch /var/lock/subsys/tomcatd
    # We may need to sleep here so it will be up for apache
    # sleep 5
    # Instead we should check to see if apache is up by looking for http.pid
}

stop() {
    echo -n $"Shutting down Tomcat: "
    su - root -c $TOMCAT_STOP
    # Remove the lock file created by start()
    rm -f /var/lock/subsys/tomcatd
    echo
}

status() {
    # Count the running Bootstrap processes
    ps ax --width=1000 | grep "[o]rg.apache.catalina.startup.Bootstrap start" \
        | awk '{printf $1 " "}' | wc | awk '{print $2}' > /tmp/tomcat_process_count.txt
    read line < /tmp/tomcat_process_count.txt
    if [ $line -gt 0 ]; then
        echo -n "tomcatd ( pid "
        ps ax --width=1000 | grep "[o]rg.apache.catalina.startup.Bootstrap start" \
            | awk '{printf $1 " "}'
        echo -n ") is running..."
    else
        echo -n "Tomcat is stopped"
    fi
}

case "$1" in
    start)
        start
        ;;
    run)
        run
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        sleep 3
        start
        ;;
    status)
        status
        ;;
    *)
        echo "Usage: tomcatd {start|stop|restart|status}"
        exit 1
esac