Sunday, May 24, 2009

How to reset the AUTO_INCREMENT property of a MySQL table

Recently I ran into some 'odd' behaviour of a MySQL table whose primary key is defined as AUTO_INCREMENT. At first glance the table held only 5 records, far below the key's limit (256), but then I remembered that the table is used by JUnit runs that insert and delete records, so apparently the counter had reached that limit. I would have expected the more natural behaviour to be generating the next id close to the current maximum value of the primary key, rather than sticking with the exhausted counter.
Anyway, to reset the AUTO_INCREMENT counter in a situation where some of the most recently added rows were deleted, use:

ALTER TABLE theStuckedTable AUTO_INCREMENT = 1234;
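
The same reset can be scripted from Java, for example in a JUnit tearDown, so the counter doesn't get stuck again between test runs. A minimal JDBC sketch, assuming a local MySQL instance with Connector/J on the classpath; the connection URL, credentials and target value are placeholders, only the ALTER TABLE statement comes from above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ResetAutoIncrement {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver"); // pre-JDBC4 driver loading
        // Placeholder connection settings -- adjust to your environment.
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb", "user", "password");
        Statement st = con.createStatement();
        try {
            // Reset the counter; MySQL will not set it below the current MAX(id) + 1.
            st.executeUpdate("ALTER TABLE theStuckedTable AUTO_INCREMENT = 1234");
        } finally {
            st.close();
            con.close();
        }
    }
}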

Saturday, May 23, 2009

Use command-line tools in PHP

Gennady recommends this from developerWorks(TM)
---------------------------------------------------------------------

Title: Use command-line tools in PHP

Learn how to better integrate scripts with command-line tools. Emphasis is placed on using shell_exec(), exec(), passthru(), and system(); safely passing information to the command line; and safely retrieving information from it.

Learn more:
http://www.ibm.com/developerworks/opensource/library/os-php-commandline/index.html?ca=drs-

developerWorks
IBM's resource for developers.
http://www.ibm.com/developerworks/

Tuesday, May 19, 2009

The Downfall of Agile Hitler




Sunday, May 17, 2009

Luke - Lucene Index Toolbox

Lucene is an Open Source, mature and high-performance Java search engine. It is highly flexible, and scalable from hundreds to millions of documents.

Luke is a handy development and diagnostic tool, which accesses already existing Lucene indexes and allows you to display and modify their content in several ways:
  • browse by document number, or by term
  • view documents / copy to clipboard
  • retrieve a ranked list of most frequent terms
  • execute a search, and browse the results
  • analyze search results
  • selectively delete documents from the index
  • reconstruct the original document fields, edit them and re-insert to the index
  • optimize indexes
  • and much more...
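
A few of these operations can also be done programmatically. Below is a minimal sketch, assuming the Lucene 2.9/3.x IndexReader API and a placeholder index path, that counts the documents and dumps the term dictionary with document frequencies (roughly the data behind Luke's most-frequent-terms view):

import java.io.File;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermEnum;
import org.apache.lucene.store.FSDirectory;

public class IndexPeek {
    public static void main(String[] args) throws Exception {
        // Placeholder path to an existing Lucene index directory.
        IndexReader reader = IndexReader.open(FSDirectory.open(new File("/path/to/index")));
        try {
            System.out.println("documents in index: " + reader.numDocs());
            // Walk the term dictionary: field:text -> document frequency.
            TermEnum terms = reader.terms();
            while (terms.next()) {
                Term t = terms.term();
                System.out.println(t.field() + ":" + t.text() + " -> " + terms.docFreq());
            }
            terms.close();
        } finally {
            reader.close();
        }
    }
}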


Saturday, May 16, 2009

The alpha of Wolfram|Alpha, the computational knowledge engine that has made a lot of 'noise' recently, has finally launched. It has a lot of useful features and exposes an API for external use:
http://www61.wolframalpha.com/developers.html


Monday, May 4, 2009

A memcached implementation in JGroups - PartitionedHashMap

PartitionedHashMap is an implementation of memcached on top of JGroups, written completely in Java. It has a couple of advantages over memcached:

  • Java clients and PartitionedHashMap can run in the same address space and therefore don't need to use the memcached protocol to communicate. The latter is text based and slow, due to serialization. This allows servlets to access the cache directly, without serialization overhead.

  • All PartitionedHashMap processes know about each other, and can therefore make intelligent decisions as to what to do when a cluster membership change occurs. For example, a server to be stopped can migrate all of the keys it manages to the next server. With memcached, the entries hosted by a server S are lost when S goes down. Of course, this doesn't work when S crashes.

  • Similar to the above point, when a cluster membership change occurs (e.g. a new server S is started), all servers check whether an entry hosted by them should actually be hosted by S, and move those entries to S. This has the advantage that entries don't have to be re-read from the DB (for example) and inserted into the cache (as in memcached's case); the cache rebalances itself automatically.

  • PartitionedHashMap has a level 1 cache (L1 cache). This allows for caching of data near to where it is really needed. For example, if we have servers A, B, C, D and E and a client adds a (to be heavily accessed) news article to C, then memcached would always redirect every single request for the article to C. So, a client accessing D would always trigger a GET request from D to C and then return the article. JGroups caches the article in D's L1 cache on the first access, so all other clients accessing the article from D would get the cached article, and we can avoid a round trip to C. Note that each entry has an expiration time, which will cause the entry to be removed from the L1 cache on expiry, and the next access would have to fetch it again from C and place it in D's L1 cache. The expiration time is defined by the submitter of the article.

  • Since the RPCs for GETs, SETs and REMOVEs use JGroups as transport, the type of transport and the quality of service can be controlled and customized through the underlying XML file defining the transport. For example, we could add compression, or decide to encrypt all RPC traffic. It also allows for use of either UDP (IP multicasting and/or UDP datagrams) or TCP.

  • The connector (org.jgroups.blocks.MemcachedConnector) which is responsible for parsing the memcached protocol and invoking requests on PartitionedHashMap, PartitionedHashMap (org.jgroups.blocks.PartitionedHashMap) which represents the memcached implementation, the server (org.jgroups.demos.MemcachedServer) and the L1 and L2 caches (org.jgroups.blocks.Cache) can be assembled or replaced at will. Therefore it is simple to customize the JGroups memcached implementation; for example to use a different MemcachedConnector which parses a binary protocol (requiring matching client code of course).

  • All management information and operations are exposed via JMX.

http://www.jgroups.org/memcached/memcached.html
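
For a feel of the programming model, here is a minimal sketch of using PartitionedHashMap directly from Java, following the API described on the JGroups memcached page; the transport file, cluster name and caching times are placeholders, and exact signatures may differ between JGroups versions:

import org.jgroups.blocks.PartitionedHashMap;

public class CacheDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder transport config (XML) and cluster name.
        PartitionedHashMap<String, String> map =
                new PartitionedHashMap<String, String>("udp.xml", "demo-cluster");
        map.setCachingTime(60000); // default entry expiration in ms (assumed setter)
        map.start();               // joins the cluster and starts the L1/L2 caches

        map.put("article-42", "the article body", 60000); // key, value, caching time in ms
        String val = map.get("article-42");                // may be served from the local L1 cache
        System.out.println("got: " + val);
        map.remove("article-42");

        map.stop();
    }
}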

Want SEO Results? Implement These 8 Important Techniques (from a LinkedIn forum)

Implementing more than a handful of the Search Engine
Optimization methods we encounter can be a never-ending
struggle, hence the reason for this list. Included below are
the methods that will get you the most favor in the eyes of
the search engines. When you implement these SEO methods,
you will be able to see a noticeable difference in your
traffic and wonder why you didn't implement them sooner.

1. Use your 'title' tag effectively by including your
keywords. Each page should have a specific theme and be
optimized for a certain key phrase anyway, so including your
keywords in the page title should be natural. It just so
happens that search engines eat it up.

2. Get your keywords in your inbound links. If another
page is pointing at you, whatever text is part of that link
goes to your credit in the eyes of the search engines. If
your blogger friend has a link to your website on his, it
probably says "check out Joe's business." Try to get him to
put "auto mechanic" or "car repair" as the link, or at least
"Joe's Auto Repair" - your business name.

3. Have Unique Content and Update Often. Add something to
the Internet. Become a resource for potential clients and
those in your industry. If you get more page views and
clicks through a search engine, the search engine will value
your site higher. As a side benefit, you are seen as an
expert in the community, which never hurts business.

4. Put your Keywords in your Filenames. Instead of titling
your Auto Repair Services page "arserv.html" or another
shorthand title, use long, descriptive filenames like
"car-auto-repair-services.html." This is a great technique
that is easy to implement and yields great results.

5. Create an XML Sitemap. If you're not used to writing
code, this can seem daunting, but you can always hire
someone. Creating and submitting the sitemap will help
Google find your site and all its content much faster. You
can find the protocol here
( https://www.google.com/webmasters/tools/docs/en/protocol.html ).
Save it as 'sitemap.xml' in the root of your website (the
same directory as your index page).
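
For illustration, a minimal sitemap.xml following the sitemaps.org protocol; the URL, date and other values below are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/car-auto-repair-services.html</loc>
    <lastmod>2009-05-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>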

6. Build a Large Number of Backlinks. There are entire
companies that do nothing but provide backlink generation
services, which should tell you how important this process
is. Simply put, search engines favor sites that are
referenced by other well-referenced sites (the basis of
Google's PageRank). There are countless ways to get
backlinks, but arguably the best way is to use the content
you generated in Step 3. Submit your articles to
directories and content libraries using special programs or
manual submissions. You can even delegate this to a
knowledgeable employee (or one who knows how to use Google)
for their downtime. Additionally, search engines favor links
from .gov and .edu domains, so use your connections if you
have them!

7. Keep your Keyword Density Between 3-7%. When you're
writing all that great, unique content, make sure to seed
your keywords and phrases in there so that they comprise
3-7% of the text on a page. Any more, and you look like
you're keyword stuffing (whether you're trying to or not); any
less, and the page doesn't seem relevant.

8. Use your Keywords in your Headings. Many times a section
heading can be seen as irrelevant for a search engine, so
always make sure to fill your 'h1' and 'h2' tags with
important keywords. Don't leave them out or treat them as
'implied' - that's the worst thing you can do.

If you follow these 8 tips, you will rank high for your
target keyword. If your keyword is very competitive, add
more backlinks with your keyword phrase by creating
articles like this, submitting them to article directories,
and placing your link in the 'About' box below. Well, what
are you waiting for?

Choosing the Right Scrum Management Tool

Just read two interesting articles about choosing a Scrum management tool:

http://tommynorman.blogspot.com/2009/05/choosing-right-scrum-management-tool_04.html

http://tommynorman.blogspot.com/2009/05/choosing-right-scrum-management-tool.html