Accessing Elasticsearch from command line

If you work as much with Elasticsearch as I do, it can sometimes be annoying to access Elasticsearch functionality via plain HTTP API requests. If you like working on the command line, es2unix is for you: a command line tool for accessing the Elasticsearch API.

It’s written in Clojure and pretty easy to install: just download it (here to ~/bin/) and make it executable:

curl -s download.elasticsearch.org/es2unix/es >~/bin/es
chmod +x ~/bin/es
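The curl above will fail if ~/bin/ does not exist yet, and calling the tool by name requires ~/bin to be on your PATH. Both are standard shell steps, not specific to es2unix:

```shell
# Create ~/bin if it does not exist yet and make tools in it callable by name
mkdir -p ~/bin
export PATH="$HOME/bin:$PATH"
```

After that you can invoke the tool as plain `es` instead of spelling out the path.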

To check, for example, the health of the cluster (what you would normally do with a GET request to http://localhost:9200/_cluster/health?pretty=true), you can now just call the following from the command line:

~/bin/es health -u http://localhost:9200

This would return, e.g. for a cluster “cluster1” with 8 nodes, 8 data nodes, 15 primary shards, 45 active shards, 2 relocating shards, 0 initialising shards and 0 unassigned shards:

14:11:07 cluster1 green 8 8 15 45 2 0 0

To print the column headers, just add the verbose flag -v:

~/bin/es health -u http://localhost:9200 -v
time     cluster   status nodes data pri shards relo init unassign
14:14:25 cluster1  green      8    8  15     45    2    0        0
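Because the output is plain whitespace-separated columns, it composes well with standard Unix tools. As a sketch (assuming the same cluster URL as above), a monitoring script could extract just the status field:

```shell
# The third column of the health output is the cluster status (green/yellow/red)
~/bin/es health -u http://localhost:9200 | awk '{print $3}'
```

For the example output above this would print just `green`.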

The following commands are available for the es tool:

  • allocation
  • count
  • lifecycle
  • health
  • heap
  • ids
  • indices
  • master
  • nodes
  • search
  • shards
  • version

With count you can get the total number of documents, e.g.:

~/bin/es count -u http://localhost:9200

14:16:02 108,260,980
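This corresponds to the _count endpoint of the REST API; the raw equivalent, which returns JSON with a "count" field instead of the formatted number, would be:

```shell
# Same count via the raw REST API; the JSON response contains a "count" field.
# (`|| true` only keeps the snippet harmless when no cluster is listening.)
curl -s 'http://localhost:9200/_count?pretty' || true
```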

lifecycle is also nice for retrieving the node joining history. For this command you need access to the Elasticsearch log files:

~/bin/es lifecycle elasticsearch/logs/cluster1.log

In my case it returns:

2013-07-10 09:13:41,537 search01 INIT 0.90.2
2013-07-10 09:13:43,075 search01 BIND xxx.xxx.xxx.xxx:9300
2013-07-10 09:13:46,143 search01 MASTER search03
2013-07-10 09:13:47,810 search01 START
2013-07-10 09:14:16,294 search01 ADD search03
2013-07-10 12:34:38,862 search01 REMOVE search08
2013-07-10 12:34:56,436 search01 ADD search08
2013-07-10 13:41:50,709 search01 REMOVE search07
2013-07-10 13:42:08,382 search01 ADD search07
2013-07-10 14:00:57,001 search01 REMOVE search06
2013-07-10 14:01:14,594 search01 ADD search06

For more detailed documentation, see github.com/elasticsearch/es2unix

Thinking Sphinx attribute filter and negative values

I just wondered why I got no results after executing a search via Sphinx and Thinking Sphinx. The problem was that I had used a negative value in an attribute filter, and Sphinx only supports unsigned integers:

Attributes are named. Attribute names are case insensitive. Attributes are not full-text indexed; they are stored in the index as is. Currently supported attribute types are:

  • unsigned integers (1-bit to 32-bit wide);
  • UNIX timestamps;
  • floating point values (32-bit, IEEE 754 single precision);
  • string ordinals (specially computed integers);
  • strings (since 1.10-beta);
  • MVA, multi-value attributes (variable-length lists of 32-bit unsigned integers).
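If you control the indexing query, a common workaround is to shift the values into the unsigned range on the SQL side and apply the same offset when filtering. A sketch of the idea in sphinx.conf (the source, table and column names are hypothetical, and the offset of 100 assumes your values never go below -100):

```
source products
{
  # Shift ratings from e.g. -100..n into the unsigned range by adding 100
  sql_query     = SELECT id, title, rating + 100 AS rating FROM products
  sql_attr_uint = rating
}
```

The Thinking Sphinx filter then has to use the shifted value as well, e.g. :with => {:rating => value + 100}.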
Official Google Blog: Our new search index: Caffeine


Caffeine provides 50 percent fresher results for web searches than our last index, and it's the largest collection of web content we've offered. Whether it's a news story, a blog or a forum post, you can now find links to relevant content much sooner after it is published than was possible ever before.

via Official Google Blog: Our new search index: Caffeine.

next-generation architecture for Google’s web search

For the last several months, a large team of Googlers has been working on a secret project: a next-generation architecture for Google’s web search. It’s the first step in a process that will let us push the envelope on size, indexing speed, accuracy, comprehensiveness and other dimensions. The new infrastructure sits “under the hood” of Google’s search engine, which means that most users won’t notice a difference in search results. But web developers and power searchers might notice a few differences, so we’re opening up a web developer preview to collect feedback.
(see Google Webmaster Central)


Krugle code search engine

Just found Krugle, a nifty search engine that helps you find source code by searching through projects available on the internet. It also allows you to add comments to the code.

Krugle helps programmers find existing source code and the information they need to evaluate, deploy and fix code.

There is a demo video which shows all the features.

I found my code at Krugle