
FileSystem River for Elasticsearch

Welcome to the FS River Plugin for Elasticsearch

This river plugin helps to index documents from your local file system or from a remote file system over SSH.

WARNING: If you run this river on several nodes without SSH, you must ensure that every node can access the files through the same mount point. Otherwise, when one node stops, another node may see the local directory as empty and erase all your documents.

WARNING: Starting from 0.0.3, you need to install the Attachment Plugin yourself. It is no longer bundled with the distribution.
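
For example, for FS River 0.3.0 the matching attachment plugin version is 1.8.0 (see the compatibility table below). A typical install looks like this; double-check the exact coordinates and version against the mapper attachments plugin documentation for your Elasticsearch release:

# coordinates below assume mapper attachments 1.8.0, the version listed for FS River 0.3.0
bin/plugin -install elasticsearch/elasticsearch-mapper-attachments/1.8.0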

Versions

FS River Plugin         | Elasticsearch | Attachment Plugin | Release date
master (0.4.0-SNAPSHOT) | 0.90.3        | 1.8.0             | 31/10/2013 ?
0.3.0                   | 0.90.3        | 1.8.0             | 09/08/2013
0.2.0                   | 0.90.0        | 1.7.0             | 30/04/2013
0.1.0                   | 0.90.0.Beta1  | 1.6.0             | 15/03/2013
0.0.3                   | 0.20.4        | 1.6.0             | 12/02/2013
0.0.2                   | 0.19.8        | 1.4.0             | 16/07/2012
0.0.1                   | 0.19.4        | 1.4.0             | 19/06/2012

Build Status

Thanks to CloudBees for the build status badge and test trend reports.

Getting Started

Installation

Just type:

bin/plugin -install fr.pilato.elasticsearch.river/fsriver/0.3.0

This will do the job:

-> Installing fr.pilato.elasticsearch.river/fsriver/0.3.0...
Trying http://download.elasticsearch.org/fr.pilato.elasticsearch.river/fsriver/fsriver-0.3.0.zip...
Trying http://search.maven.org/remotecontent?filepath=fr/pilato/elasticsearch/river/fsriver/0.3.0/fsriver-0.3.0.zip...
Trying https://oss.sonatype.org/service/local/repositories/releases/content/fr/pilato/elasticsearch/river/fsriver/0.3.0/fsriver-0.3.0.zip...
Downloading ......DONE
Installed fsriver
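
If you later need to upgrade the river, remove the installed plugin first (it is registered under the name fsriver, as shown in the output above) and then install the new version the same way:

bin/plugin -remove fsriver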

Creating a Local FS river

We first create an index to store our documents:

curl -XPUT 'localhost:9200/mydocs/' -d '{}'

We then create the river with the following properties:

  • FS URL: /tmp (or c:\\tmp on Microsoft Windows)
  • Update Rate: every 15 minutes (15 * 60 * 1000 = 900000 ms)
  • Get only docs like *.doc and *.pdf
  • Don't index resume*
curl -XPUT 'localhost:9200/_river/mydocs/_meta' -d '{
  "type": "fs",
  "fs": {
	"url": "/tmp",
	"update_rate": 900000,
	"includes": "*.doc,*.pdf",
	"excludes": "resume"
  }
}'
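
The river definition is stored as a regular document in the _river index, so a quick sanity check is simply to fetch the _meta document back:

curl -XGET 'localhost:9200/_river/mydocs/_meta'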

Adding another local FS river

We add another river with the following properties:

  • FS URL: /tmp2
  • Update Rate: every hour (60 * 60 * 1000 = 3600000 ms)
  • Get only docs like *.doc, *.xls and *.pdf

We also tell this river to index into the same index/type as the previous one:

  • index: mydocs
  • type: doc
curl -XPUT 'localhost:9200/_river/mynewriver/_meta' -d '{
  "type": "fs",
  "fs": {
	"url": "/tmp2",
	"update_rate": 3600000,
	"includes": [ "*.doc" , "*.xls", "*.pdf" ]
  },
  "index": {
  	"index": "mydocs",
  	"type": "doc",
  	"bulk_size": 50
  }
}'

Indexing using SSH (>= 0.3.0)

You can now index files remotely using SSH:

  • FS URL: /tmp3
  • Server: mynode.mydomain.com
  • Username: username
  • Password: password
  • Protocol: ssh (default is local)
  • Update Rate: every hour (60 * 60 * 1000 = 3600000 ms)
  • Get only docs like *.doc, *.xls and *.pdf
curl -XPUT 'localhost:9200/_river/mysshriver/_meta' -d '{
  "type": "fs",
  "fs": {
	"url": "/tmp3",
	"server": "mynode.mydomain.com",
	"username": "username",
	"password": "password",
	"protocol": "ssh",
	"update_rate": 3600000,
	"includes": [ "*.doc" , "*.xls", "*.pdf" ]
  }
}'

Searching for docs

This is a common use case in Elasticsearch: we want to search for something ;-)

curl -XGET http://localhost:9200/mydocs/doc/_search -d '{
  "query" : {
    "text" : {
        "_all" : "I am searching for something !"
    }
  }
}'

Indexing JSON docs (>= 0.0.3)

If you want to index JSON files directly, without parsing them through the attachment mapper plugin, you can set json_support to true.

curl -XPUT 'localhost:9200/_river/mydocs/_meta' -d '{
  "type": "fs",
  "fs": {
	"url": "/tmp",
	"update_rate": 3600000,
	"json_support" : true
  },
  "index": {
    "index": "mydocs",
    "type": "doc",
    "bulk_size": 50
  }
}'

Of course, if you did not define a mapping prior to creating the river, Elasticsearch will guess the mapping automatically.

If you have more than one type, create as many rivers as you have types:

curl -XPUT 'localhost:9200/_river/mydocs1/_meta' -d '{
  "type": "fs",
  "fs": {
	"url": "/tmp/type1",
	"update_rate": 3600000,
	"json_support" : true
  },
  "index": {
    "index": "mydocs",
    "type": "type1",
    "bulk_size": 50
  }
}'

curl -XPUT 'localhost:9200/_river/mydocs2/_meta' -d '{
  "type": "fs",
  "fs": {
	"url": "/tmp/type2",
	"update_rate": 3600000,
	"json_support" : true
  },
  "index": {
    "index": "mydocs",
    "type": "type2",
    "bulk_size": 50
  }
}'

You can also index several types from a single directory by running one river per type on the same directory and setting the includes parameter:

curl -XPUT 'localhost:9200/_river/mydocs1/_meta' -d '{
  "type": "fs",
  "fs": {
	"url": "/tmp",
	"update_rate": 3600000,
    "includes": [ "type1*.json" ],
	"json_support" : true
  },
  "index": {
    "index": "mydocs",
    "type": "type1",
    "bulk_size": 50
  }
}'

curl -XPUT 'localhost:9200/_river/mydocs2/_meta' -d '{
  "type": "fs",
  "fs": {
	"url": "/tmp",
	"update_rate": 3600000,
    "includes": [ "type2*.json" ],
	"json_support" : true
  },
  "index": {
    "index": "mydocs",
    "type": "type2",
    "bulk_size: 50
  }
}'

Please note that the document _id is always generated as a hash of the JSON file name, to avoid issues with special characters in file names. You can force the _id to be the file name itself using the filename_as_id attribute:

curl -XPUT 'localhost:9200/_river/mydocs/_meta' -d '{
  "type": "fs",
  "fs": {
	"url": "/tmp",
	"update_rate": 3600000,
	"json_support": true,
	"filename_as_id": true
  },
  "index": {
    "index": "mydocs",
    "type": "doc",
    "bulk_size": 50
  }
}'
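
With filename_as_id enabled, you can then fetch a document directly by its file name. For example, assuming a file named mydoc.json exists in /tmp (the file name here is purely illustrative):

curl -XGET 'localhost:9200/mydocs/doc/mydoc.json'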

Disabling file size field (>= 0.2.0)

By default, FSRiver creates a field to store the original file size in bytes. You can disable it using the add_filesize option:

curl -XPUT 'localhost:9200/_river/mydocs/_meta' -d '{
  "type": "fs",
  "fs": {
	"url": "/tmp",
	"add_filesize": false
  }
}'

Suspend or restart a file river (>= 0.2.0)

If you need to stop a river, you can call the _stop endpoint:

curl 'localhost:9200/_river/mydocs/_stop'

To restart the river from the previous point, just call the _start endpoint:

curl 'localhost:9200/_river/mydocs/_start'
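
If you want to remove a river for good rather than suspend it, delete its definition from the _river index (documents already indexed are left untouched):

curl -XDELETE 'localhost:9200/_river/mydocs/'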

Advanced

Autogenerated mapping

When FSRiver detects a new type, it automatically creates a mapping for it:

{
  "doc" : {
    "properties" : {
      "file" : {
        "type" : "attachment",
        "path" : "full",
        "fields" : {
          "file" : {
            "type" : "string",
            "store" : "yes",
            "term_vector" : "with_positions_offsets"
          },
          "author" : {
            "type" : "string"
          },
          "title" : {
            "type" : "string",
            "store" : "yes"
          },
          "name" : {
            "type" : "string"
          },
          "date" : {
            "type" : "date",
            "format" : "dateOptionalTime"
          },
          "keywords" : {
            "type" : "string"
          },
          "content_type" : {
            "type" : "string",
            "store" : "yes"
          }
        }
      },
      "name" : {
        "type" : "string",
        "analyzer" : "keyword"
      },
      "pathEncoded" : {
        "type" : "string",
        "analyzer" : "keyword"
      },
      "postDate" : {
        "type" : "date",
        "format" : "dateOptionalTime"
      },
      "rootpath" : {
        "type" : "string",
        "analyzer" : "keyword"
      },
      "virtualpath" : {
        "type" : "string",
        "analyzer" : "keyword"
      },
      "filesize" : {
        "type" : "long"
      }
    }
  }
}

Creating your own mapping (analyzers)

If you want to define your own mapping, for example to set analyzers, you can push the mapping before starting the FS River:

{
  "doc" : {
    "properties" : {
      "file" : {
        "type" : "attachment",
        "path" : "full",
        "fields" : {
          "file" : {
            "type" : "string",
            "store" : "yes",
            "term_vector" : "with_positions_offsets",
            "analyzer" : "french"
          },
          "author" : {
            "type" : "string"
          },
          "title" : {
            "type" : "string",
            "store" : "yes"
          },
          "name" : {
            "type" : "string"
          },
          "date" : {
            "type" : "date",
            "format" : "dateOptionalTime"
          },
          "keywords" : {
            "type" : "string"
          },
          "content_type" : {
            "type" : "string",
            "store" : "yes"
          }
        }
      },
      "name" : {
        "type" : "string",
        "analyzer" : "keyword"
      },
      "pathEncoded" : {
        "type" : "string",
        "analyzer" : "keyword"
      },
      "postDate" : {
        "type" : "date",
        "format" : "dateOptionalTime"
      },
      "rootpath" : {
        "type" : "string",
        "analyzer" : "keyword"
      },
      "virtualpath" : {
        "type" : "string",
        "analyzer" : "keyword"
      },
      "filesize" : {
        "type" : "long"
      }
    }
  }
}

To send the mapping to Elasticsearch, refer to the Put Mapping API.
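
For example, assuming the mapping above has been saved locally in a file named doc_mapping.json (the file name is just an illustration), you could push it like this before creating the river:

curl -XPUT 'localhost:9200/mydocs/doc/_mapping' -d @doc_mapping.json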

Meta fields

FS River creates some meta fields:

Field       | Description                                 | Example
name        | Original file name                          | mydocument.pdf
pathEncoded | BASE64 encoded file path (for internal use) | 112aed83738239dbfe4485f024cd4ce1
postDate    | Indexing date                               | 1312893360000
rootpath    | BASE64 encoded root path (for internal use) | 112aed83738239dbfe4485f024cd4ce1
virtualpath | Relative path                               | mydir/otherdir
filesize    | File size in bytes                          | 1256362

Advanced search

You can use these meta fields in your searches:

curl -XGET http://localhost:9200/mydocs/doc/_search -d '{
  "query" : {
    "term" : {
        "name" : "mydocument.pdf"
    }
  }
}'

Disabling _source

If you don't need to highlight your search responses and don't need to retrieve the original file from Elasticsearch, you can consider disabling the _source field.

In that case, you need to store the name field. Otherwise, FSRiver won't be able to remove documents when they disappear from your hard drive.

{
  "doc" : {
    "_source" : { "enabled" : false },
    "properties" : {
      "file" : {
        "type" : "attachment",
        "path" : "full",
        "fields" : {
          "file" : {
            "type" : "string",
            "store" : "yes",
            "term_vector" : "with_positions_offsets"
          },
          "author" : {
            "type" : "string"
          },
          "title" : {
            "type" : "string",
            "store" : "yes"
          },
          "name" : {
            "type" : "string"
          },
          "date" : {
            "type" : "date",
            "format" : "dateOptionalTime"
          },
          "keywords" : {
            "type" : "string"
          },
          "content_type" : {
            "type" : "string",
            "store" : "yes"
          }
        }
      },
      "name" : {
        "type" : "string",
        "analyzer" : "keyword",
        "store" : true
      },
      "pathEncoded" : {
        "type" : "string",
        "analyzer" : "keyword"
      },
      "postDate" : {
        "type" : "date",
        "format" : "dateOptionalTime"
      },
      "rootpath" : {
        "type" : "string",
        "analyzer" : "keyword"
      },
      "virtualpath" : {
        "type" : "string",
        "analyzer" : "keyword"
      },
      "filesize" : {
        "type" : "long"
      }
    }
  }
}

Extracted characters

By default, the mapper attachment plugin extracts only a limited number of characters (100000 by default). Setting the index.mapping.attachment.indexed_chars property in the elasticsearch.yml file of each node may help to index bigger files.

But you can also get finer control by setting indexed_chars directly in the FSRiver definition, for example to 1:

curl -XPUT 'localhost:9200/_river/mydocs/_meta' -d '{
  "type": "fs",
  "fs": {
    "url": "/tmp",
    "indexed_chars": 1
  }
}'

That option adds a special field, _indexed_chars, to each document; for the example above (indexed_chars: 1), it is set to the file size. The mapper attachment plugin uses this field to define the number of extracted characters.

  • indexed_chars: 0 (default) uses the default mapper attachment settings (index.mapping.attachment.indexed_chars)
  • indexed_chars: x computes the file size, multiplies it by x and passes the result to Tika using the _indexed_chars field

That means that a value of 0.8 will extract 20% fewer characters than the file size, a value of 1.5 will extract 50% more characters than the file size (think compressed files), and a value of 1 will extract exactly the file size.
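
As a purely illustrative example: with indexed_chars set to 1.5, a 10000 byte file would be indexed with _indexed_chars set to 15000, so Tika would extract up to 15000 characters from it:

curl -XPUT 'localhost:9200/_river/mydocs/_meta' -d '{
  "type": "fs",
  "fs": {
    "url": "/tmp",
    "indexed_chars": 1.5
  }
}'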

Get content_type

By default, content_type is detected by the mapper attachment plugin and stored in each document, so you can easily access it:

curl -XPOST http://localhost:9200/mydocs/doc/_search -d '{
  "fields" : ["file.content_type", "_source"],
  "query":{
    "match_all" : {}
  }
}'

gives:

{
  "took" : 19,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "fsrivermetadatatest",
      "_type" : "doc",
      "_id" : "fb6115c44876aa1e94cc4f86b03ba93",
      "_score" : 1.0,
      "fields" : {
        "file.content_type" : "application/vnd.oasis.opendocument.text",
        "_source" : "..."
      }
    } ]
  }
}

Storing extracted content

If you need to store and retrieve the content extracted by the mapper attachment plugin as is, simply set store to yes on the file field in your mapping:

{
  "doc": {
    "properties": {
      "file": {
        "type": "attachment",
        "path": "full",
        "fields": {
          "file": {
            "type": "string",
            "store": "yes",
            "term_vector": "with_positions_offsets"
          }
        }
      }
    }
  }
}

Then you can retrieve the extracted document content using the fields property when searching:

curl -XPOST http://localhost:9200/mydocs/doc/_search -d '{
  "fields" : ["file"],
  "query":{
    "match_all" : {}
  }
}'

gives:

{
  "took" : 19,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "fsrivermetadatatest",
      "_type" : "doc",
      "_id" : "fb6115c44876aa1e94cc4f86b03ba93",
      "_score" : 1.0,
      "fields" : {
        "file" : "Bonjour David\n\n\n"
      }
    } ]
  }
}

Behind the scene

How does it work?

TO BE COMPLETED

License

This software is licensed under the Apache 2 license, quoted below.

Copyright 2011-2012 David Pilato

Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.

Contributors

dadoonet, fgaujous
