leschenko / elasticsearch_autocomplete
Simple autocomplete for Rails models using Elasticsearch and the Tire gem
License: MIT License
I want to index a primary_tag field together with the name field, so that I don't have to hit the database after getting results from Elasticsearch. Do I need to provide custom mappings for this?
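If it helps: for a plain string field no custom mapping is strictly required, because Tire serializes whatever to_indexed_json returns and Elasticsearch will dynamically map the extra field. A minimal pure-Ruby sketch of the idea (PrimaryTag, Article, and the field names are illustrative assumptions, not gem API):

```ruby
require 'json'

# Illustrative stand-ins; in the app these would be ActiveRecord models.
PrimaryTag = Struct.new(:name)

class Article
  attr_reader :title, :primary_tag

  def initialize(title, primary_tag)
    @title = title
    @primary_tag = primary_tag
  end

  # Tire calls to_indexed_json to build the document sent to Elasticsearch;
  # writing the tag's name into the document at index time means no DB
  # lookup is needed when rendering results.
  def to_indexed_json
    {
      title: title,
      primary_tag_name: primary_tag && primary_tag.name
    }.to_json
  end
end

doc = JSON.parse(Article.new('Hello', PrimaryTag.new('news')).to_indexed_json)
```

An explicit mapping would only be needed if the denormalized field should be analyzed differently from the default.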
I can't get your gem working at all.
Every time I call your ac_search method I receive:
NoMethodError (undefined method `match' for #<Tire::Search::Query:0x5bcff50 @value={}>):
And when I index everything at the beginning (rake environment tire:import CLASS='User' FORCE=true), the import itself succeeds:
[IMPORT] Deleting index 'nilclass_users'
[IMPORT] Creating index 'nilclass_users' with mapping:
{
  "user": {
    "properties": {
      "first_name": {
        "type": "multi_field",
        "fields": {
          "first_name": {"type": "string"},
          "ac_first_name": {"type": "string", "search_analyzer": "ac_search", "include_in_all": false, "index_analyzer": "ac_edge_ngram"},
          "ac_word_first_name": {"type": "string", "search_analyzer": "ac_search", "include_in_all": false, "index_analyzer": "ac_edge_ngram_word"}
        }
      }
    }
  }
}
#15/15 | 100% #######################################################
Import finished in 0.04500 seconds
What is the issue here?
Hi, great gem btw :) It saved me a bit of work. One snag I noticed:
1.9.3p385 :003 > ElasticsearchAutocomplete.defaults
=> {:attr=>:name, :localized=>false, :mode=>:word, :index_prefix=>"nilclass"}
1.9.3p385 :004 > ElasticsearchAutocomplete.default_index_prefix
=> "treehouse"
For some reason it's naming the index nilclass_<pluralized model name>.
Just an FYI. I only noticed it because I have a Resque worker that manually rebuilds the index, and the names didn't match :)
I noticed that Mongoid is not supported out of the box. The reason is that the railtie doesn't work since ActiveSupport.on_load :active_record never fires when using Mongoid.
lib/elasticsearch_autocomplete/railtie.rb
module ElasticsearchAutocomplete
  class Railtie < Rails::Railtie
    initializer 'elasticsearch_autocomplete.model_additions' do
      ActiveSupport.on_load :active_record do
        include ElasticsearchAutocomplete::ModelAddition
      end
    end
  end
end
The solution is quite simple: include ModelAddition in the model directly.
class Post
  include Mongoid::Document
  include ElasticsearchAutocomplete::ModelAddition
end
Perhaps it should just be documented that you can add the ModelAddition include directly to a model when using Mongoid.
How do you think we could modify the ac_search method to support an optional filter that avoids returning the current user in the list?
I have this working in my model, but I can't figure out how to make it work with your gem.
I have it working in a plain Tire search, but I would like to incorporate it directly into the ac_search method, perhaps via an optional parameter passed directly to the method (it needs to be passed as an argument rather than through the options, since the current user isn't accessible from the User model):
filter :ids, :values => self.without_current_user(user).map(&:id)

def self.without_current_user(user)
  self.where("users.id != ?", user.id)
end
Looking forward to hearing from you.
Best
Dinuz
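One way this could look (a sketch, not the gem's actual API): pass the id to exclude into the search and add a not/ids filter clause, which is what the Tire filter DSL compiles to anyway. In plain Ruby the clause is just a hash:

```ruby
# Sketch: build the Elasticsearch filter clause that excludes given ids.
# In a customized ac_search this could come from an extra method argument,
# e.g. ac_search(params, exclude_ids: [current_user.id]) -- the names here
# are assumptions, not the gem's interface.
def exclusion_filter(exclude_ids)
  { not: { ids: { values: exclude_ids } } }
end

# Inside the Tire search block the roughly equivalent DSL call would be:
#   filter :not, ids: { values: [user.id] }
clause = exclusion_filter([42])
```

Excluding just the current user's id this way is also much cheaper than `self.without_current_user(user).map(&:id)`, which loads every other user's id from the database on each search.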
I am deploying to Heroku and was wondering how to specify an alternative host (bonsai.io)?
Also, is there a smooth way of importing the model data from Postgres to ES without using Tire?
Thanks
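For the host question: Tire reads its endpoint from Tire.configure, and the Bonsai add-on exposes the cluster URL in the BONSAI_URL environment variable, so an initializer along these lines should work (the file path and localhost fallback are conventions, not requirements):

```ruby
# config/initializers/tire.rb
# BONSAI_URL is set by the Bonsai add-on on Heroku.
Tire.configure do
  url ENV['BONSAI_URL'] || 'http://localhost:9200'
end
```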
Is it possible to use this gem with the Mongoid ODM instead of ActiveRecord?
Thanks!
I keep getting this error when I try to create the index by running the following command:
$ rake autocomplete
[ERROR] There has been an error when creating the index -- elasticsearch returned:
400 : {"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"analyzer on field [ac_term] must be set when search_analyzer is set"}],"type":"mapper_parsing_exception","reason":"mapping [autocomplete]","caused_by":{"type":"mapper_parsing_exception","reason":"analyzer on field [ac_term] must be set when search_analyzer is set"}},"status":400}
Thanks for your help!
This is my old Tire class method:
def self.search(params)
  tire.search(load: true, page: params[:page], per_page: 9) do
    query do
      boolean do
        must { string params[:query], default_operator: "AND" } if params[:query].present?
        must { range :published, lte: Time.zone.now }
        must { term :post_type, params[:post_type] } if params[:post_type].present?
      end
    end
    highlight :title, :description, :options => { :tag => '<strong>', :fragment_size => 170, :number_of_fragments => 5 }
    sort { by :created_at, "desc" }
    facet "posttypes" do
      terms :post_type
    end
  end
end
I would like to use these options and merge them with the analyzers from the elasticsearch_autocomplete gem.
How can I do that?
Thanks!
Hi,
I am trying to limit the number of fields the search returns. I usually do this by providing the _source block like so:
mapping :_source => {
  :enabled => true,
  :includes => [] # array of fields I want to include
} do
  ...
end
Is it possible to use skip_settings: true, provide only the _source block, and keep the rest of the settings the same? Please explain how!
Cheers
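With skip_settings: true the gem leaves the index settings to you, so the mapping body has to be spelled out as well. A pure-Ruby sketch of what that body contains (ac_index_config_stub is a stand-in assumption for the gem's ac_index_config, whose exact output isn't reproduced here):

```ruby
# Stand-in for the gem's ac_index_config; the real method returns the
# multi_field definition for an autocomplete attribute.
def ac_index_config_stub(field)
  { type: 'multi_field', fields: { field => { type: 'string' } } }
end

# _source.includes limits which stored fields come back in search hits;
# fields not listed are still indexed and searchable, just not returned.
mapping_body = {
  _source: { enabled: true, includes: %w[name description] },
  properties: {
    name: ac_index_config_stub(:name),
    description: ac_index_config_stub(:description)
  }
}
```

In a model this corresponds to the settings/mapping pattern shown later on this page, with the _source hash passed to the mapping call.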
I have a problem with special characters like ñ or áéíóú.
my model:
class Car
  ac_field :name, :description, :city, :skip_settings => true

  def self.ac_search(params, options={})
    tire.search load: true, page: params[:page], per_page: 9 do
      query do
        boolean do
          must { string params[:query], default_operator: "AND" } if params[:query].present?
          must { term :city, params[:city] } if params[:city].present?
        end
      end
      filter :term, city: params[:city] if params[:city].present?
      facet "city" do
        terms :city
      end
    end
  end
end
This version works fine with special characters. E.g. a query for Martin returns all results with Martín, martín, martin, and Martin.
The problem with this approach: what gets indexed is individual words. E.g. a city tagged ["San Francisco", "Madrid"] will end up having three separate tags. Similarly, a query that searches on "san francisco" (must { term 'city', params[:city] }) will fail, while a query on "San" or "Francisco" will succeed. The desired behaviour is that the tag should be atomic, so only a "San Francisco" (or "Madrid") tag search should succeed.
To fix this problem I create my custom mapping:
model = self
settings ElasticsearchAutocomplete::Analyzers::AC_BASE do
  mapping _source: {enabled: true, includes: %w(name description city)} do
    indexes :name, model.ac_index_config(:name)
    indexes :description, model.ac_index_config(:description)
    indexes :city, :type => 'string', :index => :not_analyzed
  end
end
With this mapping the multi-word problem is fixed, and facets on the city field now work fine: instead of getting the facets San and Francisco, I now get San Francisco.
Now the problem is that with this mapping inside the model, the search doesn't find results with special characters. E.g. a query for Martin returns only results with Martin and martin.
I'm using Mongoid instead of ActiveRecord.
How can I fix this problem?
Thanks!
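One possible fix (a sketch with an assumed analyzer name, not something the gem ships): replace :not_analyzed on :city with a custom analyzer built on the keyword tokenizer, so the whole value stays a single token, combined with lowercase and asciifolding filters, so accented input still matches. As a plain settings hash:

```ruby
# Custom analysis settings: the keyword tokenizer emits the entire field
# value as one token ("San Francisco" stays atomic), while lowercase and
# asciifolding normalize case and accents (Martín -> martin).
# "ac_keyword_folded" is an assumed name, not an analyzer the gem defines.
city_settings = {
  analysis: {
    analyzer: {
      ac_keyword_folded: {
        type: 'custom',
        tokenizer: 'keyword',
        filter: %w[lowercase asciifolding]
      }
    }
  }
}
```

The :city mapping would then use :analyzer => 'ac_keyword_folded' instead of :index => :not_analyzed. Note that a term filter bypasses analysis, so the query side would need an analyzed query (e.g. a match query) for accented input to be folded, and facet values would come back lowercased and folded.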
These are the settings for the custom analysis I want to combine with the ac_edge_ngram_full analyzer:
{
  "settings": {
    "analysis": {
      "char_filter": {
        "my_mapping": {
          "type": "mapping",
          "mappings": ["(=>", ")=>", "\\u0020-\\u0020=>\\u0020", "\\u0020-=>\\u0020", "-\\u0020=>"]
        }
      },
      "analyzer": {
        "my_ngram_analyzer": {
          "tokenizer": "my_ngram_tokenizer",
          "filter": ["lowercase", "asciifolding"],
          "char_filter": ["my_mapping"]
        }
      },
      "tokenizer": {
        "my_ngram_tokenizer": {
          "type": "nGram",
          "min_gram": "1",
          "max_gram": "50",
          "token_chars": ["letter", "digit", "whitespace", "punctuation"]
        }
      }
    }
  }
}
I have read the wiki page on custom mappings, but could not figure out: how do I add the char_filter to the ac_edge_ngram_full analyzer, and how do I add an extra token_chars property to the ac_edge_ngram_full tokenizer? Also, would the code be able to handle Unicode whitespace?
I believe forking the repository and making the changes would not be a good option.
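One fork-free approach (a sketch under assumptions: AC_BASE is stubbed below with an abridged shape, and whether the gem's settings call accepts a pre-merged hash this way would need checking): deep-merge the extra char_filter and token_chars into the gem's base settings hash before passing it to settings in the model.

```ruby
# Recursively merge nested settings hashes: nested hashes merge key by key,
# while scalars and arrays on the right-hand side win.
def deep_merge(a, b)
  a.merge(b) do |_key, old, new|
    old.is_a?(Hash) && new.is_a?(Hash) ? deep_merge(old, new) : new
  end
end

# Abridged stub of ElasticsearchAutocomplete::Analyzers::AC_BASE
# (an assumption -- the real constant has more entries).
ac_base_stub = {
  analysis: {
    analyzer:  { ac_edge_ngram_full: { tokenizer: 'ac_edge_ngram_full', filter: %w[lowercase] } },
    tokenizer: { ac_edge_ngram_full: { type: 'edgeNGram', min_gram: 1, max_gram: 50 } }
  }
}

# The extra pieces from the settings above: a char_filter, wired into the
# analyzer, plus token_chars on the tokenizer.
extras = {
  analysis: {
    char_filter: { my_mapping: { type: 'mapping', mappings: ['(=>', ')=>'] } },
    analyzer:    { ac_edge_ngram_full: { char_filter: %w[my_mapping] } },
    tokenizer:   { ac_edge_ngram_full: { token_chars: %w[letter digit whitespace punctuation] } }
  }
}

merged = deep_merge(ac_base_stub, extras)
# `merged` could then be passed to `settings merged do ... end` in the model.
```

This keeps the gem's analyzers intact while layering the customizations on top, so nothing needs to be forked.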