Elasticsearch Edge NGram tokenizer higher score when word begins with n-gram
Suppose there is the following mapping with an Edge NGram tokenizer:

    {
      "settings": {
        "analysis": {
          "analyzer": {
            "autocomplete_analyzer": {
              "tokenizer": "autocomplete_tokenizer",
              "filter": [ "standard" ]
            },
            "autocomplete_search": {
              "tokenizer": "whitespace"
            }
          },
          "tokenizer": {
            "autocomplete_tokenizer": {
              "type": "edge_ngram",
              "min_gram": 1,
              "max_gram": 10,
              "token_chars": [ "letter", "symbol" ]
              ...
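The rest of the question body is cut off above, so the following is only a sketch of the kind of setup the title describes: an edge_ngram-analyzed field for recall, combined with a boosted should clause on a standard-analyzed subfield so that documents containing the typed text as a complete word score above pure n-gram prefix matches. The index name autocomplete_demo, the field title, and the title.full subfield are hypothetical, and lowercase is used instead of the standard token filter, which recent Elasticsearch releases no longer ship.

    PUT autocomplete_demo
    {
      "settings": {
        "analysis": {
          "analyzer": {
            "autocomplete_analyzer": {
              "type": "custom",
              "tokenizer": "autocomplete_tokenizer",
              "filter": [ "lowercase" ]
            },
            "autocomplete_search": {
              "type": "custom",
              "tokenizer": "whitespace",
              "filter": [ "lowercase" ]
            }
          },
          "tokenizer": {
            "autocomplete_tokenizer": {
              "type": "edge_ngram",
              "min_gram": 1,
              "max_gram": 10,
              "token_chars": [ "letter", "symbol" ]
            }
          }
        }
      },
      "mappings": {
        "properties": {
          "title": {
            "type": "text",
            "analyzer": "autocomplete_analyzer",
            "search_analyzer": "autocomplete_search",
            "fields": {
              "full": { "type": "text" }
            }
          }
        }
      }
    }

    GET autocomplete_demo/_search
    {
      "query": {
        "bool": {
          "must": {
            "match": { "title": "new" }
          },
          "should": {
            "match": {
              "title.full": { "query": "new", "boost": 2 }
            }
          }
        }
      }
    }

In this sketch the must clause keeps recall on the edge_ngram field, while the should clause only adds score, so whole-word matches rise to the top without excluding prefix-only matches.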