HAproxy
chelius_ua, 2017-05-30 14:14:27

Why doesn't Elasticsearch set the field types correctly in the index?

Hello. I got the idea to build myself a dashboard in Kibana to monitor HAProxy: I installed Logstash, wrote a config, ran it... In general, everything is fine, except that Elasticsearch assigns the string data type to almost all fields in the created index, even though all the data types are declared in the Logstash pattern. I've read that an index can be rebuilt with the correct data types, but a new index is created every day, so I would have to do this daily...
Elasticsearch was not configured by me; maybe something else needs to be added on its side? Thanks.
The Logstash config file:
input {
  file {
    path => "/var/log/haproxy.log"
    start_position => "end"
    stat_interval => 1
    discover_interval => 30
  }
  file {
    path => "/var/log/haproxy.log.1"
    start_position => "end"
    stat_interval => 1
    discover_interval => 30
  }
}
filter {
  grok {
    patterns_dir => "/etc/logstash/conf.d/patterns"
    match => { "message" => "%{HAPROXYHTTP}" }
  }
}
output {
  elasticsearch {
    hosts => ["10.0.20.10:9200"]
    index => "haproxy-%{+YYYY.MM.dd}"
  }
}
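
For reference, grok emits strings unless the pattern itself coerces types (e.g. %{NUMBER:bytes_read:int}); a mutate filter is another way to convert fields on the Logstash side before they reach Elasticsearch. A minimal sketch, with field names assumed from the stock HAPROXYHTTP pattern:

filter {
  mutate {
    # Convert fields parsed by grok from strings to integers
    # (field names are assumptions based on the stock HAPROXYHTTP pattern).
    convert => {
      "http_status_code" => "integer"
      "bytes_read"       => "integer"
      "time_duration"    => "integer"
    }
  }
}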

1 answer
RidgeA, 2017-05-30
@chelius_ua

Elasticsearch sets a field's data type based on the first value added to that field (dynamic mapping).
To control this, you need to specify in the mapping what type each field should have.
You don't need to create a mapping for every day: you can create a mapping on the wildcard name of the index or type (I forget already). Something like "haproxy-*", where * will be, for example, "2017.05.30".
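
For illustration, a minimal index template matching all the daily indices might look like this (API shape as in Elasticsearch 5.x, current at the time; the field names and types are assumptions based on the stock HAPROXYHTTP grok fields, so adjust them to your pattern):

curl -XPUT 'http://10.0.20.10:9200/_template/haproxy' -H 'Content-Type: application/json' -d '
{
  "template": "haproxy-*",
  "mappings": {
    "_default_": {
      "properties": {
        "http_status_code": { "type": "integer" },
        "bytes_read":       { "type": "long" },
        "time_duration":    { "type": "integer" },
        "client_ip":        { "type": "ip" }
      }
    }
  }
}'

Any index whose name matches haproxy-*, including the daily haproxy-2017.05.30 and so on, will pick up these mappings when it is created.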
Look in the docs, somewhere around here:
https://www.elastic.co/guide/en/elasticsearch/refe...
Logstash itself creates a dynamic mapping template for its own indices, but it is generic in form; you can look at it to see how a template should be structured.
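
Alternatively, the elasticsearch output in Logstash can upload a template of your own in place of its default one; a sketch using the plugin's template options (the file path is hypothetical):

output {
  elasticsearch {
    hosts => ["10.0.20.10:9200"]
    index => "haproxy-%{+YYYY.MM.dd}"
    manage_template => true
    # Path to your own template JSON (hypothetical location):
    template => "/etc/logstash/templates/haproxy.json"
    template_name => "haproxy"
    template_overwrite => true
  }
}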
