How it works is shown in the figure below: Logstash collects and parses the log data, Elasticsearch indexes and stores it, and Kibana provides the web front end for searching and visualizing it.
Deployment procedure:
1. Install the JDK required by Logstash:
# tar zvxf jdk-8u73-linux-x64.tar.gz
# mv jdk-8u73-linux-x64 /usr/local/java
# vim /etc/profile
export JAVA_HOME=/usr/local/java
CLASSPATH=/usr/local/java/lib/dt.jar:/usr/local/java/lib/tools.jar
PATH=/usr/local/java/bin:$PATH
export PATH JAVA_HOME CLASSPATH
# source /etc/profile
# java -version
java version "1.8.0_73"
Java(TM) SE Runtime Environment (build 1.8.0_73-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.73-b02, mixed mode)
If the Java version number is printed, the JDK has been installed successfully.
2. Install Logstash
Download and install Logstash; here it is installed under /usr/local (choose your own path if you prefer):
# wget
# tar zvxf logstash-1.5.2.tar.gz -C /usr/local/
After installation, run the following command:
# /usr/local/logstash-1.5.2/bin/logstash -e 'input { stdin { } } output { stdout {} }'
Logstash startup completed
hello ELK
2016-09-29T09:28:57.992Z web10.gz.com hello ELK
-e : pass the Logstash configuration directly on the command line; handy for quick tests.
-f : specify a Logstash configuration file; suitable for production use.
Create a test file logstash-simple.conf in the Logstash installation directory with the following content:
# vim logstash-simple.conf
input { stdin { } }
output {
stdout { codec=> rubydebug }
}
# echo "`date` hello ELK"
Thu Sep 29 17:33:23 CST 2016 hello ELK
# /usr/local/logstash-1.5.2/bin/logstash agent -f logstash-simple.conf
Logstash startup completed
Thu Sep 29 17:33:23 CST 2016 hello ELK
{
"message" => "Thu Sep 29 17:33:23 CST 2016 hello ELK",
"@version" => "1",
"@timestamp" => "2016-09-29T09:33:57.711Z",
"host" => "web10.gz.com"
}
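The echo and the Logstash run above can also be combined into a single step by piping the line straight into Logstash's stdin; a quick sketch of the same test, using the logstash-simple.conf created above:
# echo "`date` hello ELK" | /usr/local/logstash-1.5.2/bin/logstash agent -f logstash-simple.conf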
Install supervisor to manage Logstash:
# yum install -y supervisor --enablerepo=epel
#vim /etc/supervisord.conf
Add the following content:
[program:elkpro_1]
environment=LS_HEAP_SIZE=5000m
directory=/usr/local/logstash-1.5.2
command=/usr/local/logstash-1.5.2/bin/logstash -f /usr/local/logstash-1.5.2/logstash-simple.conf -w 10 -l /var/log/logstash/logstash-simple.log
Here directory is the Logstash installation directory and command is what supervisor runs: -f specifies the Logstash configuration file to execute (logstash-simple.conf in this example, or another pipeline such as pro1.conf), and -l specifies where Logstash writes its own log (for example /var/log/logstash/pro1.log).
Start and stop supervisord:
#service supervisord stop
#service supervisord start
Enable start on boot:
#chkconfig supervisord on
Start and stop Logstash through supervisor:
#supervisorctl start elkpro_1
#supervisorctl stop elkpro_1
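If a second Logstash pipeline needs to be managed as well (for example the pro1.conf mentioned above), another program block can be added to /etc/supervisord.conf. A sketch, assuming a hypothetical pro1.conf in the same installation directory (the names are only illustrative):
[program:elkpro_2]
environment=LS_HEAP_SIZE=5000m
directory=/usr/local/logstash-1.5.2
command=/usr/local/logstash-1.5.2/bin/logstash -f /usr/local/logstash-1.5.2/pro1.conf -w 10 -l /var/log/logstash/pro1.log
After editing the file, reload supervisor so it picks up the new program:
#supervisorctl reread
#supervisorctl update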
3. Install Elasticsearch
Download Elasticsearch and extract it to /usr/local/:
# wget
# tar zvxf elasticsearch-1.6.0.tar.gz -C /usr/local/
Start Elasticsearch:
# /usr/local/elasticsearch-1.6.0/bin/elasticsearch
Run Elasticsearch in the background:
# nohup /usr/local/elasticsearch-1.6.0/bin/elasticsearch > nohup.out 2>&1 &
# ps aux|grep logstash
root 21154 1.6 5.0 3451732 196856 pts/0 Sl+ 17:33 0:10 /usr/local/java/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Xmx500m -Xss2048k -Djffi.boot.library.path=/usr/local/logstash-1.5.2/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Xbootclasspath/a:/usr/local/logstash-1.5.2/vendor/jruby/lib/jruby.jar -classpath :/usr/local/java/lib/dt.jar/usr/local/java/lib/tools.jar -Djruby.home=/usr/local/logstash-1.5.2/vendor/jruby -Djruby.lib=/usr/local/logstash-1.5.2/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /usr/local/logstash-1.5.2/lib/bootstrap/environment.rb logstash/runner.rb agent -f logstash-simple.conf
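To confirm that the background Elasticsearch instance is up, you can also check whether it is listening on its default ports (9200 for HTTP, 9300 for cluster transport); a quick check:
# netstat -lntp | grep -E '9200|9300'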
The official startup script (service wrapper) provided for Elasticsearch:
https://codeload.github.com/elastic/elasticsearch-servicewrapper/zip/master
Upload it to the server and install it:
#unzip elasticsearch-servicewrapper-master.zip
#mv elasticsearch-servicewrapper-master/service/ /usr/local/elasticsearch-1.6.0/bin/
#cd /usr/local/elasticsearch-1.6.0/bin/service
#./elasticsearch install    (this automatically creates a service script under /etc/init.d)
#/etc/init.d/elasticsearch restart
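Before querying, it can be worth confirming that the node is up; a quick sanity check, assuming Elasticsearch is listening on its default port 9200:
#curl -XGET 'http://elasticsearch_IP:9200/_cluster/health?pretty'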
Query the document count (replace elasticsearch_IP with the IP of the server where Elasticsearch is installed):
#curl -XGET 'http://elasticsearch_IP:9200/_count?pretty' -d '
> {
> "query":{
> "match_all":{}
> }
> }
> '
Response:
{
"count" : 710,
"_shards" : {
"total" : 6,
"successful" : 6,
"failed" : 0
}
}
In the Logstash installation directory, create a test file logstash-es-simple.conf and check whether the output is also written to Elasticsearch.
# vim logstash-es-simple.conf
input { stdin { } }
output {
elasticsearch {host => "localhost" }
stdout { codec=> rubydebug }
}
Run:
# /usr/local/logstash-1.5.2/bin/logstash agent -f logstash-es-simple.conf
... startup output ...
Logstash startup completed
hello ELK
{
"message" => "hello ELK",
"@version" => "1",
"@timestamp" => "2016-09-29T09:52:21.426Z",
"host" => "web10.gz.com"
}
Use curl to check whether Elasticsearch has received the data:
# curl 'http://localhost:9200/_search?pretty'
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 6,
"successful" : 6,
"failed" : 0
},
.....
Elasticsearch and Logstash can now be used together to collect log data.
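To collect an actual log file instead of typing on stdin, the stdin input can be replaced with a file input. A minimal sketch; the file name logstash-file-es.conf, the path /var/log/messages and the type name are only examples, adjust them to your own logs:
# vim logstash-file-es.conf
input {
  file {
    path => "/var/log/messages"
    type => "syslog"
    start_position => "beginning"
  }
}
output {
  elasticsearch { host => "localhost" }
  stdout { codec => rubydebug }
}
# /usr/local/logstash-1.5.2/bin/logstash agent -f logstash-file-es.conf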
4. Install an Elasticsearch plugin
Run the following commands in the directory where Elasticsearch is installed:
# cd /usr/local/elasticsearch-1.6.0/
# ./bin/plugin -install lmenezes/elasticsearch-kopf
After installation, kopf can be seen under the plugins directory:
# ls plugins/kopf
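Installed plugins can also be listed through the REST API; a quick check, assuming the node listens on localhost:9200:
# curl 'http://localhost:9200/_cat/plugins?v'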
Open http://192.168.1.114:9200/_plugin/kopf in a browser to browse the data stored in Elasticsearch, as shown in the figure below.
5. Install Kibana
Download Kibana and extract it to /usr/local/:
# wget
# tar zvxf kibana-4.1.1-linux-x64.tar.gz -C /usr/local/
Start Kibana:
# /usr/local/kibana-4.1.1-linux-x64/bin/kibana
Access Kibana at http://kibanaServerIP:5601. Once in, configure an index pattern; the defaults are fine: Kibana is pointed at Elasticsearch and uses the default time-based logstash-* index name, so simply click "Create".
Seeing the following screen means the index pattern has been created.
Click "Discover" to search and browse the data stored in Elasticsearch.
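If Discover shows no data, it can help to confirm from the command line that logstash-* indices actually exist in Elasticsearch; a quick check, assuming the default port 9200:
# curl 'http://localhost:9200/_cat/indices?v'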
At this point, the ELK platform deployment is complete.