Hi everyone, I'm Yi'an~
Overview: In a previous post we implemented SkyWalking distributed tracing while integrating the logging framework. Today, building on that logging setup, we'll walk through shipping logs to Elasticsearch.
Introduction
ELK is a combination of three open-source tools: Elasticsearch, Logstash, and Kibana, all developed and maintained by Elastic.
Elasticsearch is a distributed search and analytics engine that makes it easy to store, search, and analyze large volumes of structured and unstructured data. It supports real-time search, analytics, and visualization, and can handle petabyte-scale data.
Logstash is a data collection engine that can ingest data from many sources, such as files, system logs, and databases. It transforms that data into queryable structured events and sends them to Elasticsearch for indexing and analysis.
Kibana is a data visualization tool that retrieves data from Elasticsearch and visualizes it. It offers many visualization options, such as charts, dashboards, and maps, to help users better understand their data.
As a log analysis solution, ELK is widely used in scenarios such as application log analysis, system log analysis, and security monitoring. It helps users find and fix problems quickly, improving system reliability and stability.
This article won't cover how to install Elasticsearch, Logstash, and Kibana; there are plenty of guides online. The focus here is the Logback -> Logstash -> Elasticsearch pipeline.
Main content
Add the dependency
Simply add it to the logging module:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.2</version>
</dependency>
Configure logback-spring.xml
The complete configuration:
<?xml version="1.0" encoding="UTF-8" ?>
<configuration>
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <springProperty scope="context" name="logpath" source="yian.logs.path" defaultValue="/org/yian/logs"/>
    <springProperty scope="context" name="ftst" source="yian.logs.ftst" defaultValue="0101"/>
    <property name="LOG_HOME" value="${logpath}"/>
    <property name="FTST" value="${ftst}"/>
    <property name="LOG_HOME_COMMON" value="${LOG_HOME}/stdout"/>
    <property name="LOG_HOME_ERROR" value="${LOG_HOME}/error"/>
    <!-- Operation log output -->
    <property name="LOG_HOME_PERFORMANCE" value="${LOG_HOME}/common"/>
    <!-- Appender that ships logs to SkyWalking -->
    <appender name="skywalkingLog" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>${FTST}|%d{yyyy-MM-dd' 'HH:mm:ss.SSS}|%-5level|%thread|${springAppName:-},%tid,%logger{36} - %msg%n</pattern>
            </layout>
        </encoder>
    </appender>
    <!-- Logstash -->
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- Address of the Logstash TCP listener -->
        <destination>localhost:5044</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
        </encoder>
    </appender>
    <!-- console -->
    <appender name="consoleLog" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <!-- Formatted output: %d = date, %thread = thread name, %-5level = level padded to 5 characters, %msg = log message, %n = newline -->
            <pattern>
                ${FTST}|%d{yyyy-MM-dd' 'HH:mm:ss.SSS}|%-5level|%thread|${springAppName:-},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%logger{36} - %msg%n
            </pattern>
        </layout>
    </appender>
    <!-- file common -->
    <appender name="fileCommonLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Rolling policy -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME_COMMON}-%d{yyyy-MM-dd}-%i.txt</fileNamePattern>
            <TimeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <MaxFileSize>100MB</MaxFileSize>
            </TimeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>
                    ${FTST}|%d{yyyy-MM-dd' 'HH:mm:ss.SSS}|%-5level|%thread|${springAppName:-},%tid,%logger{36} - %msg%n
                </pattern>
            </layout>
        </encoder>
    </appender>
    <!-- file error -->
    <appender name="fileErrorLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <!-- Rolling policy -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME_ERROR}-%d{yyyy-MM-dd}-%i.txt</fileNamePattern>
            <TimeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <MaxFileSize>100MB</MaxFileSize>
            </TimeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>
                    ${FTST}|%d{yyyy-MM-dd' 'HH:mm:ss.SSS}|%-5level|%thread|${springAppName:-},%tid,%logger{36} - %msg%n
                </pattern>
            </layout>
        </encoder>
    </appender>
    <!-- file performance -->
    <appender name="filePerformanceLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Rolling policy -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME_PERFORMANCE}-%d{yyyy-MM-dd}-%i.txt</fileNamePattern>
            <TimeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <MaxFileSize>100MB</MaxFileSize>
            </TimeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder>
            <pattern>%msg%n</pattern>
        </encoder>
    </appender>
    <!-- Route operation logs to the performance log file and to Logstash -->
    <logger name="org.yian.log.SysLogAspect" level="DEBUG" additivity="false">
        <appender-ref ref="filePerformanceLog"/>
        <appender-ref ref="logstash"/>
    </logger>
    <root level="info">
        <appender-ref ref="skywalkingLog"/>
        <appender-ref ref="consoleLog"/>
        <appender-ref ref="fileCommonLog"/>
        <appender-ref ref="fileErrorLog"/>
    </root>
</configuration>
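Note that the org.yian.log.SysLogAspect logger above is the only one wired to the logstash appender, and its file pattern is a bare %msg%n, so whatever that class logs becomes the entire record. The aspect itself is outside the scope of this article; a hypothetical sketch of what it might look like (the pointcut and all names are assumptions, not the actual implementation):

package org.yian.log;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class SysLogAspect {

    private static final Logger log = LoggerFactory.getLogger(SysLogAspect.class);

    // Hypothetical pointcut: time every method in the service layer.
    @Around("execution(* org.yian..service..*(..))")
    public Object logPerformance(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return pjp.proceed();
        } finally {
            // With the bare %msg%n pattern, this line is the full log record.
            log.debug("{}|{}ms", pjp.getSignature().toShortString(),
                    System.currentTimeMillis() - start);
        }
    }
}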
Write logstash-logback.conf
The file name can be anything:
# Sample Logstash configuration for a simple
# Logback (TCP) -> Logstash -> Elasticsearch pipeline.
input {
  tcp {
    host => "0.0.0.0"
    port => 5044
    codec => json_lines
  }
}
filter {
  mutate {
    # Drop fields we don't need
    remove_field => ["@timestamp","@version","host","path","thread_name","level_value","port"]
  }
}
output {
  stdout {
    # Emit events as JSON lines
    codec => json_lines
  }
}
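Before wiring up the application, you can smoke-test the TCP input by hand: the json_lines codec expects one JSON object per newline-terminated line. A throwaway sketch, assuming Logstash is already listening on localhost:5044:

import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class LogstashTcpSmokeTest {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 5044);
             Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8)) {
            // One JSON object per line, matching the json_lines codec.
            out.write("{\"message\":\"hello from smoke test\",\"level\":\"INFO\"}\n");
            out.flush();
        }
    }
}

If the pipeline is up, the event is echoed on the Logstash console by the stdout output.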
Validate the configuration file
PS D:\local\springcloud\logstash-7.6.2\bin> .\logstash -f ..\config\logstash-logback.conf -t
Sending Logstash logs to D:/local/springcloud/logstash-7.6.2/logs which is now configured via log4j2.properties
[2023-04-17T15:41:04,342][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2023-04-17T15:41:06,099][INFO ][org.reflections.Reflections] Reflections took 49 ms to scan 1 urls, producing 20 keys and 40 values
Configuration OK
[2023-04-17T15:41:06,667][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
Start Logstash
PS D:\local\springcloud\logstash-7.6.2\bin> .\logstash -f ..\config\logstash-logback.conf
Sending Logstash logs to D:/local/springcloud/logstash-7.6.2/logs which is now configured via log4j2.properties
[2023-04-17T15:42:21,837][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2023-04-17T15:42:21,975][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.6.2"}
[2023-04-17T15:42:24,664][INFO ][org.reflections.Reflections] Reflections took 46 ms to scan 1 urls, producing 20 keys and 40 values
[2023-04-17T15:42:25,566][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2023-04-17T15:42:25,582][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["D:/local/springcloud/logstash-7.6.2/config/logstash-logback.conf"], :thread=>"#<Thread:0x64db8c8e run>"}
[2023-04-17T15:42:26,553][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2023-04-17T15:42:26,569][INFO ][logstash.inputs.tcp ][main] Starting tcp input listener {:address=>"0.0.0.0:5044", :ssl_enable=>"false"}
[2023-04-17T15:42:26,663][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2023-04-17T15:42:27,200][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Test console output
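Start the application and generate a few events; they should show up as JSON lines on the Logstash console. A minimal sketch for producing test output (the class name is illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class LogDemoApplication implements CommandLineRunner {

    private static final Logger log = LoggerFactory.getLogger(LogDemoApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(LogDemoApplication.class, args);
    }

    @Override
    public void run(String... args) {
        // Goes to the console, the rolling files, and SkyWalking via the root logger.
        log.info("hello elasticsearch pipeline");

        // In the logback configuration above, only org.yian.log.SysLogAspect is
        // wired to the logstash appender, so log through that name to exercise
        // the TCP path end to end.
        Logger shipped = LoggerFactory.getLogger("org.yian.log.SysLogAspect");
        shipped.debug("test event that should reach the Logstash console");
    }
}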
Configure output to Elasticsearch
input {
  tcp {
    host => "0.0.0.0"
    port => 5044
    codec => json_lines
  }
}
filter {
  mutate {
    # Drop fields we don't need
    remove_field => ["@timestamp","@version","host","path","thread_name","level_value","port"]
  }
}
output {
  stdout {
    # Emit events as JSON lines
    codec => json_lines
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "yianweilai"
  }
}
Restart Logstash
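Once it is back up and some log traffic has flowed, you can confirm that documents are landing in the yianweilai index. A quick check with the JDK 11 HTTP client, assuming Elasticsearch runs on localhost:9200 with security disabled:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EsIndexCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // _count returns the number of documents currently in the index.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/yianweilai/_count"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. {"count":42,...}
    }
}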
Additional notes:
<!-- Logstash -->
<appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- Address of the Logstash TCP listener -->
    <destination>localhost:5044</destination>
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
        <!-- applicationName must be defined elsewhere in the config, e.g.
             <springProperty scope="context" name="applicationName" source="spring.application.name"/> -->
        <customFields>{"serverName":"${applicationName}"}</customFields>
    </encoder>
</appender>
input {
  tcp {
    host => "0.0.0.0"
    port => 5044
    codec => json_lines
    type => "cloud_alibaba"
  }
}
filter {
  mutate {
    # Drop fields we don't need
    remove_field => ["@timestamp","@version","host","path","thread_name","level_value","port"]
  }
}
output {
  # Delete this block if you don't need console output
  stdout {
    codec => json_lines
  }
  # Use type to distinguish logs from different sources
  if [type] == "cloud_alibaba" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "%{[serverName]}-%{+YYYY.MM.dd}"
    }
  }
}
Logstash receives the logs sent by Logback over TCP, and the serverName defined in customFields is used to build the index name dynamically.
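For instance, after the mutate filter strips the noisy fields, an event reaching the output stage might look roughly like this (illustrative values; the field names are LogstashEncoder defaults plus our customFields):

{
  "message": "order placed",
  "logger_name": "org.yian.log.SysLogAspect",
  "level": "DEBUG",
  "serverName": "cloud-alibaba-demo",
  "type": "cloud_alibaba"
}

With serverName resolving to cloud-alibaba-demo, the document is written to an index such as cloud-alibaba-demo-2023.04.17.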
With that, we have successfully shipped our logs to Elasticsearch.
If this article helped or inspired you, please share, bookmark, like, and tap "Wow". Your support is what keeps me going!