ELK Production Practice
2022-04-23 07:28:00 【李俊的博客】
Background
ELK is our log storage and query system, used both as a data source for system monitoring and for ad-hoc log queries.
JOOX logs currently run through a self-built ELK pipeline: log landing, collection (Filebeat), buffering (Kafka, optional), filtering (Logstash), search (Elasticsearch), and presentation (Kibana).
1. Log landing
1.1 Client logs
Client devices report their data to the designated log servers.
1.2 Nginx logs
Nginx logs land in the nginx log directory on each server (/usr/local/tnginx/logs/ in the Filebeat paths below).
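The Logstash filter in section 4 splits each access-log line on the | character into named fields, which suggests these access logs use a pipe-delimited log_format. The actual nginx directive is not shown in this article; the sketch below is inferred only from that filter's field list and is illustrative, not the real server configuration:

# Hypothetical pipe-delimited log_format matching the field list used by the Logstash ruby filter in section 4
log_format joox_pipe '$remote_addr|-|$remote_user|$time_local|$request|$status|'
                     '$body_bytes_sent|$http_referer|$http_user_agent|$http_x_forwarded_for|'
                     '$host|$cookie_ssl_edition|$upstream_addr|$upstream_status|'
                     '$request_time|$upstream_response_time|$cookie_uin|$cookie_luin|$proxy_host';
access_log /usr/local/tnginx/logs/www.joox.com.access.log joox_pipe;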
2. Log collection
2.1 Filebeat configuration
Filebeat is deployed on 9.59.1.154 and 9.131.172.16. It tails the log files listed under paths and ships them either to Logstash or to Kafka: when log volume is heavy, write to Kafka first as a buffer; otherwise events can go straight to Logstash. The configuration used is shown below.
filebeat.spool_size: 5000
filebeat.idle_timeout: "5s"
filebeat.prospectors:
  - input_type: log
    paths:
      - /usr/local/tnginx/logs/www.joox.com.access.log
      - /usr/local/tnginx/logs/www.joox.com_445.access.log
    fields_under_root: true
    fields:
      feature: nginx-webjoox
      document_type: webjoox_nginx
  - input_type: log
    paths:
      - /usr/local/tnginx/logs/api_joox.access.log
      - /usr/local/tnginx/logs/api_joox_445.access.log
    fields_under_root: true
    fields:
      feature: nginx-apijoox
      document_type: apijoox_nginx
  - input_type: log
    paths:
      - /data/joox-web-ssr/logs/JOOX-WEB-SSR-ERROR.log
    fields_under_root: true
    fields:
      feature: nginx-pm2ssr
      document_type: pm2joox_nginx

#output.logstash:
#  #loadbalance: true
#  #hosts: ["10.228.175.10:5044","10.228.14.205:5044"]
#  hosts: ["10.228.180.102:5045"]
#  #hosts: ["10.228.139.36:5045"]

output.kafka:
  hosts: ["kafka.iops.woa.com:9092"]
  topic: web_joox2
  required_acks: 1
  compression: none
2.2 Starting Filebeat
~]# cd /usr/local/elk/filebeat_2000701487_15/ && filebeat -e -c filebeat-app.yml
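Before running it in the foreground as above, the configuration and the output connection can be sanity-checked; test config and test output are standard Filebeat subcommands in recent releases (older 5.x builds use the -configtest flag instead):

~]# filebeat test config -c filebeat-app.yml
~]# filebeat test output -c filebeat-app.yml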
3. Log buffering (optional)
The web_joox2 topic can be found under the Tencent Cloud CKafka instance joox-iops.
Reference: CKafka technical overview, Tencent Cloud product documentation.
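To confirm that Filebeat events are actually reaching the topic, the stock Kafka console tools can be pointed at the CKafka instance. This is a sketch that assumes the Kafka CLI is installed on a host that can reach kafka.iops.woa.com:9092 (older tool versions take --zookeeper instead of --bootstrap-server):

# Confirm the topic exists
kafka-topics.sh --bootstrap-server kafka.iops.woa.com:9092 --list | grep web_joox2
# Read a few messages to confirm Filebeat is producing
kafka-console-consumer.sh --bootstrap-server kafka.iops.woa.com:9092 --topic web_joox2 --from-beginning --max-messages 5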

4. Log filtering
4.1 Logstash handles log input, filtering, and output
Logstash runs in containers. On the Kubernetes jump host (9.81.29.39), the following commands trace where the Logstash data comes from and where it goes.
1. The Filebeat configuration delivers to 9.26.7.167:17007, so first locate the Service: it is a NodePort Service exposing port 17007.
[root@k8s-jump ~]# kubectl get svc -n iops | grep jooxapp
logstash-jooxapp-fb NodePort 10.97.93.0 <none> 17007:17007/TCP 466d
2. Find the Deployment named logstash-jooxapp-fb and inspect its YAML (abridged output below): the Logstash pipeline config is mounted from a ConfigMap and sits at /usr/share/logstash/pipeline/indexer-kafka-named-k8s.conf inside the container.
[root@k8s-jump ~]# kubectl get deploy logstash-jooxapp-fb -n iops -o yaml
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: logstash-jooxapp-fb
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: logstash-jooxapp-fb
    spec:
      containers:
      - env:
        volumeMounts:
        - mountPath: /usr/share/logstash/pipeline
          name: vm-config
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: logstash-config-named-k8s
            path: indexer-kafka-named-k8s.conf
          name: logstash-jooxapp-fb
        name: vm-config
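The same pipeline file can also be read straight from the ConfigMap, without exec'ing into a pod (names taken from the volume definition above):

[root@k8s-jump ~]# kubectl get configmap logstash-jooxapp-fb -n iops -o yaml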
3. Inspect the indexer-kafka-named-k8s.conf mounted from the ConfigMap. Its main logic is to read the log events produced by Filebeat (via Kafka), parse them, and deliver them to ES.
[root@k8s-jump ~] # kubectl exec -it logstash-jooxapp-fb-76bb48b96-zh2c2 /bin/bash -n iops
beac38d ] # cat /usr/share/logstash/pipeline/indexer-kafka-named-k8s.conf
filter {
    if [type] == "cmsjoox_nginx" {
        ruby {
            init => "@kname = ['remote_addr','unknow','remote_user','time_local','request','status','body_bytes_sent','http_referer','http_user_agent','http_x_forwarded_for','host','cookie_ssl_edition','upstream_addr','upstream_status','request_time','upstream_response_time','cookie_uin','cookie_luin','proxy_host']"
            code => "
                v = Hash[@kname.zip(event.get('message').split('|'))]
                new_event = LogStash::Event.new(v)
                new_event.remove('@timestamp')
                event.append(new_event)
                event.set('raw_request', event.get('request'))
                req = event.get('request').split(' ')[1];
                if req.index('?') == ''
                    event.set('request',req);
                else
                    event.set('request',req.split('?')[0]);
                end
                event.set('upstream_response_time_ext',event.get('upstream_response_time'));
                event.set('request_time_ext',event.get('request_time'));
            "
        }
        geoip {
            source => "remote_addr"
        }
        date {
            match => ["time_local","dd/MMM/yyyy:HH:mm:ss Z"]
            target => "@timestamp"
        }
        mutate {
            convert => ["upstream_response_time_ext", "float"]
            convert => ["request_time_ext", "float"]
        }
    }
}
## logstash conf
input {
    kafka {
        bootstrap_servers => ["kafka.iops.woa.com:9092"]
        client_id => "${NODE_NAME}"
        group_id => "joox_cms"
        auto_offset_reset => "latest"
        decorate_events => "true"
        topics => ["cmsjoox"]
        codec => json    # must be set; otherwise the JSON is not decoded and only the raw ${message} comes through
    }
}
## logstash conf
output {
    if "parsefailure" in [tags] {
        stdout { codec => rubydebug }
    } else {
        elasticsearch {
            hosts => ["http://iLogstack.oa.com:19200/es-logstashing/"]
            index => "cmsjoox_%{+YYYY.MM.dd}"
        }
    }
}
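When this pipeline needs to be changed, the syntax can be checked inside the container before a new ConfigMap is rolled out; --config.test_and_exit and --path.data are standard Logstash flags, and the paths below assume the stock Logstash image layout (--path.data points at a scratch directory so the check does not collide with the running instance):

beac38d ] # /usr/share/logstash/bin/logstash -f /usr/share/logstash/pipeline/indexer-kafka-named-k8s.conf --config.test_and_exit --path.data /tmp/logstash-conftest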
4. Structure of the logstash-joox-cms pipeline
A Logstash pipeline contains two mandatory elements, input and output, and one optional element, filter.
Events are read from the input, parsed and processed by the (optional) filter, and written by the output to the target store (Elasticsearch or another backend); see the minimal sketch below.
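A minimal sketch of that structure (stdin/stdout plugins are used purely for illustration and are not part of the production pipeline):

input {
    stdin { }                                       # event source: beats, kafka, file, ...
}
filter {
    mutate { add_field => { "stage" => "demo" } }   # optional parsing / enrichment
}
output {
    stdout { codec => rubydebug }                   # destination: elasticsearch, stdout, ...
}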
5. Log storage and search (Elasticsearch)
5.1 A distributed, RESTful search and analytics engine
1. Data that Logstash delivers to ES is retained for 7 days; the webjoox_nginx index name follows the pattern logstash-webjoox_nginx_%{+YYYY.MM.dd}.
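Which daily indices currently exist (and therefore how many days are really being kept) can be checked with the _cat API, using the same endpoint style as the Logstash output above and the cleanup script below; the grep pattern is just an example:

curl -s 'http://iLogstack.oa.com:19200/es-logstashing/_cat/indices?v' | grep webjoox_nginx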

5.2 Scheduled cleanup of ES indices
On 9.81.21.162, the following cron entry runs the cleanup script every 5 minutes (from 09:00 to 23:59):
*/5 9-23 * * * cd /data/iLogStack/es_plugins/ops && sh delete_indices.sh > /dev/null 2>&1
#!/bin/bash
##rossypli at 2018/03/25
###################################
# Delete ES cluster indices older than 7 days
###################################
function delete_indices() {
    Today=`date +"%Y.%m.%d %H:%M:%S"`
    # Work around index names containing %{ostype}-style placeholders
    indices=`echo "$1" | sed 's/%/*/g' | sed 's/{/*/g' | sed 's/}/*/g'`
    curl -XDELETE http://iLogStack.oa.com:19200/es-monitoring/$indices
    Today_end=`date +"%Y.%m.%d %H:%M:%S"`
    echo "DELETE indices|$Today|$Today_end|$indices" >> log/DELETE.log
    echo "DELETE $indices SUCC"
    exit  ## Deleting a large index can destabilize ES, so stop after one index and let the next cron run delete the next one
}

function filter_indices() {
    comp_date=`date -d "$delete_day day ago" +"%Y.%m.%d"`
    # V updated for closed indices on 20181130
    curl -XGET http://iLogStack.oa.com:19200/es-monitoring/_cat/indices | grep $indices | grep $comp_date | awk -F "$comp_date" '{print $1}' | awk -F " " '{print $NF}' | sort | uniq | while read LINE
    do
        # Call the index deletion function
        delete_indices ${LINE}$comp_date
    done
}

function filter_indices_default() {  ## Default retention: 7 days
    comp_date=`date -d "7 day ago" +"%Y.%m.%d"`
    #curl -XGET http://iLogStack.oa.com:19200/es-monitoring/_cat/indices | grep -Ev "$indices_set" | grep $comp_date | awk -F" " '{print $3}' | sort | uniq | while read LINE
    curl -XGET http://iLogStack.oa.com:19200/es-monitoring/_cat/indices | grep -Ev "$indices_set" | grep $comp_date | awk -F "$comp_date" '{print $1}' | awk -F " " '{print $NF}' | sort | uniq | while read LINE
    do
        # Call the index deletion function
        delete_indices ${LINE}$comp_date
        #break  ## Deleting a large index can destabilize ES, so stop after one index and let the next cron run delete the next one
    done
}

## If an index has a configured retention period, delete by that period; otherwise fall back to the default 7 days
grep -v '#' indices.conf | while read line
do
    indices=$(echo $line | awk '{print $1}')
    delete_day=$(echo $line | awk '{print $2}')
    filter_indices
    echo $indices >> indices_tmp
done
sed -i ':a;N;$!ba;s/\n/|/g' indices_tmp  ## Join lines with "|" so the list can be used as a grep -Ev pattern
indices_set=$(cat indices_tmp)
rm -f indices_tmp
filter_indices_default
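The script reads its per-index retention rules from indices.conf in the same directory; judging from the awk parsing above, each non-comment line carries an index-name prefix and a retention period in days. The entries below are hypothetical examples, not the real production file:

# indices.conf (hypothetical example)
# <index prefix>            <retention in days>
cmsjoox_                    14
logstash-webjoox_nginx_     7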
6. Log presentation (Kibana)
6.1 Kibana handles data visualization and analysis
1. Kibana supports index patterns: create a pattern that matches the indices above, then select it in Kibana (a scripted alternative is sketched below).
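Index patterns are normally created in the Kibana UI (Management > Index Patterns), but they can also be scripted via Kibana's saved-objects API. A sketch, assuming a recent Kibana and a hypothetical address http://kibana.example:5601 (the article does not give the real Kibana URL):

curl -X POST 'http://kibana.example:5601/api/saved_objects/index-pattern' \
    -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
    -d '{"attributes": {"title": "cmsjoox_*", "timeFieldName": "@timestamp"}}'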

2. In Kibana, select an index pattern and search.

Copyright notice
This article was written by [李俊的博客]; please keep the original link when reposting. Thanks.
https://duozan.blog.csdn.net/article/details/122453646