ELK in Production Practice
2022-04-23 09:00:00 【Li Jun's blog】
Background
ELK is our log storage and query system; it serves as the data source for monitoring and supports log queries.
The entire JOOX logging pipeline is currently self-built on ELK: log landing, collection (Filebeat), buffering (Kafka) [optional], filtering (Logstash), search (Elasticsearch), and display (Kibana).
1、Log landing
1.1 Client logs
Client devices report log data to the designated log server.
1.2 nginx logs
nginx writes its logs to the log directory on disk.
2、Log collection
2.1 Filebeat configuration
Filebeat is deployed on 9.59.1.154 and 9.131.172.16. It collects log files from the configured paths and ships them to Logstash or Kafka: when log volume is high, write to Kafka for buffering; when volume is low, send directly to Logstash.
filebeat.spool_size: 5000
filebeat.idle_timeout: "5s"
filebeat.prospectors:
- input_type: log
  paths:
    - /usr/local/tnginx/logs/www.joox.com.access.log
    - /usr/local/tnginx/logs/www.joox.com_445.access.log
  fields_under_root: true
  fields:
    feature: nginx-webjoox
  document_type: webjoox_nginx
- input_type: log
  paths:
    - /usr/local/tnginx/logs/api_joox.access.log
    - /usr/local/tnginx/logs/api_joox_445.access.log
  fields_under_root: true
  fields:
    feature: nginx-apijoox
  document_type: apijoox_nginx
- input_type: log
  paths:
    - /data/joox-web-ssr/logs/JOOX-WEB-SSR-ERROR.log
  fields_under_root: true
  fields:
    feature: nginx-pm2ssr
  document_type: pm2joox_nginx
#output.logstash:
#  #loadbalance: true
#  #hosts: ["10.228.175.10:5044","10.228.14.205:5044"]
#  hosts: ["10.228.180.102:5045"]
#  #hosts: ["10.228.139.36:5045"]
output.kafka:
  hosts: ["kafka.iops.woa.com:9092"]
  topic: web_joox2
  required_acks: 1
  compression: none
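Before starting Filebeat, the configuration file can be validated. A quick sanity check; which subcommand applies depends on the Filebeat version in use:

filebeat test config -c filebeat-app.yml    # 6.x and later
filebeat -configtest -c filebeat-app.yml    # 5.x releases
filebeat test output -c filebeat-app.yml    # 6.x+: also verifies connectivity to the Kafka/Logstash output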
2.2 Starting Filebeat
~]# cd /usr/local/elk/filebeat_2000701487_15/ && filebeat -e -c filebeat-app.yml
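The -e flag keeps Filebeat in the foreground with logs on stderr, which is convenient for debugging. For unattended operation, one minimal option is to background it (a sketch only; a systemd unit or a process supervisor is the more robust choice):

~]# cd /usr/local/elk/filebeat_2000701487_15/ && nohup filebeat -c filebeat-app.yml > filebeat.out 2>&1 &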
3、Log buffering [optional]
Under the joox-iops instance of Tencent Cloud's message queue CKafka, you can find the web_joox2 topic.
Message Queue CKafka technical overview - Product Introduction - Documentation Center - Tencent Cloud
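To confirm that Filebeat is really producing to the topic, one option (assuming the standard Kafka console tools are available on a host that can reach the brokers) is to consume a few messages:

kafka-console-consumer.sh --bootstrap-server kafka.iops.woa.com:9092 --topic web_joox2 --max-messages 5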

4、Log filtering
4.1 Logstash handles log input, filtering, and output
Logstash is started in containers. On the k8s jump host [9.81.29.39], run the following commands to trace Logstash's data source and destination.
1. Because the Filebeat configuration delivers to 9.26.7.167:17007, first look up the Service. It turns out to be a NodePort Service exposing port 17007:
[root@k8s-jump ~]# kubectl get svc -n iops | grep jooxapp
logstash-jooxapp-fb NodePort 10.97.93.0 <none> 17007:17007/TCP 466d
2. Find the Deployment named logstash-jooxapp-fb and inspect its YAML. The Logstash configuration file is mounted from a ConfigMap; inside the container its path is /usr/share/logstash/pipeline/indexer-kafka-named-k8s.conf.
[root@k8s-jump ~]# kubectl get deploy logstash-jooxapp-fb -n iops -o yaml
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: logstash-jooxapp-fb
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: logstash-jooxapp-fb
    spec:
      containers:
      - env:
        volumeMounts:
        - mountPath: /usr/share/logstash/pipeline
          name: vm-config
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: logstash-config-named-k8s
            path: indexer-kafka-named-k8s.conf
          name: logstash-jooxapp-fb
        name: vm-config
3. Inspect indexer-kafka-named-k8s.conf, mounted via the ConfigMap. Its main logic is to read log entries shipped by Filebeat (via Kafka) and deliver them to ES.
[root@k8s-jump ~] # kubectl exec -it logstash-jooxapp-fb-76bb48b96-zh2c2 /bin/bash -n iops
beac38d ] # cat /usr/share/logstash/pipeline/indexer-kafka-named-k8s.conf
filter {
  if [type] == "cmsjoox_nginx" {
    ruby {
      init => "@kname = ['remote_addr','unknow','remote_user','time_local','request','status','body_bytes_sent','http_referer','http_user_agent','http_x_forwarded_for','host','cookie_ssl_edition','upstream_addr','upstream_status','request_time','upstream_response_time','cookie_uin','cookie_luin','proxy_host']"
      code => "
        # Zip the '|'-delimited nginx log line into named fields
        v = Hash[@kname.zip(event.get('message').split('|'))]
        new_event = LogStash::Event.new(v)
        new_event.remove('@timestamp')
        event.append(new_event)
        # Keep the full request line, then strip the query string from 'request'
        event.set('raw_request', event.get('request'))
        req = event.get('request').split(' ')[1];
        if req.index('?').nil?   # String#index returns nil when '?' is absent (the original compared against '', which never matches)
          event.set('request',req);
        else
          event.set('request',req.split('?')[0]);
        end
        event.set('upstream_response_time_ext',event.get('upstream_response_time'));
        event.set('request_time_ext',event.get('request_time'));
      "
    }
    geoip {
      source => "remote_addr"
    }
    date {
      match => ["time_local","dd/MMM/yyyy:HH:mm:ss Z"]
      target => "@timestamp"
    }
    mutate {
      convert => ["upstream_response_time_ext", "float"]
      convert => ["request_time_ext", "float"]
    }
  }
}
## logstash conf
input {
  kafka {
    bootstrap_servers => ["kafka.iops.woa.com:9092"]
    client_id => "${NODE_NAME}"
    group_id => "joox_cms"
    auto_offset_reset => "latest"
    decorate_events => "true"
    topics => ["cmsjoox"]
    codec => json  # Required: without the json codec the payload is not parsed and events arrive as a raw ${message}
  }
}
## logstash conf
output {
  if "parsefailure" in [tags] {
    stdout { codec => rubydebug }
  } else {
    elasticsearch {
      hosts => ["http://iLogstack.oa.com:19200/es-logstashing/"]
      index => "cmsjoox_%{+YYYY.MM.dd}"
    }
  }
}
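When this file is changed, the syntax can be verified without starting the pipeline. A quick check, assuming the logstash binary is on PATH inside the container (it is in the official images):

logstash --config.test_and_exit -f /usr/share/logstash/pipeline/indexer-kafka-named-k8s.conf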
4. Structure of the logstash-joox-cms pipeline
A Logstash pipeline contains two required elements, input and output, and one optional element, filter.
Events are read from the input source, optionally parsed and processed by the filter, and written by the output to the target store (Elasticsearch or another sink).
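As a minimal, runnable illustration of that structure (stdin/stdout in place of Kafka/Elasticsearch; the added field is arbitrary):

input {
  stdin {}
}
filter {
  # Tag every event so the filter stage is visible in the output
  mutate { add_field => { "pipeline" => "demo" } }
}
output {
  stdout { codec => rubydebug }
}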
5、Log storage and search (ES)
5.1 Elasticsearch: a distributed, RESTful search and analytics engine
1. Data delivered by Logstash to ES is retained for 7 days. The index name format for webjoox_nginx is logstash-webjoox_nginx_%{+YYYY.MM.dd}.
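To spot-check what has landed in ES, the index can be queried directly over HTTP. A sketch using the cluster endpoint that appears elsewhere in this article (the query itself is only an illustration):

curl -XGET 'http://iLogstack.oa.com:19200/es-logstashing/logstash-webjoox_nginx_*/_search?q=status:500&size=5&pretty'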

5.2 Periodically cleaning up ES indices
On 9.81.21.162, a cron job runs the cleanup script every 5 minutes between 09:00 and 23:59:
*/5 9-23 * * * cd /data/iLogStack/es_plugins/ops && sh delete_indices.sh > /dev/null 2>&1
#!/bin/bash
##rossypli at 2018/03/25
###################################
# Delete ES cluster indices older than 7 days
###################################
function delete_indices() {
    Today=`date +"%Y.%m.%d %H:%M:%S"`
    # Work around index names containing the %{ostype} placeholder: turn %, { and } into wildcards
    indices=`echo "$1" | sed 's/%/*/g' | sed 's/{/*/g' | sed 's/}/*/g'`
    curl -XDELETE http://iLogStack.oa.com:19200/es-monitoring/$indices
    Today_end=`date +"%Y.%m.%d %H:%M:%S"`
    echo "DELETE indices|$Today|$Today_end|$indices" >> log/DELETE.log
    echo "DELETE $indices SUCC"
    exit  ## Deleting a large index can destabilize ES, so delete one index per run and exit; the next run deletes the next one
}
function filter_indices() {
    comp_date=`date -d "$delete_day day ago" +"%Y.%m.%d"`
    # Updated for closed indices on 20181130
    curl -XGET http://iLogStack.oa.com:19200/es-monitoring/_cat/indices | grep $indices | grep $comp_date | awk -F "$comp_date" '{print $1}' | awk -F " " '{print $NF}' | sort | uniq | while read LINE
    do
        # Call the index-deletion function
        delete_indices ${LINE}$comp_date
    done
}
function filter_indices_default() {  ## Default retention: 7 days
    comp_date=`date -d "7 day ago" +"%Y.%m.%d"`
    #curl -XGET http://iLogStack.oa.com:19200/es-monitoring/_cat/indices | grep -Ev "$indices_set" | grep $comp_date | awk -F" " '{print $3}' | sort | uniq | while read LINE
    curl -XGET http://iLogStack.oa.com:19200/es-monitoring/_cat/indices | grep -Ev "$indices_set" | grep $comp_date | awk -F "$comp_date" '{print $1}' | awk -F " " '{print $NF}' | sort | uniq | while read LINE
    do
        # Call the index-deletion function
        delete_indices ${LINE}$comp_date
        #break  ## Deleting a large index can destabilize ES, so delete one index and exit; delete the next on the next call
    done
}
## If a retention period is configured for an index in indices.conf, delete by that period; otherwise fall back to the 7-day default
grep -v '#' indices.conf | while read line
do
    indices=$(echo $line | awk '{print $1}')
    delete_day=$(echo $line | awk '{print $2}')
    filter_indices
    echo $indices >> indices_tmp
done
sed -i ':a;N;$!ba;s/\n/|/g' indices_tmp  ## Join the prefixes with '|' to build the grep -Ev pattern
indices_set=$(cat indices_tmp)
rm -f indices_tmp
filter_indices_default
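The script reads per-index retention from an indices.conf file: the first whitespace-separated field is the index-name prefix and the second is the retention in days. That file is not shown in the original, so the following is only an illustrative sketch with hypothetical entries:

# index_prefix            retention_days
logstash-webjoox_nginx_   14
cmsjoox_                  3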
6、Log display (Kibana)
6.1 Kibana handles data visualization and analysis
1. Kibana supports creating index patterns; select a created index pattern to match the indices you want to query.

2. Select an index pattern in Kibana, then search.
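For example, with the nginx fields parsed by the Logstash filter above (status, host, and so on), a query in the Kibana search bar might look like the following; the field names come from the filter's @kname list, and the values are hypothetical:

status:500 AND host:"www.joox.com"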

Copyright notice
This article was created by [Li Jun's blog]. Please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/04/202204230727535608.html