Production Practice: ELK
2022-04-23 09:00:00 【Li Jun's blog】
Background
ELK is our log storage and query system; it serves as a data source for monitoring and supports log queries.
Currently the entire JOOX logging flow is self-built ELK: log landing, collection (Filebeat), buffering (Kafka, optional), filtering (Logstash), search (Elasticsearch), and display (Kibana).
1. Log landing
1.1 Terminal logs
Client terminals report data to the designated log servers.
1.2 Nginx logs
Nginx writes its logs into the log directory.
2. Log collection
2.1 Filebeat configuration
Filebeat is deployed on 9.59.1.154 and 9.131.172.16. It collects the log files listed under paths and ships them to Logstash or Kafka: with heavy log traffic, write to Kafka for buffering; with light traffic, ship directly to Logstash.
filebeat.spool_size: 5000
filebeat.idle_timeout: "5s"
filebeat.prospectors:
- input_type: log
  paths:
    - /usr/local/tnginx/logs/www.joox.com.access.log
    - /usr/local/tnginx/logs/www.joox.com_445.access.log
  fields_under_root: true
  fields:
    feature: nginx-webjoox
  document_type: webjoox_nginx
- input_type: log
  paths:
    - /usr/local/tnginx/logs/api_joox.access.log
    - /usr/local/tnginx/logs/api_joox_445.access.log
  fields_under_root: true
  fields:
    feature: nginx-apijoox
  document_type: apijoox_nginx
- input_type: log
  paths:
    - /data/joox-web-ssr/logs/JOOX-WEB-SSR-ERROR.log
  fields_under_root: true
  fields:
    feature: nginx-pm2ssr
  document_type: pm2joox_nginx
#output.logstash:
#  #loadbalance: true
#  #hosts: ["10.228.175.10:5044","10.228.14.205:5044"]
#  hosts: ["10.228.180.102:5045"]
#  #hosts: ["10.228.139.36:5045"]
output.kafka:
  hosts: ["kafka.iops.woa.com:9092"]
  topic: web_joox2
  required_acks: 1
  compression: none
2.2 Starting Filebeat
~]# cd /usr/local/elk/filebeat_2000701487_15/ && filebeat -e -c filebeat-app.yml
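The -e flag keeps Filebeat in the foreground logging to stderr, which is handy for debugging. For an unattended run, a minimal sketch using nohup (the output log path here is illustrative; a systemd unit or a supervisor would be more robust):
~]# nohup filebeat -c /usr/local/elk/filebeat_2000701487_15/filebeat-app.yml >> /tmp/filebeat.out 2>&1 &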
3. Log buffering [optional]
Under the Tencent Cloud message queue CKafka instance joox-iops, you can find the web_joox2 topic.
Reference: Message Queue CKafka technical principles - Product Overview - Documentation Center - Tencent Cloud
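To confirm that Filebeat is actually producing into the topic, you can peek at the stream with the stock Kafka console consumer (a quick sanity check; assumes the standard Kafka CLI tools are on the PATH):
kafka-console-consumer.sh --bootstrap-server kafka.iops.woa.com:9092 --topic web_joox2 --max-messages 5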

4. Log filtering
4.1 Logstash handles log input, filtering, and output
Logstash runs in containers. On the Kubernetes jump host [9.81.29.39], run the following commands to trace Logstash's data sources and destinations.
1. The Filebeat config file sets the delivery address to 9.26.7.167:17007, so first locate the Service; it turns out to be a NodePort Service exposing port 17007.
[root@k8s-jump ~]# kubectl get svc -n iops | grep jooxapp
logstash-jooxapp-fb NodePort 10.97.93.0 <none> 17007:17007/TCP 466d
2. Find the Deployment named logstash-jooxapp-fb and inspect its YAML. The Logstash config file is mounted from a ConfigMap; inside the container its path is /usr/share/logstash/pipeline/indexer-kafka-named-k8s.conf.
[root@k8s-jump ~]# kubectl get deploy logstash-jooxapp-fb -n iops -o yaml
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: logstash-jooxapp-fb
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: logstash-jooxapp-fb
    spec:
      containers:
      - env:
        volumeMounts:
        - mountPath: /usr/share/logstash/pipeline
          name: vm-config
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: logstash-config-named-k8s
            path: indexer-kafka-named-k8s.conf
          name: logstash-jooxapp-fb
        name: vm-config
3. View indexer-kafka-named-k8s.conf, mounted via the ConfigMap. Its main logic is to read the Filebeat log entries from Kafka and deliver them to ES.
[root@k8s-jump ~] # kubectl exec -it logstash-jooxapp-fb-76bb48b96-zh2c2 /bin/bash -n iops
beac38d ] # cat /usr/share/logstash/pipeline/indexer-kafka-named-k8s.conf
filter {
  if [type] == "cmsjoox_nginx" {
    ruby {
      init => "@kname = ['remote_addr','unknow','remote_user','time_local','request','status','body_bytes_sent','http_referer','http_user_agent','http_x_forwarded_for','host','cookie_ssl_edition','upstream_addr','upstream_status','request_time','upstream_response_time','cookie_uin','cookie_luin','proxy_host']"
      code => "
        v = Hash[@kname.zip(event.get('message').split('|'))]
        new_event = LogStash::Event.new(v)
        new_event.remove('@timestamp')
        event.append(new_event)
        event.set('raw_request', event.get('request'))
        req = event.get('request').split(' ')[1];
        if req.index('?').nil?
          event.set('request',req);
        else
          event.set('request',req.split('?')[0]);
        end
        event.set('upstream_response_time_ext',event.get('upstream_response_time'));
        event.set('request_time_ext',event.get('request_time'));
      "
    }
    geoip {
      source => "remote_addr"
    }
    date {
      match => ["time_local","dd/MMM/yyyy:HH:mm:ss Z"]
      target => "@timestamp"
    }
    mutate {
      convert => ["upstream_response_time_ext", "float"]
      convert => ["request_time_ext", "float"]
    }
  }
}
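# Illustration (hypothetical log line): given a pipe-delimited message such as
#   1.2.3.4|-|-|23/Apr/2022:09:00:00 +0800|GET /api/song?id=42 HTTP/1.1|200|512|...
# @kname.zip(...) maps fields positionally, e.g. remote_addr => "1.2.3.4",
# time_local => "23/Apr/2022:09:00:00 +0800", request => "GET /api/song?id=42 HTTP/1.1".
# The ruby code then strips the method and query string, leaving request => "/api/song",
# while raw_request keeps the original value.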
## logstash conf
input {
  kafka {
    bootstrap_servers => ["kafka.iops.woa.com:9092"]
    client_id => "${NODE_NAME}"
    group_id => "joox_cms"
    auto_offset_reset => "latest"
    decorate_events => "true"
    topics => ["cmsjoox"]
    codec => json # Required; otherwise events arrive as unparsed raw ${message} strings
  }
}
## logstash conf
output {
  if "parsefailure" in [tags] {
    stdout { codec => rubydebug }
  } else {
    elasticsearch {
      hosts => ["http://iLogstack.oa.com:19200/es-logstashing/"]
      index => "cmsjoox_%{+YYYY.MM.dd}"
    }
  }
}
4. Structure of logstash-joox-cms
A Logstash pipeline contains two required elements, input and output, plus one optional element, filter. Events are read from the input source, optionally parsed and processed by the filter, and written by the output to the target store (Elasticsearch or another sink); a minimal skeleton is sketched below.
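For illustration only, a minimal sketch of that three-part layout, which reads lines from stdin, tags each event with an extra field, and prints structured events to stdout:
input {
  stdin { }
}
filter {
  mutate { add_field => { "pipeline" => "demo" } }  # optional processing step
}
output {
  stdout { codec => rubydebug }  # pretty-print each event
}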
5. Log storage and search (ES)
5.1 Elasticsearch: a distributed, RESTful search and analytics engine
1. Data delivered to ES by Logstash is retained for 7 days; the webjoox_nginx index follows the pattern logstash-webjoox_nginx_%{+YYYY.MM.dd}.
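To see which daily indices currently exist, you can list them through the cluster's _cat API (a quick check; the es-logstashing path prefix follows the Logstash output above, so adjust it if your indices sit behind a different gateway path):
curl -s 'http://iLogstack.oa.com:19200/es-logstashing/_cat/indices?v' | grep webjoox_nginx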

5.2 Periodic ES index cleanup
On 9.81.21.162, cron runs the cleanup script every 5 minutes:
*/5 9-23 * * * cd /data/iLogStack/es_plugins/ops && sh delete_indices.sh > /dev/null 2>&1
#!/bin/bash
##rossypli at 2018/03/25
###################################
# Delete ES cluster indices older than 7 days
###################################
function delete_indices() {
    Today=`date +"%Y.%m.%d %H:%M:%S"`
    # Work around index names containing %{ostype} placeholders
    indices=`echo "$1" | sed 's/%/*/g' | sed 's/{/*/g' | sed 's/}/*/g'`
    curl -XDELETE http://iLogStack.oa.com:19200/es-monitoring/$indices
    Today_end=`date +"%Y.%m.%d %H:%M:%S"`
    echo "DELETE indices|$Today|$Today_end|$indices" >> log/DELETE.log
    echo "DELETE $indices SUCC"
    exit ## Deleting a large index can affect ES stability, so delete one index per run and pick up the next one on the next invocation
}

function filter_indices() {
    comp_date=`date -d "$delete_day day ago" +"%Y.%m.%d"`
    # V updated for closed indices on 20181130
    curl -XGET http://iLogStack.oa.com:19200/es-monitoring/_cat/indices | grep $indices | grep $comp_date | awk -F "$comp_date" '{print $1}' | awk -F " " '{print $NF}' | sort | uniq | while read LINE
    do
        # Call the index delete function
        delete_indices ${LINE}$comp_date
    done
}

function filter_indices_default() { ## Default retention is 7 days
    comp_date=`date -d "7 day ago" +"%Y.%m.%d"`
    #curl -XGET http://iLogStack.oa.com:19200/es-monitoring/_cat/indices |grep -Ev "$indices_set"|grep $comp_date| awk -F" " '{print $3}' | sort | uniq| while read LINE
    curl -XGET http://iLogStack.oa.com:19200/es-monitoring/_cat/indices | grep -Ev "$indices_set" | grep $comp_date | awk -F "$comp_date" '{print $1}' | awk -F " " '{print $NF}' | sort | uniq | while read LINE
    do
        # Call the index delete function
        delete_indices ${LINE}$comp_date
        #break ## Deleting a large index can affect ES stability, so delete one index per run and pick up the next one on the next invocation
    done
}

## If a retention period is configured for an index, delete by that period; otherwise use the default of 7 days
grep -v '#' indices.conf | while read line
do
    indices=$(echo $line | awk '{print $1}')
    delete_day=$(echo $line | awk '{print $2}')
    filter_indices
    echo $indices >> indices_tmp
done
sed -i ':a;N;$!ba;s/\n/|/g' indices_tmp ## Join lines with "|" to build a grep -E alternation pattern
indices_set=$(cat indices_tmp)
rm -f indices_tmp
filter_indices_default
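The script reads index patterns and retention days from indices.conf, one whitespace-separated pair per line (matching the awk '{print $1}' / '{print $2}' parsing above). A plausible example; the retention values here are hypothetical, and lines containing '#' are skipped:
# pattern                  retention_days
cmsjoox_                   14
logstash-webjoox_nginx_    7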
6. Log display (Kibana)
6.1 Kibana handles data visualization and analysis
1. Kibana supports creating index patterns; select a created index pattern in Kibana to match the indices.

2. Select an index pattern in Kibana and search.
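Index patterns are usually created in the Kibana UI, but they can also be created through Kibana's saved objects API; a sketch, assuming Kibana 6.x or later and that the endpoint is reachable (the host and pattern title are illustrative):
curl -X POST 'http://<kibana-host>:5601/api/saved_objects/index-pattern' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"attributes":{"title":"cmsjoox_*","timeFieldName":"@timestamp"}}'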

Copyright notice
This article was written by [Li Jun's blog]. Please include the original link when reposting. Thanks.
https://yzsam.com/2022/04/202204230727535608.html