On distributed lock
2022-04-23 05:04:00 【canger_】
Why locks are needed
In a standalone program, when multiple threads operate on the same resource concurrently, locks or other synchronization measures are needed to guarantee atomicity. Take a multi-goroutine counter increment as an example:
package main

import (
    "sync"
)

// Global variable shared by all goroutines
var counter int

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            counter++ // not atomic: concurrent read-modify-write races here
            wg.Done()
        }()
    }
    wg.Wait()
    println(counter)
}
Running it several times produces different results:
> go run test.go
98
> go run test.go
99
> go run test.go
100
Clearly the result is unsatisfactory and unpredictable. To get the correct result, the increment must be protected with a lock:
package main

import (
    "sync"
)

// Global variables
var counter int
var mtx sync.Mutex

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            mtx.Lock()
            counter++
            mtx.Unlock()
            wg.Done()
        }()
    }
    wg.Wait()
    println(counter)
}
After adding the mutex, repeated runs all produce the same result:
> go run test.go
100
> go run test.go
100
> go run test.go
100
1. Lock based on Redis setnx
In a distributed scenario we still need this kind of "preemption" logic. What can we use? Redis provides the setnx command:
package main

import (
    "fmt"
    "strconv"
    "sync"
    "time"

    "gopkg.in/redis.v5"
)

var rds = redis.NewFailoverClient(&redis.FailoverOptions{
    MasterName:    "mymaster",
    SentinelAddrs: []string{"127.0.0.1:26379"},
})

func incrby() error {
    lockkey := "count_key"
    counterkey := "counter"

    // Try to grab the lock; the key expires after 5 seconds.
    succ, err := rds.SetNX(lockkey, 1, time.Second*time.Duration(5)).Result()
    if err != nil || !succ {
        fmt.Println(err, " lock result:", succ)
        return err
    }
    // Release the lock when we are done.
    defer func() {
        succ, err := rds.Del(lockkey).Result()
        if err == nil && succ > 0 {
            fmt.Println("unlock success")
        } else {
            fmt.Println("unlock failed, err=", err)
        }
    }()

    resp, err := rds.Get(counterkey).Result()
    if err != nil && err != redis.Nil {
        fmt.Println("get count failed, err=", err)
        return err
    }
    var cnt int64
    if err == nil {
        cnt, err = strconv.ParseInt(resp, 10, 64)
        if err != nil {
            fmt.Println("parse string failed, s=", resp)
            return err
        }
    }
    fmt.Println("curr cnt:", cnt)
    cnt++
    _, err = rds.Set(counterkey, cnt, 0).Result()
    if err != nil {
        fmt.Println("set value failed, err=", err)
        return err
    }
    return nil
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            incrby()
        }()
    }
    wg.Wait()
}
Running result:
> go run test.go
curr cnt: 0
<nil> lock result: false
unlock success
<nil> lock result: false
curr cnt: 1
<nil> lock result: false
unlock success
curr cnt: 2
<nil> lock result: false
unlock success
curr cnt: 3
<nil> lock result: false
<nil> lock result: false
unlock success
The remote setnx call behaves much like a single-machine trylock: if acquiring the lock fails, the associated task logic does not continue to run.
setnx is well suited to high-concurrency scenarios where callers compete for a unique resource.
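For comparison, here is a minimal single-machine sketch of that try-lock behavior (not from the original article; it assumes Go 1.18 or later, where sync.Mutex gained a TryLock method): if the lock cannot be grabbed, the work is simply skipped instead of blocking.

package main

import (
    "fmt"
    "sync"
)

var mtx sync.Mutex

// tryWork mirrors the setnx example: if the lock is already held, give up immediately.
func tryWork(id int) {
    if !mtx.TryLock() { // requires Go 1.18+
        fmt.Println("worker", id, "lock result: false")
        return
    }
    defer mtx.Unlock()
    fmt.Println("worker", id, "got the lock, doing the task")
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            tryWork(id)
        }(i)
    }
    wg.Wait()
}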
2. Lock based on ZooKeeper
package main

import (
    "fmt"
    "sync"
    "time"

    "github.com/samuel/go-zookeeper/zk"
)

var zkconn *zk.Conn
var count int64

func incrby() {
    lock := zk.NewLock(zkconn, "/lock", zk.WorldACL(zk.PermAll))
    err := lock.Lock()
    if err != nil {
        panic(err)
    }
    count++
    lock.Unlock()
}

func main() {
    c, _, err := zk.Connect([]string{"127.0.0.1"}, time.Second)
    if err != nil {
        fmt.Println("connect zookeeper failed, err=", err)
        return
    }
    zkconn = c

    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            incrby()
        }()
    }
    wg.Wait()
    fmt.Println(" cnt :", count)
}
Running result:
$ > go run test.go
Connected to 127.0.0.1:2181
authenticated: id=72138376348368897, timeout=4000
re-submitting `0` credentials after reconnect
cnt : 10
The difference between the ZooKeeper-based lock and the Redis-based lock is that Lock blocks until it succeeds, which is similar to the Lock method of sync.Mutex.
The mechanism is based on ephemeral sequence nodes and the watch API; in this example we use the /lock node. Lock adds its own value to the list of children under that node. Whenever the children of the node change, every program watching the node is notified. Each program then checks whether the id of the smallest child node is its own; if so, the lock has been acquired (a rough sketch of this follows).
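To make the mechanism concrete, below is a rough sketch of that algorithm written against the same github.com/samuel/go-zookeeper/zk package. This is only an illustration of the idea, not the actual implementation inside zk.NewLock (which, among other things, creates the parent node for you and watches only its predecessor); it assumes the /lock parent node already exists, and the node naming and error handling are simplified.

package main

import (
    "sort"
    "strconv"
    "strings"
    "time"

    "github.com/samuel/go-zookeeper/zk"
)

// seqNum extracts the sequence counter that ZooKeeper appends to the node name.
func seqNum(name string) int {
    n, _ := strconv.Atoi(name[strings.LastIndex(name, "-")+1:])
    return n
}

// lockSketch creates an ephemeral sequence node under /lock and waits until
// that node has the smallest sequence number, i.e. until the lock is ours.
func lockSketch(conn *zk.Conn) (string, error) {
    me, err := conn.CreateProtectedEphemeralSequential(
        "/lock/lock-", nil, zk.WorldACL(zk.PermAll))
    if err != nil {
        return "", err
    }
    myName := me[strings.LastIndex(me, "/")+1:]
    for {
        // List the children of /lock and watch it for changes.
        children, _, events, err := conn.ChildrenW("/lock")
        if err != nil {
            return "", err
        }
        sort.Slice(children, func(i, j int) bool {
            return seqNum(children[i]) < seqNum(children[j])
        })
        if len(children) > 0 && children[0] == myName {
            return me, nil // we hold the smallest sequence number: lock acquired
        }
        <-events // the child list changed; re-check who is smallest
    }
}

// unlockSketch removes our node, which wakes every watcher of /lock.
func unlockSketch(conn *zk.Conn, node string) error {
    return conn.Delete(node, -1)
}

func main() {
    conn, _, err := zk.Connect([]string{"127.0.0.1"}, time.Second)
    if err != nil {
        panic(err)
    }
    node, err := lockSketch(conn)
    if err != nil {
        panic(err)
    }
    // ... critical section ...
    unlockSketch(conn, node)
}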
This kind of blocking distributed lock is well suited to distributed task scheduling, but not to lock-grabbing scenarios with high frequency and short hold times. According to Google's Chubby paper, locks based on strong-consistency protocols fit coarse-grained locking, where coarse-grained means the lock is held for a long time. Consider whether that matches your own business scenario before using it.
3. Lock based on etcd
This etcd example uses the package "github.com/zieckey/etcdsync". Pulling it with go mod runs into two problems:
# First attempt
/etcd imports
    github.com/coreos/etcd/clientv3 tested by
    github.com/coreos/etcd/clientv3.test imports
    github.com/coreos/etcd/auth imports
    github.com/coreos/etcd/mvcc/backend imports
    github.com/coreos/bbolt: github.com/coreos/bbolt@v1.3.4: parsing go.mod:
    module declares its path as: go.etcd.io/bbolt
            but was required as: github.com/coreos/bbolt
# Second attempt
imports
    google.golang.org/grpc/naming: module google.golang.org/grpc@latest found (v1.32.0), but does not contain package google.golang.org/grpc/naming
You need to add the following to go.mod:
replace (
github.com/coreos/bbolt v1.3.4 => go.etcd.io/bbolt v1.3.4
go.etcd.io/bbolt v1.3.4 => github.com/coreos/bbolt v1.3.4
google.golang.org/grpc => google.golang.org/grpc v1.26.0
)
package main

import (
    "log"

    "github.com/zieckey/etcdsync"
)

func main() {
    m, err := etcdsync.New("/lock", 10, []string{"http://127.0.0.1:2379"})
    if m == nil || err != nil {
        log.Printf("etcdsync.New failed")
        return
    }
    err = m.Lock()
    if err != nil {
        log.Println("etcdsync.Lock failed, err=", err)
        return
    }
    log.Printf("etcdsync.Lock OK")
    log.Printf("Get the lock. Do something here.")

    err = m.Unlock()
    if err != nil {
        log.Println("etcdsync.Unlock failed, err=", err)
    } else {
        log.Printf("etcdsync.Unlock OK")
    }
}
etcd has no sequence node like ZooKeeper's, so its lock implementation differs from the ZooKeeper-based one. The Lock process of the etcdsync package used in the example above is (a rough sketch follows the list):
- 1. Check whether there is a value under the /lock path. If there is, the lock has already been taken by someone else.
- 2. If there is no value, write your own value. If the write succeeds, the lock is acquired. If another node wrote the key first, the lock attempt fails.
- 3. Watch events under /lock; the caller blocks here.
- 4. When an event occurs under /lock, the current process is woken up. It checks whether the event is a delete event (the holder actively unlocked) or an expiration event (the lock expired). If so, go back to step 1 and retry the acquisition.
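To make those four steps concrete, here is a rough sketch of the same check-write-watch loop written directly against the clientv3 API (using the old github.com/coreos/etcd import path, consistent with the go.mod fix above). This is only an illustration, not the etcdsync implementation; the key name, TTL, and endpoint are assumptions.

package main

import (
    "context"
    "log"
    "time"

    "github.com/coreos/etcd/clientv3"
)

// lockLoop sketches the flow above: write the key only if it is empty,
// otherwise watch it and retry once it is deleted or its lease expires.
func lockLoop(cli *clientv3.Client, key, val string) error {
    for {
        // Attach the value to a lease so the lock expires if the holder crashes.
        lease, err := cli.Grant(context.TODO(), 10)
        if err != nil {
            return err
        }
        // Steps 1-2: write the key only if it does not exist yet.
        txn, err := cli.Txn(context.TODO()).
            If(clientv3.Compare(clientv3.CreateRevision(key), "=", 0)).
            Then(clientv3.OpPut(key, val, clientv3.WithLease(lease.ID))).
            Commit()
        if err != nil {
            return err
        }
        if txn.Succeeded {
            return nil // lock acquired
        }
        cli.Revoke(context.TODO(), lease.ID) // we did not get the lock; drop the lease

        // Steps 3-4: block until the key is deleted (unlocked or expired), then retry.
    waitLoop:
        for resp := range cli.Watch(context.TODO(), key) {
            for _, ev := range resp.Events {
                if ev.Type == clientv3.EventTypeDelete {
                    break waitLoop
                }
            }
        }
    }
}

func main() {
    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   []string{"http://127.0.0.1:2379"},
        DialTimeout: 5 * time.Second,
    })
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()

    if err := lockLoop(cli, "/lock", "node-1"); err != nil {
        log.Fatal(err)
    }
    log.Println("lock acquired; do the work, then delete the key or let the lease expire")
}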
How to choose the right lock
Single-machine scale
If the business is still at single-machine scale, any in-process lock scheme can be chosen according to your needs.
Distributed scale
- Lower order of magnitude: If the system has grown into distributed services but the business scale is small and QPS is very low, it hardly matters which lock scheme you use. If the company already operates a ZooKeeper, etcd, or Redis cluster, reuse it rather than introducing a new technology stack.
- Higher order of magnitude: If losing lock data in the worst case is not acceptable, a simple Redis setnx lock will not do. If the reliability requirement for lock data is very high, the only options are etcd or ZooKeeper, whose consensus protocols guarantee data reliability. (The price of that reliability is lower throughput and higher latency. You need to stress test at your business's actual scale to make sure the etcd or ZooKeeper cluster backing the distributed lock can handle the real request load.)
Copyright notice
This article was written by [canger_]. Please include a link to the original when reposting. Thank you.
https://yzsam.com/2022/04/202204220552063534.html