Basic use of CEPH
2022-04-21 14:03:00 【daydayup9527】
Installing a Ceph cluster
Ceph introduction
Block storage (RBD): can be thought of as a hard disk presented to the client. Read/write performance is excellent, but a single image cannot safely be mounted read-write by multiple clients at the same time.
File system (CephFS): read/write performance is lower than the block-device interface, but it is excellent for sharing.
Object storage (RGW): shared storage with better performance than the file-system interface.
A Ceph storage cluster requires at least one Ceph Monitor, one Ceph Manager, and one Ceph OSD (object storage daemon).
Monitors: the Ceph Monitor (ceph-mon) maintains maps of the cluster state, including the monitor map, manager map, OSD map, and CRUSH map.
These maps are the critical cluster state that Ceph daemons need in order to coordinate with each other. Monitors also manage authentication between daemons and clients.
At least three monitors are normally required for redundancy and high availability.
Managers: the Ceph Manager daemon (ceph-mgr) tracks runtime metrics and the current state of the cluster, including storage utilization,
current performance metrics, and system load. The manager daemons also host Python-based modules that manage and expose cluster information,
including the web-based Ceph Dashboard and a REST API. At least two managers are normally required for high availability.
Ceph OSD: a Ceph OSD (object storage daemon, ceph-osd) stores data, handles replication, recovery, and rebalancing, and provides some
monitoring information to the Ceph Monitors and Managers by checking the heartbeats of other Ceph OSD daemons. At least 3 Ceph OSDs are normally required for redundancy and high availability.
MDS: the Ceph Metadata Server (MDS, ceph-mds) stores metadata on behalf of the Ceph File System (Ceph Block Devices and Ceph Object Storage do not use
MDS). The metadata servers allow POSIX file-system users to run basic commands (such as ls and find) without placing a heavy load on the Ceph storage cluster.
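Once the cluster built below is running, each of these daemon types can be checked with the standard status commands (the same commands used later in this article):
[root@ms-adm ~]# ceph -s          # overall cluster health and mon/osd/pg summary
[root@ms-adm ~]# ceph mon stat    # monitor quorum
[root@ms-adm ~]# ceph osd tree    # OSDs and their up/down state
[root@ms-adm ~]# ceph mds stat    # metadata server state (only relevant once CephFS is set up)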
Installation preparation
1) Add one 20 GB disk to each Ceph server.
2) Relax the firewall and SELinux on all hosts:
[root@ms-adm ~]# firewall-cmd --set-default-zone=trusted
[root@ms-adm ~]# sed -i '/SELINUX/s/enforcing/permissive/' /etc/selinux/config
[root@ms-adm ~]# setenforce 0
3) Configure passwordless SSH (including to the host itself); run on ms-adm:
[root@ms-adm ~]# ssh-keygen -f /root/.ssh/id_rsa -N ''
[root@ms-adm ~]# for i in 11 12 13
do
ssh-copy-id 192.168.1.$i
done
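A quick check that the passwordless logins work before continuing (it should print the three hostnames without asking for a password):
[root@ms-adm ~]# for i in 11 12 13; do ssh 192.168.1.$i hostname; done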
4) Modify /etc/hosts and synchronize it to all hosts:
[root@ms-adm ~]# cat >>/etc/hosts<<EOF
192.168.1.11 ms-adm
192.168.1.12 node-mon
192.168.1.13 node-osd
EOF
[root@ms-adm ~]# for i in node-mon node-osd
do
scp /etc/hosts $i:/etc/
done
5) Configure the NTP server on the ms-adm host:
[root@ms-adm ~]# ntpdate cn.pool.ntp.org
[root@ms-adm ~]# systemctl start ntpd && systemctl enable ntpd
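The stock /etc/ntp.conf may refuse queries from the nodes; a typical adjustment (an assumption, not part of the original steps) is to allow the 192.168.1.0/24 subnet and restart ntpd:
[root@ms-adm ~]# vim /etc/ntp.conf
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap # allow LAN clients to sync from this server
[root@ms-adm ~]# systemctl restart ntpd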
6) Configure the NTP client on each node:
[root@node-mon ~]# ntpdate ms-adm
Scheduled task so the client keeps re-syncing (here once per hour):
[root@node-mon ~]# crontab -l
0 */1 * * * /usr/sbin/ntpdate ms-adm
[root@ms-adm ~]# systemctl restart ntpd
Configure the yum sources
# Configure the Ceph installation source; synchronize the following yum repo to ms-adm, node-mon, and node-osd.
Aliyun yum source:
[root@ms-adm ~]# cat >/etc/yum.repos.d/ceph.repo<<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
EOF
163 (NetEase) yum source (alternative):
[root@ms-adm ~]# cat >/etc/yum.repos.d/ceph.repo <<-'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
EOF
[root@ms-adm ~]# yum install -y yum-utils && yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
[root@ms-adm ~]# yum install --nogpgcheck -y epel-release && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
[root@ms-adm ~]# rm -f /etc/yum.repos.d/dl.fedoraproject.org*
Install ceph-deploy and the Ceph packages
[root@ms-adm ~]# yum -y install python-setuptools ceph-deploy
[root@ms-adm ~]# yum -y install ceph ceph-radosgw
[root@ms-adm ~]# ceph --version
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
ceph-base.x86_64 2:10.2.11-0.el7 ceph-common.x86_64 2:10.2.11-0.el7
ceph-mds.x86_64 2:10.2.11-0.el7 ceph-mon.x86_64 2:10.2.11-0.el7
ceph-osd.x86_64 2:10.2.11-0.el7 ceph-selinux.x86_64 2:10.2.11-0.el7
Install on the nodes:
[root@node-mon ~]# yum -y install ceph ceph-radosgw
[root@node-mon ~]# ceph --version
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
Create the Monitor nodes
[root@ms-adm ~]# mkdir ceph && cd ceph
[root@ms-adm ceph]# ceph-deploy new ms-adm node-mon node-osd
[root@ms-adm ceph]# ls
ceph.conf # Ceph cluster configuration file
ceph-deploy-ceph.log # log of the ceph-deploy command output
ceph.mon.keyring # keyring for the ceph mon role
[root@ms-adm ceph]# vim ceph.conf
[global]
fsid = 7aca1836-1145-4b18-b452-9cb90faf98af
mon_initial_members = ms-adm, node-mon, node-osd
mon_host = 192.168.1.11,192.168.1.12,192.168.1.13
#3 mon hosts provide monitor high availability
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon clock drift allowed = 0.500
mon clock drift warn backoff = 10
osd_pool_default_size = 2
mon clock drift allowed # allowed clock drift between monitors; the default is 0.05
mon clock drift warn backoff # exponential backoff for clock-skew warnings; the default is 5
By default Ceph requires the clock offset between monitors to stay within 0.05 s, which is sometimes too strict. If the cluster reports clock-skew problems, check NTP synchronization or relax this tolerance appropriately.
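If a clock-skew warning does appear, a quick way to locate and clear it (a sketch, assuming the NTP setup configured earlier):
[root@ms-adm ceph]# ceph health detail # names the monitor that is skewed
[root@ms-adm ceph]# ssh node-mon 'ntpdate ms-adm && systemctl restart ceph-mon.target' # re-sync and restart that mon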
[root@ms-adm ceph]# ceph-deploy mon create-initial
// without passwordless SSH between the nodes you will be prompted for passwords repeatedly
[root@ms-adm ceph]# ls
ceph.bootstrap-mds.keyring # MDS bootstrap keyring
ceph.bootstrap-osd.keyring # OSD bootstrap keyring
ceph.client.admin.keyring # administrator keyring
ceph-deploy-ceph.log ceph.bootstrap-mgr.keyring ceph.bootstrap-rgw.keyring
ceph.conf ceph.mon.keyring
Deploy OSDs
[root@ms-adm ceph]# ceph-deploy osd prepare ms-adm:/dev/sdb
[ceph_deploy.osd][DEBUG ] Host ms-adm is now ready for osd use.
[root@ms-adm ceph]# lsblk
sdb 8:16 0 20G 0 disk
├─sdb1 8:17 0 15G 0 part /var/lib/ceph/osd/ceph-0
└─sdb2 8:18 0 5G 0 part
# ceph-deploy automatically splits each disk into two partitions: one fixed 5 GB partition and one taking all remaining capacity.
# The 5 GB partition is the journal (write-ahead log); the rest is the data partition.
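For reference, ceph-deploy also accepts an explicit journal device in HOST:DATA:JOURNAL form, which is how the journal would be placed on a separate (for example SSD) device; a sketch assuming a spare /dev/sdc, which does not exist in this lab:
[root@ms-adm ceph]# ceph-deploy osd prepare node-mon:/dev/sdb:/dev/sdc # hypothetical separate journal device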
[root@ms-adm ceph]# ceph-deploy osd prepare node-mon:/dev/sdb
// without passwordless SSH you will be asked for the password
[ceph_deploy.osd][DEBUG ] Host node-mon is now ready for osd use.
[root@ms-adm ceph]# ssh-copy-id root@192.168.1.13
[root@ms-adm ceph]# ssh node-osd
Last login: Tue Apr 5 11:40:53 2022 from 192.168.1.1
[root@ms-adm ceph]# ceph-deploy osd prepare node-osd:/dev/sdb
[ceph_deploy.osd][DEBUG ] Host node-osd is now ready for osd use.
Activate the disks
[root@ms-adm ceph]# ceph-deploy osd activate ms-adm:/dev/sdb1
[root@ms-adm ceph]# ceph-deploy osd activate node-mon:/dev/sdb1
[root@ms-adm ceph]# ceph-deploy osd activate node-osd:/dev/sdb1
[root@ms-adm ceph]# ceph-deploy osd list ms-adm node-osd node-mon // Check disk status
[root@ms-adm ceph]# ceph-deploy osd list ms-adm
...
[ms-adm][DEBUG ] connected to host: ms-adm
...
[ms-adm][INFO ] ceph-0
[ms-adm][INFO ] ----------------------------------------
[ms-adm][INFO ] Path /var/lib/ceph/osd/ceph-0
[ms-adm][INFO ] ID 0
[ms-adm][INFO ] Name osd.0
[ms-adm][INFO ] Status up
[ms-adm][INFO ] Reweight 1.0
[ms-adm][INFO ] Magic ceph osd volume v026
[ms-adm][INFO ] Journal_uuid 1097292f-7941-4aac-9d37-98a15ab3eb47
[ms-adm][INFO ] Active ok
[ms-adm][INFO ] Device /dev/sdb1
[ms-adm][INFO ] Whoami 0
[ms-adm][INFO ] Journal path /dev/sdb2
[ms-adm][INFO ] ----------------------------------------
Create a Ceph file system
[root@ms-adm ceph]# ceph-deploy mds create ms-adm node-mon node-osd
...
[node-osd][INFO ] Running command: systemctl start ceph-mds@node-osd
[node-osd][INFO ] Running command: systemctl enable ceph.target
[root@ms-adm ceph]# ceph fs ls // list existing file systems
No filesystems enabled
A CephFS file system requires at least two RADOS pools: one for data and one for metadata. When configuring these two pools, note:
1. Set a higher replica level for the metadata pool, because damage to the metadata pool can make the entire file system unusable (a sketch of raising it follows the pool-creation commands below).
2. Put the metadata pool on low-latency storage such as SSD, because metadata latency directly affects how responsive the file system feels to clients.
Create storage pools
[root@ms-adm ceph]# ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
[root@ms-adm ceph]# ceph osd pool create cephfs_metadata 128
pool 'cephfs_metadata' created
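Following the advice above, the metadata pool's replica count can now be raised; a minimal sketch (the value 3 is an assumption, bounded here by the 3 OSDs):
[root@ms-adm ceph]# ceph osd pool set cephfs_metadata size 3 # raise the replica count of the metadata pool
[root@ms-adm ceph]# ceph osd pool get cephfs_metadata size # verify the new value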
About choosing pg_num when creating a pool:
Choosing a pg_num value is mandatory because it cannot be calculated automatically. Some commonly used values:
Fewer than 5 OSDs: set pg_num to 128
Between 5 and 10 OSDs: set pg_num to 512
Between 10 and 50 OSDs: set pg_num to 4096
More than 50 OSDs: you need to understand the trade-offs and calculate pg_num yourself
The pgcalc tool can help with this calculation.
As the number of OSDs grows, choosing the right pg_num becomes more important, because it significantly affects cluster behavior and data durability after a failure (that is, the probability that a catastrophic event causes data loss).
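A commonly cited rule of thumb (not from this article) is pg_num ≈ (OSD count × 100) / replica size, rounded up to the next power of two; a one-line shell check for this 3-OSD, size-2 lab:
[root@ms-adm ceph]# osds=3; size=2; raw=$((osds*100/size)); pg=1; while [ $pg -lt $raw ]; do pg=$((pg*2)); done; echo $pg
256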
Create file system
After the pools are created, the file system can be created with the fs new command:
[root@ms-adm ceph]# ceph fs new ceph-test cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1 // Create success
[root@ms-adm ceph]# ceph fs ls // View the created cephfs
name: ceph-test, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@ms-adm ceph]# ceph mds stat // check the MDS status
e7: 1/1/1 up {0=node-mon=up:active}, 2 up:standby // 2 standby MDS daemons for failover
[root@ms-adm ceph]# ceph -s
cluster 7aca1836-1145-4b18-b452-9cb90faf98af
health HEALTH_WARN
too many PGs per OSD (320 > max 300)
monmap e1: 3 mons at {ms-adm=192.168.1.11:6789/0,node-mon=192.168.1.12:6789/0,node-osd=192.168.1.13:6789/0}
election epoch 6, quorum 0,1,2 ms-adm,node-mon,node-osd
fsmap e7: 1/1/1 up {0=node-mon=up:active}, 2 up:standby
osdmap e19: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v44: 320 pgs, 3 pools, 2068 bytes data, 20 objects
328 MB used, 45718 MB / 46046 MB avail
320 active+clean
[root@ms-adm ceph]# ceph -s
health HEALTH_WARN
too many PGs per OSD (320 > max 300)
The warning appears because the cluster has only a few OSDs: during testing (for example after adding an RGW gateway or integrating with OpenStack) many pools get created, and each pool consumes some PGs. Ceph enforces a default per-OSD limit, here 300 PGs per OSD. The limit can be adjusted, but setting it much too high or too low affects cluster performance; in a test environment we only need to clear the warning. Query the current per-OSD PG warning threshold:
[root@ms-adm ceph]# ceph --show-config | grep mon_pg_warn_max_per_osd
mon_pg_warn_max_per_osd = 300
Solution
Raise the warning threshold for the cluster in the configuration file: on the mon nodes, add the following to /etc/ceph/ceph.conf and restart the monitors:
vi /etc/ceph/ceph.conf
[global]
.......
mon_pg_warn_max_per_osd = 1000
systemctl restart ceph-mon.target
Check the cluster state again:
$ ceph -s
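As an alternative to editing the file and restarting, the threshold can usually be injected into the running monitors (a hedged sketch; values injected this way do not persist across a daemon restart):
[root@ms-adm ceph]# ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 1000'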
Capacity expansion: add directory-backed OSDs (another hard disk would work just as well)
Create the directories on ms-adm and node-mon:
[root@ms-adm ~]# mkdir /var/local/osd1
[root@ms-adm ~]# chmod 777 /var/local/osd1/
[root@node-mon ~]# mkdir /var/local/osd1
[root@node-mon ~]# chmod 777 /var/local/osd1/
# back on ms-adm, prepare the OSDs
[root@ms-adm ~]# cd ceph/
[root@ms-adm ceph]# ceph-deploy osd prepare ms-adm:/var/local/osd1 node-mon:/var/local/osd1
# activate the OSDs
[root@ms-adm ceph]# ceph-deploy osd activate ms-adm:/var/local/osd1 node-mon:/var/local/osd1
[root@ms-adm ceph]# ceph-deploy osd list ms-adm node-osd node-mon
There are now 5 OSDs in total: the first batch is osd.0-2, and the batch just created is osd.3-4.
[root@ms-adm ceph]# ceph-deploy osd list ms-adm
// view the OSDs on ms-adm
[ms-adm][INFO ] ceph-0
[ms-adm][INFO ] ----------------------------------------
[ms-adm][INFO ] Path /var/lib/ceph/osd/ceph-0
[ms-adm][INFO ] ID 0
[ms-adm][INFO ] Name osd.0
[ms-adm][INFO ] Status up
[ms-adm][INFO ] Reweight 1.0
[ms-adm][INFO ] Magic ceph osd volume v026
[ms-adm][INFO ] Journal_uuid 1097292f-7941-4aac-9d37-98a15ab3eb47
[ms-adm][INFO ] Active ok
[ms-adm][INFO ] Device /dev/sdb1 // osd.0 is disk-backed, so a device is shown
[ms-adm][INFO ] Whoami 0
[ms-adm][INFO ] Journal path /dev/sdb2
[ms-adm][INFO ] ----------------------------------------
[ms-adm][INFO ] ----------------------------------------
[ms-adm][INFO ] ceph-3
[ms-adm][INFO ] ----------------------------------------
[ms-adm][INFO ] Path /var/lib/ceph/osd/ceph-3
[ms-adm][INFO ] ID 3
[ms-adm][INFO ] Name osd.3 // osd.3 is directory-backed, so no device is shown
[ms-adm][INFO ] Status up
[ms-adm][INFO ] Reweight 1.0
[ms-adm][INFO ] Active ok // active
[ms-adm][INFO ] Magic ceph osd volume v026
[ms-adm][INFO ] Whoami 3
[ms-adm][INFO ] Journal path /var/local/osd1/journal
[ms-adm][INFO ] ----------------------------------------
Capacity reduction
Using osd.0 as an example, check the state of osd.0:
[root@ms-adm ceph]# ceph-deploy osd list ms-adm
// view the OSDs on ms-adm
[ms-adm][INFO ] ceph-0
[ms-adm][INFO ] ----------------------------------------
[ms-adm][INFO ] Path /var/lib/ceph/osd/ceph-0
[ms-adm][INFO ] ID 0
[ms-adm][INFO ] Name osd.0
[ms-adm][INFO ] Status up // currently in the up state
Stop the daemon:
[root@ms-adm ceph]# systemctl stop ceph-osd@0
[root@ms-adm ceph]# ceph-deploy osd list ms-adm
[ms-adm][INFO ] ceph-0
[ms-adm][INFO ] ----------------------------------------
[ms-adm][INFO ] Path /var/lib/ceph/osd/ceph-0
[ms-adm][INFO ] ID 0
[ms-adm][INFO ] Name osd.0
[ms-adm][INFO ] Status down // now in the down state
[ms-adm][INFO ] Reweight 1.0
Delete an OSD / remove a node
// stop the OSD daemon on its host
[root@node-osd ~]# systemctl stop ceph-osd@0.service
// mark the OSD out and down so the mons know it no longer serves data
[root@ms-adm ceph]# ceph osd out 0
marked out osd.0.
[root@ms-adm ceph]# ceph osd down 0
osd.0 is already down.
// remove it from the CRUSH map
[root@ms-adm ceph]# ceph osd crush remove osd.0
removed item id 0 name 'osd.0' from crush map
// delete the OSD
[root@ms-adm ceph]# ceph osd rm 0
removed osd.0
// remove its authentication key
[root@ms-adm ceph]# ceph auth del osd.0
updated
[root@ms-adm ceph]# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.01659 root default
-2 0.01659 host node-osd
[root@ms-adm ceph]# ceph osd crush rm node-osd
[root@ms-adm ceph]# ceph osd tree // the host bucket node-osd has been removed
Add a node: 192.168.1.14, hostname "add"
After SELinux, the firewall, /etc/hosts, passwordless SSH, time synchronization, and the base packages are ready on the new host:
[root@ms-adm ceph]# vim ceph.conf // a new mon is added here, so extend the configuration
mon_initial_members = node-osd,add
mon_host = 192.168.1.17,192.168.1.14
[root@ms-adm ceph]# ceph-deploy install add
[root@ms-adm ceph]# ceph-deploy mon create add // add the mon role on the new node
[root@ms-adm ceph]# ceph-deploy admin add
[root@ms-adm ceph]# ceph-deploy --overwrite-conf config push ms-adm node-mon node-osd add
[root@ms-adm ceph]# ceph mon stat
e2: 2 mons at {add=192.168.1.14:6789/0,node-osd=192.168.1.17:6789/0},
[root@ms-adm ceph]# ceph-deploy disk list add
[add][DEBUG ] /dev/sdb other, unknown
[root@ms-adm ceph]# ceph-deploy osd prepare add:/dev/sdb
[add][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[root@ms-adm ceph]# ceph-deploy osd activate add:/dev/sdb1
[root@ms-adm ceph]# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03117 root default
-2 0.01659 host node-osd
0 0.01659 osd.0 up 1.00000 1.00000
-3 0.01459 host add // the OSD on the new node was created successfully
1 0.01459 osd.1 up 1.00000 1.00000
Delete a node
[root@node-osd ~]# ls /var/lib/ceph/osd/
ceph-0
[root@node-osd ~]# umount /var/lib/ceph/osd/ceph-0/
Remove the mon
[root@ms-adm ceph]# ceph mon stat
e1: 1 mons at {node-osd=192.168.1.17:6789/0}, election epoch 3, quorum 0 node-osd
[root@ms-adm ceph]# ceph mon remove node-osd
Modify ceph.conf:
[root@ms-adm ceph]# cat ceph.conf
[global]
fsid = 9dc4f319-9510-4142-85ca-b92a90d68f85
mon_initial_members = node-osd
mon_host = 192.168.1.17 // the removed mon has been deleted from this list
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
Push the modified ceph.conf to the nodes:
[root@ms-adm ceph]# ceph-deploy --overwrite-conf admin ms-adm node-mon
Fix the ownership of /etc/ceph (run on all nodes):
chown -R cephadmin:cephadmin /etc/ceph
[root@ms-adm ceph]# ceph -s
cluster 9dc4f319-9510-4142-85ca-b92a90d68f85
health HEALTH_OK
monmap e3: 1 mons at {node-osd=192.168.1.17:6789/0}
election epoch 5, quorum 0 node-osd
osdmap e18: 2 osds: 2 up, 1 in
flags sortbitwise,require_jewel_osds
pgmap v2020: 64 pgs, 1 pools, 0 bytes data, 0 objects
7024 MB used, 10369 MB / 17394 MB avail
64 active+clean
[root@ms-adm ceph]# ceph mon stat
e3: 1 mons at {node-osd=192.168.1.17:6789/0}, election epoch 5, quorum 0 node-osd
[root@ms-adm ceph]# ceph-deploy uninstall add
If present, also stop the mgr service on the node being removed:
[root@ms-adm ceph]# ceph -s
[root@node-osd ~]# systemctl status ceph-mgr.target
Ceph maintenance commands
1. OSD
OSD stands for Object Storage Device; the OSD daemon is the process that stores data and returns it in response to client requests. A Ceph cluster generally has many OSDs.
Prerequisite for using RBD storage: the kernel must support rbd.
[root@ms-adm ~]# modprobe rbd
[root@ms-adm ~]# lsmod | grep rbd // check whether the kernel supports rbd; upgrade the kernel if it does not
2. Pool
A pool is a logical partition in which Ceph stores data. Each pool contains a number of PGs, and the objects in a PG are mapped onto different OSDs, so a pool is distributed
across the entire cluster. Besides isolating data, different pools can be given different tuning policies, such as replica count, scrub frequency, and data-chunk
or object sizes (see the sketch below).
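As an example of such per-pool policy, the replica count of a pool can be inspected and changed at any time (a sketch using the testpool created in the next subsection):
[root@ms-adm ~]# ceph osd pool get testpool size # current replica count
[root@ms-adm ~]# ceph osd pool set testpool size 2 # change it (the value 2 is only an example)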
3. Image
An RBD image is striped into N data chunks, and each chunk is stored as an object in the RADOS object store (the rbd info sketch below shows this layout for a real image).
Common rbd commands
1) Create a pool
[root@ms-adm ~]# ceph osd pool create testpool 256 // 256 is the pg_num; see the guidance above
2) Create an RBD image
[root@ms-adm ~]# rbd create testpool/myrbd --size 10240 // 10240 MB
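To confirm the image and see how it is striped into objects, as mentioned above, rbd info can be used (the exact output fields vary by Ceph version):
[root@ms-adm ~]# rbd info testpool/myrbd # shows the size, object count, object size (order), and enabled features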
3) Map the block device on the local machine
[root@ms-adm ~]# rbd feature disable testpool/myrbd object-map fast-diff deep-flatten
// these features must be disabled, otherwise the kernel cannot map the image
[root@ms-adm ~]# rbd map testpool/myrbd // prints the mapped device:
/dev/rbd0
4) Format and mount it
[root@ms-adm ~]# mkdir /mnt/firstrbd
[root@ms-adm ~]# mkfs.xfs /dev/rbd0
[root@ms-adm ~]# mount /dev/rbd0 /mnt/firstrbd
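When the block device is no longer needed, it can be cleaned up again (a sketch, not part of the original steps):
[root@ms-adm ~]# umount /mnt/firstrbd
[root@ms-adm ~]# rbd unmap /dev/rbd0
[root@ms-adm ~]# rbd rm testpool/myrbd # optional: delete the image entirely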
Mount the CephFS file system
1. Create a CephFS subdirectory
To mount CephFS from elsewhere, first create a secret file containing the admin key:
[root@ms-adm ~]# cat /etc/ceph/ceph.client.admin.keyring | grep key | awk '{print $3}' > /etc/ceph/admin.secret
Mount the root of CephFS onto a directory on a cluster mon node, for example test_data; after mounting, ordinary Linux commands can be used inside test_data to create subdirectories.
[root@ms-adm ~]# mkdir test_data
[root@ms-adm ~]# mount -t ceph 192.168.1.11:6789:/ /root/test_data -o name=admin,secretfile=/etc/ceph/admin.secret
[root@ms-adm ~]# df -h
192.168.1.11:6789:/ 15G 106M 15G 1% /root/test_data
Create a subdirectory named kube in the CephFS root; Kubernetes can mount this directory later:
[root@ms-adm ~]# cd /root/test_data
[root@ms-adm ~]# mkdir /root/test_data/kube
[root@ms-adm ~]# chmod 0777 /root/test_data/kube
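To make the CephFS mount persistent across reboots, an /etc/fstab entry can be added (a hedged sketch using the same monitor address and secret file as above):
192.168.1.11:6789:/ /root/test_data ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev 0 0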
Copyright notice
This article was written by [daydayup9527]; please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/04/202204211351296694.html