Configuring GFS2 on a Pacemaker/Corosync Cluster

Contents

  Configuring GFS2 on a Pacemaker/Corosync Cluster
  Overview
  Environment
  Package Installation
  Enabling Clustered Locking for LVM
  Creating the DLM and CLVMD Cluster Resources
  Resource Ordering and Colocation
  Configuring the LVM Objects
  Configuring the Mount in Pacemaker
  Notes
Last update: 2017/03/17 14:46 | http://allthatlinux.com/dokuwiki/ | Printed on 2019/07/09 23:42
Configuring GFS2 on a Pacemaker/Corosync Cluster

Original source: http://www.unixarena.com/2016/01/rhel7-configuring-gfs2-on-pacemakercorosync-cluster.html

This article briefly explains how to configure a GFS2 filesystem between two cluster nodes. GFS2 is a cluster filesystem: it can be mounted on more than one server at a time, and because several servers may mount the same filesystem simultaneously, it uses DLM (Distributed Lock Manager) to prevent data corruption. GFS2 requires a cluster suite through which it can be configured and managed; on RHEL 7, Pacemaker/Corosync provides that cluster infrastructure. GFS2 is a native filesystem that interfaces directly with the Linux kernel filesystem interface (the VFS layer). Red Hat supports the use of GFS2 only when it is implemented on the High Availability Add-On (cluster).

Overview

The following is the list of activities for configuring GFS2 between the two nodes of a Pacemaker cluster:

  - Install the gfs2 and lvm2 cluster packages.
  - Enable clustered locking for LVM.
  - Create the DLM and CLVMD resources in Pacemaker.
  - Set up resource ordering and colocation.
  - Configure the LVM objects and create the GFS2 filesystem.
  - Add the logical volume and filesystem to Pacemaker control (GFS2 does not use /etc/fstab).

Environment

  - RHEL 7.x
  - Node names: Node1 & Node2
  - Fencing/STONITH: mandatory for GFS2
  - Shared LUN: /dev/sda

Cluster status:

  [root@node2-lab ~]# pcs status
  Cluster name: GFSCLUS
  Last updated: Thu Jan 21 18:00:25 2016
  Last change: Wed Jan 20 16:12:24 2016 via cibadmin on Node1
  Stack: corosync
  Current DC: Node1 (1) - partition with quorum
  Version: 1.1.10-29.el7-368c726
  2 Nodes configured
  5 Resources configured

  Online: [ Node1 Node2 ]

  Full list of resources:
   xvmfence (stonith:fence_xvm): Started Node1

  PCSD Status:
    Node1: Online
    Node2: Online

  Daemon Status:
    corosync: active/enabled
    pacemaker: active/enabled
    pcsd: active/enabled
  [root@node2-lab ~]#

Package Installation

1. Log in to both cluster nodes and install the gfs2 and lvm2 cluster packages:

  [root@node2-lab ~]# yum -y install gfs2-utils lvm2-cluster
  Loaded plugins: product-id, subscription-manager
  This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
  Package gfs2-utils-3.1.6-13.el7.x86_64 already installed and latest version
  Package 7:lvm2-cluster-2.02.105-14.el7.x86_64 already installed and latest version
  Nothing to do
  [root@node2-lab ~]# ssh Node1 yum -y install gfs2-utils lvm2-cluster
  Loaded plugins: product-id, subscription-manager
  Package gfs2-utils-3.1.6-13.el7.x86_64 already installed and latest version
  Package 7:lvm2-cluster-2.02.105-14.el7.x86_64 already installed and latest version
  Nothing to do
  [root@node2-lab ~]#

Enabling Clustered Locking for LVM

1. Enable clustered locking for LVM on both cluster nodes:

  [root@node2-lab ~]# lvmconf --enable-cluster
  [root@node2-lab ~]# ssh Node1 lvmconf --enable-cluster
  [root@node2-lab ~]# cat /etc/lvm/lvm.conf | grep locking_type | grep -v "#"
  locking_type = 3
  [root@node2-lab ~]#

2. Reboot the cluster nodes.
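Before rebooting, it can be worth confirming from a script that clustered locking actually took effect on every node. A minimal sketch under stated assumptions: the helper name and the temporary-file demo are illustrative, not part of the original article; on a real node you would point the helper at /etc/lvm/lvm.conf (locally or over ssh).

```shell
# Illustrative helper: succeed if an lvm.conf-style file has clustered
# locking enabled (an uncommented "locking_type = 3" line).
check_clustered_locking() {
  # $1: path to an lvm.conf-style file
  grep -E '^[[:space:]]*locking_type[[:space:]]*=' "$1" | grep -q '=[[:space:]]*3$'
}

# Demo against a temporary file standing in for /etc/lvm/lvm.conf:
tmp=$(mktemp)
printf '# locking_type = 1\nlocking_type = 3\n' > "$tmp"
if check_clustered_locking "$tmp"; then
  echo "clustered locking enabled"
else
  echo "clustered locking NOT enabled"
fi
rm -f "$tmp"
```

On a live pair of nodes the same check would be run as `check_clustered_locking /etc/lvm/lvm.conf` locally and via ssh on the peer, mirroring the lvmconf invocations above.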
Creating the DLM and CLVMD Cluster Resources

1. Log in to one of the cluster nodes.

2. Create clone resources for DLM and CLVMD. The clone option allows the resources to run on both nodes:

  pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
  pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true

3. Check the cluster status:

  pcs status
  Cluster name: GFSCLUS
  Last updated: Thu Jan 21 18:15:48 2016
  Last change: Thu Jan 21 18:15:38 2016 via cibadmin on Node1
  Stack: corosync
  Current DC: Node2 (2) - partition with quorum
  Version: 1.1.10-29.el7-368c726
  2 Nodes configured
  5 Resources configured

  Online: [ Node1 Node2 ]

  Full list of resources:
   xvmfence (stonith:fence_xvm): Started Node1
   Clone Set: dlm-clone [dlm]
     Started: [ Node1 Node2 ]
   Clone Set: clvmd-clone [clvmd]
     Started: [ Node1 Node2 ]

  PCSD Status:
    Node1: Online
    Node2: Online

  Daemon Status:
    corosync: active/enabled
    pacemaker: active/enabled
    pcsd: active/enabled

You can see that the resources are online on both nodes.
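If you script this step, you may want to verify that a clone set reports Started before moving on. A small sketch: the helper name and the sample text are illustrative, not from the original article; on a live cluster you would pipe in real `pcs status resources` output instead of the captured sample.

```shell
# Illustrative helper: succeed if the named clone set reports "Started:"
# in pcs-status-style text read on stdin.
clone_is_started() {
  # $1: clone resource name, e.g. dlm-clone
  grep -A1 "Clone Set: $1" | grep -q 'Started:'
}

# Demo against a captured sample; with a live cluster, use:
#   pcs status resources | clone_is_started dlm-clone
sample='Clone Set: dlm-clone [dlm]
 Started: [ Node1 Node2 ]
Clone Set: clvmd-clone [clvmd]
 Started: [ Node1 Node2 ]'

printf '%s\n' "$sample" | clone_is_started dlm-clone && echo "dlm-clone started"
```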
Resource Ordering and Colocation

1. Configure the resource order (startup order):

  pcs constraint order start dlm-clone then clvmd-clone
  Adding dlm-clone clvmd-clone (kind: Mandatory) (Options: first-action=start then-action=start)

2. Configure resource colocation:

  pcs constraint colocation add clvmd-clone with dlm-clone

3. Verify the constraints:

  pcs constraint
  Location Constraints:
  Ordering Constraints:
    start dlm-clone then start clvmd-clone
  Colocation Constraints:
    clvmd-clone with dlm-clone

Configuring the LVM Objects

1. Log in to one of the cluster nodes and create the required LVM objects.

2. In this setup, /dev/sda is the LUN shared between the two nodes.

3. Create a new volume group:

  vgcreate -Ay -cy gfsvg /dev/sda
  Physical volume "/dev/sda" successfully created
  Clustered volume group "gfsvg" successfully created
  [root@node1-lab kvmpool]# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  gfsvg   1   1   0 wz--nc 996.00m 96.00m
  rhel    1   2   0 wz--n-   7.51g      0
  [root@node1-lab kvmpool]#

4. Create the logical volume:
  lvcreate -L 900M -n gfsvol1 gfsvg
  Logical volume "gfsvol1" created
  [root@node1-lab kvmpool]# lvs -o +devices gfsvg
  LV      VG    Attr       LSize   Pool Origin Data% Move Log Cpy%Sync Convert Devices
  gfsvol1 gfsvg -wi-ao---- 900.00m                                            /dev/sda(0)
  [root@node1-lab kvmpool]#

5. Create the filesystem on the new volume:

  mkfs.gfs2 -p lock_dlm -t GFSCLUS:gfsvolfs -j 3 /dev/gfsvg/gfsvol1
  /dev/gfsvg/gfsvol1 is a symbolic link to /dev/dm-2
  This will destroy any data on /dev/dm-2
  Are you sure you want to proceed? [y/n]y
  Device:             /dev/gfsvg/gfsvol1
  Block size:         4096
  Device size:        0.88 GB (230400 blocks)
  Filesystem size:    0.88 GB (230400 blocks)
  Journals:           2
  Resource groups:    4
  Locking protocol:   "lock_dlm"
  Lock table:         "GFSCLUS:gfsvolfs"
  UUID:               8dff8868-3815-d43c-dfa0-f2a9047d97a2

  - GFSCLUS: the cluster name
  - gfsvolfs: the filesystem name
  - -j 3: the number of journals. Every node that mounts the filesystem needs its own journal; the recommendation is the number of nodes plus one (so 3 for a two-node cluster).

Configuring the Mount in Pacemaker

1. Log in to one of the cluster nodes.

2. Create a new cluster resource for the GFS2 filesystem:

  pcs resource create gfsvolfs_res Filesystem device="/dev/gfsvg/gfsvol1" directory="/kvmpool" fstype="gfs2" options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true

3. Check the volume status. The filesystem should be mounted on both cluster nodes:
  df -h /kvmpool
  Filesystem                 Size  Used Avail Use% Mounted on
  /dev/mapper/gfsvg-gfsvol1  900M  259M  642M  29% /kvmpool
  ssh Node2 df -h /kvmpool
  Filesystem                 Size  Used Avail Use% Mounted on
  /dev/mapper/gfsvg-gfsvol1  900M  259M  642M  29% /kvmpool

4. Configure the resource ordering and colocation:

  pcs constraint order start clvmd-clone then gfsvolfs_res-clone
  Adding clvmd-clone gfsvolfs_res-clone (kind: Mandatory) (Options: first-action=start then-action=start)
  pcs constraint order
  Ordering Constraints:
    start clvmd-clone then start gfsvolfs_res-clone
    start dlm-clone then start clvmd-clone
  pcs constraint colocation add gfsvolfs_res-clone with clvmd-clone
  pcs constraint colocation
  Colocation Constraints:
    clvmd-clone with dlm-clone
    gfsvolfs_res-clone with clvmd-clone

5. Both nodes see the same filesystem in read/write mode:

  cd /kvmpool/
  [root@node1-lab kvmpool]# ls -lrt
  total 0
  [root@node1-lab kvmpool]# touch test1 test2 test3
  [root@node1-lab kvmpool]# ls -lrt
  total 12
  -rw-r--r-- 1 root root 0 Jan 21 18:38 test1
  -rw-r--r-- 1 root root 0 Jan 21 18:38 test3
  -rw-r--r-- 1 root root 0 Jan 21 18:38 test2
  [root@node1-lab kvmpool]# ssh Node2 ls -lrt /kvmpool/
  total 12
  -rw-r--r-- 1 root root 0 Jan 21 18:38 test1
  -rw-r--r-- 1 root root 0 Jan 21 18:38 test3
  -rw-r--r-- 1 root root 0 Jan 21 18:38 test2
  [root@node1-lab kvmpool]#

You have successfully configured GFS2 on the RHEL 7 cluster nodes.
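As a final scripted check, you can confirm that the mount on each node really is of type gfs2 rather than a leftover local directory. A minimal sketch: the helper name and the sample line are illustrative, not from the original article; on a real node you would read /proc/mounts instead of the sample.

```shell
# Illustrative helper: succeed if the given mount point appears with
# filesystem type gfs2 in /proc/mounts-style text read on stdin.
fs_is_gfs2() {
  # $1: mount point
  awk -v mp="$1" '$2 == mp && $3 == "gfs2" { found = 1 } END { exit !found }'
}

# Demo against a sample line; on a real node:
#   fs_is_gfs2 /kvmpool < /proc/mounts
sample='/dev/mapper/gfsvg-gfsvol1 /kvmpool gfs2 rw,noatime,nodiratime 0 0'
printf '%s\n' "$sample" | fs_is_gfs2 /kvmpool && echo "/kvmpool is gfs2"
```

Run locally and via ssh on the second node, this mirrors the df checks above but also asserts the filesystem type.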
Notes

Setting no-quorum-policy: when using GFS2 you must configure no-quorum-policy. If it is set to freeze, then when the cluster loses quorum the system does nothing until quorum is regained; this is what prevents data loss on the shared filesystem.

  pcs property set no-quorum-policy=freeze

OCFS2 (Oracle Cluster File System 2) can run on Red Hat Enterprise Linux, but it is not shipped, maintained, or supported by Red Hat.

From: http://allthatlinux.com/dokuwiki/ - AllThatLinux!
Permanent link:
Last update: 2017/03/17 14:46