Sun Cluster 3.0

Sun Microsystems, Inc.
901 San Antonio Road
Palo Alto, CA 94303-4900 U.S.A.
650-960-1300

Part No: 806-6732
November 2000, Revision A
Copyright 2000 Sun Microsystems, Inc., 901 San Antonio Road, Palo Alto, California 94303-4900 U.S.A. All rights reserved. This product or document is protected by copyright and distributed under licenses restricting its use, copying, distribution, and decompilation. No part of this product or document may be reproduced in any form by any means without prior written authorization of Sun and its licensors, if any. Third-party software, including font technology, is copyrighted and licensed from Sun suppliers. Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and other countries, exclusively licensed through X/Open Company, Ltd. The following notice applies to Netscape Communicator(TM): (c) Copyright 1995 Netscape Communications Corporation. All rights reserved.

Sun, Sun Microsystems, the Sun logo, AnswerBook2, docs.sun.com, Solstice DiskSuite, Sun Enterprise, Sun Enterprise SyMON, JumpStart, StorEdge, Sun Management Center, OpenBoot, and Solaris are trademarks, registered trademarks, or service marks of Sun Microsystems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc.

The OPEN LOOK and Sun(TM) Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges the pioneering efforts of Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry. Sun holds a non-exclusive license from Xerox to the Xerox Graphical User Interface, which license also covers Sun's licensees who implement OPEN LOOK GUIs and otherwise comply with Sun's written license agreements.

THIS PUBLICATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT. THIS DISCLAIMER OF WARRANTY DOES NOT APPLY TO THE EXTENT THAT IT IS HELD TO BE LEGALLY INVALID.
Contents

Preface 9
1. Introduction to Administering Sun Cluster 13
2. Shutting Down and Booting a Cluster 29
3. Administering Global Devices and Cluster File Systems 43
4. Administering Quorum 73
5. Administering the Cluster Interconnects and Public Networks 87
6. Administering the Cluster 109
7. Patching Sun Cluster Software and Firmware 121
8. Backing Up and Restoring a Cluster 131
Preface

This book provides procedures for administering a Sun Cluster 3.0 configuration.

Who Should Use This Book

This book is for experienced system administrators with extensive knowledge of Sun software and hardware. It should not be used as a planning or presales guide; before reading it, you should have already determined your system requirements and purchased the appropriate equipment and software. Instructions in this book assume knowledge of the Solaris(TM) operating environment and expertise with the volume manager software used with Sun Cluster.

Using UNIX Commands

This document contains information on commands used to administer a Sun Cluster configuration. It might not contain complete information on basic UNIX commands and procedures. See one or more of the following for that information:
- AnswerBook2(TM) online documentation for the Solaris software environment
- Other software documentation that you received with your system
- Solaris operating environment man pages
Typographic Conventions

AaBbCc123            The names of commands, files, and directories; on-screen computer output.  Example: Edit your .login file. Use ls -a to list all files. % You have mail.
AaBbCc123 (bold)     What you type, contrasted with on-screen computer output.  Example: % su  Password:
AaBbCc123 (italic)   Command-line placeholder (replace with a real name or value); also book titles, new words or terms, and words to be emphasized.  Examples: Read Chapter 6 in the User's Guide. These are called class options. To delete a file, type rm filename.

Shell Prompts

C shell prompt                                machine_name%
C shell superuser prompt                      machine_name#
Bourne shell and Korn shell prompt            $
Bourne shell and Korn shell superuser prompt  #
Related Documentation

Hardware: Sun Cluster 3.0 Hardware Guide, part number 806-1420
Software installation: Sun Cluster 3.0 Installation Guide, part number 806-6726
Data services: Sun Cluster 3.0 Data Services Installation and Configuration Guide, part number 806-1421
API development: Sun Cluster 3.0 Data Services Developers Guide, part number 806-1422
Concepts: Sun Cluster 3.0 Concepts, part number 806-6720
Error messages: Sun Cluster 3.0 Error Messages Manual, part number 806-1426
Release notes: Sun Cluster 3.0 Release Notes, part number 805-1428

Ordering Sun Documentation

Fatbrain.com, an Internet professional bookstore, stocks select product documentation from Sun Microsystems, Inc. For a list of documents and how to order them, visit the Sun Documentation Center on Fatbrain.com at: http://www1.fatbrain.com/documentation/sun

Accessing Sun Documentation Online

The docs.sun.com(SM) Web site enables you to access Sun technical documentation online at: http://docs.sun.com

Getting Help

If you have problems installing or using Sun Cluster, contact your service provider and provide the following information:
- Your name and e-mail address (if available)
- Your company name, address, and phone number
- The model and serial numbers of your systems
- The release number of the operating environment (for example, Solaris 8)
- The release number of Sun Cluster (for example, Sun Cluster 3.0)

Use the following commands to gather information about each node for your service provider:

prtconf -v                        Displays the size of the system memory and reports information about peripheral devices
psrinfo -v                        Displays information about processors
showrev -p                        Reports which patches are installed
prtdiag -v                        Displays system diagnostic information
/usr/cluster/bin/scinstall -pv    Displays Sun Cluster release and package version information

Also have available the contents of the /var/adm/messages file.
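The node-information commands above can be collected in one pass with a small script. This sketch is an illustration, not part of the guide: the output path /tmp/cluster-diag.txt is a hypothetical choice, and any command missing from a given system is noted rather than run.

```shell
#!/bin/sh
# Collect the diagnostic output a service provider asks for, one
# command per section, into a single file.
OUT=/tmp/cluster-diag.txt
: > "$OUT"

for cmd in "prtconf -v" "psrinfo -v" "showrev -p" "prtdiag -v" \
           "/usr/cluster/bin/scinstall -pv"; do
    printf '==== %s ====\n' "$cmd" >> "$OUT"
    if command -v "${cmd%% *}" >/dev/null 2>&1; then
        $cmd >> "$OUT" 2>&1          # run the command, keep stderr too
    else
        echo "(${cmd%% *} not available on this system)" >> "$OUT"
    fi
done
echo "Diagnostics written to $OUT"
```

Run the script on each node and attach the resulting file when contacting your service provider.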
1. Introduction to Administering Sun Cluster

This chapter provides information on preparing to administer the cluster and describes the procedures for using Sun Cluster administration tools. This is a list of the procedures in this chapter:
- How to Log In to Sun Cluster Remotely (page 19)
- How to Access the scsetup Utility (page 21)
- How to Display Sun Cluster Release and Version Information (page 21)
- How to Display Configured Resource Types, Resource Groups, and Resources (page 22)
- How to Check the Status of Cluster Components (page 23)
- How to View the Cluster Configuration (page 25)
- How to Check the Global Mount Points (page 26)

1.1 Administering Sun Cluster Overview

Sun Cluster's highly available environment ensures that critical applications are available to end users. The system administrator's job is to make sure that Sun Cluster is stable and operational. Before undertaking administration tasks, familiarize yourself with the planning information in the Sun Cluster 3.0 Installation Guide and with the Sun Cluster 3.0 Concepts document. Most cluster administration tasks can be performed while the cluster is operational; exceptions are noted in the individual procedures.

For procedures related to installing and administering data services, see the Sun Cluster 3.0 Data Services Installation and Configuration Guide. For procedures related to servicing cluster hardware, see the Sun Cluster 3.0 Hardware Guide.

1.2 Graphical User Interface

The Sun Cluster module for Sun Management Center(TM) (formerly Sun Enterprise SyMON(TM)) provides a graphical user interface (GUI) Console that lets you graphically display cluster resources and monitor configuration changes and the status of cluster components. The Sun Management Center module for Sun Cluster cannot, however, perform Sun Cluster administration tasks; use the command line interface for those operations. See Command Line Interface (page 15) for more information.

For instructions on installing and starting the Sun Management Center module for Sun Cluster, see the Sun Cluster 3.0 Installation Guide.

With Sun Management Center, the Sun Cluster module can generate Simple Network Management Protocol (SNMP) management information bases (MIBs). The Sun Cluster MIB files are located in $BASEDIR/SUNWsymon/modules/cfg on a cluster node. The Sun Cluster MIB file is an ASN.1 specification of the Sun Cluster data that is modeled, the same specification used by all Sun Management Center MIBs. To use the Sun Cluster MIBs, refer to the instructions for using Sun Management Center MIBs in the Sun MC 2.1 Software User's Guide, Appendix B, "SNMP MIBs for Sun Management Center Modules."
1.3 Command Line Interface

You can perform most Sun Cluster administration tasks interactively through the scsetup(1M) utility. Whenever possible, the administration procedures in this guide are described using scsetup. The following items can be administered through scsetup:
- Quorum
- Device groups
- Cluster transport
- Private hostnames
- New node authentication
- Cluster name

Table 1-1 lists the other commands that you use to administer Sun Cluster. See the man pages for more detailed information.

TABLE 1-1 Sun Cluster Command Line Interface Commands

ccp(1M)         Starts remote console access to the cluster.
pmfadm(1M)      Provides administrative access to the process monitor facility.
pnmset(1M)      Configures Public Network Management (PNM).
pnmstat(1M)     Reports the status of Network Adapter Failover (NAFO) groups monitored by PNM.
sccheck(1M)     Checks and validates the global mount entries in the /etc/vfstab file.
scconf(1M)      Updates a Sun Cluster configuration. The -p option lists cluster configuration information.
scdidadm(1M)    Provides administrative access to the device ID namespace.
scgdevs(1M)     Runs the global device namespace administration script.
scinstall(1M)   Installs and configures Sun Cluster software. The -p option displays release and package version information for the Sun Cluster software.
scrgadm(1M)     Manages the registration of resource types, the creation of resource groups, and the activation of resources within a resource group. The -p option displays information on installed resources, resource groups, and resource types.
scsetup(1M)     Runs the interactive cluster configuration utility.
scshutdown(1M)  Shuts down the entire cluster.
scstat(1M)      Provides a snapshot of the cluster status.
scswitch(1M)    Performs changes affecting node mastery and states for resource groups and disk device groups.

In addition, you use commands to administer the volume manager portion of Sun Cluster. These commands depend on the volume manager used in your cluster, either Solstice DiskSuite(TM) or VERITAS Volume Manager.

1.4 Preparing to Administer the Cluster

This section describes how to prepare for administering the cluster.

1.4.1 Documenting a Sun Cluster Hardware Configuration

As your Sun Cluster configuration grows and changes, documenting the hardware aspects that are unique to your site saves administration time when it becomes necessary to change or upgrade the cluster. Labeling cables and connections between the various cluster components can also make administration easier.

1.4.2 Using an Administrative Console

You can use a dedicated SPARC workstation, known as the administrative console, to administer the active cluster. Typically, you install and run the Cluster Control Panel (CCP) and the Sun Management Center tools on the administrative console. For more information on the CCP, see How to Log In to Sun Cluster Remotely (page 19). For instructions on installing the Sun Management Center software, see the Sun Cluster 3.0 Installation Guide.

The administrative console is not a cluster node; it is used for remote access to the cluster nodes, either over the public network or through a network-based terminal concentrator. If your cluster consists of the Sun Enterprise(TM) 10000 server, you must log in from the administrative console to the System Service Processor (SSP) and connect by using the netcon command.

Sun Cluster does not require a dedicated administrative console, but using one provides these benefits:
- Enables centralized cluster management by grouping console and management tools on the same machine
- Provides potentially quicker problem resolution by Enterprise Services or your service provider

1.4.3 Backing Up the Cluster

Back up your cluster on a regular basis. Even though Sun Cluster provides a highly available environment, with mirrored copies of data on the storage devices, it does not protect against user or program error or catastrophic failure. Therefore, you must have a backup procedure in place to protect against data loss. The following information should be included as part of your backup:
- All file system partitions
- All DBMS data if you are running DBMS data services
- Disk partition information for all cluster disks
- The md.tab file if you are using Solstice DiskSuite(TM) as your volume manager

1.5 Beginning to Administer the Cluster

Table 1-2 provides a starting point for administering the cluster.
TABLE 1-2 Sun Cluster 3.0 Administration Tools

Log in to the cluster remotely: Use the ccp command to launch the Cluster Control Panel (CCP), then use the cconsole, crlogin, or ctelnet tool. See: How to Log In to Sun Cluster Remotely (page 19)
Configure the cluster interactively: Launch the scsetup utility. See: How to Access the scsetup Utility (page 21)
Display Sun Cluster release number and version information: Use the scinstall command with the -p or -pv option. See: How to Display Sun Cluster Release and Version Information (page 21)
Display installed resources, resource groups, and resource types: Use the scrgadm -p command. See: How to Display Configured Resource Types, Resource Groups, and Resources (page 22)
Monitor cluster components graphically: Use the Sun Management Center module for Sun Cluster. See the Sun Management Center documentation and the Sun Cluster 3.0 Installation Guide.
Check the status of cluster components: Use the scstat command. See: How to Check the Status of Cluster Components (page 23)
View the cluster configuration: Use the scconf -p command. See: How to View the Cluster Configuration (page 25)
Check global mount points: Use the sccheck command. See: How to Check the Global Mount Points (page 26)
Look at Sun Cluster system messages: Examine the /var/adm/messages file.
Monitor the status of Solstice DiskSuite: Use the metastat or metatool commands. See the Solstice DiskSuite documentation.
Monitor the status of VERITAS Volume Manager: Use the vxstat or vxva commands. See the VERITAS Volume Manager documentation.

How to Log In to Sun Cluster Remotely

The Cluster Control Panel (CCP) provides a launch pad for the cconsole, crlogin, and ctelnet tools. All three tools start a multiple-window connection to a set of specified nodes, plus a common window that sends input to all windows at one time. For more information, see the ccp(1M) and cconsole(1M) man pages.

1. Verify that the following prerequisites are met. To start the Cluster Control Panel (CCP), you must:
- Install the appropriate Sun Cluster tools package (SUNWccon) on the administrative console.
- Make sure the PATH variable on the administrative console includes the Sun Cluster tools directories, /opt/SUNWcluster/bin and /usr/cluster/bin. You can specify an alternate location for the tools directory by setting the $CLUSTER_HOME environment variable.
- Configure the clusters file, the serialports file, and the nsswitch.conf file. These can be either /etc files or NIS/NIS+ databases. See clusters(4) and serialports(4) for more information.

2. Determine if you have a Sun Enterprise E10000 server platform.
- If not, proceed to Step 3.
- If so, log in to the System Service Processor (SSP) and connect by using the netcon command. After the connection is made, type Shift~@ to unlock the console and gain write access.

3. Start the CCP launch pad. From the administrative console, enter the following command. The CCP launch pad is displayed.

# ccp clustername

4. To start a remote session with the cluster, click the cconsole, crlogin, or ctelnet icon in the CCP launch pad.

Figure 1-1 Cluster Control Panel [screen capture in the original]

Where to Go From Here: You can also start cconsole, crlogin, or ctelnet sessions from the command line. See cconsole(1M) for more information.
How to Access the scsetup Utility

The scsetup(1M) utility enables you to interactively configure quorum, cluster transport, private hostname, device group, and new node options for the cluster.

1. Become superuser on any node in the cluster.

2. Enter the scsetup utility. The Main Menu is displayed.

# scsetup

3. Make your selection from the menu and follow the onscreen instructions. See the scsetup online help for more information.

How to Display Sun Cluster Release and Version Information

You do not need to be logged in as superuser to display this information.

Display the Sun Cluster release number:

% scinstall -p

Display the Sun Cluster release number and the version strings for all Sun Cluster packages:

% scinstall -pv

Example: Displaying the Sun Cluster Release Number

The following example displays the cluster's release number.

% scinstall -p
3.0

Example: Displaying Sun Cluster Release and Version Information

The following example displays the cluster's release information and version information for all packages.
% scinstall -pv
SunCluster 3.0
SUNWscr: 3.0.0,REV=1999.10.20.15.01
SUNWscdev: 3.0.0,REV=1999.10.20.15.01
SUNWscu: 3.0.0,REV=1999.10.20.15.01
SUNWscman: 3.0.0,REV=1999.10.20.15.01
SUNWscsal: 3.0.0,REV=1999.10.20.15.01
SUNWscsam: 3.0.0,REV=1999.10.20.15.01
SUNWrsmop: 3.0.0,REV=1999.10.20.15.01
SUNWsci: 3.0,REV=1999.09.08.17.43
SUNWscid: 3.0,REV=1999.09.08.17.43
SUNWscidx: 3.0,REV=1999.09.08.17.43
SUNWscvm: 3.0.0,REV=1999.10.20.15.01

How to Display Configured Resource Types, Resource Groups, and Resources

You do not need to be logged in as superuser to display this information. Display the cluster's configured resource types, resource groups, and resources:

% scrgadm -p

Example: Displaying Configured Resource Types, Resource Groups, and Resources

The following example shows the resource types (RT Name), resource groups (RG Name), and resources (RS Name) configured for the cluster schost.

% scrgadm -p
RT Name: SUNW.SharedAddress
RT Description: HA Shared Address Resource Type
RT Name: SUNW.LogicalHostname
RT Description: Logical Hostname Resource Type
RG Name: schost-sa-1
RG Description:
RS Name: schost-1
RS Description:
RS Type: SUNW.SharedAddress
RS Resource Group: schost-sa-1
RG Name: schost-lh-1
RG Description:
RS Name: schost-3
RS Description:
RS Type: SUNW.LogicalHostname
RS Resource Group: schost-lh-1
. % scstat -p % scstat -p -- Cluster Nodes -- 1.5.0.1 scstat(1m). Node name Status --------- ------ Cluster node: phys-schost-1 Online Cluster node: phys-schost-2 Online Cluster node: phys-schost-3 Online Cluster node: phys-schost-4 Online ------------------------------------------------------------------ -- Cluster Transport Paths -- Transport path: Transport path: Endpoint Endpoint Status -------- -------- ------ phys-schost-1:qfe1 phys-schost-4:qfe1 Path online phys-schost-1:hme1 phys-schost-4:hme1 Path online ------------------------------------------------------------------ -- Quorum Summary -- Quorum votes possible: 6 Quorum votes needed: 4 Quorum votes present: 6 -- Quorum Votes by Node -- Node Name Present Possible Status --------- ------- -------- ------ Node votes: phys-schost-1 1 1 Online Node votes: phys-schost-2 1 1 Online -- Quorum Votes by Device -- ( ) Sun Cluster 23
( ) Device Name Present Possible Status Owner ----------- ------- -------- ------ ----- Device votes: /dev/did/rdsk/d2s2 1 1 Online phys-schost-2 Device votes: /dev/did/rdsk/d8s2 1 1 Online phys-schost-4 -- Device Group Servers -- Device Group Primary Secondary ------------ ------- --------- Device group servers: rmt/1 - - Device group servers: rmt/2 - - Device group servers: schost-1 phys-schost-2 phys-schost-1 Device group servers: schost-3 - - -- Device Group Status -- Device Group Status ------------ ------ Device group status: rmt/1 Offline Device group status: rmt/2 Offline Device group status: schost-1 Online Device group status: schost-3 Offline ------------------------------------------------------------------ -- Resource Groups and Resources -- Group Name Resources ---------- --------- Resources: test-rg test_1 Resources: real-property-rg - Resources: failover-rg - Resources: descript-rg-1 - -- Resource Groups -- Group Name Node Name State ---------- --------- ----- Group: test-rg phys-schost-1 Offline Group: test-rg phys-schost-2 Offline -- Resources -- Resource Name Node Name State Status Message ------------- --------- ----- -------------- Resource: test_1 phys-schost-1 Offline Offline Resource: test_1 phys-schost-2 Offline Offline 24 Sun Cluster 3.0 2000 11, Revision A
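Status output in this form lends itself to scripting. The sketch below is not from the guide; it parses captured scstat-style node lines (the sample here is hypothetical, and includes an Offline node to show the failure path) and reports any cluster node that is not Online.

```shell
#!/bin/sh
# Flag any "Cluster node:" line whose status is not Online.
# On a live cluster, replace the sample with: scstat_output=$(scstat -n)
scstat_output='  Cluster node: phys-schost-1   Online
  Cluster node: phys-schost-2   Online
  Cluster node: phys-schost-3   Offline'

offline=$(printf '%s\n' "$scstat_output" |
    awk '/Cluster node:/ && $NF != "Online" { print $3 }')

if [ -n "$offline" ]; then
    echo "Nodes not online: $offline"
else
    echo "All cluster nodes are online"
fi
```

The same pattern works for the device group and resource sections of the output; only the awk match pattern changes.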
.. % scconf -p 1.5.0.1. % scconf -p Cluster name: cluster-1 Cluster ID: 0x3908EE1C Cluster install mode: disabled Cluster private net: 172.16.0.0 Cluster private netmask: 255.255.0.0 Cluster new node authentication: unix Cluster new node list: <NULL - Allow any node> Cluster nodes: phys-schost-1 phys-schost-2 phys-schost-3 phys-schost- 4 Cluster node name: phys-schost-1 Node ID: 1 Node enabled: yes Node private hostname: clusternode1-priv Node quorum vote count: 1 Node reservation key: 0x3908EE1C00000001 Node transport adapters: hme1 qfe1 qfe2 Node transport adapter: Adapter enabled: Adapter transport type: Adapter property: Adapter property: Adapter property: Cluster transport junctions: hme1 yes dlpi device_name=hme device_instance=1 dlpi_heartbeat_timeout=10000 hub0 hub1 hub2 Cluster transport junction: hub0 Junction enabled: yes Junction type: switch Junction port names: 1 2 3 4 Sun Cluster 25
Junction port: 1 Port enabled: Junction port: 2 Port enabled: yes Cluster transport cables Endpoint Endpoint State -------- -------- ----- Transport cable: phys-schost-1:hme1@0 hub0@1 Enabled Transport cable: phys-schost-1:qfe1@0 hub1@1 Enabled Transport cable: phys-schost-1:qfe2@0 hub2@1 Enabled Transport cable: phys-schost-2:hme1@0 hub0@2 Enabled Quorum devices: d2 d8 Quorum device name: d2 Quorum device votes: 1 Quorum device enabled: yes Quorum device name: /dev/did/rdsk/ d2s2 Quorum device hosts (enabled): phys-schost-1 phys-schost-2 Quorum device hosts (disabled): Device group name: schost-3 Device group type: SDS Device group failback enabled: no Device group node list: phys-schost-3, phys-schost- 4 Diskset name: schost-3 yes sccheck(1m) /etc/vfstab. sccheck., sccheck. : sccheck. 1.. 2.. # sccheck 26 Sun Cluster 3.0 2000 11, Revision A
Example: Checking the Global Mount Points

The following example shows that the cluster node phys-schost-3 is missing the mount point /global/schost-1.

# sccheck
vfstab-check: WARNING - phys-schost-3 - Missing mount point /global/schost-1
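The warning format above can be reproduced with ordinary shell tools, which helps clarify what sccheck is validating. The sketch below is illustrative only: it reads a hypothetical copy of a vfstab file under /tmp and warns when a global entry names a mount point that does not exist on the local node.

```shell
#!/bin/sh
# Minimal vfstab mount-point check in the spirit of sccheck.
VFSTAB=/tmp/vfstab.sample
rm -rf /tmp/global
mkdir -p /tmp/global/schost-2    # deliberately omit /tmp/global/schost-1

cat > "$VFSTAB" <<'EOF'
/dev/md/schost-1/dsk/d0 /dev/md/schost-1/rdsk/d0 /tmp/global/schost-1 ufs 2 yes global,logging
/dev/md/schost-2/dsk/d0 /dev/md/schost-2/rdsk/d0 /tmp/global/schost-2 ufs 2 yes global,logging
EOF

missing=
while read dev rdev mnt fstype pass atboot opts; do
    case "$opts" in
    *global*)
        if [ ! -d "$mnt" ]; then
            echo "vfstab-check: WARNING - $(hostname) - Missing mount point $mnt"
            missing="$missing $mnt"
        fi
        ;;
    esac
done < "$VFSTAB"
```

The real sccheck also validates entry consistency across all nodes, which a single-node sketch cannot show.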
2. Shutting Down and Booting a Cluster

This chapter provides the procedures for shutting down and booting a cluster and individual cluster nodes. This is a list of the procedures in this chapter:
- How to Shut Down a Cluster (page 30)
- How to Boot a Cluster (page 31)
- How to Reboot a Cluster (page 33)
- How to Shut Down a Cluster Node (page 35)
- How to Boot a Cluster Node (page 36)
- How to Reboot a Cluster Node (page 37)
- How to Boot a Cluster Node in Non-Cluster Mode (page 39)
- How to Repair a Full /var File System (page 41)

Tables 2-1 and 2-2 provide task maps for the procedures in this chapter.

2.1 Shutting Down and Booting a Cluster Overview

The Sun Cluster scshutdown(1M) command stops cluster services in an orderly fashion and cleanly shuts down the entire cluster.

Note: Use scshutdown instead of the shutdown or halt commands to ensure proper shutdown of the entire cluster. The Solaris shutdown command is used to shut down individual nodes.
The scshutdown command stops all cluster services by:
1. Taking all running resource groups offline.
2. Unmounting all cluster file systems.
3. Shutting down active device services.
4. Running init 0 to bring all nodes to the ok PROM prompt.

Note: If necessary, you can boot a node in non-cluster mode so that the node does not participate in cluster membership. Non-cluster mode is useful when installing cluster software or for performing certain administrative procedures. See How to Boot a Cluster Node in Non-Cluster Mode (page 39) for more information.

TABLE 2-1 Task Map: Shutting Down and Booting a Cluster

Stop the cluster: Use scshutdown. See: How to Shut Down a Cluster (page 30)
Start the cluster by booting all nodes: The nodes must have a working connection to the cluster interconnect to attain cluster membership. See: How to Boot a Cluster (page 31)
Reboot the cluster: Use scshutdown, then at the ok prompt boot each node with the boot command. See: How to Reboot a Cluster (page 33)

How to Shut Down a Cluster

1. If your cluster is running Oracle Parallel Server (OPS), shut down all instances of the OPS database. Refer to the Oracle Parallel Server product documentation for shutdown procedures.
2.. 3. scshutdown(1m).. # scshutdown -g 0 -y 4. ok PROM. 5.,. 2.1.0.1 ok. g 0 grace 0, -y yes.. # scshutdown -g 0 -y Sep 2 10:08:46 phys-schost-1 cl_runtime: WARNING: CMM monitoring disabled. phys-schost-1# INIT: New run level: 0 The system is coming down. Please wait. System services are now being stopped. /etc/rc0.d/k05initrgm: Calling scswitch -S (evacuate) The system is down. syncing file systems done Program terminated ok 2.1.0.2 (31 ). 1. ok PROM,..,. ok boot 31
. :. 2.. scstat(1m). # scstat -n 2.1.0.1 phys-schost-1.. ok boot Rebooting with command: boot Hostname: phys-schost-1 Booting as part of a cluster NOTICE: Node 1 with votecount = 1 added. NOTICE: Node 2 with votecount = 1 added. NOTICE: Node 3 with votecount = 1 added. NOTICE: Node 1: attempting to join cluster NOTICE: Node 2 (incarnation # 937690106) has become reachable. NOTICE: Node 3 (incarnation # 937690290) has become reachable. NOTICE: cluster has reached quorum. NOTICE: node 1 is up; new incarnation number = 937846227. NOTICE: node 2 is up; new incarnation number = 937690106. NOTICE: node 3 is up; new incarnation number = 937690290. NOTICE: Cluster members: 1 2 3 NOTICE: Node 1: joined cluster The system is coming up. Please wait. checking ufs filesystems reservation program successfully exiting Print services started. volume management starting. The system is ready. phys-schost-1 console login: 32 Sun Cluster 3.0 2000 11, Revision A
scshutdown(1m), boot. 1. ( ). Oracle Parallel Server(OPS), OPS. Oracle Parallel Server. 2.. 3. scshutdown(1m).. # scshutdown -g 0 -y ok PROM. :. 4...,. ok boot. 5.. scstat. # scstat -n 2.1.0.1 ok,. g 0 grace 0, -y yes.. 33
# scshutdown -g 0 -y Sep 2 10:08:46 phys-schost-1 cl_runtime: WARNING: CMM monitoring disabled. phys-schost-1# INIT: New run level: 0 The system is coming down. Please wait. The system is down. syncing file systems done Program terminated ok boot Rebooting with command: boot Hostname: phys-schost-1 Booting as part of a cluster NOTICE: Node 1: attempting to join cluster NOTICE: Node 2 (incarnation # 937690106) has become reachable. NOTICE: Node 3 (incarnation # 937690290) has become reachable. NOTICE: cluster has reached quorum. NOTICE: Cluster members: 1 2 3 NOTICE: Node 1: joined cluster The system is coming up. Please wait. checking ufs filesystems reservation program successfully exiting Print services started. volume management starting. The system is ready. phys-schost-1 console login: 2.2 : Solaris shutdown scswitch. scshutdown. 34 Sun Cluster 3.0 2000 11, Revision A
2 2 :,.. - scswitch(1m) shutdown(1m). (35 ).. (36 ) ( ). (37 ) - scswitch shutdown... - scswitch shutdown, boot -x. (39 ) 1. ( ). Oracle Parallel Server(OPS), OPS. Oracle Parallel Server. 2.. 3. scswitch shutdown.. # scswitch -S -h node # shutdown -g 0 -y ( ) 35
( ) 4. ok PROM. 5.,. 2.2.0.1 phys-schost-1 g 0 grace 0, -y yes.. # scswitch -S -h phys-schost-1 # shutdown -g 0 -y Sep 2 10:08:46 phys-schost-1 cl_runtime: WARNING: CMM monitoring disabled.phys-schost-1# INIT: New run level: 0 The system is coming down. Please wait. Notice: rgmd is being stopped. Notice: rpc.pmfd is being stopped. Notice: rpc.fed is being stopped. umount: /global/.devices/node@1 busy umount: /global/phys-schost-1 busy The system is down. syncing file systems done Program terminated ok 2.2.0.2 (36 ). :., 3 ( ).,., (2). 36 Sun Cluster 3.0 2000 11, Revision A
1.. ok boot. :. 2.. scstat(1m). # scstat -n 2.2.0.1 phys-schost-1. ok boot Rebooting with command: boot Hostname: phys-schost-1 Booting as part of a cluster NOTICE: Node 1: attempting to join cluster NOTICE: Node 1: joined cluster The system is coming up. Please wait. checking ufs filesystems reservation program successfully exiting Print services started. volume management starting. The system is ready. phys-schost-1 console login: 1. ( ). Oracle Parallel Server(OPS), OPS. 37
Oracle Parallel Server. 2.. 3. scswitch shutdown.. # scswitch -S -h node # shutdown -g 0 -y -i 6 shutdown -i 6 ok PROM. :. 4.. scstat(1m). # scstat -n 2.2.0.1 phys-schost-1 g 0 grace 0, -y yes.. # scswitch -S -h phys-schost-1 # shutdown -g 0 -y -i 6 Sep 2 10:08:46 phys-schost-1 cl_runtime: WARNING: CMM monitoring disabled. phys-schost-1# INIT: New run level: 6 The system is coming down. Please wait. System services are now being stopped. Notice: rgmd is being stopped. Notice: rpc.pmfd is being stopped. Notice: rpc.fed is being stopped. umount: /global/.devices/node@1 busy ( ) 38 Sun Cluster 3.0 2000 11, Revision A
( ) umount: /global/phys-schost-1 busy The system is down. syncing file systems done rebooting Resetting,,, Sun Ultra 1 SBus (UltraSPARC 143MHz), No Keyboard OpenBoot 3.11, 128 MB memory installed, Serial #7982421. Ethernet address 8:0:20:79:cd:55, Host ID: 8079cd55. Rebooting with command: boot Hostname: phys-schost-1 Booting as part of a cluster NOTICE: Node 1: attempting to join cluster NOTICE: Node 1: joined cluster The system is coming up. Please wait. The system is ready. phys-schost-1 console login:,.. 1.. 2. scswitch shutdown. # scswitch -S -h node # shutdown -g 0 -y 3. ok PROM. 4. -x boot(1m). 39
ok boot -x. 2.2.0.1 phys-schost-1,. g 0 grace 0, -y yes.. # scswitch -S -h phys-schost-1 # shutdown -g 0 -y Sep 2 10:08:46 phys-schost-1 cl_runtime: WARNING: CMM monitoring disabled. phys-schost-1# rg_name = schost-sa-1 offline node = phys-schost-2 num of node = 0 phys-schost-1# INIT: New run level: 0 The system is coming down. Please wait. System services are now being stopped. Print services stopped. syslogd: going down on signal 15 The system is down. syncing file systems done WARNING: node 1 is being shut down. Program terminated ok boot -x Not booting as part of cluster The system is ready. phys-schost-1 console login: 2.3. 40 Sun Cluster 3.0 2000 11, Revision A
How to Repair a Full /var File System

Both Solaris and Sun Cluster software write error messages to the /var/adm/messages file, which over time can fill the /var file system. If a cluster node's /var file system fills up, Sun Cluster may not be able to restart on that node at the next boot, and you may not be able to log in to the node.

If a node reports a full /var file system and continues to run Sun Cluster services, use this procedure to clear the full file system.

1. Become superuser on the cluster node with the full /var file system.

2. Clear the full file system. For example, delete nonessential files contained in the file system.
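A periodic usage check keeps /var from filling unnoticed. The following sketch is not from the guide; the 90% threshold is an arbitrary example value, and df -kP is used for stable single-line-per-file-system output.

```shell
#!/bin/sh
# Warn when the file system holding /var crosses a usage threshold.
THRESHOLD=90

# Column 5 of POSIX df output is the capacity, e.g. "42%".
usage=$(df -kP /var | awk 'NR == 2 { sub(/%/, "", $5); print $5 }')

if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "/var is ${usage}% full: clean it up before Sun Cluster logging stalls"
else
    echo "/var usage OK (${usage}%)"
fi
```

Such a check could be run from cron on each node so that cleanup happens before messages logging is affected.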
3. Administering Global Devices and Cluster File Systems

This chapter provides procedures for administering global devices and cluster file systems. The procedures in this chapter cover updating the global device namespace (page 48); adding, registering, changing, and removing disk device groups for Solstice DiskSuite and VERITAS Volume Manager (pages 49 through 62); adding, removing, and checking cluster file systems (pages 63 through 69); and removing a node from a disk device group (Sections 3.4.1 and 3.4.2, pages 69 and 70).

For overview and conceptual information related to global devices and cluster file systems, see the Sun Cluster 3.0 Concepts document.

3.1 Administering Global Devices and the Global Namespace Overview

Administration of Sun Cluster disk device groups depends on the volume manager installed on the cluster. Solstice DiskSuite is "cluster-aware," so you add, register, and remove disk device groups using the Solstice DiskSuite metaset(1M) command. With VERITAS Volume Manager (VxVM), you create disk groups using VxVM commands, then register the disk groups as Sun Cluster disk device groups through the scsetup(1M) utility. When removing VxVM disk device groups, you use both scsetup and VxVM commands. When administering disk device groups or volume manager disk groups, you must be on the cluster node that is the primary node for the group.

Because the global device namespace is automatically set up during installation and automatically updated during Solaris reconfiguration reboots, the global namespace normally needs no administration. However, if the namespace needs to be regenerated or updated, you can run the scgdevs(1M) command from any cluster node; the namespace is then updated on all other cluster nodes as well.

3.1.1 Global Device Permissions for Solstice DiskSuite

Changes made to global device permissions are not automatically propagated to all the nodes in the cluster for Solstice DiskSuite and disk devices. If you want to change permissions on global devices, you must change the permissions by hand on all the nodes in the cluster. For example, to change permissions on the global device /dev/global/dsk/d3s0 to 644, run the following command on all nodes in the cluster:

# chmod 644 /dev/global/dsk/d3s0

VxVM does not support the chmod command. To change global device permissions in VxVM, consult the VxVM administrator's guide.

3.1.2 VERITAS Volume Manager Administration Considerations

For Sun Cluster to maintain the VxVM namespace, you must register any VxVM disk group or volume changes as Sun Cluster disk device group configuration changes. Registering these changes ensures that the namespace on all cluster nodes is updated. Examples of configuration changes that impact the namespace include adding, removing, or renaming a volume, and changing the volume permissions, owner, or group ID.

Note: Never import or deport VxVM disk groups using VxVM commands once the disk group has been registered with the cluster as a Sun Cluster disk device group. The Sun Cluster software handles all cases where disk groups need to be imported or deported.

Each VxVM disk group must have a cluster-wide unique minor number. By default, when a disk group is created, VxVM chooses a random number that is a multiple of 1000 as the disk group's base minor number. Because two disk groups created on different nodes can end up with the same base minor number, a conflict can occur when a second disk group is registered with Sun Cluster; in that case, assign the disk group a new, unused minor number before registering it. If you mirror volumes, consider using Dirty Region Logging (DRL) to decrease volume recovery time after a node failure; use of DRL might, however, decrease I/O throughput.

3.2 Administering Cluster File Systems Overview

Cluster file systems are administered using standard Solaris file system commands, such as mount and newfs. Mount cluster file systems by specifying the -g option to the mount command, or mount them automatically at boot.

Note: No special Sun Cluster commands are necessary for cluster file system administration; administer a cluster file system as you would any other Solaris file system.
3.3 : scsetup(1m) scconf(1m). scsetup, scconf.. 3 1 :, ( ) - scgdevs. Solstice DiskSuite. - metaset. (48 ) (49 ) (Solstice DiskSuite) 46 Sun Cluster 3.0 2000 11, Revision A
3 1 : VERITAS Volume Manager. - VxVM scsetup., (50 ) (VERITAS Volume Manager) (50 ) (VERITAS Volume Manager) (52 ) (VERITAS Volume Manager) (53 ) (VERITAS Volume Manager) (53 ) 2 (VERITAS Volume Manager) (54 ) (VERITAS Volume Manager) (56 ) (VERITAS Volume Manager) Solstice DiskSuite. - metaset metaclear. VERITAS Volume Manager - scsetup (scconf ). (57 ) (Solstice DiskSuite) (57 ) (VERITAS Volume Manager) (58 ) (VERITAS Volume Manager) 47
3 1 : VERITAS Volume Manager. - scsetup (scconf ). - scsetup (scconf ).. - scconf 1. - scswitch., (59 ) (VERITAS Volume Manager) (60 ) (61 ) (62 ) 1 scgdevs(1m). : /global/.devices/ node@nodeid scgdevs. 1.. 2. scgdevs. # scgdevs 3.3.0.1 scgdevs. 48 Sun Cluster 3.0 2000 11, Revision A
# scgdevs Configuring the /dev/global directory (global devices) obtaining access to all attached disks reservation program successfully exiting (Solstice DiskSuite) Solstice DiskSuite. metaset(1m), Sun Cluster 1.. 2. metaset Solstice DiskSuite Sun Cluster. # metaset -s diskset a h node-list -s diskset. -a -h node-list. 3.. metaset. # scconf -p egrep disk-device-group 3.3.0.1 Solstice DiskSuite. # metaset -s dg-schost-1 # scconf -p egrep dg-schost-1 Device group name: dg-schost-1 49
(VERITAS Volume Manager) :., (50 ) (VERITAS Volume Manager). VxVM, 1.. 2. VxVM.. :, DRL(Dirty Region Logging). VERITAS Volume Manager. 3.3.0.1 VxVM Sun Cluster (54 ) (VERITAS Volume Manager). (VERITAS Volume Manager) :., (50 ) (VERITAS Volume Manager). Sun Cluster, VxVM Sun Cluster. VxVM.,. 1.. 50 Sun Cluster 3.0 2000 11, Revision A
2. /etc/vfstab, mount at boot no. Sun Cluster yes. 3.. vxdiskadm. VxVM. 2. vxdiskadm(1m). 4.. scswitch(1m). shutdown(1m). # scswitch -S -h node # shutdown -g 0 -i 6 -y 5.,. 1,. # scswitch -z -h node -D devgrp1 [,devgrp2, ] # scswitch -z -h node -g resgrp1 [,resgrp2, ] 3.3.0.1 VxVM Sun Cluster (54 ) (VERITAS Volume Manager). 51
(VERITAS Volume Manager) :, (56 ) (VERITAS Volume Manager). VxVM, 1 1.. 2. 1. # scstat -D 3.. 4, (52 ) 4. 4,. # scswitch -z -D disk-device-group -h node -z -D disk-device-group. -h node. 4. 1 ( ), VxVM. VxVM VERITAS Volume Manager. 3.3.0.1 VxVM (56 ) (VERITAS Volume Manager). 52 Sun Cluster 3.0 2000 11, Revision A
How to Make an Existing Disk Group Into a Disk Device Group (VERITAS Volume Manager)

If you have a pre-existing VxVM disk group that you want to make into a Sun Cluster disk device group, import the disk group, then register it with Sun Cluster.

1. Become superuser on a node of the cluster.

2. Import the VxVM disk group to the current node.

# vxdg import diskgroup

Where to Go From Here: The VxVM disk group must be registered as a Sun Cluster disk device group. See How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager) (page 54).

How to Assign a New Minor Number to a Disk Device Group (VERITAS Volume Manager)

If disk device group registration fails because of a minor number conflict with another disk group, you must assign the new disk group a new, unused minor number. After doing so, rerun the procedure to register the disk group as a Sun Cluster disk device group.

1. Become superuser on a node of the cluster.

2. Determine the minor numbers in use.

# ls -l /dev/vx/dsk/*

3. Choose another multiple of 1000 that is not in use as the base minor number for the new disk group.

4. Assign the new base minor number to the disk group.

# vxdg reminor diskgroup base_minor_number
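The choice of an unused multiple of 1000 in Step 3 can be derived mechanically from the minor numbers already in use. This sketch is illustrative, not from the guide; the minors list is taken from the example output that follows, and picking the first multiple of 1000 above the highest in-use number is one simple way to guarantee no conflict.

```shell
#!/bin/sh
# Compute a free base minor number (multiple of 1000) for vxdg reminor.
# On a live node, derive the "major,minor" list from: ls -l /dev/vx/dsk/*
minors='56,16000 56,16001 56,16002 56,4000 56,4001'

base=1000
for m in $minors; do
    n=${m#*,}                        # strip the "major," prefix
    block=$(( n / 1000 * 1000 ))     # base minor of this device's group
    [ "$block" -ge "$base" ] && base=$(( block + 1000 ))
done
echo "next free base minor number: $base"
```

With the sample values the script settles on 17000; any other unused multiple of 1000, such as the 5000 used in the example that follows, would do equally well.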
3.3.0.1 2 2 16000-16002 4000-4001. vxdg reminor 2 5000. # ls -l /dev/vx/dsk/* /dev/vx/dsk/dg1 brw------- 1 root root 56,16000 Oct 7 11:32 dg1v1 brw------- 1 root root 56,16001 Oct 7 11:32 dg1v2 brw------- 1 root root 56,16002 Oct 7 11:32 dg1v3 /dev/vx/dsk/dg2 brw------- 1 root root 56,4000 Oct 7 11:32 dg2v1 brw------- 1 root root 56,4001 Oct 7 11:32 dg2v2 # vxdg reminor dg3 5000 3.3.0.2 VxVM Sun Cluster. (54 ) (VERITAS Volume Manager). (VERITAS Volume Manager) scsetup(1m) VxVM Sun Cluster : VxVM, (56 ) (VERITAS Volume Manager). VxVM. 4. 4 VxVM. 4.,. failback scconf(1m). 1.. 54 Sun Cluster 3.0 2000 11, Revision A
2. scsetup. # scsetup. 3. VxVM, 3( ).. 4. VxVM, 1(VxVM ). Sun Cluster VxVM., (53 ) 2 (VERITAS Volume Manager). 2 2. scconf: Failed to add device group - in use 5... # scstat -D 3.3.0.1 VERITAS Volume Manager VxVM (dg1) scsetup scconf. VxVM. # scconf -a -D type=vxvm,name=dg1,nodelist=phys-schost-1:phys-schost-2 # scstat -D -- Device Group Servers -- Device Group Primary Secondary ------------ ------- --------- Device group servers: dg1 phys-schost-1 phys-schost-2 -- Device Group Status -- ( ) 55
( ) Device Group Status ------------ ------ Device group status: dg1 Online 3.3.0.2 VxVM, (63 ). 2, (53 ) 2 (VERITAS Volume Manager). (VERITAS Volume Manager) VxVM, Sun Cluster.. 1.. 2. scsetup(1m). # scsetup. 3. VxVM, 3( ).. 4., 2(VxVM ). VxVM 56 Sun Cluster 3.0 2000 11, Revision A
3.3.0.1 VERITAS Volume Manager VxVM (dg1) scsetup scconf. VxVM. # scconf -c -D name=dg1,sync (Solstice DiskSuite) Solstice DiskSuite., Solstice DiskSuite, metaclear(1m) metaset(1m). Sun Cluster. Solstice DiskSuite. (VERITAS Volume Manager) :, (56 ) (VERITAS Volume Manager). 1.. 2. 1. # scstat -D 3.. 4, (58 ) 4. 4,. # scswitch -z -D disk-device-group -h node 57
-z. -D disk-device-group. -h node 1. 4. 1 ( ), VxVM. # vxedit -g diskgroup -rf rm volume -g diskgroup VxVM. -rf rm volume. 3.3.0.1,., (56 ) (VERITAS Volume Manager). (VERITAS Volume Manager) Sun Cluster VxVM. VxVM,. scsetup(1m) VxVM Sun Cluster. 1.. 2.. # scswitch -F -D disk-device-group -F. -D disk-device-group. 3. scsetup.. # scsetup 58 Sun Cluster 3.0 2000 11, Revision A
4. VxVM, 3( ).. 5. VxVM, 3(VxVM ). VxVM. 3.3.0.1 VERITAS Volume Manager VxVM dg1 scsetup scconf(1m). # scswitch -F -D dg1 # scconf -r -D name=dg1 (VERITAS Volume Manager) scsetup(1m). VxVM. 4 4 VxVM 4 ID 1.. 2. scsetup(1m).. # scsetup 3. VxVM, 3( ).. 59
4. VxVM, 4(VxVM ).. 5... # scconf -p 3.3.0.1 VERITAS Volume Manager VxVM (dg1) (phys-schost-3) scsetup scconf. # scconf a D type=vxvm,name=dg1,nodelist=phys-schost-3 # scconf -p Device group name: dg1 Device type: VXVM Failback enabled: yes Node preference list: phys-schost-1, phys-schost-3 1 preferenced., 1.,. preferenced, failback. preferenced, failback. preferenced, 1. scsetup(1m) Solstice DiskSuite VxVM preferenced failback. 60 Sun Cluster 3.0 2000 11, Revision A
,. 1.. 2. scsetup(1m).. # scsetup 3., 3( ).. 4., 6(VxVM Solstice DiskSuite ). preferenced failback. 5... # scconf -p 3.3.0.1 (dg-schost-1) scsetup scconf. # scconf c D name=dg-schost-1,nodelist=phys-schost-1:phys-schost-2,\ preferenced=true,failback=enabled # scconf -p Device group name: dg-schost-1 Device type: SDS Failback enabled: yes Node preference list: phys-schost-1, phys-schost-2 Diskset name: dg-schost-1. scconf(1m). 61
% scconf p 3.3.0.1 scconf,. # scconf p Device group name: dg-schost-1 Device type: SDS Failback enabled: yes Node preference list: phys-schost-2, phys-schost-3 Diskset name: dg-schost-1 1 ( ). 1.. 2. scswitch(1m). # scswitch z D disk-device-group h node -z. -D disk-device-group. -h node 1. 3... # scstat D 3.3.0.1. 62 Sun Cluster 3.0 2000 11, Revision A
# scswitch z D dg-schost-1 h phys-schost-1 # scstat D Device Group Name: Status: Primary: dg-schost-1 Online phys-schost-1 3.4 3 2 :, Sun Cluster - newfs makedir.. - fuser umount.. - sccheck. (63 ) (66 ) (69 ) Sun Cluster. :..,.. 4. 63
The prerequisites to add an additional cluster file system are:
- Superuser privilege on a node in the cluster.
- A device group (Solstice DiskSuite device group or a VxVM disk group) or block disk slice upon which to create the cluster file system.

1. Become superuser on any node in the cluster.

Tip: For faster file system creation, become superuser on the current primary of the global device for which you are creating a file system.

2. Create a file system using the newfs(1M) command.

# newfs raw-disk-device

Table 3-3 shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

TABLE 3-3 Sample Raw Disk Device Names

Solstice DiskSuite:       /dev/md/oracle/rdsk/d1    Raw disk device d1 within the oracle diskset.
VERITAS Volume Manager:   /dev/vx/rdsk/oradg/vol01  Raw disk device vol01 within the oradg disk group.
None:                     /dev/global/rdsk/d1s3     Raw disk device for block slice d1s3.

3. On each node in the cluster, create a mount point directory for the cluster file system. A mount point is required on each node, even if the cluster file system will not be accessed on that node.

# mkdir -p /global/device-group/mount-point

device-group: name of the directory that corresponds to the name of the device group that contains the device. mount-point: name of the directory on which to mount the cluster file system.

Tip: For ease of administration, create the mount point in the /global/device-group directory. Using this location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.

4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

a. Use the required mount options: specify the global option, and set the mount at boot field to yes.

b. Logging is required for all cluster file systems. Use either Solaris UFS logging or Solstice DiskSuite metatrans device logging: for Solaris UFS logging, also specify the logging mount option; if you use a metatrans metadevice, the logging mount option is not needed.

c. Make sure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.

d. Pay attention to boot order dependencies of the file systems. Normally, you should not nest the mount points for cluster file systems. For example, consider the case where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle.

e. Make sure the entries in each node's /etc/vfstab file list common devices in the same order. For example, if phys-schost-1 and phys-schost-2 both have a physical connection to devices d0, d1, and d2, the entries in their /etc/vfstab files should list the devices as d0, d1, and d2. See the vfstab(4) man page for details.

5. Verify that mount points exist and that the /etc/vfstab entries are correct on all nodes of the cluster.

# sccheck

If there are no errors, nothing is returned.

6. From any node in the cluster, mount the cluster file system.

# mount /global/device-group/mount-point

7. On each node of the cluster, verify that the cluster file system is mounted. You can use either the df(1M) or mount(1M) command to list mounted file systems.

Example: Adding a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.

# newfs /dev/md/oracle/rdsk/d1
[on each node:]
# mkdir -p /global/oracle/d1
# vi /etc/vfstab
#device                device                  mount              FS   fsck mount   mount
#to mount              to fsck                 point              type pass at boot options
#
/dev/md/oracle/dsk/d1  /dev/md/oracle/rdsk/d1  /global/oracle/d1  ufs  2    yes     global,logging
[save and exit]
[on one node:]
# sccheck
# mount /global/oracle/d1
# mount
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles on Sun Oct 3 08:56:16 1999

How to Remove a Cluster File System

You remove a cluster file system by merely unmounting it. If you want to also remove or delete the data, remove the underlying disk device group (or metadevice or plex) from the system.
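Because Step 4c requires the vfstab entry to be identical on every node, generating the line once and distributing it avoids copy errors. This sketch is illustrative; the device-group, volume, and mount-point names are the ones used in the example above.

```shell
#!/bin/sh
# Emit the /etc/vfstab line for a new Solstice DiskSuite-backed
# cluster file system, ready to append on every node.
devgrp=oracle
vol=d1
mnt=/global/oracle/d1

printf '%s\t%s\t%s\tufs\t2\tyes\tglobal,logging\n' \
    "/dev/md/$devgrp/dsk/$vol" "/dev/md/$devgrp/rdsk/$vol" "$mnt"
```

The generated line can be appended to /etc/vfstab on each node and then validated with sccheck.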
Note: Cluster file systems are automatically unmounted when you run scshutdown(1M) to stop the entire cluster. Cluster file systems are not unmounted when you run shutdown to stop a single node.

The prerequisites to unmount cluster file systems are:
- Superuser privilege on a node in the cluster.
- The file system cannot be busy. A file system is considered busy if a user is working in a directory in the file system, or if a program has a file open in that file system. The user or program could be running on any node in the cluster.

1. Become superuser on any node in the cluster.

2. Determine which cluster file systems are mounted.

# mount -v

3. On each node, list all processes that are using the cluster file system, so that you know which processes you are going to stop.

# fuser -c [ -u ] mount-point

-c: reports on files that are mount points for file systems and on any files within those mounted file systems. -u: (optional) displays the user login name for each process ID. mount-point: the name of the cluster file system for which you want to stop processes.

4. On each node, stop all processes for the cluster file system. Use your preferred method for stopping processes. If necessary, use the following command to force termination of processes associated with the cluster file system. A SIGKILL is sent to each process using the cluster file system.

# fuser -c -k mount-point

5. On each node, verify that no processes are using the file system.

# fuser -c mount-point

6. From just one node, unmount the file system.

# umount mount-point

mount-point: the name of the cluster file system you want to unmount. This can be either the directory name where the cluster file system is mounted or the device name path of the file system.

7. (Optional) Edit the /etc/vfstab file to delete the entry for the cluster file system being removed. Perform this step on each cluster node that has an entry for this file system in its /etc/vfstab file.

8. (Optional) Remove the disk device group/metadevice/plex. See your volume manager documentation for more information.

Example: Removing a Cluster File System

The following example removes a UFS cluster file system mounted on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.

# mount -v
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles on Sun Oct 3 08:56:16 1999
# fuser -c /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c -k /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c /global/oracle/d1
/global/oracle/d1:
# umount /global/oracle/d1
(on each node, remove the highlighted entry:)
# vi /etc/vfstab
#device                device                  mount              FS   fsck mount   mount
#to mount              to fsck                 point              type pass at boot options
#
/dev/md/oracle/dsk/d1  /dev/md/oracle/rdsk/d1  /global/oracle/d1  ufs  2    yes     global,logging
[Save and exit.]
How to Check Global Mounts in a Cluster

The sccheck(1M) utility verifies the syntax of the entries for cluster file systems in the /etc/vfstab file. If there are no errors, nothing is returned.

Note: Run sccheck after making cluster configuration changes, such as removing a cluster file system, that have affected devices or volume management components.

1. Become superuser on any node in the cluster.

2. Check the cluster global mounts.

# sccheck

3.4.1 How to Remove a Node From a Disk Device Group (Solstice DiskSuite)

Use this procedure to remove a cluster node from the list of potential primaries of a Solstice DiskSuite disk device group. A node can belong to more than one disk device group at a time, so repeat this procedure for each disk device group from which you want to remove the node.

1. Determine the disk device group(s) of which the node to be removed is a member.

# scstat -D

2. Become superuser on the node that currently owns the disk device group you want to modify.

3. Delete the node's hostname from the disk device group. Repeat this step for each disk device group from which the node is being removed.

# metaset -s setname -d -f -h node

-s setname: specifies the disk device group (diskset) name
-d: deletes from the disk device group the nodes identified with -h
-f: forces removal of the node from the disk device group
-h nodelist. :. 4.. metaset. # scstat -D 3.4.1.1 (SDS) ( ).,. metaset. [Determine the disk device group(s) for the node:] # scstat -D -- Device Group Servers -- Device Group Primary Secondary ------------ ------- --------- Device group servers: dg-schost-1 phys-schost-1 phys-schost-2 [Become superuser.] [Remove the hostname from all disk device groups:] # metaset -s dg-schost-1 -d -f -h phys-schost-2 [Verify removal of the node:] # scstat -D -- Device Group Servers -- Device Group Primary Secondary ------------ ------- --------- Device group servers: dg-schost-1 phys-schost-1-3.4.2 (VERITAS Volume Manager) VxVM(VERITAS Volume Manager) ( ). 1.. # scstat -D 70 Sun Cluster 3.0 2000 11, Revision A
2.. 3. scsetup. # scsetup. 4. 3( ). 5. 5(VxVM ) VxVM... VxVM 6. VxVM. # scstat -D Device group name: devicegroupname Device group type: VxVM Device group failback enabled: no Device group node list: nodename Diskgroup name: diskgroupname 3.4.2.1 (VxVM) dg1 VxVM phys-schost-4. [Determine the disk device group for the node:] # scstat -D -- Device Group Servers -- Device Group Primary Secondary ------------ ------- --------- Device group servers: dg-schost-1 phys-schost-1 phys-schost-2 [Become superuser and execute the scsetup utility:] # scsetup [Select option 3:] *** Main Menu *** ( ) 71
Please select from one of the following options:
  3) Device groups and volumes
Option: 3

[Select option 5:]
*** Device Groups Menu ***
Please select from one of the following options:
  5) Remove a node from a VxVM device group
Option: 5

[Answer the questions to remove the node:]
>>> Remove a Node from a VxVM Device Group <<<
Is it okay to continue (yes/no) [yes]? yes
Name of the VxVM device group from which you want to remove a node? dg1
Name of the node to remove from this group? phys-schost-4
Is it okay to proceed with the update (yes/no) [yes]? yes

scconf -r -D name=dg1,nodelist=phys-schost-4

Command completed successfully.
Hit ENTER to continue:

[Quit the scsetup Device Groups Menu and Main Menu:]
Option: q

[Verify that the node was removed:]
# scstat -D
  Device group name:               dg1
  Device group type:               VxVM
  Device group failback enabled:   no
  Device group node list:          phys-schost-3
  Diskgroup name:                  dg1
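Both removal procedures above verify the result by inspecting scstat -D output by eye. The device-group-to-primary mapping can also be pulled out of that output with a one-line filter. The function below is a sketch, not a Sun Cluster tool; the name dg_primaries is made up, and it assumes the "Device group servers:" line layout shown in the examples.

```shell
# Hypothetical filter: read `scstat -D` output on stdin and print one
# "group primary" pair per "Device group servers:" line.
# Fields: Device(1) group(2) servers:(3) group-name(4) primary(5) ...
dg_primaries() {
    awk '/Device group servers:/ { print $4, $5 }'
}
```

For example, `scstat -D | dg_primaries` prints each configured disk device group followed by its current primary node.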
4  Administering Quorum

This chapter provides the procedures for administering quorum within Sun Cluster.

This is a list of the procedures in this chapter.

- "How to Add a Quorum Device" (75)
- "4.1.1 How to Remove a Quorum Device" (76)
- "4.1.2 How to Remove the Last Quorum Device From a Cluster" (78)
- "How to Replace a Quorum Device" (80)
- "How to Put a Quorum Device Into Maintenance State" (80)
- "How to Put a Node Into Maintenance State" (81)
- "How to Bring a Quorum Device or Node Out of Maintenance State" (83)
- "How to List the Quorum Configuration" (85)

For conceptual information about quorum and quorum devices, see the Sun Cluster 3.0 Concepts document.

4.1 Administering Quorum: Overview

The scconf(1M) command can be used to perform all quorum administrative procedures. In addition, some procedures can be accomplished with the scsetup(1M) interactive utility. Whenever possible, the quorum procedures in this chapter are described using scsetup.
If any quorum-related scconf command is interrupted or fails, the quorum configuration information can become inconsistent in the cluster configuration database. If this occurs, either rerun the command or run scconf with the reset option to reset the quorum information.

Note: The scsetup(1M) utility is an interactive interface to the scconf(1M) command. When scsetup runs, it generates scconf commands. These generated commands are shown in the examples at the end of the procedures.

Two commands can be used to view the quorum configuration: scstat -q and scconf -p. Most of the verification steps in this chapter use scconf, but you can substitute scstat -q if you find its output more useful.

TABLE 4-1 Task List: Administering Quorum

Task - For Instructions, Go To:
- Add a quorum device to a cluster, using scsetup: "How to Add a Quorum Device" (75)
- Remove a quorum device from a cluster, using scsetup (to generate scconf): "4.1.1 How to Remove a Quorum Device" (76)
- Remove the last quorum device from a cluster, using scsetup (to generate scconf): "4.1.2 How to Remove the Last Quorum Device From a Cluster" (78)
- Replace a quorum device in a cluster, using the add and remove procedures: "How to Replace a Quorum Device" (80)
- Put a quorum device into maintenance state, using scsetup (to generate scconf) (while in maintenance state, the quorum device does not participate in voting to establish quorum): "How to Put a Quorum Device Into Maintenance State" (80)
TABLE 4-1 Task List: Administering Quorum (Continued)

- Put a cluster node into maintenance state, using scsetup (to generate scconf) (while in maintenance state, the node does not participate in voting to establish quorum): "How to Put a Node Into Maintenance State" (81)
- Reset the quorum vote count to the default after a device or node has been in maintenance state, using scsetup (to generate scconf): "How to Bring a Quorum Device or Node Out of Maintenance State" (83)
- List the quorum devices and vote counts, using scconf: "How to List the Quorum Configuration" (85)

How to Add a Quorum Device

To complete this procedure, identify a disk drive by its device ID (DID), which is shared by the nodes. Use the scdidadm -L command to see the list of DID names. Refer to the scdidadm(1M) man page for additional information.

See the scsetup(1M) and scconf(1M) man pages for information on the commands used in this procedure.

1. Become superuser on any node of the cluster.

2. Execute the scsetup utility.

   # scsetup

   The Main Menu is displayed.

3. Type 1 (Quorum) to access the Quorum Menu.

4. Type 1 (Add a quorum disk).

   Follow the instructions, typing the name of the device to be used as the quorum device.
5. Verify that the quorum device has been added.

   # scstat -q

6. Repeat Step 3 (75) through Step 5 (76) for each group of nodes that shares a storage enclosure.

4.1.0.1 Example: Adding a Quorum Device

The following example shows the scconf command generated by scsetup when it adds a quorum device, and a verification step.

# scconf -a -q globaldev=d20
# scstat -q
-- Quorum Summary --
  Quorum votes possible:   4
  Quorum votes needed:     3
  Quorum votes present:    4

-- Quorum Votes by Node --
              Node Name       Present  Possible  Status
              ---------       -------  --------  ------
Node votes:   phys-schost-1   1        1         Online
Node votes:   phys-schost-2   1        1         Online

-- Quorum Votes by Device --
                Device Name          Present  Possible  Status  Owner
                -----------          -------  --------  ------  -----
Device votes:   /dev/did/rdsk/d3s2   1        1         Online  phys-schost-1
Device votes:   /dev/did/rdsk/d4s2   1        1         Online  phys-schost-1

4.1.1 How to Remove a Quorum Device

When a quorum disk is removed, it no longer participates in the voting to establish quorum. Note that all two-node clusters require that at least one quorum device be configured.

Note: If the device you want to remove is the last quorum device in the cluster, use the procedure "4.1.2 How to Remove the Last Quorum Device From a Cluster" (78) instead.

1. Become superuser on any node in the cluster.
2. Determine the quorum device to be removed.

   # scconf -pv | grep Quorum

3. Execute the scsetup(1M) utility.

   # scsetup

   The Main Menu is displayed.

4. Type 1 (Quorum) to access the Quorum Menu.

5. Type 2 (Remove a quorum disk).

   Answer the questions displayed during the removal process.

6. Quit scsetup.

7. Verify that the quorum device is removed.

   # scstat -q

4.1.1.1 Example: Removing a Quorum Device

This example shows how to remove a quorum device from a cluster with two or more quorum devices configured.

[Become superuser on a node and place the node to be removed in maintenance state.]
[Determine the quorum device to be removed:]
# scconf -pv | grep Quorum
[Execute the scsetup utility:]
# scsetup
[Select option 1:]
*** Main Menu ***
Please select from one of the following options:
  1) Quorum
Option: 1
[Select option 2:]
*** Quorum Menu ***
Please select from one of the following options:
  2) Remove a quorum disk
Option: 2
[Answer the questions when prompted:]
>>> Remove a Quorum Disk <<<
Is it okay to continue (yes/no) [yes]? <Return>
Which quorum disk do you want to remove (d<n>)? d4
Is it okay to proceed with the update (yes/no) [yes]? <Return>

scconf -r -q globaldev=d4

Command completed successfully.
Hit ENTER to continue:

[Quit the scsetup Quorum Menu and Main Menu:]
Option: q

[Verify the quorum device is removed:]
# scstat -q
-- Quorum Summary --
  Quorum votes possible:   3
  Quorum votes needed:     2
  Quorum votes present:    3

-- Quorum Votes by Node --
              Node Name       Present  Possible  Status
              ---------       -------  --------  ------
Node votes:   phys-schost-1   1        1         Online
Node votes:   phys-schost-2   1        1         Online

-- Quorum Votes by Device --
                Device Name          Present  Possible  Status  Owner
                -----------          -------  --------  ------  -----
Device votes:   /dev/did/rdsk/d3s2   1        1         Online  phys-schost-1

4.1.2 How to Remove the Last Quorum Device From a Cluster

If the device you intend to remove is not the last quorum device in the cluster, use the previous procedure, "4.1.1 How to Remove a Quorum Device" (76).

Note: All two-node clusters require at least one configured quorum device. If this is the last quorum device on a two-node cluster, the cluster must be placed into install mode before scconf(1M) allows you to remove the device from the configuration. This should only be done if a node is being removed from the cluster.

1. Become superuser on any node of the cluster and place the node to be removed in maintenance state.
   See "How to Put a Node Into Maintenance State" (81).

2. Place the cluster in install mode.

   # scconf -c -q installmode

3. Remove the quorum device using the scconf command.

   # scconf -r -q globaldev=device

4. Verify that the quorum device has been removed.

   # scstat -q

4.1.2.1 Example: Removing the Last Quorum Device

This example shows how to remove the last remaining quorum device from a cluster configuration.

[Become superuser on any node.]
[Place the cluster in install mode:]
# scconf -c -q installmode
[Remove the quorum device:]
# scconf -r -q globaldev=d3
[Verify the quorum device has been removed:]
# scstat -q
-- Quorum Summary --
  Quorum votes possible:   2
  Quorum votes needed:     2
  Quorum votes present:    2

-- Quorum Votes by Node --
              Node Name       Present  Possible  Status
              ---------       -------  --------  ------
Node votes:   phys-schost-1   1        1         Online
Node votes:   phys-schost-2   1        1         Online

-- Quorum Votes by Device --
                Device Name   Present  Possible  Status  Owner
                -----------   -------  --------  ------  -----
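The scstat -q verification used throughout these quorum procedures can be reduced to a pass/fail check by comparing the "needed" and "present" counts in the quorum summary. The function below is a sketch, not a Sun Cluster tool; its name and its reliance on the summary line layout shown above are assumptions.

```shell
# Hypothetical check: read `scstat -q` output on stdin; succeed only
# when the votes present are at least the votes needed for quorum.
quorum_ok() {
    awk '/Quorum votes needed:/  { needed  = $4 }
         /Quorum votes present:/ { present = $4 }
         END { exit !(present + 0 >= needed + 0) }'
}
```

For example, `scstat -q | quorum_ok || echo "quorum degraded"` would flag a cluster whose present votes have dropped below the needed count.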
How to Replace a Quorum Device

1. Configure a new quorum device on the storage enclosure that currently includes the disk to be replaced.

   First add the new quorum device to the configuration to take the place of the old one. See "How to Add a Quorum Device" (75).

2. Remove the disk being replaced as a quorum device.

   See "4.1.1 How to Remove a Quorum Device" (76) to remove the old quorum device from the configuration.

3. Replace the failed disk.

   Refer to the Sun Cluster 3.0 Hardware Guide for the hardware procedures.

How to Put a Quorum Device Into Maintenance State

Put a quorum device into maintenance state when taking it out of service for an extended period of time. The disk's quorum vote count is set to 0, so the device does not contribute to the quorum count while it is being serviced.

Use the scconf(1M) command to put a quorum device into maintenance state. The scsetup utility does not currently have this capability.

Note: All two-node clusters require at least one configured quorum device. If this is the last quorum device on a two-node cluster, scconf will fail to put the device into maintenance state.

1. Become superuser on any node of the cluster.

2. Put the quorum device into maintenance state.

   # scconf -c -q globaldev=device,maintstate

   -c                 Specifies the change form of the scconf command.
   -q                 Manages the quorum options.
   globaldev=device   Specifies the DID name of the disk device to change, for example, d4.
   maintstate         Puts the shared quorum device into maintenance state.
3. Verify that the quorum device is now in maintenance state.

   The quorum device vote count should be 0.

   # scconf -p | grep -i quorum

Example: Putting a Quorum Device Into Maintenance State

The following example shows how to put a quorum device into maintenance state and how to verify the results.

# scconf -c -q globaldev=d20,maintstate
# scconf -p | grep -i quorum
  Node quorum vote count:            1
  Node quorum vote count:            1
  Quorum devices:                    d20
  Quorum device name:                d20
    Quorum device votes:             0
    Quorum device enabled:           no
    Quorum device path:              /dev/did/rdsk/d20s2
    Quorum device hosts (enabled):   phys-schost-2 phys-schost-3
    Quorum device hosts (disabled):

To re-enable the quorum device, see "How to Bring a Quorum Device or Node Out of Maintenance State" (83).

How to Put a Node Into Maintenance State

Put a cluster node into maintenance state when taking the node out of service for an extended period of time. While the node is in maintenance state, its quorum vote count is set to 0 and it does not contribute to the quorum count. The vote count remains 0 until the node is brought back out of maintenance state.

Use the scswitch(1M) and shutdown(1M) commands to put a cluster node into maintenance state.

Note: Use the Solaris shutdown command to shut down a single node. Use the scshutdown command only when shutting down an entire cluster.
Use the scconf(1M) command to put a cluster node into maintenance state. The scsetup utility does not currently have this capability.

1. Become superuser on the node to be put into maintenance state.

2. Evacuate the node, then shut it down using the scswitch and shutdown commands.

   phys-schost-1# scswitch -S -h node
   phys-schost-1# shutdown -g 0 -y

3. Become superuser on another, still-active node of the cluster and put the node shut down in Step 2 (82) into maintenance state.

   phys-schost-2# scconf -c -q node=node,maintstate

   -c          Specifies the change form of the scconf command.
   -q          Manages the quorum options.
   node=node   Specifies the name or node ID of the node to change.
   maintstate  Puts the node into maintenance state.

4. Verify that the node is now in maintenance state.

   phys-schost-2# scconf -p | grep -i vote

Example: Putting a Cluster Node Into Maintenance State

The following example shows how to put a cluster node into maintenance state and how to verify the results.

phys-schost-1# scswitch -S -h phys-schost-1
phys-schost-1# shutdown -g 0 -y
phys-schost-2# scconf -c -q node=phys-schost-1,maintstate
phys-schost-2# scconf -p | grep -i quorum
  Node quorum vote count:            1
  Quorum devices:                    d20
  Quorum device name:                d20
    Quorum device votes:             1
    Quorum device enabled:           yes
    Quorum device path:              /dev/did/rdsk/d5s2
    Quorum device hosts (disabled):  phys-schost-1
    Quorum device hosts (enabled):   phys-schost-2

How to Bring a Quorum Device or Node Out of Maintenance State

Use the following procedure to take a quorum device or cluster node out of maintenance state and reset its quorum vote count to the default.

For a cluster node, the default quorum vote count is one. For a quorum device, the default quorum vote count is N-1, where N is the number of nodes with nonzero vote counts that have ports to the quorum device. When a node comes out of maintenance state, the vote counts of any quorum devices ported to the node are also reset.

Note: Use the globaldev option to reset the vote count of a quorum device, and the node option to reset the vote count of a cluster node.

1. Become superuser on any node of the cluster.

2. Reset the quorum count.

   For a node, reset the quorum count as shown in the node example (84).
   # scconf -c -q globaldev=device,reset

   -c                 Specifies the change form of the scconf command.
   -q                 Manages the quorum options.
   globaldev=device   Specifies the DID name of the quorum device to reset, for example, d4.
   reset              Resets quorum; clears the maintenance state.

3. If you are resetting the quorum count because a node was in maintenance state, reboot the node.

4. Verify the quorum vote count.

   # scconf -p | grep -i quorum

Example: Resetting the Quorum Vote Count (Quorum Device)

The following example resets the quorum count for a quorum device back to the default and verifies the result.

# scconf -c -q globaldev=d20,reset
# scconf -p | grep -i quorum
  Node quorum vote count:            1
  Node quorum vote count:            1
  Quorum devices:                    d20
  Quorum device name:                d20
    Quorum device votes:             1
    Quorum device enabled:           yes
    Quorum device path:              /dev/did/rdsk/d20s2
    Quorum device hosts (enabled):   phys-schost-2 phys-schost-3
    Quorum device hosts (disabled):

Example: Resetting the Quorum Vote Count (Cluster Node)
The following example resets the quorum count for a cluster node back to the default and verifies the result.

# scconf -c -q node=phys-schost-1,reset
# scconf -pv | grep -i vote
  Node quorum vote count:        1
  Node quorum vote count:        1
  Node quorum vote count:        1
  (d20) Quorum device votes:     1
  (d21) Quorum device votes:     1

How to List the Quorum Configuration

You do not need to be superuser to list the quorum configuration. Use the scconf(1M) command.

# scconf -p | grep -i quorum

Example: Listing the Quorum Configuration

# scconf -p | egrep "Quorum|vote"
  Node quorum vote count:            1
  Node quorum vote count:            1
  Quorum devices:                    d20
  Quorum device name:                d20
    Quorum device votes:             1
    Quorum device enabled:           yes
    Quorum device path:              /dev/did/rdsk/d20s2
    Quorum device hosts (enabled):   phys-schost-2 phys-schost-3
    Quorum device hosts (disabled):
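When auditing a listing like the one above, it can help to total the configured votes; the figure should match the "Quorum votes possible" count that scstat -q reports. The helper below is a sketch with a made-up name; it assumes the "vote count:" and "device votes:" line layout shown in the listing.

```shell
# Hypothetical summary: add up node vote counts and quorum device
# votes from `scconf -p` output (stdin) and print the total.
total_votes() {
    awk -F: '/vote count:/ || /device votes:/ { total += $2 }
             END { print total + 0 }'
}
```

For example, `scconf -p | total_votes` prints a single number that can be compared against the possible-votes figure from scstat -q.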
5  Administering Cluster Interconnects and Public Networks

This chapter provides the software procedures for administering the Sun Cluster interconnects and public networks, including NAFO groups. Administering the cluster interconnects and public networks involves both hardware and software procedures. Typically, you configure the cluster interconnects and public networks when you first install and configure the cluster; if you later need to change the configuration, use the software procedures in this chapter.

This is a list of the procedures in this chapter.

- "How to Check the Status of the Cluster Interconnect" (89)
- "How to Add Cluster Transport Cables and Transport Adapters" (90)
- "How to Remove Cluster Transport Cables and Transport Adapters" (92)
- "How to Enable a Cluster Transport Cable" (96)
- "How to Disable a Cluster Transport Cable" (97)
- "How to Create a NAFO Group" (100)
- "How to Delete a NAFO Group" (102)
- "How to Add an Adapter to a NAFO Group" (103)
- "How to Remove an Adapter From a NAFO Group" (104)
- "How to Switch a Group's Active Adapter" (105)
- "How to Check the Status of NAFO Groups" (106)
- "How to Change Public Network Management Tunable Parameters" (107)

For a high-level description of the related tasks, see TABLE 5-1 and TABLE 5-2.
For conceptual and overview information on the cluster interconnects and public networks, see the Sun Cluster 3.0 Concepts document.

5.1 Administering the Cluster Interconnects

This section provides the software procedures for reconfiguring cluster transport adapters and cluster transport cables. Sun Cluster requires that the physical installation be complete before the software procedures are performed. Whenever possible, use the scsetup utility; see the scsetup(1M) man page for additional information.

For cluster software installation procedures, see the Sun Cluster 3.0 Installation Guide. For procedures about servicing cluster hardware components, see the Sun Cluster 3.0 Hardware Guide.

TABLE 5-1 Task List: Administering the Cluster Interconnect

Task - For Instructions, Go To:
- Check the status of the cluster transport paths, using scstat: "How to Check the Status of the Cluster Interconnect" (89)
- Add a cluster transport cable or transport adapter, using scsetup: "How to Add Cluster Transport Cables and Transport Adapters" (90)
- Remove a cluster transport cable or transport adapter, using scsetup: "How to Remove Cluster Transport Cables and Transport Adapters" (92)
TABLE 5-1 Task List: Administering the Cluster Interconnect (Continued)

- Enable a cluster transport cable, using scsetup: "How to Enable a Cluster Transport Cable" (96)
- Disable a cluster transport cable, using scsetup: "How to Disable a Cluster Transport Cable" (97)

How to Check the Status of the Cluster Interconnect

You do not need to be logged in as superuser to perform this procedure.

1. Check the status of the cluster interconnect.

   # scstat -W

2. Check the status displayed for each transport path. Common status values are:

   Path online   The path is currently functioning correctly.
   waiting       The path is currently being initialized.
   faulted       The path is not functioning.

5.1.0.1 Example: Checking the Status of the Cluster Interconnect

The following example shows the status of a functioning cluster interconnect.

# scstat -W
-- Cluster Transport Paths --
                  Endpoint             Endpoint             Status
                  --------             --------             ------
Transport path:   phys-schost-1:qfe1   phys-schost-2:qfe1   Path online
Transport path:   phys-schost-1:qfe0   phys-schost-2:qfe0   Path online
Transport path:   phys-schost-1:qfe1   phys-schost-3:qfe1   Path online
Transport path:   phys-schost-1:qfe0   phys-schost-3:qfe0   Path online
Transport path:   phys-schost-2:qfe1   phys-schost-3:qfe1   Path online
Transport path:   phys-schost-2:qfe0   phys-schost-3:qfe0   Path online

How to Add Cluster Transport Cables and Transport Adapters

1. Ensure that the physical cluster transport cables are installed.

   For the procedure on installing a cluster transport cable, see the Sun Cluster 3.0 Hardware Guide.

2. Become superuser on any node in the cluster.

3. Execute the scsetup utility.

   # scsetup

   The Main Menu is displayed.

4. Type 2 (Cluster interconnect) to access the Cluster Interconnect Menu.

5. Type 1 (Add a transport cable) to add the cable.

   Follow the instructions and type the requested information.

   >>> Add a Cluster Transport Cable <<<
   Is it okay to continue (yes/no) [yes]? <Return>
   To which node do you want to add the cable? nodename
   Name of the adapter to use on node? adaptername
   Name of the junction at the other end of the cable? junctionname
   Okay to use the default for this cable connection [yes]? <Return>
   Is it okay to proceed with the update (yes/no) [yes]? <Return>

6. Type 2 (Add a transport adapter to a node) to add the adapter.
   Follow the instructions and type the requested information.

   >>> Add a Cluster Transport Adapter <<<
   Is it okay to continue (yes/no) [yes]? <Return>
   To which node do you want to add the cable? nodename
   Name of the adapter to use on node? adaptername
   Name of the junction at the other end of the cable? junctionname
   Okay to use the default for this cable connection [yes]? <Return>
   Is it okay to proceed with the update (yes/no) [yes]? <Return>

7. Verify that the cluster transport cable or adapter is added.

   # scconf -p | grep cable
   # scconf -p | grep adapter

Example: Adding a Cluster Transport Cable and Transport Adapter

The following example shows how to add a transport cable or transport adapter to a node using the scsetup utility.

[Ensure the physical cable is installed.]
[Become superuser on a node in the cluster.]
[Execute the scsetup utility:]
# scsetup
[Enter option 2:]
*** Main Menu ***
Please select from one of the following options:
  2) Cluster interconnect
Option: 2
[Enter option 1 to add a cable:]
*** Cluster Interconnect Menu ***
Please select from one of the following options:
  1) Add a transport cable
Option: 1
[Enter the requested information:]
>>> Add a Cluster Transport Cable <<<
Is it okay to continue (yes/no) [yes]? <Return>
To which node is the cable attached? phys-schost-2
Name of the adapter on phys-schost-2? qfe-0
Is it okay to proceed with the update (yes/no) [yes]? <Return>

scconf -a -m endpoint=phys-schost-2:hme1,endpoint=hub0

Command completed successfully.
Hit ENTER to continue: <Return>
[Enter option 2 to add an adapter:]
*** Cluster Interconnect Menu ***
Please select from one of the following options:
  2) Add a transport adapter to a node
Option: 2
[Enter the requested information:]
>>> Add a Cluster Transport Adapter <<<
Is it okay to continue (yes/no) [yes]? <Return>
To which node is the cable attached? phys-schost-2
Name of the adapter on phys-schost-2? qfe-0
Is it okay to proceed with the update (yes/no) [yes]? <Return>

scconf -a -m endpoint=phys-schost-2:hme1,endpoint=hub0

Command completed successfully.
Hit ENTER to continue: <Return>
[Verify that the cable and adapter are added:]
# scconf -p | grep cable
  Transport cable:   phys-schost-2:qfe0@1   ethernet-1@2   Enabled
  Transport cable:   phys-schost-3:qfe0@1   ethernet-1@3   Enabled
  Transport cable:   phys-schost-1:qfe0@0   ethernet-1@1   Enabled
# scconf -p | grep adapter
  Node transport adapters:   qfe2 hme1 qfe0
  Node transport adapter:    qfe0
  Node transport adapters:   qfe0 qfe2 hme1
  Node transport adapter:    qfe0
  Node transport adapters:   qfe0 qfe2 hme1
  Node transport adapter:    qfe0
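The interconnect status checks in this section look for transport paths whose status is not "Path online". That scan can be scripted against scstat -W output. The function below is a sketch with a made-up name; it assumes the "Transport path:" line layout shown in the status example above.

```shell
# Hypothetical check: scan `scstat -W` output (stdin), print every
# transport path whose status is not "Path online", and exit nonzero
# when any such path is found.
faulted_paths() {
    awk '/Transport path:/ && $NF != "online" { print; bad = 1 }
         END { exit bad + 0 }'
}
```

For example, `scstat -W | faulted_paths` prints only the paths that need attention, and its exit status can gate a removal or disable procedure.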
How to Remove Cluster Transport Cables and Transport Adapters

Use the following procedure to remove cluster transport cables and cluster transport adapters from a node configuration.

Note: A node must have at least one functioning transport path to the cluster interconnect at all times. Do not remove a cable or adapter that supports the last remaining path; a node cannot remain a cluster member without a working cluster interconnect connection.

1. Become superuser on any node in the cluster.

2. Check the status of the remaining cluster transport paths.

   # scstat -W

   Caution: If you receive an error (such as a faulted path status) while attempting to remove one part of a two-node cluster, investigate the problem before continuing with this procedure. The problem could indicate that a node path is already unavailable.

3. Execute the scsetup utility.

   # scsetup

   The Main Menu is displayed.

4. Type 2 (Cluster interconnect) to access the Cluster Interconnect Menu.

5. Type 4 (Remove a transport cable).

   Follow the instructions and type the requested information. You will need to know the applicable node names, adapter names, and junction names.

   *** Cluster Interconnect Menu ***
   Please select from one of the following options:
     4) Remove a transport cable
   Option: 4

   >>> Remove a Cluster Transport Cable <<<
   Is it okay to continue (yes/no) [yes]? <Return>
   To which node is the cable attached? nodename
   Name of the adapter on node? adaptername
   Is it okay to proceed with the update (yes/no) [yes]? <Return>

   Command completed successfully.
   Hit ENTER to continue: <Return>

   Note: If you are removing a physical cable, disconnect the cable between the port and the destination device.

6. Type 5 (Remove a transport adapter) to remove the adapter.

   Follow the instructions and type the requested information.

   *** Cluster Interconnect Menu ***
   Please select from one of the following options:
     5) Remove a transport adapter
   Option: 5

   >>> Remove a Cluster Transport Adapter <<<
   Is it okay to continue (yes/no) [yes]? <Return>
   To which node is the cable attached? nodename
   Name of the adapter on node? adaptername
   Is it okay to proceed with the update (yes/no) [yes]? <Return>

   Command completed successfully.
   Hit ENTER to continue: <Return>

   Note: If you are removing a physical adapter from a node, see the Sun Cluster 3.0 Hardware Guide for hardware service procedures.

7. Verify that the cable or adapter has been removed.
   # scconf -p | grep cable
   # scconf -p | grep adapter

   A removed transport cable or adapter should no longer appear in the output of these commands.

Example: Removing a Transport Cable and Transport Adapter

The following example shows how to remove a transport cable or transport adapter using the scsetup utility.

[Become superuser on a node in the cluster.]
[Execute the scsetup utility:]
# scsetup
[Enter option 2:]
*** Main Menu ***
Please select from one of the following options:
  2) Cluster interconnect
Option: 2
[Enter option 4 to remove a cable:]
*** Cluster Interconnect Menu ***
Please select from one of the following options:
  4) Remove a transport cable
Option: 4
[Enter the requested information:]
>>> Remove a Cluster Transport Cable <<<
Is it okay to continue (yes/no) [yes]? <Return>
To which node is the cable attached? phys-schost-2
Name of the adapter on phys-schost-2? qfe-0
Is it okay to proceed with the update (yes/no) [yes]? <Return>

scconf -r -m endpoint=phys-schost-2:hme1

Command completed successfully.
Hit ENTER to continue: <Return>
[Enter option 5 to remove an adapter:]
*** Cluster Interconnect Menu ***
Please select from one of the following options:
  5) Remove a transport adapter from a node
Option: 5
[Enter the requested information:]
>>> Remove a Cluster Transport Adapter <<<
Is it okay to continue (yes/no) [yes]? <Return>
To which node is the cable attached? phys-schost-2
Name of the adapter on phys-schost-2? qfe-0
Is it okay to proceed with the update (yes/no) [yes]? <Return>

scconf -r -m endpoint=phys-schost-2:hme1

Command completed successfully.
Hit ENTER to continue: <Return>
[Verify that the cable or adapter is removed:]
# scconf -p | grep cable
  Transport cable:   phys-schost-2:qfe0@1   ethernet-1@2   Enabled
  Transport cable:   phys-schost-3:qfe0@1   ethernet-1@3   Enabled
  Transport cable:   phys-schost-1:qfe0@0   ethernet-1@1   Enabled
# scconf -p | grep adapter
  Node transport adapters:   qfe2 hme1 qfe0
  Node transport adapter:    qfe0
  Node transport adapters:   qfe0 qfe2 hme1
  Node transport adapter:    qfe0
  Node transport adapters:   qfe0 qfe2 hme1
  Node transport adapter:    qfe0

How to Enable a Cluster Transport Cable

This option is used to enable an already existing cluster transport cable.

1. Become superuser on any node in the cluster.

2. Execute the scsetup utility.

   # scsetup

   The Main Menu is displayed.

3. Type 2 (Cluster interconnect) to access the Cluster Interconnect Menu.

4. Type 7 (Enable a transport cable).

   Follow the instructions when prompted. You will need to provide the node and adapter names of one of the endpoints of the cable you want to enable.

5. Verify that the cable is enabled.

   # scconf -p | grep cable
Example: Enabling a Cluster Transport Cable

This example shows how to enable a cluster transport cable on adapter qfe-1, located on the node phys-schost-2.

[Become superuser on any node.]
[Enter the scsetup utility:]
# scsetup
[Enter option 7:]
*** Cluster Interconnect Menu ***
Please select from one of the following options:
  7) Enable a transport cable
Option: 7
[Enter the requested information:]
>>> Enable a Cluster Transport Cable <<<
Is it okay to continue (yes/no) [yes]? <Return>
To which node is the cable attached? phys-schost-2
Name of the adapter on phys-schost-2? qfe-1
Is it okay to proceed with the update (yes/no) [yes]? <Return>

scconf -c -m endpoint=phys-schost-2:qfe1,state=enabled

Command completed successfully.
Hit ENTER to continue: <Return>
[Verify that the cable is enabled:]
# scconf -p | grep cable
  Transport cable:   phys-schost-2:qfe1@0   ethernet-1@2   Enabled
  Transport cable:   phys-schost-3:qfe0@1   ethernet-1@3   Enabled
  Transport cable:   phys-schost-1:qfe0@0   ethernet-1@1   Enabled

How to Disable a Cluster Transport Cable

You may need to disable a cluster transport cable to temporarily shut down a cluster interconnect path. This is useful when troubleshooting a cluster interconnect problem or when replacing cluster interconnect hardware.

Note: A node must have at least one functioning transport path to the cluster interconnect at all times. Do not disable a cable that supports the last remaining path; a node cannot remain a cluster member without a working cluster interconnect connection.
1. Become superuser on any node in the cluster.

2. Check the status of the cluster interconnect.

   # scstat -W

   Caution: If you receive an error (such as a faulted path status) while attempting to disable part of a two-node cluster, investigate the problem before continuing with this procedure. The problem could indicate that a node path is already unavailable.

3. Execute the scsetup utility.

   # scsetup

   The Main Menu is displayed.

4. Type 2 (Cluster interconnect) to access the Cluster Interconnect Menu.

5. Type 8 (Disable a transport cable).

   Follow the instructions and type the requested information. All of the components on this cluster interconnect path will be disabled.

6. Verify that the cable is disabled.

   # scconf -p | grep cable

Example: Disabling a Cluster Transport Cable

This example shows how to disable a cluster transport cable on adapter qfe-1, located on the node phys-schost-2.

[Become superuser on any node.]
[Enter the scsetup utility:]
# scsetup
[Enter option 8:]
*** Cluster Interconnect Menu ***
Please select from one of the following options:
  8) Disable a transport cable
Option: 8
[Enter the requested information:]
>>> Disable a Cluster Transport Cable <<<
Is it okay to continue (yes/no) [yes]? <Return>
To which node is the cable attached? phys-schost-2
Name of the adapter on phys-schost-2? qfe-1
Is it okay to proceed with the update (yes/no) [yes]? <Return>

scconf -c -m endpoint=phys-schost-2:qfe1,state=disabled

Command completed successfully.
Hit ENTER to continue: <Return>
[Verify that the cable is disabled:]
# scconf -p | grep cable
  Transport cable:   phys-schost-2:qfe1@0   ethernet-1@2   Disabled
  Transport cable:   phys-schost-3:qfe0@1   ethernet-1@3   Enabled
  Transport cable:   phys-schost-1:qfe0@0   ethernet-1@1   Enabled

5.2 Administering the Public Network

If a public network adapter fails, Public Network Management (PNM) fails the node's network connection over to a backup adapter in the same Network Adapter Failover (NAFO) group. Keep the following points in mind when administering public network adapters:

- If an active adapter is to be taken out of service, first switch the NAFO group's active adapter to another adapter in the group; see "How to Switch a Group's Active Adapter" (105).
- Each public network adapter must belong to a NAFO group in order to be monitored.
- A node can have multiple NAFO groups, each serving a different subnet.
- Only one adapter in a NAFO group is active at any given time.
For cluster software installation procedures, see the Sun Cluster 3.0 Installation Guide. For procedures about servicing public network hardware components, see the Sun Cluster 3.0 Hardware Guide.

TABLE 5-2 Task List: Administering the Public Network

Task - For Instructions, Go To:
- Add public network adapters to a node by creating a NAFO group: "How to Create a NAFO Group" (100)
- Remove public network adapters from a node by deleting a NAFO group: "How to Delete a NAFO Group" (102)
- Add an adapter to an existing NAFO group: "How to Add an Adapter to a NAFO Group" (103)
- Remove a backup adapter from an existing NAFO group: "How to Remove an Adapter From a NAFO Group" (104)
- Switch a NAFO group's active adapter to a backup adapter: "How to Switch a Group's Active Adapter" (105)
- Check the status of NAFO groups: "How to Check the Status of NAFO Groups" (106)
- Tune PNM fault detection and failover parameters: "How to Change Public Network Management Tunable Parameters" (107)

How to Create a NAFO Group

1. Note the following requirements for creating a NAFO group:

   - All public network adapters must be configured to belong to a NAFO group.
   - A node can have at most one NAFO group on any given subnet.
   - All adapters in a given NAFO group must be connected to the same subnet.
   - Only one adapter in a given NAFO group can have a hostname association, that is, an /etc/hostname.adapter file.
   - An adapter can belong to only one NAFO group.

2. Become superuser on the node being configured for a NAFO group.

3. Find out which public network adapters on the node will make up the NAFO group.

4. Check whether one of the prospective adapters already has an /etc/hostname.adapter file.

   If no adapter in the group has such a file (see the requirements in Step 1 (101)), create one and, in it, specify the hostname or IP address for the adapter. For example:

   # vi /etc/hostname.hme0
   phys-schost-1

5. Verify that the /etc/inet/hosts file contains the IP address and hostname assigned to the public network adapter.

   For example, the following shows the IP address 192.29.75.101 and the hostname phys-schost-1 configured in the /etc/inet/hosts file.

   # vi /etc/inet/hosts
   192.29.75.101 phys-schost-1

   Note: If a naming service is used, this information should also exist in the naming service database.

6. Create the NAFO group.

   # pnmset -c nafo-group -o create adapter [adapter ...]

   -c nafo-group   Performs a configuration subcommand for the specified NAFO group. NAFO groups must be named nafoN, where N is a nonnegative integer identifier for the group.
                   Group names are local to a node; the same NAFO group name can be used on different nodes.

   -o create       Creates the new NAFO group.

   adapter [adapter ...]   Specifies the public network adapter(s) that make up the group; see Step 3 (101).

   Note: If an adapter is already configured as the active adapter, pnmset leaves its state unchanged. Otherwise, pnmset configures and plumbs one of the NAFO group's adapters with the IP address found in that adapter's /etc/hostname.adapter file.

7. Verify the status of the NAFO group.

   # pnmstat -l

5.2.0.1 Example: Creating a NAFO Group

The following example shows the creation of a NAFO group (nafo0) configured with two network adapters (qfe0 and qfe1).

# pnmstat -l
# pnmset -c nafo0 -o create qfe0 qfe1
# pnmstat -l
group  adapters   status  fo_time  act_adp
nafo0  qfe0:qfe1  OK      NEVER    qfe0

How to Delete a NAFO Group

Delete a NAFO group when you no longer want monitoring and failover for any adapter in the group. To be deleted, the group must not be in use by any logical host resources or shared address resources.

1. Become superuser on the node containing the NAFO group to be deleted.

2. Determine whether the NAFO group is in use by any logical host or shared address resources.

   # scrgadm -pv

   You can also use scrgadm -pvv (with two v flags), which shows which resources use the NAFO group you intend to delete.
3. Switch to another node the resource groups of the logical host and shared address resources that use the NAFO group.

   # scswitch -z -g resource-group -h node

   -z -g resource-group   Switches the specified resource group.
   -h node                Specifies the name of the node to which the resource group is switched.

4. Delete the NAFO group.

   # pnmset -c nafo-group -o delete

   -c nafo-group   Specifies the NAFO group to delete.
   -o delete       Deletes the NAFO group.

5. Verify the status of the NAFO group.

   The deleted NAFO group should not appear in the listing.

   # pnmstat -l

Example: Deleting a NAFO Group

The following example deletes the NAFO group nafo1 from the node. The resource group lh-rg-1, which uses the NAFO group, is first switched to another node.

# scswitch -z -g lh-rg-1 -h phys-schost-2
# pnmstat -l
group  adapters   status  fo_time  act_adp
nafo0  qfe0:qfe1  OK      NEVER    qfe0
nafo1  qfe2       OK      NEVER    qfe2
# pnmset -c nafo1 -o delete
# pnmstat -l
group  adapters   status  fo_time  act_adp
nafo0  qfe0:qfe1  OK      NEVER    qfe0

How to Add an Adapter to a NAFO Group

Add an adapter to an existing NAFO group to give the group an additional backup adapter, increasing the availability of public network connectivity for the node.

1.