An implementation log from my early days, re-typeset:

Base environment

The plan back then was to build an Oracle 10g RAC:

1. Hosts: two IBM P740 servers;

2. Storage: an IBM V7000 array;

3. A GPFS cluster file system, built on the shared storage, served as Oracle's storage layer.

Four LUNs were allocated, hdisk1 through hdisk4 (hdisk0, a RAID 5 array, held the OS). Once host-side multipathing was configured and both hosts recognized the disks, we started on the steps below.

Setup log

Create the configuration directory

mkdir -p /etc/gpfs
cd /etc/gpfs/
touch nodeslist

Edit the node file (one node per line, in NodeName:Designations format; both nodes here are designated manager and quorum nodes)

bash-3.2# cat nodeslist
p740-gpfs1:manager-quorum
p740-gpfs2:manager-quorum

Add the GPFS command directory (/usr/lpp/mmfs/bin) to PATH

bash-3.2# echo $PATH
/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java5/jre/bin:/usr/java5/bin:/usr/lpp/mmfs/bin:/u01/app/oracle/crs/bin
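The PATH above already contains /usr/lpp/mmfs/bin. On a fresh node the directory has to be added by hand; a minimal sketch (the guard and the suggestion to persist it in ~/.profile are my own convention, not from the original log):

```shell
# Append the GPFS command directory to PATH only if it is not
# already there; put the same lines in ~/.profile to persist it.
case "$PATH" in
    *"/usr/lpp/mmfs/bin"*) ;;            # already present, do nothing
    *) PATH="$PATH:/usr/lpp/mmfs/bin"
       export PATH ;;
esac
echo "$PATH"
```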

Change the disk attributes: reserve_policy=no_reserve disables SCSI reservations so that both hosts can open the shared LUNs concurrently

chdev -l hdisk1 -a reserve_policy=no_reserve
chdev -l hdisk2 -a reserve_policy=no_reserve
chdev -l hdisk3 -a reserve_policy=no_reserve
chdev -l hdisk4 -a reserve_policy=no_reserve
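The four chdev calls can be driven by a loop. The sketch below only echoes each command so the list can be reviewed first; removing the echo would execute them (AIX only, and only on disks you are sure are the shared LUNs):

```shell
# Preview the reserve_policy change for every shared LUN;
# drop the leading "echo" to actually run chdev (AIX).
for disk in hdisk1 hdisk2 hdisk3 hdisk4; do
    echo chdev -l "$disk" -a reserve_policy=no_reserve
done
```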

Command help

bash-3.2# mmcrcluster -h
Usage:
mmcrcluster -N {NodeDesc[,NodeDesc...] | NodeFile}
[--ccr-enable | {--ccr-disable -p PrimaryServer [-s SecondaryServer]}]
[-r RemoteShellCommand] [-R RemoteFileCopyCommand]
[-C ClusterName] [-U DomainName] [-A]
[-c ConfigFile | --profile ProfileName]

Create the GPFS cluster

mmcrcluster -C jljjgpfs -N /etc/gpfs/nodeslist -p p740-gpfs1 -s p740-gpfs2 -r /usr/bin/ssh -R /usr/bin/scp
mmcrcluster: Performing preliminary node verification ...
mmcrcluster: Processing quorum and other critical nodes ...
mmcrcluster: Finalizing the cluster data structures ...
mmcrcluster: Command successfully completed
mmcrcluster: 6027-1254 Warning: Not all nodes have proper GPFS license designations.
Use the mmchlicense command to designate licenses as needed.
mmcrcluster: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
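mmcrcluster was pointed at /usr/bin/ssh and /usr/bin/scp, which requires passwordless root ssh between the nodes in both directions. A preview-style sketch of the check (node names taken from nodeslist; the echo keeps it from needing the real hosts, so remove it to run the checks for real from each node):

```shell
# Each node should reach the other non-interactively;
# BatchMode=yes makes ssh fail instead of prompting for a password.
for node in p740-gpfs1 p740-gpfs2; do
    echo ssh -o BatchMode=yes "$node" date
done
```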

Check the cluster status

bash-3.2# mmlscluster
===============================================================================
| Warning: |
| This cluster contains nodes that do not have a proper GPFS license |
| designation. This violates the terms of the GPFS licensing agreement. |
| Use the mmchlicense command and assign the appropriate GPFS licenses |
| to each of the nodes in the cluster. For more information about GPFS |
| license designation, see the Concepts, Planning, and Installation Guide. |
===============================================================================
GPFS cluster information
========================
GPFS cluster name: jljjgpfs.p740-gpfs1
GPFS cluster id: 2512513396710449389
GPFS UID domain: jljjgpfs.p740-gpfs1
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
Repository type: CCR
 Node  Daemon node name  IP address      Admin node name  Designation
------------------------------------------------------------------------
    1  p740-gpfs1        200.200.200.2   p740-gpfs1       quorum-manager
    2  p740-gpfs2        200.200.200.3   p740-gpfs2       quorum-manager

Assign and check licenses. We used the GPFS filesets bundled with DB2 for AIX, which can be used without a separate license

bash-3.2# mmchlicense server --accept -N p740-gpfs1,p740-gpfs2
The following nodes will be designated as possessing server licenses:
p740-gpfs1
p740-gpfs2
mmchlicense: Command successfully completed
mmchlicense: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
bash-3.2# mmlslicense -L
 Node name    Required license  Designated license
-----------------------------------------------------
 p740-gpfs1   server            server
 p740-gpfs2   server            server
Summary information
---------------------
Number of nodes defined in the cluster: 2
Number of nodes with server license designation: 2
Number of nodes with client license designation: 0
Number of nodes still requiring server license designation: 0
Number of nodes still requiring client license designation: 0
This node runs IBM Spectrum Scale Standard Edition

Check the cluster status again

bash-3.2# mmlscluster
GPFS cluster information
========================
GPFS cluster name: jljjgpfs.p740-gpfs1
GPFS cluster id: 2512513396710449389
GPFS UID domain: jljjgpfs.p740-gpfs1
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
Repository type: CCR
 Node  Daemon node name  IP address      Admin node name  Designation
------------------------------------------------------------------------
    1  p740-gpfs1        200.200.200.2   p740-gpfs1       quorum-manager
    2  p740-gpfs2        200.200.200.3   p740-gpfs2       quorum-manager

Create the disk descriptor file

bash-3.2# touch oradata_disklist
bash-3.2# cat oradata_disklist
hdisk1:::dataAndMetadata:1:oradatansd1
hdisk2:::dataAndMetadata:1:oradatansd2
hdisk3:::dataAndMetadata:1:oradatansd3
hdisk4:::dataAndMetadata:1:oradatansd4
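Each descriptor line is colon-separated: disk name, primary and backup NSD server (left empty here because every node sees the LUNs directly over the SAN), disk usage, failure group, and the desired NSD name. The file above can be generated with a short loop (redirect the output to /etc/gpfs/oradata_disklist to reproduce it):

```shell
# Generate the GPFS disk descriptors shown above:
# DiskName:Primary:Backup:DiskUsage:FailureGroup:DesiredName
i=1
for disk in hdisk1 hdisk2 hdisk3 hdisk4; do
    printf '%s:::dataAndMetadata:1:oradatansd%d\n' "$disk" "$i"
    i=$((i + 1))
done
```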

Create the NSDs

bash-3.2# mmcrnsd -F /etc/gpfs/oradata_disklist
mmcrnsd: Processing disk hdisk1
mmcrnsd: Processing disk hdisk2
mmcrnsd: Processing disk hdisk3
mmcrnsd: Processing disk hdisk4
mmcrnsd: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

List the NSDs

bash-3.2# mmlsnsd
 File system   Disk name     NSD servers
---------------------------------------------------------------------------
 (free disk)   oradatansd1   (directly attached)
 (free disk)   oradatansd2   (directly attached)
 (free disk)   oradatansd3   (directly attached)
 (free disk)   oradatansd4   (directly attached)

Start GPFS on every node in the cluster

bash-3.2# mmstartup -a
Sun Dec 18 13:57:35 CST 2016: 6027-1642 mmstartup: Starting GPFS ...
bash-3.2# mmgetstate -a
Node number Node name GPFS state
------------------------------------------
1 p740-gpfs1 arbitrating
2 p740-gpfs2 arbitrating
bash-3.2# mmgetstate -a
Node number Node name GPFS state
------------------------------------------
1 p740-gpfs1 active
2 p740-gpfs2 active
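The two mmgetstate calls above show the nodes passing through arbitrating before reaching active. A small polling helper can wait for that transition instead of re-running the command by hand; wait_for_active is my own sketch, with the status command passed in as a parameter so it can be exercised without a live cluster (on a real node it would be "mmgetstate -a"):

```shell
# Poll a status command until its output shows "active" and no
# longer shows "arbitrating", or give up after max_tries attempts.
wait_for_active() {
    cmd=$1
    max_tries=${2:-30}
    n=0
    while [ "$n" -lt "$max_tries" ]; do
        out=$($cmd)
        if echo "$out" | grep -q "active" &&
           ! echo "$out" | grep -q "arbitrating"; then
            return 0
        fi
        n=$((n + 1))
        sleep 2
    done
    return 1
}

# On a real cluster: wait_for_active "mmgetstate -a" 30
wait_for_active "echo active" 3 && echo "cluster is active"
```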

Check the help for the file system creation command

bash-3.2# mmcrfs -h
Usage:
mmcrfs Device {"DiskDesc[;DiskDesc...]" | -F StanzaFile}
[-A {yes | no | automount}] [-B BlockSize] [-D {posix | nfs4}]
[-E {yes | no}] [-i InodeSize] [-j {cluster | scatter}]
[-k {posix | nfs4 | all}] [-K {no | whenpossible | always}]
[-L LogFileSize] [-m DefaultMetadataReplicas]
[-M MaxMetadataReplicas] [-n NumNodes] [-Q {yes | no}]
[-r DefaultDataReplicas] [-R MaxDataReplicas]
[-S {yes | no | relatime}] [-T Mountpoint] [-t DriveLetter]
[-v {yes | no}] [-z {yes | no}] [--filesetdf | --nofilesetdf]
[--inode-limit MaxNumInodes[:NumInodesToPreallocate]]
[--log-replicas LogReplicas] [--metadata-block-size BlockSize]
[--perfileset-quota | --noperfileset-quota]
[--mount-priority Priority] [--version VersionString]
[--write-cache-threshold HAWCThreshold]

Create the GPFS file system

bash-3.2# mmcrfs /oradata oradatalv -F /etc/gpfs/oradata_disklist -B 1M -n 10 -A yes -v no
GPFS: 6027-531 The following disks of oradatalv will be formatted on node p740-rac1:
oradatansd1: size 5242880 MB
oradatansd2: size 5242880 MB
oradatansd3: size 5242880 MB
oradatansd4: size 5242880 MB
GPFS: 6027-540 Formatting file system ...
GPFS: 6027-535 Disks up to size 44 TB can be added to storage pool system.
Creating Inode File
Creating Allocation Maps
Creating Log Files
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool system
GPFS: 6027-572 Completed creation of file system /dev/oradatalv.
mmcrfs: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
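The numbers in the mmcrfs output are easy to sanity-check: four NSDs of 5,242,880 MB each come to about 20 TB of raw capacity, and each disk is well under the 44 TB maximum disk size reported for the pool. The arithmetic:

```shell
# Total capacity of the four NSDs reported by mmcrfs.
per_disk_mb=5242880
total_mb=$((per_disk_mb * 4))            # 20971520 MB
total_tb=$((total_mb / 1024 / 1024))     # MB -> GB -> TB
echo "total: ${total_mb} MB = ${total_tb} TB"
```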

Check the file systems

bash-3.2# lsfs
Name Nodename Mount Pt VFS Size Options Auto Accounting
/dev/hd4 -- / jfs2 62914560 rw yes no
/dev/hd1 -- /home jfs2 41943040 rw yes no
/dev/hd2 -- /usr jfs2 41943040 rw yes no
/dev/hd9var -- /var jfs2 20971520 rw yes no
/dev/hd3 -- /tmp jfs2 20971520 rw yes no
/dev/hd11admin -- /admin jfs2 2097152 rw yes no
/proc -- /proc procfs -- rw yes no
/dev/hd10opt -- /opt jfs2 20971520 rw yes no
/dev/fwdump -- /var/adm/ras/platform jfs2 2097152 rw no no
/dev/livedump -- /var/adm/ras/livedump jfs2 2097152 rw yes no
/dev/lv00 -- /var/adm/csd jfs 2097152 rw yes no
/dev/lv_u01 -- /u01 jfs2 209715200 rw yes no
/dev/fslv01 -- /oradata jfs2 -- rw yes no
/dev/lv_g01 -- /g01 jfs2 209715200 rw yes no
/dev/odm -- /dev/odm vxodm -- rw no no
/dev/oradatalv - /oradata mmfs -- rw,mtime,atime,dev=oradatalv no no



Mount the file system on all nodes


bash-3.2# mmmount all -a
Sun Dec 18 14:26:33 CST 2016: 6027-1623 mmmount: Mounting file systems ...
bash-3.2# mmlsmount all
File system oradatalv is mounted on 2 nodes
Check the configuration


bash-3.2# mmlsconfig
Configuration data for cluster jljjgpfs.p740-gpfs1:
---------------------------------------------------
clusterName jljjgpfs.p740-gpfs1
clusterId 2512513396710449389
autoload no
dmapiFileHandleSize 32
minReleaseLevel 4.1.1.4
ccrEnabled yes
adminMode central
File systems in cluster jljjgpfs.p740-gpfs1:
--------------------------------------------
/dev/oradatalv

Change a parameter: setting autoload=yes makes GPFS start automatically at node boot (mmlsconfig above still showed autoload no)


bash-3.2# mmchconfig autoload=yes
mmchconfig: Command successfully completed
mmchconfig: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
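After an mmchconfig change it is worth re-checking the mmlsconfig output. Since mmlsconfig needs a live cluster, the sketch below runs the same grep-style check against a copy of the dump captured above; check_setting and the temp-file path are my own invention:

```shell
# Assert that a "key value" pair appears in an mmlsconfig-style
# dump read from stdin; on a real node you would feed it
# mmlsconfig output directly.
check_setting() {
    key=$1
    want=$2
    grep -q "^${key} ${want}$"
}

# A captured excerpt of the dump from before the change:
cat <<'EOF' > /tmp/mmlsconfig.out
clusterName jljjgpfs.p740-gpfs1
autoload no
ccrEnabled yes
adminMode central
EOF

check_setting autoload no < /tmp/mmlsconfig.out && echo "autoload is still no"
```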


Feel free to get in touch to discuss; my WeChat ID: Eric_xu_2023