Configuring GlusterFS for a Kubernetes Cluster in Practice


1. What Is GlusterFS

GlusterFS Overview

  • GlusterFS is an open-source distributed file system and the core of scale-out storage, capable of handling thousands of clients. Unlike traditional solutions, GlusterFS can flexibly combine physical, virtual, and cloud resources to deliver highly available, enterprise-grade storage.
  • GlusterFS aggregates client storage resources over TCP/IP or InfiniBand RDMA network links and manages data, disk, and memory resources under a single global namespace.
  • GlusterFS is built on a stackable user-space design and can deliver strong performance for a variety of workloads.
  • GlusterFS supports standard clients running standard applications over any standard IP network; users can access application data in the globally unified namespace using standard protocols such as NFS/CIFS.

Key Features of GlusterFS

  • Scalability and high performance
  • High availability
  • Globally unified namespace
  • Elastic hash algorithm
  • Elastic volume management
  • Based on standard protocols

Common Volume Types

  • Distributed (distributed)
  • Replicated (replicate)
  • Striped (striped)

2. Creating a Data Volume

Node environment: k8sm0, k8sm1, k8sm2

Adding Nodes

# Run on k8sm0
# Check peer status
[root@k8sm0 ~]# gluster peer status
Number of Peers: 0

# Add the other nodes
[root@k8sm0 ~]# gluster peer probe k8sm1
peer probe: success.
[root@k8sm0 ~]# gluster peer probe k8sm2
peer probe: success.

# Check peer status again
[root@k8sm0 ~]# gluster peer status
Number of Peers: 2
Hostname: k8sm1
Uuid: 2cc42c18-bead-4970-acdf-b482b0628b46
State: Peer in Cluster (Connected)
Hostname: k8sm2
Uuid: 336b4342-fb96-4b4f-8d0c-1bc019c90fce
State: Peer in Cluster (Connected)

# To remove a node, detach it (re-add it with peer probe if needed)
[root@k8sm0 ~]# gluster peer detach k8sm1
peer detach: success
[root@k8sm0 ~]# gluster peer probe k8sm1
peer probe: success.
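As a quick sanity check, the number of connected peers can be parsed out of `gluster peer status`. The sketch below replays the output captured above through a heredoc (on a live node you would pipe `gluster peer status` in instead):

```shell
#!/bin/sh
# Count peers that are connected and in the cluster. The sample text is
# the `gluster peer status` output recorded above.
connected=$(grep -c 'State: Peer in Cluster (Connected)' <<'EOF'
Number of Peers: 2
Hostname: k8sm1
Uuid: 2cc42c18-bead-4970-acdf-b482b0628b46
State: Peer in Cluster (Connected)
Hostname: k8sm2
Uuid: 336b4342-fb96-4b4f-8d0c-1bc019c90fce
State: Peer in Cluster (Connected)
EOF
)
echo "connected peers: $connected"
```

With all three nodes healthy this should report 2 connected peers (k8sm0 does not list itself).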

Creating a Replicated Volume

# Create a 3-brick replicated volume
[root@k8sm0 ~]# gluster volume create app-server-data replica 3 transport tcp k8sm0:/data/gfs-data/app-server-data k8sm1:/data/gfs-data/app-server-data k8sm2:/data/gfs-data/app-server-data force
volume create: app-server-data: success: please start the volume to access data

# List volumes
[root@k8sm0 ~]# gluster volume list
app-server-data

# Show volume info
[root@k8sm0 ~]# gluster volume info
Volume Name: app-server-data
Type: Replicate
Volume ID: e5206852-0da6-431f-8493-3e7b2f3317be
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: k8sm0:/data/gfs-data/app-server-data
Brick2: k8sm1:/data/gfs-data/app-server-data
Brick3: k8sm2:/data/gfs-data/app-server-data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

# Start the volume, then enable quotas and cap usage of / at 5GB
[root@k8sm0 ~]# gluster volume start app-server-data
volume start: app-server-data: success
[root@k8sm0 ~]# gluster volume quota app-server-data enable
volume quota : success
[root@k8sm0 ~]# gluster volume quota app-server-data limit-usage / 5GB
volume quota : success

# Check volume status
[root@k8sm0 app]# gluster volume status
Status of volume: app-server-data
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick k8sm0:/data/gfs-data/app-server-data   N/A       N/A        N       N/A
Brick k8sm1:/data/gfs-data/app-server-data   49152     0          Y       10716
Brick k8sm2:/data/gfs-data/app-server-data   49152     0          Y       2732
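Note that the k8sm0 brick reports Online `N` in the status output above. A small awk pass over the status output flags such bricks; this sketch replays the captured table (on a live node, pipe `gluster volume status` in instead of the heredoc):

```shell
#!/bin/sh
# Flag bricks whose Online column is "N" in `gluster volume status` output.
# Brick rows end with: TCP-Port RDMA-Port Online Pid, so the Online flag
# is the second-to-last field.
offline=$(awk '/^Brick/ && $(NF-1) == "N" {print $2}' <<'EOF'
Brick k8sm0:/data/gfs-data/app-server-data   N/A       N/A        N       N/A
Brick k8sm1:/data/gfs-data/app-server-data   49152     0          Y       10716
Brick k8sm2:/data/gfs-data/app-server-data   49152     0          Y       2732
EOF
)
echo "offline bricks: ${offline:-none}"
```

An offline brick usually means the brick process on that node is not running or the brick path is missing; `gluster volume start app-server-data force` is one common way to respawn missing brick processes.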

Using the Data Volume

Our Kubernetes environment is already set up; now we integrate GlusterFS with it.

Adding the Endpoints

glusterfs-endpoints.json

{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster",
    "namespace": "app-bi"
  },
  "subsets": [
    {
      "addresses": [
        {"ip": "100.20.111.145"}
      ],
      "ports": [
        {"port": 2990}
      ]
    },
    {
      "addresses": [
        {"ip": "100.20.111.146"}
      ],
      "ports": [
        {"port": 2990}
      ]
    },
    {
      "addresses": [
        {"ip": "100.20.111.148"}
      ],
      "ports": [
        {"port": 2990}
      ]
    }
  ]
}

Apply: kubectl create -f glusterfs-endpoints.json
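Before creating the Endpoints object it can be worth a quick lint, since a subset with a missing address or a mismatched port fails silently. This hedged sketch (assuming python3 is available) inlines a copy of the manifest above; on a real run, read glusterfs-endpoints.json instead:

```shell
#!/bin/sh
# Sanity-check the Endpoints manifest: every subset must carry at least
# one IP and use the expected port. Manifest inlined from the file above.
result=$(python3 - <<'EOF'
import json
manifest = json.loads("""
{ "kind": "Endpoints", "apiVersion": "v1",
  "metadata": {"name": "glusterfs-cluster", "namespace": "app-bi"},
  "subsets": [
    {"addresses": [{"ip": "100.20.111.145"}], "ports": [{"port": 2990}]},
    {"addresses": [{"ip": "100.20.111.146"}], "ports": [{"port": 2990}]},
    {"addresses": [{"ip": "100.20.111.148"}], "ports": [{"port": 2990}]}
  ]
}
""")
for subset in manifest["subsets"]:
    assert subset["addresses"][0]["ip"], "subset missing ip"
    assert subset["ports"][0]["port"] == 2990, "unexpected port"
print("endpoints ok:", len(manifest["subsets"]), "subsets")
EOF
)
echo "$result"
```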

Adding the Service

glusterfs-service.json

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster",
    "namespace": "app-bi"
  },
  "spec": {
    "ports": [
      {"port": 2990}
    ]
  }
}

Apply: kubectl create -f glusterfs-service.json
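A Service without a selector binds to a manually created Endpoints object purely by matching metadata.name and namespace, so the two manifests must agree exactly. A minimal sketch of that check, with the values from the two files above inlined:

```shell
#!/bin/sh
# A selector-less Service is wired to its Endpoints only by name and
# namespace; assert that the two manifests agree. Values inlined from
# glusterfs-endpoints.json / glusterfs-service.json above.
result=$(python3 - <<'EOF'
endpoints = {"name": "glusterfs-cluster", "namespace": "app-bi", "port": 2990}
service   = {"name": "glusterfs-cluster", "namespace": "app-bi", "port": 2990}
for key in ("name", "namespace", "port"):
    assert endpoints[key] == service[key], "mismatch on %s" % key
print("service/endpoints match:", service["name"])
EOF
)
echo "$result"
```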

Adding the PV

app-server-data-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-app-server-data
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: app-server-data
    readOnly: false

Apply: kubectl create -f app-server-data-pv.yaml
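One detail worth making explicit: statically provisioned PVs default to the Retain reclaim policy, which means that deleting the PVC leaves the PV in the Released state rather than making it immediately reusable. A minimal sketch of pinning the policy in the PV spec (the field is standard Kubernetes; the value shown is just the default):

```yaml
spec:
  # Default for static PVs; the PV goes Released when its PVC is deleted
  # and must be cleaned up by hand before it can be bound again.
  persistentVolumeReclaimPolicy: Retain
```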

Adding the PVC

app-server-data-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-app-server-data
  namespace: app-bi
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

Apply: kubectl create -f app-server-data-pvc.yaml

Check:

[root@k8sm0 app]# kubectl get pv -n app-bi
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
pv-app-server-data   5Gi        RWX            Retain           Bound    app-bi/pvc-app-server-data                           2d
[root@k8sm0 app]# kubectl get pvc -n app-bi
NAME                  STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-app-server-data   Bound    pv-app-server-data   5Gi        RWX                           2d

Using the Volume in a Pod

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: app-server
    provider: kavenran
    version: "6.4.3"
  name: app-server
  namespace: app-bi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-server
  template:
    metadata:
      labels:
        app: app-server
    spec:
      containers:
        - name: app-server
          image: "app-server:6.4.3"
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: "4"
              memory: 8Gi
            requests:
              cpu: "2"
              memory: 4Gi
          ports:
            - containerPort: 8080
              protocol: TCP
              name: http
          volumeMounts:
            - name: app-server-data
              mountPath: /home/app/apache-tomcat-7.0.88/resources
      volumes:
        - name: app-server-data
          persistentVolumeClaim:
            claimName: pvc-app-server-data

Error encountered:

The pod stayed Pending after deployment. Inspecting it with kubectl describe pod xxxx showed:
pod has unbound PersistentVolumeClaims

Checking the PV revealed it was in the Released state, most likely left behind when an earlier Pod (and its PVC) was deleted.

Fix: delete the PVC and PV, recreate the PV and PVC, and once their status is back to normal the pod starts successfully.
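As an alternative to deleting and recreating the PV, a Released PV with the Retain policy can usually be returned to Available by clearing its stale claimRef. The sketch below scans captured `kubectl get pv` output for Released volumes and prints the recovery command; the patch itself is a common Kubernetes technique, not something shown in the original logs, and the sample row is hypothetical (the PV from this post in its stuck state):

```shell
#!/bin/sh
# Find Released PVs in `kubectl get pv` output (STATUS is column 5) and
# print a command that clears the stale claimRef so each PV becomes
# Available again. Sample output replayed via heredoc; on a cluster,
# pipe `kubectl get pv` in instead.
cmds=$(awk '$5 == "Released" {
  printf "kubectl patch pv %s -p '\''{\"spec\":{\"claimRef\":null}}'\''\n", $1
}' <<'EOF'
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                        STORAGECLASS   REASON   AGE
pv-app-server-data   5Gi        RWX            Retain           Released   app-bi/pvc-app-server-data                           2d
EOF
)
echo "$cmds"
```

Review the printed commands before running them: clearing claimRef lets any matching PVC bind the volume, data included.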


Author: KavenRan
Copyright: Unless otherwise noted, all posts on this blog are licensed under CC BY 4.0. Please credit KavenRan when reposting!