# Using FSx for Lustre

## Create service account using IAM role

## Install FSx Driver

## Use FSx for Lustre File System
### Static Provisioning
Note
You can see examples here.
**persistent-volume.yaml**

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fsx-pv
spec:
  capacity:
    storage: 1200Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  mountOptions:
    - flock
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-XXXXXXXXXXXXXXXXX
    volumeAttributes:
      dnsname: fs-XXXXXXXXXXXXXXXXX.fsx.us-east-1.amazonaws.com
      mountname: fsx
```
Note
Replace `volumeHandle` with the `FileSystemId`, `dnsname` with the `DNSName`, and `mountname` with the `MountName`. You can get all three values (`FileSystemId`, `DNSName`, and `MountName`) using the AWS CLI:

```shell
aws fsx describe-file-systems
```
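For example, given a response from that command, the three values can be picked out of the JSON like this (the file system ID below is a hypothetical sample; in practice, feed in the real CLI output instead of the heredoc):

```shell
# Hypothetical, abbreviated `aws fsx describe-file-systems` response.
cat > sample.json <<'EOF'
{
  "FileSystems": [
    {
      "FileSystemId": "fs-0123456789abcdef0",
      "DNSName": "fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com",
      "LustreConfiguration": { "MountName": "fsx" }
    }
  ]
}
EOF

# Extract the three fields needed by the PersistentVolume manifest.
python3 - <<'EOF'
import json

fs = json.load(open("sample.json"))["FileSystems"][0]
print("FileSystemId:", fs["FileSystemId"])
print("DNSName:", fs["DNSName"])
print("MountName:", fs["LustreConfiguration"]["MountName"])
EOF
```

This prints:

```
FileSystemId: fs-0123456789abcdef0
DNSName: fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com
MountName: fsx
```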
**persistent-volume-claim.yaml**

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsx-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1200Gi
  volumeName: fsx-pv
```
**pod.yaml**

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsx-app
  namespace: default
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: fsx-claim
```
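With the three manifests above in place, a typical apply-and-verify flow looks like this (a sketch; it assumes `kubectl` is configured against a cluster that already has the FSx for Lustre CSI driver installed):

```shell
# Create the PV, PVC, and pod (file names as used above)
kubectl apply -f persistent-volume.yaml
kubectl apply -f persistent-volume-claim.yaml
kubectl apply -f pod.yaml

# The claim should bind to fsx-pv almost immediately: static
# provisioning only wires up an existing file system, it does
# not create one
kubectl get pvc fsx-claim

# Once the pod is Running, confirm data is being written to the volume
kubectl exec fsx-app -- tail /data/out.txt
```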
### Dynamic Provisioning
Note
You can see examples here.
**storage-class.yaml**

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fsx-sc
provisioner: fsx.csi.aws.com
parameters:
  subnetId: subnet-0eabfaa81fb22bcaf
  securityGroupIds: sg-068000ccf82dfba88
  deploymentType: PERSISTENT_1
  automaticBackupRetentionDays: "1"
  dailyAutomaticBackupStartTime: "00:00"
  copyTagsToBackups: "true"
  perUnitStorageThroughput: "200"
  dataCompressionType: "NONE"
  weeklyMaintenanceStartTime: "7:09:00"
  fileSystemTypeVersion: "2.12"
  extraTags: "Tag1=Value1,Tag2=Value2"
mountOptions:
  - flock
```
Note
You should check the supported StorageClass parameters here.
**persistent-volume-claim.yaml**

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsx-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fsx-sc
  resources:
    requests:
      storage: 1200Gi
```
**pod.yaml**

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsx-app
  namespace: default
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: fsx-claim
```
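Unlike the static case, here the CSI driver creates the FSx for Lustre file system for you when the claim is submitted, so the claim stays `Pending` until provisioning finishes (this typically takes several minutes). A possible flow, assuming `kubectl` points at a cluster with the driver installed:

```shell
kubectl apply -f storage-class.yaml
kubectl apply -f persistent-volume-claim.yaml

# FSx file system creation takes a few minutes; watch the claim
# until its status moves from Pending to Bound
kubectl get pvc fsx-claim --watch

# Then start the pod that consumes the claim
kubectl apply -f pod.yaml
```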
### Dynamic Provisioning with Data Repository
Note
You can see examples here.
**storage-class.yaml**

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fsx-sc
provisioner: fsx.csi.aws.com
parameters:
  subnetId: subnet-0d7b5e117ad7b4961
  securityGroupIds: sg-05a37bfe01467059a
  s3ImportPath: s3://ml-training-data-000
  s3ExportPath: s3://ml-training-data-000/export
  deploymentType: SCRATCH_2
mountOptions:
  - flock
```
Note
You should check the supported StorageClass parameters here.
**persistent-volume-claim.yaml**

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsx-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fsx-sc
  resources:
    requests:
      storage: 1200Gi
```
**pod.yaml**

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsx-app
spec:
  containers:
    - name: app
      image: amazonlinux:2
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
      securityContext:
        privileged: true
      lifecycle:
        postStart:
          exec:
            command: ["amazon-linux-extras", "install", "lustre2.10", "-y"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: fsx-claim
```
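With the file system linked to an S3 bucket via `s3ImportPath`, the bucket's objects appear under `/data` and are lazy-loaded into Lustre on first access. Writing files under `/data` does not automatically copy them to `s3ExportPath`; one way to push them back is Lustre's HSM archive command, sketched below (this assumes the `fsx-app` pod above is running and that the Lustre client installed by the `postStart` hook provides `lfs`):

```shell
# Export new or changed files under /data to the linked S3 export
# path by archiving them with Lustre HSM (run inside the fsx-app pod)
kubectl exec fsx-app -- sh -c \
  'find /data -type f -print0 | xargs -0 -n 1 lfs hsm_archive'
```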