Introduction
MinIO is an open-source object storage server compatible with the Amazon S3 API. HashiCorp Nomad is a flexible and scalable orchestrator for deploying and managing applications. This guide walks you through setting up MinIO Object Storage on Linux and integrating it into HashiCorp Nomad through a Container Storage Interface (CSI) plugin.
Prerequisites
- A running Nomad cluster (this guide uses one server node and three client nodes)
- Enable privileged Docker jobs: CSI node plugins must run as privileged Docker jobs because they use bidirectional mount propagation to mount disks on the underlying host. Nomad's default configuration does not allow privileged Docker jobs, so it must be edited to allow them. If your Nomad client configuration does not already specify a Docker plugin configuration, the minimal one below allows privileged containers. Add it to your Nomad client configuration and restart Nomad.
plugin "docker" {
config {
allow_privileged = true
}
}
- Access to a server or cloud infrastructure where MinIO will be deployed.
- Access to the MinIO URL and S3 API port from the Nomad cluster nodes; MinIO must be reachable from every Nomad client node. You can use the telnet utility to verify connectivity, as shown below.
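For example, assuming MinIO listens at minio.example.com on the default S3 API port 9000 (replace with your actual host and port), a successful connection confirms the port is reachable:
$ telnet minio.example.com 9000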
Steps to Integrate MinIO Object Storage with a Nomad Cluster
Step 1: Deploy MinIO
Install and configure MinIO Object Storage; you can follow the official MinIO Object Storage for Linux documentation. Create a bucket named test-vol in the MinIO console and set its access policy to public (the same can be done from the command line, as shown below).
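As a sketch, the bucket can also be created and made public with the MinIO Client (mc). This assumes an alias named myminio has already been configured with mc alias set and that you are on a recent mc release (older releases use mc policy set public instead of mc anonymous set public):
$ mc mb myminio/test-vol
$ mc anonymous set public myminio/test-vol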
Step 2: Set Up the MinIO CSI Plugin in the Nomad Cluster
Deploy the controller and node plugin jobs below, then run them and verify the plugin as shown after the job specifications.
$ cat plugin-s3-controller.nomad
job "plugin-s3-controller" {
datacenters = ["dc1"]
group "controllers" {
task "plugin" {
driver = "docker"
resources {
memory = 300
}
config {
image = "ctrox/csi-s3:v1.2.0-rc.2"
args = [
"--endpoint=unix://csi/csi.sock",
"--nodeid=${node.unique.name}",
"--logtostderr",
"--v=5",
]
privileged = true
}
csi_plugin {
id = "s3"
type = "controller"
mount_dir = "/csi"
stage_publish_base_dir = "/local/csi"
}
}
}
}
$ cat plugin-s3-node.nomad
job "plugin-s3-node" {
datacenters = ["dc1"]
type = "system"
group "nodes" {
task "plugin" {
driver = "docker"
resources {
memory = 300
}
config {
image = "ctrox/csi-s3:v1.2.0-rc.2"
args = [
"--endpoint=unix://csi/csi.sock",
"--nodeid=${node.unique.name}",
"--logtostderr",
"--v=5",
]
privileged = true
}
csi_plugin {
id = "s3"
type = "node"
mount_dir = "/csi"
}
}
}
}
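Run both plugin jobs and confirm the plugin is healthy before creating any volumes; the plugin ID s3 matches the id set in the csi_plugin blocks above:
$ nomad job run plugin-s3-controller.nomad
$ nomad job run plugin-s3-node.nomad
$ nomad plugin status s3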
Step 3: Create Volume in Nomad
- Prepare the volume specification file as shown below:
$ cat test-volume.nomad.hcl
id = "test-vol"
name = "test-vol"
type = "csi"
plugin_id = "s3"
external_id = "test-vol"
capability {
access_mode = "multi-node-multi-writer"
attachment_mode = "file-system"
}
secrets {
accessKeyID = "<username>"
secretAccessKey = "<password>"
endpoint = "https://<minio_host_or_dns>:<minio_port>"
}
parameters {
mounter = "s3fs"
}
Please note that the external_id in the above volume specification file must match your MinIO bucket name. This example uses the bucket created in Step 1 (i.e. test-vol).
- Create the volume using the command nomad volume create test-volume.nomad.hcl.
- After creation, validate the volume using the command nomad volume status test-vol. This will produce output like the following:
ID = test-vol
Name = test-vol
Namespace = default
External ID = test-vol
Plugin ID = s3
Provider = ch.ctrox.csi.s3-driver
Version = v1.2.0-rc.2
Schedulable = true
Controllers Healthy = 3
Controllers Expected = 3
Nodes Healthy = 3
Nodes Expected = 3
Access Mode = <none>
Attachment Mode = <none>
Mount Options = <none>
Namespace = default
Allocations
No allocations placed
Currently, no job is using this volume, which is why the Allocations section of the above output shows No allocations placed.
Step 4: Use MinIO CSI in Nomad Jobs
- Now that the MinIO CSI plugin is integrated with Nomad, its volumes can be referenced in job specifications. Here is an example of a Nomad job that mounts the MinIO-backed volume as persistent storage:
job "alpine" {
datacenters = ["dc1"]
type = "service"
group "main" {
count = 1
volume "test-vol" {
type = "csi"
source = "test-vol"
attachment_mode = "file-system"
access_mode = "multi-node-multi-writer"
}
task "alpine" {
driver = "docker"
config {
image = "alpine:latest"
args = ["/bin/sleep", "10000"]
}
volume_mount {
volume = "test-vol"
destination = "/s3data"
read_only = false
}
resources {
cpu = 256
memory = 512
}
}
}
}
Adjust the job specification according to your application's requirements.
- Deploy the job using the Nomad CLI command nomad job run alpine.nomad.
Step 5: Validate the CSI Volume and Nomad Allocation
The allocation status will have a section for the CSI volume, and the volume status will show the allocation claiming the volume.
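To see the claim from the allocation side, inspect an allocation of the alpine job (replace <alloc_id> with an allocation ID shown by nomad job status alpine); the output includes a CSI Volumes section:
$ nomad alloc status <alloc_id>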
- Running the Nomad CLI command nomad volume status test-vol produces output like the following, where the volume is being used by the allocation of the alpine job.
$ nomad volume status test-vol
ID = test-vol
Name = test-vol
Namespace = default
External ID = test-vol
Plugin ID = s3
Provider = ch.ctrox.csi.s3-driver
Version = v1.2.0-rc.2
Schedulable = true
Controllers Healthy = 3
Controllers Expected = 3
Nodes Healthy = 3
Nodes Expected = 3
Access Mode = multi-node-multi-writer
Attachment Mode = file-system
Mount Options = <none>
Namespace = default
Allocations
ID Node ID Task Group Version Desired Status Created Modified
93896564 913a26c0 main 0 run running 43s ago 32s ago
Create a file or folder inside the mounted volume path (/s3data) of this job's allocation and confirm that it appears in the MinIO S3 bucket, for example as shown below.
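As a sketch, write a file from inside the allocation with nomad alloc exec and then list the bucket, here using the allocation ID from the status output above (yours will differ) and the hypothetical myminio alias from Step 1:
$ nomad alloc exec 93896564 touch /s3data/hello.txt
$ mc ls myminio/test-vol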
Even if the job is stopped, purged, or garbage-collected, redeploying it will produce a new allocation that still contains all the pre-existing data, because the data lives in the MinIO S3 bucket backing the registered volume.
Conclusion
You have successfully set up MinIO and integrated it as a CSI plugin in HashiCorp Nomad. This allows MinIO to be used as a persistent storage solution for containerized applications running on the Nomad cluster. Adjust the configurations and specifications based on your specific use case and requirements.