Why does elasticsearch write no persistent volumes available for this claim and no storage class is set for local-storage?
Good afternoon!
I am following this article:
https://scalablesystem.design/ds101/kubernetes-ins...
There are three CentOS 7 nodes.
On each node I created a folder under /mnt/disks: mkdir -p /mnt/disks/vdb1
and mounted a 50 GB disk into it: mount /dev/vdb1 /mnt/disks/vdb1
Checking the mount:
mount | grep mnt
/dev/vdb1 on /mnt/disks/vdb1 type xfs (rw,relatime,attr2,inode64,noquota)
The cluster is deployed with Kubespray, with these settings enabled:
helm_version: "v2.9.1"
helm_enabled: true
docker_dns_servers_strict: no
local_volume_provisioner_enabled: true
ansible-playbook -u apatsev -i inventory/mycluster/hosts.ini cluster.yml -b
kubectl get storageclass
NAME PROVISIONER AGE
local-storage kubernetes.io/no-provisioner 18m
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-5bec36e4 49Gi RWO Delete Available local-storage 2m15s
local-pv-aa880f42 49Gi RWO Delete Available local-storage 2m15s
local-pv-b6ffa66b 49Gi RWO Delete Available local-storage 2m15s
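For comparison, a minimal claim that would bind to one of these PVs has to name the storage class explicitly; a sketch (the claim name here is hypothetical):

```yaml
# Hypothetical claim; the key point is storageClassName matching the PVs above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-local-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Gi
```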
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
"incubator" has been added to your repositories
helm install incubator/elasticsearch --name my-release --set data.storageClass=local-storage,data.storage=10Gi
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-my-release-elasticsearch-data-0 Pending 3m15s
data-my-release-elasticsearch-master-0 Pending 3m15s
kubectl describe pvc data-my-release-elasticsearch-data-0
Name: data-my-release-elasticsearch-data-0
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: app=elasticsearch
component=data
release=my-release
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 2s (x4 over 31s) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
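The second half of the event message ("no storage class is set") matches the empty StorageClass field in the describe output above: the claim itself carries no storageClassName, so the controller never considers the local-storage PVs. This can be checked directly (a diagnostic sketch):

```shell
# Print the storage class the claim actually requests.
# Empty output means none was set on the claim, so it cannot
# match PVs that belong to the local-storage class.
kubectl get pvc data-my-release-elasticsearch-data-0 \
  -o jsonpath='{.spec.storageClassName}'
```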
kubectl get po --namespace kube-system | grep local-volume-provisioner
local-volume-provisioner-9wmqm 1/1 Running 0 7m43s
local-volume-provisioner-f9gnn 1/1 Running 0 7m43s
local-volume-provisioner-zpmdn 1/1 Running 0 7m43s
kubectl logs local-volume-provisioner-9wmqm --namespace kube-system
I1009 08:34:19.214312 1 common.go:259] StorageClass "local-storage" configured with MountDir "/mnt/disks", HostDir "/mnt/disks", BlockCleanerCommand ["/scripts/quick_reset.sh"]
I1009 08:34:19.214560 1 main.go:42] Configuration parsing has been completed, ready to run...
I1009 08:34:19.214991 1 common.go:315] Creating client using in-cluster config
I1009 08:34:19.245635 1 main.go:52] Starting controller
I1009 08:34:19.245659 1 controller.go:42] Initializing volume cache
I1009 08:34:19.249252 1 populator.go:85] Starting Informer controller
I1009 08:34:19.249273 1 populator.go:89] Waiting for Informer initial sync
I1009 08:34:20.249475 1 controller.go:72] Controller started
I1009 08:34:20.250066 1 discovery.go:254] Found new volume of volumeMode "Filesystem" at host path "/mnt/disks/vdb1" with capacity 53659832320, creating Local PV "local-pv-5bec36e4"
I1009 08:34:20.301162 1 discovery.go:280] Created PV "local-pv-5bec36e4" for volume at "/mnt/disks/vdb1"
I1009 08:34:20.301826 1 cache.go:55] Added pv "local-pv-5bec36e4" to cache
I1009 08:34:20.343959 1 cache.go:64] Updated pv "local-pv-5bec36e4" to cache
I1009 08:34:20.353803 1 cache.go:64] Updated pv "local-pv-5bec36e4" to cache
kubectl --namespace kube-system exec local-volume-provisioner-9wmqm -- df | grep disks
/dev/vda1 9774628 6355168 2961036 69% /mnt/disks
/dev/vdb1 52402180 32944 52369236 1% /mnt/disks/vdb1
mount |grep /dev/vda1
/dev/vda1 on / type ext4 (rw,relatime,data=ordered)
/dev/vda1 on /var/lib/kubelet/pods/89d44bff-cb9e-11e8-b52a-fa163e47d93f/volume-subpaths/config/elasticsearch/0 type ext4 (rw,relatime,data=ordered)
Looks like the problem is in the elasticsearch chart itself: kubectl describe pvc shows an empty StorageClass field, so the --set values apparently never made it into the claim template. Prometheus with the same local-storage class binds fine:
helm install stable/prometheus --name stable-prometheus --set server.persistentVolume.storageClass=local-storage --set alertmanager.persistentVolume.storageClass=local-storage
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
stable-prometheus-alertmanager Bound local-pv-aa880f42 49Gi RWO local-storage 9s
stable-prometheus-server Bound local-pv-5bec36e4 49Gi RWO local-storage 9s
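Since the same storage class works for Prometheus, the likely culprit is the values path passed to the elasticsearch chart; in some chart versions the storage class is nested under a persistence section rather than sitting at the top level. The chart's actual keys can be inspected with helm v2 (a sketch; the grep pattern is just a guess at the relevant key name):

```shell
# Dump the chart's default values and look for where storageClass is expected.
helm inspect values incubator/elasticsearch | grep -n -i -B2 storageClass
```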