Uninstalling Ceph

Finalizers kick in to prevent complete destruction of the Ceph cluster. When removing the Rook-Ceph deployment, deletion of the rook-ceph namespace may stall because Kubernetes finalizers on the remaining resources hold it open.

Rook has its own documentation on cluster storage cleanup.
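A quick way to confirm the namespaces are wedged is to check their status; a minimal check, assuming the default rook-ceph and rook-ceph-cluster namespace names:

# Namespaces held open by finalizers sit in STATUS Terminating indefinitely
kubectl get ns rook-ceph rook-ceph-cluster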

Examples

Namespace Finalizer Stuck Example

kubectl describe ns rook-ceph-cluster
Name:         rook-ceph-cluster
Labels:       kubernetes.io/metadata.name=rook-ceph-cluster
Annotations:  <none>
Status:       Terminating
Conditions:
  Type                                         Status  LastTransitionTime               Reason                Message
  ----                                         ------  ------------------               ------                -------
  NamespaceDeletionDiscoveryFailure            False   Tue, 11 Apr 2023 20:40:35 +0000  ResourcesDiscovered   All resources successfully discovered
  NamespaceDeletionGroupVersionParsingFailure  False   Tue, 11 Apr 2023 20:40:35 +0000  ParsedGroupVersions   All legacy kube types successfully parsed
  NamespaceDeletionContentFailure              False   Tue, 11 Apr 2023 20:40:35 +0000  ContentDeleted        All content successfully deleted, may be waiting on finalization
  NamespaceContentRemaining                    True    Tue, 11 Apr 2023 20:40:35 +0000  SomeResourcesRemain   Some resources are remaining: cephblockpools.ceph.rook.io has 1 resource instances, cephclusters.ceph.rook.io has 1 resource instances, cephfilesystems.ceph.rook.io has 1 resource instances, cephobjectstores.ceph.rook.io has 1 resource instances, configmaps. has 1 resource instances, secrets. has 1 resource instances
  NamespaceFinalizersRemaining                 True    Tue, 11 Apr 2023 20:40:35 +0000  SomeFinalizersRemain  Some content in the namespace has finalizers remaining: ceph.rook.io/disaster-protection in 2 resource instances, cephblockpool.ceph.rook.io in 1 resource instances, cephcluster.ceph.rook.io in 1 resource instances, cephfilesystem.ceph.rook.io in 1 resource instances, cephobjectstore.ceph.rook.io in 1 resource instances

You can patch the finalizers on those resources so that deletion can proceed.
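
For example, to inspect which finalizers are still set on a resource before patching it (the CephCluster name below matches the one used in the patch commands further down):

kubectl -n rook-ceph-cluster get cephcluster rook-ceph-cluster -o jsonpath='{.metadata.finalizers}'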

Identifying Stuck Resources

There are two namespaces, rook-ceph and rook-ceph-cluster, that may have resources with finalizers in a stuck state. This is intentional, and usually due to the disaster-protection finalizers that prevent the file system from being completely destroyed without manual intervention.

Example searches for remaining resources that are probably stuck finalizing:

kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n rook-ceph

kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n rook-ceph-cluster

Example response

kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n rook-ceph-cluster
NAME                                        PHASE
cephblockpool.ceph.rook.io/ceph-blockpool   Progressing
NAME                                          ACTIVEMDS   AGE   PHASE
cephfilesystem.ceph.rook.io/ceph-filesystem   1           22m   Progressing
NAME                                            PHASE
cephobjectstore.ceph.rook.io/ceph-objectstore   Progressing

Remove Finalizers

Here is a list of common resources that need their finalizers patched to allow them to be removed from the rook-ceph-cluster namespace.

kubectl -n rook-ceph-cluster patch configmap rook-ceph-mon-endpoints --type merge -p '{"metadata":{"finalizers": []}}'

kubectl -n rook-ceph-cluster patch secrets rook-ceph-mon --type merge -p '{"metadata":{"finalizers": []}}'

kubectl -n rook-ceph-cluster patch cephcluster.ceph.rook.io/rook-ceph-cluster --type merge -p '{"metadata":{"finalizers": []}}'

kubectl -n rook-ceph-cluster patch cephblockpool.ceph.rook.io/ceph-blockpool --type merge -p '{"metadata":{"finalizers": []}}'

kubectl -n rook-ceph-cluster patch cephfilesystem.ceph.rook.io/ceph-filesystem --type merge -p '{"metadata":{"finalizers": []}}'

kubectl -n rook-ceph-cluster patch cephobjectstore.ceph.rook.io/ceph-objectstore --type merge -p '{"metadata":{"finalizers": []}}'
secret/rook-ceph-mon patched
configmap/rook-ceph-mon-endpoints patched
cephcluster.ceph.rook.io/rook-ceph-cluster patched
cephblockpool.ceph.rook.io/ceph-blockpool patched
cephfilesystem.ceph.rook.io/ceph-filesystem patched
cephobjectstore.ceph.rook.io/ceph-objectstore patched
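
After the finalizers are cleared, the stuck namespace should finish terminating on its own; a simple check, assuming the same namespace names as above:

# Once deletion completes, each namespace is reported as NotFound
kubectl get ns rook-ceph rook-ceph-cluster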

Ceph Cleanup Guide

Rook documents example tear-down steps at https://rook.io/docs/rook/v1.5/ceph-teardown.html; the Troubleshooting section there includes this script:

for CRD in $(kubectl get crd -n rook-ceph | awk '/ceph.rook.io/ {print $1}'); do
    kubectl get -n rook-ceph "$CRD" -o name | \
    xargs -I {} kubectl patch {} --type merge -p '{"metadata":{"finalizers": [null]}}'
done
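
If a namespace itself still hangs in Terminating after every resource inside it is gone, one widely used last resort (not part of the Rook guide above) is to clear the namespace's own spec.finalizers entry through the finalize subresource. This bypasses normal cleanup, so treat it as a sketch: only use it after confirming nothing remains in the namespace, and note it assumes jq is installed.

# Force-finalize the namespace by submitting it with an empty spec.finalizers list
kubectl get ns rook-ceph -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/rook-ceph/finalize" -f -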