HostToContainer Mount Propagation not working with StorageClass gce-pd #95049

@xtreme-sameer-vohra

Description

What happened:
Pods with HostToContainer mount propagation do not receive existing overlay mounts on a GCP PD.

What you expected to happen:
Pods that use the HostToContainer value for mountPropagation inherit the relevant mounts from the host.

How to reproduce it (as minimally and precisely as possible):
We have one pod, mounter, that mounts a GCP PD and creates overlay mounts on it. It is configured with Bidirectional mount propagation (PVC and mounter deployment manifest: pvc-mounter.yml).
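
For reference, a minimal sketch of what pvc-mounter.yml roughly contains (the image, storage size, exact commands, and the hostPath directory below are illustrative assumptions, not a copy of the linked manifest; the hostPath volume used for comparison is folded into the same sketch, and the container is privileged because Kubernetes only allows Bidirectional propagation in privileged containers):

cat <<'EOF' > pvc-mounter.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: some-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard        # the default gce-pd StorageClass shown below
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mounter
spec:
  replicas: 1
  selector:
    matchLabels: {app: mounter}
  template:
    metadata:
      labels: {app: mounter}
    spec:
      containers:
      - name: mounter
        image: ubuntu:20.04         # assumed image; anything with util-linux works
        securityContext:
          privileged: true          # Bidirectional propagation requires a privileged container
        command: ["/bin/bash", "-c"]
        args:
        - |
          # create an overlay mount under each volume, then log what is mounted
          for d in /some-pvc /some-hostpath; do
            mkdir -p "$d/lower" "$d/upper" "$d/work" "$d/merged"
            mount -t overlay overlay \
              -o "lowerdir=$d/lower,upperdir=$d/upper,workdir=$d/work" "$d/merged"
          done
          findmnt -t overlay
          sleep infinity
        volumeMounts:
        - name: pvc-vol
          mountPath: /some-pvc
          mountPropagation: Bidirectional
        - name: host-vol
          mountPath: /some-hostpath
          mountPropagation: Bidirectional
      volumes:
      - name: pvc-vol
        persistentVolumeClaim:
          claimName: some-pvc
      - name: host-vol
        hostPath:
          path: /tmp/some-hostpath  # assumed host directory
          type: DirectoryOrCreate
EOF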

kubectl create ns vt
kubectl apply -n vt -f pvc-mounter.yml
# wait for pod to come up then check logs to verify mounts were created
kubectl logs -n vt deployment/mounter

After creating the overlay mounts we create a second pod, consumer, which is configured with HostToContainer mount propagation (consumer manifest: consumer.yml).
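
A corresponding sketch of consumer.yml under the same assumptions (no privileges are required for HostToContainer):

cat <<'EOF' > consumer.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consumer
spec:
  replicas: 1
  selector:
    matchLabels: {app: consumer}
  template:
    metadata:
      labels: {app: consumer}
    spec:
      containers:
      - name: consumer
        image: ubuntu:20.04         # assumed image
        command: ["/bin/bash", "-c"]
        args:
        - |
          # log which overlay mounts made it into this container's mount namespace
          findmnt -t overlay
          sleep infinity
        volumeMounts:
        - name: pvc-vol
          mountPath: /some-pvc
          mountPropagation: HostToContainer
        - name: host-vol
          mountPath: /some-hostpath
          mountPropagation: HostToContainer
      volumes:
      - name: pvc-vol
        persistentVolumeClaim:
          claimName: some-pvc
      - name: host-vol
        hostPath:
          path: /tmp/some-hostpath  # same assumed host directory as the mounter
          type: DirectoryOrCreate
EOF

(Both pods need to land on the same node for the comparison to make sense, since a ReadWriteOnce PD can only be attached to one node at a time.)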

kubectl apply -n vt -f consumer.yml
# check logs to see that the mount under `some-pvc` did not propagate.
# The mount under `some-hostpath` does propagate.
kubectl logs -n vt deployment/consumer

Expected: Mounts under /some-pvc propagate to all future containers.
Actual: Mounts under /some-pvc do not propagate to containers created afterwards; they only propagate to containers that already existed when the mount was created.

Anything else we need to know?:

Our initial experiments were with GCP PDs. We then tried hostPath and found that mount propagation worked as expected. We haven't tried other CSI drivers. We're not sure what information is relevant to y'all, so here's some of what we thought might be useful.
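
One detail that might help narrow this down: HostToContainer corresponds to an rslave mount inside the container and Bidirectional to rshared, so the difference between the hostPath and gce-pd cases presumably comes down to the propagation flags on the volume's mount points on the host. A sketch of how we'd inspect that on the node (the grep patterns are assumptions about kubelet paths on a GKE/COS node, not verified output):

# run on the GKE node, e.g. via gcloud compute ssh <node-name>
findmnt -o TARGET,PROPAGATION,FSTYPE | grep -E 'gce-pd|some-hostpath|overlay'
# or look at the raw shared:N / master:N peer-group fields
grep -E 'gce-pd|overlay' /proc/self/mountinfo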

$ k describe sc/standard
Name:                  standard
IsDefaultClass:        Yes
Annotations:           storageclass.beta.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/gce-pd
Parameters:            type=pd-standard
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

Side-note: after following the reproduction steps we try to clean up everything in the namespace, but everything hangs in a Terminating state. This might be a more general Kubernetes issue, but we're unsure what's causing all the various objects (deployments, pods, PVC, PV, etc.) to get stuck in Terminating and never finish. We can eventually clear everything out with --force --grace-period=0 on all the objects.
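
For completeness, the forced cleanup we end up running looks roughly like this (object names taken from the sketches above):

kubectl delete -n vt deployment/consumer deployment/mounter --force --grace-period=0
kubectl delete -n vt pvc/some-pvc --force --grace-period=0
kubectl delete ns vt --force --grace-period=0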

Environment:

  • Kubernetes version (use kubectl version): Node Version: v1.15.9-gke.8 & Master Version: 1.15.12-gke.2
  • Cloud provider or hardware configuration: GCP
  • OS (e.g: cat /etc/os-release):
        BUILD_ID=12371.89.0
        NAME="Container-Optimized OS"
        KERNEL_COMMIT_ID=33b407437b03b80bb02ce71ae6a9caa3a4b2cdf3
        GOOGLE_CRASH_ID=Lakitu
        VERSION_ID=77
        BUG_REPORT_URL="https://cloud.google.com/container-optimized-os/docs/resources/support-policy#contact_us"
        PRETTY_NAME="Container-Optimized OS from Google"
        VERSION=77
        GOOGLE_METRICS_PRODUCT_ID=26
        HOME_URL="https://cloud.google.com/container-optimized-os/docs"
        ID=cos
  • Kernel (e.g. uname -a):
        Linux gke-cluster-1-cos-aa72a8fb-20x1 4.19.76+ #1 SMP Tue Oct 8 23:17:06 PDT 2019 x86_64 Intel(R) Xeon(R) CPU @ 2.30GHz GenuineIntel GNU/Linux
  • Install tools: n/a
  • Network plugin and version (if this is a network-related bug): n/a

Labels: kind/bug, lifecycle/stale, sig/storage
