
Unable to start kind rootless under podman when using ecryptfs home directory #4061

@Spitfire1900

Description


What happened:

Attempting to start kind rootless under podman did not succeed. The pod logs showed the following error during startup:

sys-kernel-debug.mount: Mount process exited, code=exited, status=32/n/a
sys-kernel-debug.mount: Failed with result 'exit-code'.
[FAILED] Failed to mount sys-kernel…nt - Kernel Debug File System.
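Since the cluster was created with --retain, the node container is still around after the failure, so the failing unit can be inspected directly. A minimal sketch, assuming the default node container name kind-control-plane:

# Status and journal of the failing mount unit inside the node container
podman exec kind-control-plane systemctl status sys-kernel-debug.mount
podman exec kind-control-plane journalctl -u sys-kernel-debug.mount --no-pager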

What you expected to happen:
Successful creation of a rootless kind cluster with podman.

How to reproduce it (as minimally and precisely as possible):
sh -c 'KIND_EXPERIMENTAL_PROVIDER=podman systemd-run --scope --user -p "Delegate=yes" kind create cluster --retain'
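--retain keeps the node container after a failed create so its logs can still be collected; once finished debugging, the retained cluster can be removed with:

KIND_EXPERIMENTAL_PROVIDER=podman kind delete cluster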

Anything else we need to know?:

Environment:

  • kind version: (use kind version): kind v0.30.0 go1.24.6 linux/amd64
  • Runtime info: (use docker info, podman info or nerdctl info):
podman info
host:
  arch: amd64
  buildahVersion: 1.33.7
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2:2.1.13-0ubuntu24.04+obs21.1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.13, commit: '
  cpuUtilization:
    idlePercent: 87.24
    systemPercent: 3.07
    userPercent: 9.69
  cpus: 8
  databaseBackend: boltdb
  distribution:
    codename: noble
    distribution: ubuntu
    version: "24.04"
  eventLogger: journald
  freeLocks: 2048
  hostname: mainstay
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.8.0-88-generic
  linkmode: dynamic
  logDriver: journald
  memFree: 11136114688
  memTotal: 33597927424
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns_1.13.1-0ubuntu24.04+obs43.1_amd64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.13.1
    package: netavark_1.12.2-0ubuntu24.04+obs39.1_amd64
    path: /usr/libexec/podman/netavark
    version: netavark 1.12.2
  ociRuntime:
    name: crun
    package: crun_101:1.14.4-0ubuntu22.04+obs70.24_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.14.4
      commit: a220ca661ce078f2c37b38c92e66cf66c012d9c1
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt_0.0~git20240220.1e6f92b-1_amd64
    version: |
      pasta unknown version
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.3.2-0ubuntu24.04+obs16.1_amd64
    version: |-
      slirp4netns version 1.3.2
      commit: 0f13345bcef588d2bb70d662d41e92ee8a816d85
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 0
  swapTotal: 0
  uptime: 0h 41m 46.00s
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/kyle/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/kyle/.local/share/containers/storage
  graphRootAllocated: 984097714176
  graphRootUsed: 668569112576
  graphStatus:
    Backing Filesystem: ecryptfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 10
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/kyle/.local/share/containers/storage/volumes
version:
  APIVersion: 4.9.3
  Built: 0
  BuiltTime: Wed Dec 31 19:00:00 1969
  GitCommit: ""
  GoVersion: go1.22.2
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.3
  • OS (e.g. from /etc/os-release):
PRETTY_NAME="Ubuntu 24.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.3 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo
  • Kubernetes version: (use kubectl version):
kubectl version
Client Version: v1.34.2
Kustomize Version: v5.7.1
Server Version: v1.34.0
  • Any proxies or other special environment settings?:
    N/A
Running kind export logs failed:

kind export logs
ERROR: unknown cluster "kind"
Return 1

so I exported the kind-control-plane logs manually: log.txt
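(For reference, a sketch of one way such a manual export could be done, assuming podman logs was the mechanism used to grab the node container's console output:)

podman logs kind-control-plane > log.txt

Relevant excerpt from log.txt: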

[control-plane-check] kube-apiserver is not healthy after 4m0.000114174s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000413302s
[control-plane-check] kube-scheduler is not healthy after 4m0.000462273s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'

error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://10.89.0.14:6443/livez: Get "https://kind-control-plane:6443/livez?timeout=10s": dial tcp 10.89.0.14:6443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:262
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:450
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:133
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@…/command.go:1015
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@…/command.go:1148
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@…/command.go:1071
k8s.io/kubernetes/cmd/kubeadm/app.Run
        k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:48
main.main
        k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        runtime/proc.go:283
runtime.goexit
        runtime/asm_amd64.s:1700
Return 1
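Given that graphStatus above reports Backing Filesystem: ecryptfs, one way to test whether the encrypted home directory is the trigger might be to point rootless podman's storage at a path outside the ecryptfs mount. A rough, untested sketch, where /srv/containers-storage is a hypothetical directory on a non-ecryptfs filesystem (the runroot below matches the one reported by podman info):

podman system reset    # wipes existing rootless storage; images must be re-pulled
mkdir -p /srv/containers-storage    # hypothetical non-ecryptfs location
cat > ~/.config/containers/storage.conf <<'EOF'
[storage]
driver = "overlay"
graphroot = "/srv/containers-storage"
runroot = "/run/user/1000/containers"
EOF
sh -c 'KIND_EXPERIMENTAL_PROVIDER=podman systemd-run --scope --user -p "Delegate=yes" kind create cluster --retain'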


Labels: area/provider/podman, kind/bug
