Cilium and configuration update issue #667

@mertcangokgoz

Description

Hello again after a long time

I've been using the k3s clusters I deployed with this application for a year now. When it was time to update, I noticed that I could no longer activate the following Cilium configuration. (I customized this Helm configuration from the project's template to fit my needs.)

encryption:
  enabled: true
  nodeEncryption: true
  type: ipsec
hubble:
  enabled: true
  metrics:
    enabled:
      - dns:query;ignoreAAAA
      - drop
      - tcp
      - flow
      - port-distribution
      - icmp
      - http
  relay:
    enabled: true
    rollOutPods: true
  ui:
    enabled: true
    rollOutPods: true
ipam:
  mode: kubernetes
k8sServiceHost: 127.0.0.1
k8sServicePort: 6444
kubeProxyReplacement: true
operator:
  replicas: 1
  resources:
    requests:
      memory: 128Mi
  rollOutPods: true
resources:
  requests:
    memory: 512Mi
routingMode: tunnel
tunnelProtocol: vxlan
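For reference, these values are applied with a standard Helm upgrade (the release name cilium and the kube-system namespace are the usual defaults, assumed here; adjust if yours differ):

```shell
# Apply the values above to the existing Cilium release.
# "cilium" release name and kube-system namespace are assumptions.
helm upgrade --install cilium cilium/cilium \
  --namespace kube-system \
  --values cilium-values.yaml
```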

I didn't activate any firewall. The entire cluster runs on a private network and reaches the internet through a NAT gateway.

My k3s configuration has been like this since the very first version; I haven't touched it.

[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target
After=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service 2>/dev/null'
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
    server \
        '--disable-cloud-controller' \
        '--disable' \
        'servicelb' \
        '--disable' \
        'traefik' \
        '--disable' \
        'metrics-server' \
        '--write-kubeconfig-mode=644' \
        '--node-name=main-k3s-cluster-master1' \
        '--cluster-cidr=10.244.0.0/16' \
        '--service-cidr=10.43.0.0/16' \
        '--cluster-dns=10.43.0.10' \
        '--kube-controller-manager-arg=bind-address=0.0.0.0' \
        '--kube-proxy-arg=metrics-bind-address=0.0.0.0' \
        '--kube-scheduler-arg=bind-address=0.0.0.0' \
        '--node-taint' \
        'CriticalAddonsOnly=true:NoExecute' \
        '--kubelet-arg' \
        'cloud-provider=external' \
        '--kubelet-arg' \
        'resolv-conf=/etc/k8s-resolv.conf' \
        '--etcd-expose-metrics=true' \
        '--flannel-backend=none' \
        '--disable-network-policy' \
        '--disable-kube-proxy' \
        '--embedded-registry' \
        '--advertise-address=aaaa' \
        '--node-ip=aaaa' \
        '--node-external-ip=aaaa' \
        '--cluster-init' \
        '--tls-san=aaaa' \
        '--tls-san=127.0.0.1'
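(When this unit file is edited, the standard systemd steps apply; nothing k3s-specific here, just for completeness:)

```shell
# Pick up unit file changes and restart the k3s server
sudo systemctl daemon-reload
sudo systemctl restart k3s
```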

If I replace k8sServiceHost: 127.0.0.1 with the first master machine's IP, it works, but this is very unreasonable: it hard-codes a single master as the API endpoint for every node.
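For anyone debugging the same symptom, a quick sanity check from an affected node is to confirm whether anything is still listening on 127.0.0.1:6444 (the port from the Helm values above) and whether the API answers there. These are generic diagnostic commands, not a confirmed fix:

```shell
# Is anything listening on the local port Cilium is pointed at?
ss -tlnp | grep ':6444'
# Does the API server respond on that endpoint? (-k: self-signed cert)
curl -sk https://127.0.0.1:6444/version
```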
