2023.05.29

★ Find the location of the certificate file used by kube-apiserver

1. Check the kube-apiserver.yaml file

-> --tls-cert-file=/etc/kubernetes/pki/apiserver.crt

controlplane ~ ➜ cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep tls
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key

controlplane ~ ➜

2. Check the kube-apiserver pod

-> --tls-cert-file=/etc/kubernetes/pki/apiserver.crt

controlplane ~ ➜  kubectl -n kube-system describe pods kube-apiserver-controlplane | grep tls
      --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
      --tls-private-key-file=/etc/kubernetes/pki/apiserver.key

controlplane ~ ➜
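All of these per-flag greps can be collapsed into one pass. A minimal sketch, assuming GNU grep; the sample manifest fragment below is a hypothetical stand-in for /etc/kubernetes/manifests/kube-apiserver.yaml:

```shell
# Print every flag in a static pod manifest that points at a cert, key, or CA file.
list_cert_flags() {
    grep -oE -- '--[a-z-]*(cert|key|ca)[a-z-]*=[^ ]+' "$1" | sort -u
}

# Hypothetical sample manifest fragment for demonstration.
cat > /tmp/sample-apiserver.yaml <<'EOF'
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-servers=https://127.0.0.1:2379
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
EOF

list_cert_flags /tmp/sample-apiserver.yaml
```

On the real controlplane, `list_cert_flags /etc/kubernetes/manifests/kube-apiserver.yaml` would print every certificate-related flag at once.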

 

 

★ Find the location of the etcd client certificate file used by kube-apiserver

-> --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt

controlplane ~ ➜  cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep etcd    
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379

controlplane ~ ➜

 

 

★ Find the location of the kubelet client key file used by kube-apiserver

-> --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key

controlplane ~ ➜  cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep kubelet
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname

controlplane ~ ➜

 

 

★ Find the location of the certificate file used by the etcd server

-> --cert-file=/etc/kubernetes/pki/etcd/server.crt

controlplane ~ ➜  cat /etc/kubernetes/manifests/etcd.yaml | grep cert
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
      name: etcd-certs
    name: etcd-certs

controlplane ~ ➜

 

 

★ Find the location of the CA certificate file used by the etcd server

-> --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

controlplane ~ ➜  cat /etc/kubernetes/manifests/etcd.yaml | grep ca
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
  priorityClassName: system-node-critical

controlplane ~ ➜

 

 

★ What is the CN configured on the kube-apiserver certificate?

-> Subject: CN = kube-apiserver

controlplane ~ ➜  openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 8483000120492181273 (0x75b9abaa2b884719)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: May 29 11:43:33 2023 GMT
            Not After : May 28 11:43:33 2024 GMT
        Subject: CN = kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
...

 

 

★ What is the name of the CA that issued the kube-apiserver certificate?

-> Issuer: CN = kubernetes

controlplane ~ ➜  openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 8483000120492181273 (0x75b9abaa2b884719)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: May 29 11:43:33 2023 GMT
            Not After : May 28 11:43:33 2024 GMT
        Subject: CN = kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
...

 

 

★ What are the alternate names configured on the kube-apiserver certificate?

-> DNS:controlplane, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.26.249.9

controlplane ~ ➜  openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 8483000120492181273 (0x75b9abaa2b884719)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: May 29 11:43:33 2023 GMT
            Not After : May 28 11:43:33 2024 GMT
        Subject: CN = kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:a3:2f:ff:75:ed:b2:38:74:01:f9:b1:41:51:aa:
                    f5:bb:9a:39:02:46:c2:5b:05:b1:0e:8f:75:9b:46:
                    18:a5:35:52:2f:2d:22:3b:fe:37:e3:ea:98:32:c5:
                    79:b4:2d:1b:f2:67:cd:f6:7d:4e:fa:e8:a0:69:b4:
                    4b:c8:25:46:20:4b:ad:69:dd:fa:63:56:b4:5c:4f:
                    ce:b7:28:bb:43:de:59:5f:c6:e7:c7:16:08:11:cf:
                    28:b2:4a:7f:20:74:3d:f4:53:6a:b6:33:37:25:98:
                    3e:a7:02:56:da:1b:75:7a:39:bd:0a:31:d5:26:cb:
                    30:8b:3d:bf:a5:58:48:8c:a8:5d:b4:eb:51:0d:72:
                    52:32:85:60:0d:56:2f:46:3c:65:90:4a:9b:a3:01:
                    b3:d9:01:b2:d9:ea:70:68:38:49:d5:1a:29:9f:52:
                    b8:54:72:71:0c:4a:88:4b:73:63:6f:05:a0:b6:23:
                    03:31:12:be:c3:cf:6c:b7:2b:e6:4e:50:a1:1b:7f:
                    ab:2a:ba:5f:92:16:3d:4c:ac:d8:02:11:78:8b:bf:
                    4e:43:3b:e5:0c:57:fb:6f:8a:81:ef:51:7e:a3:92:
                    2a:de:2b:96:ae:95:2e:dc:e3:97:ce:c7:af:8d:42:
                    67:2c:6a:3a:fa:fa:67:79:d2:14:52:47:eb:65:ca:
                    53:af
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier: 
                keyid:AB:7D:E2:A1:2C:F0:E0:27:53:52:72:D8:C9:46:76:09:F8:77:0D:63

            X509v3 Subject Alternative Name: 
                DNS:controlplane, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.26.249.9
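To get just the alternate names without scanning the whole dump, the SAN entry sits on the line after its header and can be picked out with awk. A minimal sketch; the sample fragment is a shortened, hypothetical copy of the dump above (with OpenSSL 1.1.1+, `openssl x509 -in apiserver.crt -noout -ext subjectAltName` gives the same information directly):

```shell
# Print the line that follows the Subject Alternative Name header,
# with the leading indentation stripped.
san_of() {
    awk '/X509v3 Subject Alternative Name/{getline; gsub(/^ +/, ""); print}' "$1"
}

# Hypothetical shortened sample of an `openssl x509 -text` dump.
cat > /tmp/sample-cert.txt <<'EOF'
            X509v3 Subject Alternative Name:
                DNS:controlplane, DNS:kubernetes, IP Address:10.96.0.1
EOF

san_of /tmp/sample-cert.txt
```

On a live node this would be piped as `openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text | san_of /dev/stdin` or used with the `-ext` flag instead.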

 

 

★ What is the CN configured on the etcd server certificate?

-> Subject: CN = controlplane

controlplane ~ ➜  openssl x509 -in /etc/kubernetes/pki/etcd/server.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 253720388574558331 (0x38565596142b87b)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = etcd-ca
        Validity
            Not Before: May 29 11:43:34 2023 GMT
            Not After : May 28 11:43:34 2024 GMT
        Subject: CN = controlplane
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
...

 

★ How long is the etcd server certificate valid for from the date of issue?

-> 1 year

controlplane ~ ➜  openssl x509 -in /etc/kubernetes/pki/etcd/server.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 253720388574558331 (0x38565596142b87b)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = etcd-ca
        Validity
            Not Before: May 29 11:43:34 2023 GMT
            Not After : May 28 11:43:34 2024 GMT
        Subject: CN = controlplane
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
...
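The validity period can be computed rather than read off by eye. A minimal sketch assuming GNU date; on a live node the two timestamps could be taken from `openssl x509 -noout -startdate -enddate`, here they are hard-coded from the dump above:

```shell
# Validity window copied from the etcd server certificate dump above.
not_before="May 29 11:43:34 2023 GMT"
not_after="May 28 11:43:34 2024 GMT"

# GNU date parses the openssl date format directly.
start=$(date -u -d "$not_before" +%s)
end=$(date -u -d "$not_after" +%s)
echo "valid for $(( (end - start) / 86400 )) days"   # prints: valid for 365 days
```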

 

 

★ How long is the root CA certificate (/etc/kubernetes/pki/ca.crt) valid for from the date of issue?

-> 10 years

controlplane ~ ➜  openssl x509 -in /etc/kubernetes/pki/ca.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: May 29 11:43:33 2023 GMT
            Not After : May 26 11:43:33 2033 GMT
        Subject: CN = kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
...

 

 

★ The kubectl command is not working. Inspect the etcd server and fix the misconfiguration.

-> The etcd manifest shows the cert-file path pointing at a file named server-certificate.crt.

controlplane ~ ➜  cat /etc/kubernetes/manifests/etcd.yaml | grep cert-file
    - --cert-file=/etc/kubernetes/pki/etcd/server-certificate.crt
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt

controlplane ~ ➜

-> Listing the .crt files in that directory shows the actual file name is different.

controlplane ~ ➜  ls /etc/kubernetes/pki/etcd/server* | grep .crt
/etc/kubernetes/pki/etcd/server.crt

controlplane ~ ➜

-> Correct the path, wait a moment, and the kube-apiserver comes back up.

controlplane ~ ➜  vi /etc/kubernetes/manifests/etcd.yaml 

controlplane ~ ➜  cat /etc/kubernetes/manifests/etcd.yaml | grep cert-file
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt

controlplane ~ ➜
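The mistake above suggests a quick sanity check: every .crt/.key path referenced by a manifest flag should exist on disk. A minimal sketch, demonstrated on a throwaway manifest and PKI directory (both hypothetical):

```shell
# Report any *.crt / *.key path referenced in a manifest that is missing on disk.
check_cert_paths() {
    local status=0 path
    for path in $(grep -oE '=/[^ ]+\.(crt|key)' "$1" | cut -c2-); do
        [ -f "$path" ] || { echo "MISSING: $path"; status=1; }
    done
    return $status
}

# Hypothetical demo files.
mkdir -p /tmp/demo-pki
touch /tmp/demo-pki/server.crt /tmp/demo-pki/server.key
cat > /tmp/demo-manifest.yaml <<'EOF'
    - --cert-file=/tmp/demo-pki/server.crt
    - --key-file=/tmp/demo-pki/server.key
EOF

check_cert_paths /tmp/demo-manifest.yaml && echo "all referenced files exist"
```

Run against /etc/kubernetes/manifests/etcd.yaml, this would have flagged the misspelled server-certificate.crt immediately.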

 

 

★ The kubectl command is not working. Check the apiserver's logs and fix the problem.

-> Check the pod state with crictl ps -a | grep kube-apiserver

controlplane ~ ➜  crictl ps -a | grep kube-apiserver
9bd19c0102ca6       a31e1d84401e6       36 seconds ago      Exited              kube-apiserver            5                   5413bff6f15be       kube-apiserver-controlplane

controlplane ~ ➜

-> The logs show a problem with the etcd CA certificate.

controlplane ~ ➜  crictl logs --tail=5 9bd19c0102ca6
  "BalancerAttributes": null,
  "Type": 0,
  "Metadata": null
}. Err: connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority"
E0529 13:03:01.408411       1 run.go:74] "command failed" err="context deadline exceeded"

controlplane ~ ➜

-> The etcd-cafile path in kube-apiserver.yaml is wrong; point it at the etcd CA instead.

controlplane ~ ➜  cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep etcd
    - --etcd-cafile=/etc/kubernetes/pki/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379

controlplane ~ ➜  cd /etc/kubernetes/pki/etcd/

controlplane kubernetes/pki/etcd ➜  ls -al
total 40
drwxr-xr-x 2 root root 4096 May 29 08:44 .
drwxr-xr-x 3 root root 4096 May 29 08:44 ..
-rw-r--r-- 1 root root 1086 May 29 08:44 ca.crt
-rw------- 1 root root 1675 May 29 08:44 ca.key
-rw-r--r-- 1 root root 1159 May 29 08:44 healthcheck-client.crt
-rw------- 1 root root 1679 May 29 08:44 healthcheck-client.key
-rw-r--r-- 1 root root 1208 May 29 08:44 peer.crt
-rw------- 1 root root 1679 May 29 08:44 peer.key
-rw-r--r-- 1 root root 1208 May 29 08:44 server.crt
-rw------- 1 root root 1675 May 29 08:44 server.key

controlplane kubernetes/pki/etcd ➜

-> --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt

controlplane ~ ➜  cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep etcd
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379

controlplane ~ ➜
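After a manifest fix, kubelet needs a little time to recreate the static pod, so it helps to poll instead of re-running kubectl by hand. A small hypothetical helper; on the node it might be used as `retry 30 kubectl get nodes`:

```shell
# Retry a command up to N times, one second apart; succeed as soon as it does.
retry() {
    local attempts=$1; shift
    local i
    for i in $(seq 1 "$attempts"); do
        "$@" && return 0
        sleep 1
    done
    return 1
}

retry 3 true && echo "command succeeded"
```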

 

 

 

 

 

 

 

 

 

 


 

 

 

2023.05.24

★ Multiple clusters exist. Check how many nodes are in each.

> 2 clusters in total

student-node ~ ➜  kubectl config view 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://cluster1-controlplane:6443
  name: cluster1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.12.169.3:6443
  name: cluster2
contexts:
- context:
    cluster: cluster1
    user: cluster1
  name: cluster1
- context:
    cluster: cluster2
    user: cluster2
  name: cluster2
current-context: cluster1
kind: Config
preferences: {}
users:
- name: cluster1
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: cluster2
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

student-node ~ ➜  

student-node ~ ➜ kubectl config use-context cluster
cluster1  cluster2  

student-node ~ ➜ kubectl config use-context cluster1
Switched to context "cluster1".

> Confirm the 2 nodes in cluster1

student-node ~ ➜  kubectl get nodes 
NAME                    STATUS   ROLES           AGE   VERSION
cluster1-controlplane   Ready    control-plane   39m   v1.24.0
cluster1-node01         Ready    <none>          39m   v1.24.0

> Confirm the 2 nodes in cluster2

student-node ~ ➜  kubectl config use-context cluster2
Switched to context "cluster2".

student-node ~ ➜  
student-node ~ ➜  kubectl get nodes 
NAME                    STATUS   ROLES           AGE   VERSION
cluster2-controlplane   Ready    control-plane   43m   v1.24.0
cluster2-node01         Ready    <none>          42m   v1.24.0

student-node ~ ➜
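The context-by-context check can be scripted. A sketch with a hypothetical `KUBECTL` override hook so it can be exercised without a live cluster; left at the default it calls the real kubectl:

```shell
# Print the node count for every kubeconfig context.
# KUBECTL is a hypothetical override hook (defaults to the real kubectl).
KUBECTL=${KUBECTL:-kubectl}

nodes_per_context() {
    local ctx
    for ctx in $($KUBECTL config get-contexts -o name); do
        echo "$ctx: $($KUBECTL --context "$ctx" get nodes --no-headers | wc -l) nodes"
    done
}
```

On the student node, `nodes_per_context` would print one line per context, e.g. `cluster1: 2 nodes`.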

 

 

★ Multiple clusters exist. SSH into one of their nodes.

student-node ~ ➜  kubectl get nodes 
NAME                    STATUS   ROLES           AGE   VERSION
cluster1-controlplane   Ready    control-plane   44m   v1.24.0
cluster1-node01         Ready    <none>          44m   v1.24.0

student-node ~ ➜  ssh cluster1-controlplane
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

cluster1-controlplane ~ ➜  

cluster1-controlplane ~ ➜  logout
Connection to cluster1-controlplane closed.

student-node ~ ➜

 

 

★ Check the ETCD running in cluster1.

> First switch to the cluster1 context.

student-node ~ ➜  kubectl config use-context cluster1
Switched to context "cluster1".

student-node ~ ➜

> Check the controlplane node.

student-node ~ ➜  kubectl get nodes 
NAME                    STATUS   ROLES           AGE    VERSION
cluster1-controlplane   Ready    control-plane   100m   v1.24.0
cluster1-node01         Ready    <none>          99m    v1.24.0

student-node ~ ➜

> SSH into cluster1-controlplane.

student-node ~ ➜  ssh cluster1-controlplane
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

cluster1-controlplane ~ ➜

> Check whether an etcd pod exists.

cluster1-controlplane ~ ➜  kubectl get pods -A
NAMESPACE     NAME                                            READY   STATUS    RESTARTS       AGE
kube-system   coredns-6d4b75cb6d-bfgs9                        1/1     Running   0              100m
kube-system   coredns-6d4b75cb6d-lftbq                        1/1     Running   0              100m
kube-system   etcd-cluster1-controlplane                      1/1     Running   0              101m
kube-system   kube-apiserver-cluster1-controlplane            1/1     Running   0              100m
kube-system   kube-controller-manager-cluster1-controlplane   1/1     Running   0              100m
kube-system   kube-proxy-45kft                                1/1     Running   0              100m
kube-system   kube-proxy-qmxkh                                1/1     Running   0              100m
kube-system   kube-scheduler-cluster1-controlplane            1/1     Running   0              100m
kube-system   weave-net-fwvfd                                 2/2     Running   0              100m
kube-system   weave-net-h9tg4                                 2/2     Running   1 (100m ago)   100m

cluster1-controlplane ~ ➜

 

 

★ Check the ETCD used by cluster2.

> Switch to the cluster2 context.

student-node ~ ➜  kubectl config use-context cluster2 
Switched to context "cluster2".

student-node ~ ➜

> Check whether the kube-system namespace in cluster2 has a pod with etcd in its name.

student-node ~ ➜  kubectl get pods -n kube-system | grep etcd

student-node ~ ➜

> SSH into cluster2-controlplane.

student-node ~ ➜  ssh cluster2-controlplane
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.
Last login: Sat May 27 11:31:10 2023 from 192.12.169.22

cluster2-controlplane ~ ➜

> Check whether anything under the manifests directory has etcd in its name

cluster2-controlplane ~ ➜  ls /etc/kubernetes/manifests/ | grep -i etcd

cluster2-controlplane ~ ✖

> Check the processes on cluster2-controlplane: kube-apiserver is pointing at an external etcd.

cluster2-controlplane ~ ➜  ps -ef | grep etcd
root        1754    1380  0 09:41 ?        00:06:24 kube-apiserver --advertise-address=192.12.169.3 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem --etcd-servers=https://192.12.169.15:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root       12885   11985  0 11:38 pts/0    00:00:00 grep etcd

cluster2-controlplane ~ ➜

> Describing the kube-apiserver pod also shows the etcd paths.

cluster2-controlplane ~ ➜  kubectl -n kube-system describe pods kube-apiserver-cluster2-controlplane 
Name:                 kube-apiserver-cluster2-controlplane
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 cluster2-controlplane/192.12.169.3
Start Time:           Sat, 27 May 2023 09:42:04 +0000
Labels:               component=kube-apiserver
                      tier=control-plane
Annotations:          kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.12.169.3:6443
                      kubernetes.io/config.hash: 227d76fd58a21baab84e0eee669ac726
                      kubernetes.io/config.mirror: 227d76fd58a21baab84e0eee669ac726
                      kubernetes.io/config.seen: 2023-05-27T09:42:02.804610708Z
                      kubernetes.io/config.source: file
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:               Running
IP:                   192.12.169.3
IPs:
  IP:           192.12.169.3
Controlled By:  Node/cluster2-controlplane
Containers:
  kube-apiserver:
    Container ID:  containerd://2f32b634619597cd782c9253de5a40154d160681168ced52481ffc333276a0dc
    Image:         k8s.gcr.io/kube-apiserver:v1.24.0
    Image ID:      k8s.gcr.io/kube-apiserver@sha256:a04522b882e919de6141b47d72393fb01226c78e7388400f966198222558c955
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-apiserver
      --advertise-address=192.12.169.3
      --allow-privileged=true
      --authorization-mode=Node,RBAC
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --enable-admission-plugins=NodeRestriction
      --enable-bootstrap-token-auth=true
      --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem
      --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem
      --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem
      --etcd-servers=https://192.12.169.15:2379

 

 

★ Find the path where the internal (stacked) ETCD stores its data.

> /var/lib/etcd

student-node ~ ➜  kubectl -n kube-system describe pods etcd-cluster1-controlplane 
Name:                 etcd-cluster1-controlplane
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 cluster1-controlplane/192.12.169.24
Start Time:           Sat, 27 May 2023 09:42:42 +0000
Labels:               component=etcd
                      tier=control-plane
Annotations:          kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.12.169.24:2379
                      kubernetes.io/config.hash: e4db54d1749cdfee9674a1e9e140fd0b
                      kubernetes.io/config.mirror: e4db54d1749cdfee9674a1e9e140fd0b
                      kubernetes.io/config.seen: 2023-05-27T09:42:25.092806611Z
                      kubernetes.io/config.source: file
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:               Running
IP:                   192.12.169.24
IPs:
  IP:           192.12.169.24
Controlled By:  Node/cluster1-controlplane
Containers:
  etcd:
...
Volumes:
  etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki/etcd
    HostPathType:  DirectoryOrCreate
  etcd-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/etcd
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:            <none>

 

★ Find the path where the external ETCD stores its data.

> SSH into the cluster2-controlplane node and check the processes

> Note the external IP in --etcd-servers=https://192.12.169.15:2379

cluster2-controlplane ~ ➜  ps -ef | grep etcd
root        1754    1380  0 09:41 ?        00:07:01 kube-apiserver --advertise-address=192.12.169.3 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem --etcd-servers=https://192.12.169.15:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root       14390   14207  0 11:50 pts/0    00:00:00 grep etcd

cluster2-controlplane ~ ➜

> SSH into the external etcd-server

student-node ~ ➜ ssh 192.12.169.15
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

etcd-server ~ ➜

> Check the processes on etcd-server

> --data-dir=/var/lib/etcd-data

etcd-server ~ ➜ ps -ef | grep etcd
etcd         917       1  0 10:21 ?        00:01:25 /usr/local/bin/etcd --name etcd-server --data-dir=/var/lib/etcd-data --cert-file=/etc/etcd/pki/etcd.pem --key-file=/etc/etcd/pki/etcd-key.pem --peer-cert-file=/etc/etcd/pki/etcd.pem --peer-key-file=/etc/etcd/pki/etcd-key.pem --trusted-ca-file=/etc/etcd/pki/ca.pem --peer-trusted-ca-file=/etc/etcd/pki/ca.pem --peer-client-cert-auth --client-cert-auth --initial-advertise-peer-urls https://192.12.169.15:2380 --listen-peer-urls https://192.12.169.15:2380 --advertise-client-urls https://192.12.169.15:2379 --listen-client-urls https://192.12.169.15:2379,https://127.0.0.1:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster etcd-server=https://192.12.169.15:2380 --initial-cluster-state new
root        1170    1005  0 11:51 pts/0    00:00:00 grep etcd

etcd-server ~ ➜

 

 

★ How many nodes are connected to the external etcd-server?

> 1

etcd-server ~ ➜  ETCDCTL_API=3 etcdctl \
> --endpoints=https://127.0.0.1:2379 \
>  --cacert=/etc/etcd/pki/ca.pem \
>  --cert=/etc/etcd/pki/etcd.pem \
>  --key=/etc/etcd/pki/etcd-key.pem \
>   member list
c810de83f8ff6c49, started, etcd-server, https://192.12.169.15:2380, https://192.12.169.15:2379, false

etcd-server ~ ➜
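The member count and client URLs can be parsed out of `member list` output instead of counted by eye. A minimal sketch over the single-member line captured above:

```shell
# Summarize `etcdctl member list` output: name -> client URL, plus a count.
members_summary() {
    awk -F', *' '{print $3 " -> " $5} END {print NR " member(s)"}'
}

printf '%s\n' 'c810de83f8ff6c49, started, etcd-server, https://192.12.169.15:2380, https://192.12.169.15:2379, false' \
  | members_summary
```

On the server this would be `etcdctl ... member list | members_summary`; `etcdctl member list -w table` is another readable alternative.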

 

 

 

★ Back up cluster1's etcd.

> Switch to cluster1

student-node ~ ➜ kubectl config use-context cluster1 
Switched to context "cluster1".

student-node ~ ➜

> Check the endpoint the etcd pod advertises; the backup is taken against this endpoint.

student-node ~ ➜  kubectl describe  pods -n kube-system etcd-cluster1-controlplane  | grep advertise-client-urls
Annotations:          kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.12.169.24:2379
      --advertise-client-urls=https://192.12.169.24:2379

student-node ~ ➜

> Check the certificate paths the etcd pod uses; the backup needs these as well.

student-node ~ ➜  kubectl describe pods -n kube-system etcd-cluster1-controlplane | grep pki
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      /etc/kubernetes/pki/etcd from etcd-certs (rw)
    Path:          /etc/kubernetes/pki/etcd

student-node ~ ➜

> Take the snapshot with the information collected above.

cluster1-controlplane ~ ➜  ETCDCTL_API=3 etcdctl --endpoints=https://192.12.169.24:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/cluster1.db
Snapshot saved at /opt/cluster1.db

cluster1-controlplane ~ ➜
cluster1-controlplane ~ ➜  ls -al /opt/
total 2096
drwxr-xr-x 1 root root    4096 May 27 12:08 .
drwxr-xr-x 1 root root    4096 May 27 09:42 ..
-rw-r--r-- 1 root root 2117664 May 27 12:08 cluster1.db
drwxr-xr-x 1 root root    4096 Dec 20 10:08 cni
drwx--x--x 4 root root    4096 Dec 20 10:09 containerd

cluster1-controlplane ~ ➜

> Exit the node and copy the snapshot out to the student node.

student-node ~ ✖ scp cluster1-controlplane:/opt/cluster1.db /opt
cluster1.db                                         100% 2068KB  83.8MB/s   00:00    

student-node ~ ➜  
student-node ~ ➜  ls -al /opt
total 2088
drwxr-xr-x 1 root root    4096 May 27 12:10 .
drwxr-xr-x 1 root root    4096 May 27 09:41 ..
-rw-r--r-- 1 root root 2117664 May 27 12:10 cluster1.db
drwxr-xr-x 1 root root    4096 May 27 09:41 .init

student-node ~ ➜
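After the scp it is worth confirming the copy is bit-identical, since a truncated snapshot restores silently into garbage. A minimal sketch using sha256sum on a local throwaway file; on the lab the two sums would be taken on cluster1-controlplane and on the student node:

```shell
# Create a throwaway "snapshot", copy it, and compare checksums.
src=/tmp/demo-snapshot.db
dst=/tmp/demo-snapshot-copy.db
head -c 1024 /dev/urandom > "$src"
cp "$src" "$dst"

sum_src=$(sha256sum "$src" | cut -d' ' -f1)
sum_dst=$(sha256sum "$dst" | cut -d' ' -f1)
[ "$sum_src" = "$sum_dst" ] && echo "copy verified"
```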

 

 

★ Restore cluster2's external etcd from the snapshot we have.

> Switch to cluster2

student-node ~ ➜  kubectl config use-context cluster2
Switched to context "cluster2".

student-node ~ ➜

> Copy the snapshot file to /root on the external etcd-server

student-node ~ ➜  scp /opt/cluster2.db etcd-server:/root
cluster2.db                                         100% 2088KB 130.2MB/s   00:00    

student-node ~ ➜

> SSH into the external etcd-server

student-node ~ ➜  ssh 192.13.240.15
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.
Last login: Sat May 27 12:20:19 2023 from 192.13.240.21

etcd-server ~ ➜

> The restore runs on etcd-server itself, so use 127.0.0.1, the default certificates, and /var/lib/etcd-data-new as the data directory

etcd-server ~ ➜  ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/etcd.pem --key=/etc/etcd/pki/etcd-key.pem snapshot restore /root/cluster2.db --data-dir /var/lib/etcd-data-new
{"level":"info","ts":1685190548.319989,"caller":"snapshot/v3_snapshot.go:296","msg":"restoring snapshot","path":"/root/cluster2.db","wal-dir":"/var/lib/etcd-data-new/member/wal","data-dir":"/var/lib/etcd-data-new","snap-dir":"/var/lib/etcd-data-new/member/snap"}
{"level":"info","ts":1685190548.3370929,"caller":"mvcc/kvstore.go:388","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":6336}
{"level":"info","ts":1685190548.3447402,"caller":"membership/cluster.go:392","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"0","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":1685190548.3543217,"caller":"snapshot/v3_snapshot.go:309","msg":"restored snapshot","path":"/root/cluster2.db","wal-dir":"/var/lib/etcd-data-new/member/wal","data-dir":"/var/lib/etcd-data-new","snap-dir":"/var/lib/etcd-data-new/member/snap"}

etcd-server ~ ➜

> Edit the etcd.service file and change data-dir to /var/lib/etcd-data-new

etcd-server ~ ➜  vi /etc/systemd/system/etcd.service 
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target

[Service]
User=etcd
Type=notify
ExecStart=/usr/local/bin/etcd \
  --name etcd-server \
  --data-dir=/var/lib/etcd-data \

> Make sure permissions on the new directory are correct (the etcd user must own it):

etcd-server ~ ➜  chown -R etcd:etcd /var/lib/etcd-data-new/

etcd-server ~ ➜  ls -ld /var/lib/etcd-data-new/
drwx------ 3 etcd etcd 4096 May 27 12:29 /var/lib/etcd-data-new/

etcd-server ~ ➜

> Restart the etcd daemon.

etcd-server ~ ➜  systemctl daemon-reload
etcd-server ~ ➜  systemctl restart etcd

> Finally, restart the control plane components (e.g. kube-scheduler, kube-controller-manager, kubelet) so they do not rely on stale data.
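The data-dir edit can be done with sed instead of vi, which is less error-prone under exam time pressure. A sketch on a throwaway copy of the unit file; on etcd-server it would target /etc/systemd/system/etcd.service, followed by `systemctl daemon-reload && systemctl restart etcd`:

```shell
# Throwaway copy of the relevant ExecStart lines from etcd.service.
cat > /tmp/etcd.service <<'EOF'
ExecStart=/usr/local/bin/etcd \
  --name etcd-server \
  --data-dir=/var/lib/etcd-data \
EOF

# Point the flag at the restored data directory.
sed -i 's|--data-dir=/var/lib/etcd-data |--data-dir=/var/lib/etcd-data-new |' /tmp/etcd.service
grep -- '--data-dir' /tmp/etcd.service
```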

 

 

 

 

 

 

 


 

 

2023.05.20

★ What version of ETCD is running in the cluster?

-> etcd-version : 3.5.6

controlplane ~ ➜  kubectl -n kube-system logs etcd-controlplane | grep -i 'etcd-version'
{"level":"info","ts":"2023-05-20T05:14:33.291Z","caller":"embed/etcd.go:306","msg":"starting an etcd server","etcd-version":"3.5.6","git-sha":"cecbe35ce","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":36,"max-cpu-available":36,"member-initialized":false,"name":"controlplane","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.6.237.3:2380"],"listen-peer-urls":["https://192.6.237.3:2380"],"advertise-client-urls":["https://192.6.237.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.6.237.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"controlplane=https://192.6.237.3:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}

controlplane ~ ➜

 

 

★ At what address can the ETCD cluster be reached from the controlplane node?

-> https://127.0.0.1:2379

controlplane ~ ➜  kubectl -n kube-system describe pod etcd-controlplane | grep -i 'listen-client-url'
      --listen-client-urls=https://127.0.0.1:2379,https://192.6.237.3:2379

controlplane ~ ➜

 

 

★ Where is the ETCD server certificate file located?

-> --cert-file=/etc/kubernetes/pki/etcd/server.crt

controlplane ~ ➜  kubectl -n kube-system describe pod etcd-controlplane | grep -i 'cert-file'
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt

controlplane ~ ➜

 

 

★ Where is the ETCD CA certificate file located?

-> --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

controlplane ~ ➜  kubectl -n kube-system describe pod etcd-controlplane | grep -i 'ca-file'
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

controlplane ~ ➜

 

 

★ The cluster's master node is scheduled for a reboot tonight. No problems are expected, but the necessary backups must be taken. Take a snapshot of the ETCD database using the built-in snapshot functionality.

ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 \
 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
 --cert=/etc/kubernetes/pki/etcd/server.crt \
 --key=/etc/kubernetes/pki/etcd/server.key \
 snapshot save /opt/snapshot-pre-boot.db

controlplane ~ ➜ ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 \
> --cacert=/etc/kubernetes/pki/etcd/ca.crt \
> --cert=/etc/kubernetes/pki/etcd/server.crt \
> --key=/etc/kubernetes/pki/etcd/server.key \
> snapshot save /opt/snapshot-pre-boot.db
Snapshot saved at /opt/snapshot-pre-boot.db

controlplane ~ ➜  ls /opt/
cni  containerd  snapshot-pre-boot.db

controlplane ~ ➜
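Since every etcdctl call repeats the same endpoint and certificate flags, a small wrapper keeps them in one place. A sketch with a hypothetical `ETCDCTL_BIN` override so the assembled command line can be checked without a live etcd (set to `echo` it just prints the flags; left unset it calls the real etcdctl):

```shell
# Wrapper baking in the endpoint and certificate flags used above.
# ETCDCTL_BIN is a hypothetical override hook for dry runs.
etcdctl_k8s() {
    ETCDCTL_API=3 ${ETCDCTL_BIN:-etcdctl} \
        --endpoints=https://127.0.0.1:2379 \
        --cacert=/etc/kubernetes/pki/etcd/ca.crt \
        --cert=/etc/kubernetes/pki/etcd/server.crt \
        --key=/etc/kubernetes/pki/etcd/server.key \
        "$@"
}

# Dry run: print the command line that would be executed.
ETCDCTL_BIN=echo etcdctl_k8s snapshot save /opt/snapshot-pre-boot.db
```

On the controlplane, `etcdctl_k8s snapshot save /opt/snapshot-pre-boot.db` then performs the actual backup.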

 

 

★ After the reboot the master node is back online, but the applications are not accessible. Check the state of the applications in the cluster. What is wrong?

- Deployments are missing
- Services are missing
- Pods are missing
-> All of the above

 

 

 

★ Restore the cluster to its original state using the backup file.

controlplane ~ ➜  ETCDCTL_API=3 etcdctl  --data-dir /var/lib/etcd-from-backup \
> snapshot restore /opt/snapshot-pre-boot.db
2023-05-20 02:06:58.280313 I | mvcc: restore compact to 2461
2023-05-20 02:06:58.287347 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
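For a stacked etcd, the restore is only half the job: the etcd static pod must be pointed at the new data directory. kubeadm mounts the host path into the container, so changing the etcd-data hostPath is enough. A sketch on a throwaway copy of the manifest; on the controlplane it would target /etc/kubernetes/manifests/etcd.yaml, and kubelet then recreates the pod:

```shell
# Throwaway copy of the etcd-data volume section of the etcd manifest.
cat > /tmp/etcd.yaml <<'EOF'
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
EOF

# Point the hostPath at the restored directory.
sed -i 's|path: /var/lib/etcd$|path: /var/lib/etcd-from-backup|' /tmp/etcd.yaml
grep 'path:' /tmp/etcd.yaml
```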

 

 
