2023.05.24
★ Multiple clusters exist. Check how many nodes are in each of them.
> 2 clusters in total
student-node ~ ➜ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://cluster1-controlplane:6443
  name: cluster1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.12.169.3:6443
  name: cluster2
contexts:
- context:
    cluster: cluster1
    user: cluster1
  name: cluster1
- context:
    cluster: cluster2
    user: cluster2
  name: cluster2
current-context: cluster1
kind: Config
preferences: {}
users:
- name: cluster1
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: cluster2
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
student-node ~ ➜
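> (Extra) If only the names are needed, the same information can be read more quickly than the full config dump; both commands below are standard kubectl, shown here as a sketch.
kubectl config get-contexts                              # lists every context with its cluster and user
kubectl config view -o jsonpath='{.clusters[*].name}'    # prints just the cluster names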
student-node ~ ➜ kubectl config use-context cluster
cluster1 cluster2
student-node ~ ➜ kubectl config use-context cluster1
Switched to context "cluster1".
> Check the 2 nodes in cluster1.
student-node ~ ➜ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cluster1-controlplane Ready control-plane 39m v1.24.0
cluster1-node01 Ready <none> 39m v1.24.0
> Check the 2 nodes in cluster2.
student-node ~ ➜ kubectl config use-context cluster2
Switched to context "cluster2".
student-node ~ ➜
student-node ~ ➜ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cluster2-controlplane Ready control-plane 43m v1.24.0
cluster2-node01 Ready <none> 42m v1.24.0
student-node ~ ➜
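> (Extra) To answer "how many nodes" per cluster in one go, a short loop over the contexts works; a sketch assuming the context names above.
for ctx in cluster1 cluster2; do
  echo -n "$ctx: "
  kubectl --context "$ctx" get nodes --no-headers | wc -l   # count the node lines
done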
★ Multiple clusters exist. SSH into a node inside them.
student-node ~ ➜ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cluster1-controlplane Ready control-plane 44m v1.24.0
cluster1-node01 Ready <none> 44m v1.24.0
student-node ~ ➜ ssh cluster1-controlplane
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, you can run the 'unminimize' command.
cluster1-controlplane ~ ➜
cluster1-controlplane ~ ➜ logout
Connection to cluster1-controlplane closed.
student-node ~ ➜
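> (Extra) If a node hostname does not resolve from student-node, its internal IP can be looked up first and used for ssh instead; <node-internal-ip> is a placeholder.
kubectl get nodes -o wide       # the INTERNAL-IP column shows each node's address
ssh <node-internal-ip>          # placeholder: substitute the address from the output above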
★ Check the ETCD used by cluster1.
> First, switch to the cluster1 context.
student-node ~ ➜ kubectl config use-context cluster1
Switched to context "cluster1".
student-node ~ ➜
> Check the control plane node.
student-node ~ ➜ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cluster1-controlplane Ready control-plane 100m v1.24.0
cluster1-node01 Ready <none> 99m v1.24.0
student-node ~ ➜
> SSH into cluster1-controlplane.
student-node ~ ➜ ssh cluster1-controlplane
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, you can run the 'unminimize' command.
cluster1-controlplane ~ ➜
> Check whether an etcd pod is running — etcd-cluster1-controlplane shows up, so cluster1 runs a stacked etcd.
cluster1-controlplane ~ ➜ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-bfgs9 1/1 Running 0 100m
kube-system coredns-6d4b75cb6d-lftbq 1/1 Running 0 100m
kube-system etcd-cluster1-controlplane 1/1 Running 0 101m
kube-system kube-apiserver-cluster1-controlplane 1/1 Running 0 100m
kube-system kube-controller-manager-cluster1-controlplane 1/1 Running 0 100m
kube-system kube-proxy-45kft 1/1 Running 0 100m
kube-system kube-proxy-qmxkh 1/1 Running 0 100m
kube-system kube-scheduler-cluster1-controlplane 1/1 Running 0 100m
kube-system weave-net-fwvfd 2/2 Running 0 100m
kube-system weave-net-h9tg4 2/2 Running 1 (100m ago) 100m
cluster1-controlplane ~ ➜
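> (Extra) Since cluster1 runs a stacked etcd, the same conclusion can be reached from the static pod manifests on the control plane node; etcd.yaml is the kubeadm default file name, so this is a sketch under that assumption.
ls /etc/kubernetes/manifests/                                              # etcd.yaml should be listed
grep -E 'data-dir|listen-client-urls' /etc/kubernetes/manifests/etcd.yaml  # key etcd settings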
★ Check the ETCD used by cluster2.
> Switch to the cluster2 context.
student-node ~ ➜ kubectl config use-context cluster2
Switched to context "cluster2".
student-node ~ ➜
> Check whether the kube-system namespace in cluster2 has an etcd pod — the grep below returns nothing, so cluster2 does not run a stacked etcd.
student-node ~ ➜ kubectl get pods -n kube-system | grep etcd
student-node ~ ➜
> SSH into cluster2-controlplane.
student-node ~ ➜ ssh cluster2-controlplane
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, you can run the 'unminimize' command.
Last login: Sat May 27 11:31:10 2023 from 192.12.169.22
cluster2-controlplane ~ ➜
> Check whether any file name under the manifests directory contains "etcd" — there is no match, so etcd is not running as a static pod here.
cluster2-controlplane ~ ➜ ls /etc/kubernetes/manifests/ | grep -i etcd
cluster2-controlplane ~ ✖
> Check the processes on cluster2-controlplane: the kube-apiserver is pointed at an external etcd server.
cluster2-controlplane ~ ➜ ps -ef | grep etcd
root 1754 1380 0 09:41 ? 00:06:24 kube-apiserver --advertise-address=192.12.169.3 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem --etcd-servers=https://192.12.169.15:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root 12885 11985 0 11:38 pts/0 00:00:00 grep etcd
cluster2-controlplane ~ ➜
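> (Extra) To pull only the etcd endpoint out of that long process line, a grep sketch like the following helps; the '--' stops grep from reading the pattern as an option.
ps -ef | grep '[k]ube-apiserver' | grep -o -- '--etcd-servers=[^ ]*'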
> Describing the kube-apiserver pod also shows the etcd endpoint and certificate paths.
cluster2-controlplane ~ ➜ kubectl -n kube-system describe pods kube-apiserver-cluster2-controlplane
Name: kube-apiserver-cluster2-controlplane
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: cluster2-controlplane/192.12.169.3
Start Time: Sat, 27 May 2023 09:42:04 +0000
Labels: component=kube-apiserver
tier=control-plane
Annotations: kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.12.169.3:6443
kubernetes.io/config.hash: 227d76fd58a21baab84e0eee669ac726
kubernetes.io/config.mirror: 227d76fd58a21baab84e0eee669ac726
kubernetes.io/config.seen: 2023-05-27T09:42:02.804610708Z
kubernetes.io/config.source: file
seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status: Running
IP: 192.12.169.3
IPs:
IP: 192.12.169.3
Controlled By: Node/cluster2-controlplane
Containers:
kube-apiserver:
Container ID: containerd://2f32b634619597cd782c9253de5a40154d160681168ced52481ffc333276a0dc
Image: k8s.gcr.io/kube-apiserver:v1.24.0
Image ID: k8s.gcr.io/kube-apiserver@sha256:a04522b882e919de6141b47d72393fb01226c78e7388400f966198222558c955
Port: <none>
Host Port: <none>
Command:
kube-apiserver
--advertise-address=192.12.169.3
--allow-privileged=true
--authorization-mode=Node,RBAC
--client-ca-file=/etc/kubernetes/pki/ca.crt
--enable-admission-plugins=NodeRestriction
--enable-bootstrap-token-auth=true
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem
--etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem
--etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem
--etcd-servers=https://192.12.169.15:2379
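> (Extra) The same flags can also be read without describe, e.g. with a jsonpath sketch over the container's command field (pod name as above).
kubectl -n kube-system get pod kube-apiserver-cluster2-controlplane \
  -o jsonpath='{range .spec.containers[0].command[*]}{@}{"\n"}{end}' | grep etcd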
★ Check the path where the internal (stacked) ETCD stores its data.
> /var/lib/etcd
student-node ~ ➜ kubectl -n kube-system describe pods etcd-cluster1-controlplane
Name: etcd-cluster1-controlplane
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: cluster1-controlplane/192.12.169.24
Start Time: Sat, 27 May 2023 09:42:42 +0000
Labels: component=etcd
tier=control-plane
Annotations: kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.12.169.24:2379
kubernetes.io/config.hash: e4db54d1749cdfee9674a1e9e140fd0b
kubernetes.io/config.mirror: e4db54d1749cdfee9674a1e9e140fd0b
kubernetes.io/config.seen: 2023-05-27T09:42:25.092806611Z
kubernetes.io/config.source: file
seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status: Running
IP: 192.12.169.24
IPs:
IP: 192.12.169.24
Controlled By: Node/cluster1-controlplane
Containers:
etcd:
...
Volumes:
etcd-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki/etcd
HostPathType: DirectoryOrCreate
etcd-data:
Type: HostPath (bare host directory volume)
Path: /var/lib/etcd
HostPathType: DirectoryOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute op=Exists
Events: <none>
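> (Extra) The data path can also be grepped straight out of the pod spec instead of reading the whole describe output; both lines are a sketch.
kubectl -n kube-system describe pod etcd-cluster1-controlplane | grep -- --data-dir
kubectl -n kube-system get pod etcd-cluster1-controlplane \
  -o jsonpath='{.spec.volumes[?(@.name=="etcd-data")].hostPath.path}'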
★ Check the path where the external ETCD stores its data.
> SSH into the cluster2-controlplane node and check the processes.
> The --etcd-servers=https://192.12.169.15:2379 flag points at the external etcd server's IP.
cluster2-controlplane ~ ➜ ps -ef | grep etcd
root 1754 1380 0 09:41 ? 00:07:01 kube-apiserver --advertise-address=192.12.169.3 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem --etcd-servers=https://192.12.169.15:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root 14390 14207 0 11:50 pts/0 00:00:00 grep etcd
cluster2-controlplane ~ ➜
> Connect to the external etcd-server.
student-node ~ ➜ ssh 192.12.169.15
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, you can run the 'unminimize' command.
etcd-server ~ ➜
> Check the etcd process on the etcd-server.
> data-dir=/var/lib/etcd-data
etcd-server ~ ➜ ps -ef | grep etcd
etcd 917 1 0 10:21 ? 00:01:25 /usr/local/bin/etcd --name etcd-server --data-dir=/var/lib/etcd-data --cert-file=/etc/etcd/pki/etcd.pem --key-file=/etc/etcd/pki/etcd-key.pem --peer-cert-file=/etc/etcd/pki/etcd.pem --peer-key-file=/etc/etcd/pki/etcd-key.pem --trusted-ca-file=/etc/etcd/pki/ca.pem --peer-trusted-ca-file=/etc/etcd/pki/ca.pem --peer-client-cert-auth --client-cert-auth --initial-advertise-peer-urls https://192.12.169.15:2380 --listen-peer-urls https://192.12.169.15:2380 --advertise-client-urls https://192.12.169.15:2379 --listen-client-urls https://192.12.169.15:2379,https://127.0.0.1:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster etcd-server=https://192.12.169.15:2380 --initial-cluster-state new
root 1170 1005 0 11:51 pts/0 00:00:00 grep etcd
etcd-server ~ ➜
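> (Extra) Because etcd runs here as a systemd service named etcd (as the restore steps below use), the unit file is another place to read the data directory from; a sketch:
systemctl cat etcd | grep -- --data-dir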
★ How many nodes make up the external etcd-server?
> 1
etcd-server ~ ➜ ETCDCTL_API=3 etcdctl \
> --endpoints=https://127.0.0.1:2379 \
> --cacert=/etc/etcd/pki/ca.pem \
> --cert=/etc/etcd/pki/etcd.pem \
> --key=/etc/etcd/pki/etcd-key.pem \
> member list
c810de83f8ff6c49, started, etcd-server, https://192.12.169.15:2380, https://192.12.169.15:2379, false
etcd-server ~ ➜
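> (Extra) Adding -w table to etcdctl prints the member list in a more readable form; same endpoint and certificates as above.
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/etcd.pem --key=/etc/etcd/pki/etcd-key.pem \
  member list -w table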
★ Back up cluster1's etcd.
> Switch to the cluster1 context.
student-node ~ ➜ kubectl config use-context cluster1
Switched to context "cluster1".
student-node ~ ➜
> Check the endpoint the etcd pod advertises; the backup command uses this information.
student-node ~ ➜ kubectl describe pods -n kube-system etcd-cluster1-controlplane | grep advertise-client-urls
Annotations: kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.12.169.24:2379
--advertise-client-urls=https://192.12.169.24:2379
student-node ~ ➜
> Check the certificate paths the etcd pod uses; the backup command uses these as well.
student-node ~ ➜ kubectl describe pods -n kube-system etcd-cluster1-controlplane | grep pki
--cert-file=/etc/kubernetes/pki/etcd/server.crt
--key-file=/etc/kubernetes/pki/etcd/server.key
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd from etcd-certs (rw)
Path: /etc/kubernetes/pki/etcd
student-node ~ ➜
> Create the snapshot with the information gathered above (run on cluster1-controlplane after SSHing in).
cluster1-controlplane ~ ➜ ETCDCTL_API=3 etcdctl --endpoints=https://192.12.169.24:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/cluster1.db
Snapshot saved at /opt/cluster1.db
cluster1-controlplane ~ ➜
cluster1-controlplane ~ ➜ ls -al /opt/
total 2096
drwxr-xr-x 1 root root 4096 May 27 12:08 .
drwxr-xr-x 1 root root 4096 May 27 09:42 ..
-rw-r--r-- 1 root root 2117664 May 27 12:08 cluster1.db
drwxr-xr-x 1 root root 4096 Dec 20 10:08 cni
drwx--x--x 4 root root 4096 Dec 20 10:09 containerd
cluster1-controlplane ~ ➜
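> (Extra) Before copying the snapshot anywhere it can be sanity-checked; snapshot status prints its hash, revision and size (a sketch, still on cluster1-controlplane).
ETCDCTL_API=3 etcdctl snapshot status /opt/cluster1.db -w table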
> Leave the node and copy the snapshot back to the student node.
student-node ~ ✖ scp cluster1-controlplane:/opt/cluster1.db /opt
cluster1.db 100% 2068KB 83.8MB/s 00:00
student-node ~ ➜
student-node ~ ➜ ls -al /opt
total 2088
drwxr-xr-x 1 root root 4096 May 27 12:10 .
drwxr-xr-x 1 root root 4096 May 27 09:41 ..
-rw-r--r-- 1 root root 2117664 May 27 12:10 cluster1.db
drwxr-xr-x 1 root root 4096 May 27 09:41 .init
student-node ~ ➜
★ Restore cluster2's external etcd from the snapshot we have.
> Switch to the cluster2 context.
student-node ~ ➜ kubectl config use-context cluster2
Switched to context "cluster2".
student-node ~ ➜
> Copy the snapshot file to /root on the external etcd-server.
student-node ~ ➜ scp /opt/cluster2.db etcd-server:/root
cluster2.db 100% 2088KB 130.2MB/s 00:00
student-node ~ ➜
> SSH into the external etcd-server.
student-node ~ ➜ ssh 192.13.240.15
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, you can run the 'unminimize' command.
Last login: Sat May 27 12:20:19 2023 from 192.13.240.21
etcd-server ~ ➜
> Since the restore runs on the etcd-server itself, use 127.0.0.1 as the endpoint, the existing certificates, and /var/lib/etcd-data-new as the new data directory.
etcd-server ~ ➜ ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/etcd.pem --key=/etc/etcd/pki/etcd-key.pem snapshot restore /root/cluster2.db --data-dir /var/lib/etcd-data-new
{"level":"info","ts":1685190548.319989,"caller":"snapshot/v3_snapshot.go:296","msg":"restoring snapshot","path":"/root/cluster2.db","wal-dir":"/var/lib/etcd-data-new/member/wal","data-dir":"/var/lib/etcd-data-new","snap-dir":"/var/lib/etcd-data-new/member/snap"}
{"level":"info","ts":1685190548.3370929,"caller":"mvcc/kvstore.go:388","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":6336}
{"level":"info","ts":1685190548.3447402,"caller":"membership/cluster.go:392","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"0","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":1685190548.3543217,"caller":"snapshot/v3_snapshot.go:309","msg":"restored snapshot","path":"/root/cluster2.db","wal-dir":"/var/lib/etcd-data-new/member/wal","data-dir":"/var/lib/etcd-data-new","snap-dir":"/var/lib/etcd-data-new/member/snap"}
etcd-server ~ ➜
> Edit the etcd.service unit and change --data-dir to /var/lib/etcd-data-new (a sed one-liner for this is sketched after the unit file below).
etcd-server ~ ➜ vi /etc/systemd/system/etcd.service
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target
[Service]
User=etcd
Type=notify
ExecStart=/usr/local/bin/etcd \
--name etcd-server \
--data-dir=/var/lib/etcd-data \
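> (Extra) Instead of editing by hand, a sed one-liner can make the same change; the pattern assumes a space follows the old path, as in the unit shown above.
sed -i 's#--data-dir=/var/lib/etcd-data #--data-dir=/var/lib/etcd-data-new #' /etc/systemd/system/etcd.service
grep -- --data-dir /etc/systemd/system/etcd.service   # verify the change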
> Make sure the new directory has the right ownership (the etcd user that runs the service must own it):
etcd-server ~ ➜ chown -R etcd:etcd /var/lib/etcd-data-new/
etcd-server ~ ➜ ls -ld /var/lib/etcd-data-new/
drwx------ 3 etcd etcd 4096 May 27 12:29 /var/lib/etcd-data-new/
etcd-server ~ ➜
> Reload systemd and restart the etcd daemon.
etcd-server ~ ➜ systemctl daemon-reload
etcd-server ~ ➜ systemctl restart etcd
> Finally, it is recommended to restart the control plane components (e.g. kube-scheduler, kube-controller-manager, kubelet) so they do not depend on stale data.
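> (Extra) One common way to do that on a kubeadm control plane is to move the static pod manifests out and back so the kubelet recreates them, then restart the kubelet; this is a sketch, not part of the lab output.
ssh cluster2-controlplane
mkdir -p /tmp/manifests-backup
mv /etc/kubernetes/manifests/*.yaml /tmp/manifests-backup/    # kubelet stops the static pods
sleep 30
mv /tmp/manifests-backup/*.yaml /etc/kubernetes/manifests/    # kubelet recreates them
systemctl restart kubelet
kubectl get pods -n kube-system                               # wait until everything is Running again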