★ You are trying to access the cluster with the current context set to research, but something seems wrong. Identify and fix the problem.
-> Listing pods fails with an error (the user's client certificate cannot be read)
controlplane ~ ➜ kubectl get pods
error: unable to read client-cert /etc/kubernetes/pki/users/dev-user/developer-user.crt for dev-user due to open /etc/kubernetes/pki/users/dev-user/developer-user.crt: no such file or directory

controlplane ~ ✖
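The kubeconfig is pointing at a certificate file that does not exist. A minimal fix sketch: first check what files are actually on disk, then repoint the user's credentials. The filenames `dev-user.crt`/`dev-user.key` below are assumptions; substitute whatever the listing shows.

```shell
# See which certificate/key files really exist for this user
ls /etc/kubernetes/pki/users/dev-user/

# Repoint dev-user at the correct files (names here are assumed)
kubectl config set-credentials dev-user \
  --client-certificate=/etc/kubernetes/pki/users/dev-user/dev-user.crt \
  --client-key=/etc/kubernetes/pki/users/dev-user/dev-user.key

# Verify access is restored
kubectl get pods
```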
controlplane ~ ➜ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 8483000120492181273 (0x75b9abaa2b884719)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: May 29 11:43:33 2023 GMT
            Not After : May 28 11:43:33 2024 GMT
        Subject: CN = kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
...
★ What alternative names are configured on the kube-apiserver certificate?
-> DNS:controlplane, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.26.249.9
controlplane ~ ➜ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 8483000120492181273 (0x75b9abaa2b884719)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: May 29 11:43:33 2023 GMT
            Not After : May 28 11:43:33 2024 GMT
        Subject: CN = kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:a3:2f:ff:75:ed:b2:38:74:01:f9:b1:41:51:aa:
                    f5:bb:9a:39:02:46:c2:5b:05:b1:0e:8f:75:9b:46:
                    18:a5:35:52:2f:2d:22:3b:fe:37:e3:ea:98:32:c5:
                    79:b4:2d:1b:f2:67:cd:f6:7d:4e:fa:e8:a0:69:b4:
                    4b:c8:25:46:20:4b:ad:69:dd:fa:63:56:b4:5c:4f:
                    ce:b7:28:bb:43:de:59:5f:c6:e7:c7:16:08:11:cf:
                    28:b2:4a:7f:20:74:3d:f4:53:6a:b6:33:37:25:98:
                    3e:a7:02:56:da:1b:75:7a:39:bd:0a:31:d5:26:cb:
                    30:8b:3d:bf:a5:58:48:8c:a8:5d:b4:eb:51:0d:72:
                    52:32:85:60:0d:56:2f:46:3c:65:90:4a:9b:a3:01:
                    b3:d9:01:b2:d9:ea:70:68:38:49:d5:1a:29:9f:52:
                    b8:54:72:71:0c:4a:88:4b:73:63:6f:05:a0:b6:23:
                    03:31:12:be:c3:cf:6c:b7:2b:e6:4e:50:a1:1b:7f:
                    ab:2a:ba:5f:92:16:3d:4c:ac:d8:02:11:78:8b:bf:
                    4e:43:3b:e5:0c:57:fb:6f:8a:81:ef:51:7e:a3:92:
                    2a:de:2b:96:ae:95:2e:dc:e3:97:ce:c7:af:8d:42:
                    67:2c:6a:3a:fa:fa:67:79:d2:14:52:47:eb:65:ca:
                    53:af
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                keyid:AB:7D:E2:A1:2C:F0:E0:27:53:52:72:D8:C9:46:76:09:F8:77:0D:63
            X509v3 Subject Alternative Name:
                DNS:controlplane, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.26.249.9
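Instead of scanning the full `-text` dump, `openssl x509 -ext subjectAltName` (OpenSSL 1.1.1+) prints only the SAN section. Sketched below on a throwaway self-signed certificate so it is self-contained; on the exam node you would point `-in` at /etc/kubernetes/pki/apiserver.crt instead, and the SAN values here are illustrative.

```shell
# Create a demo cert with a SAN extension (illustrative values)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 1 -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:kubernetes,DNS:kubernetes.default,IP:10.96.0.1"

# Print only the Subject Alternative Name entries
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```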
student-node ~ ➜ kubectl get nodes
NAME                    STATUS   ROLES           AGE   VERSION
cluster1-controlplane   Ready    control-plane   39m   v1.24.0
cluster1-node01         Ready    <none>          39m   v1.24.0
> Check the two nodes in cluster2
student-node ~ ➜ kubectl config use-context cluster2
Switched to context "cluster2".

student-node ~ ➜ kubectl get nodes
NAME                    STATUS   ROLES           AGE   VERSION
cluster2-controlplane   Ready    control-plane   43m   v1.24.0
cluster2-node01         Ready    <none>          42m   v1.24.0

student-node ~ ➜
★ Multiple clusters exist. Connect to the nodes inside them via SSH.
student-node ~ ➜ kubectl get nodes
NAME                    STATUS   ROLES           AGE   VERSION
cluster1-controlplane   Ready    control-plane   44m   v1.24.0
cluster1-node01         Ready    <none>          44m   v1.24.0

student-node ~ ➜ ssh cluster1-controlplane
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

cluster1-controlplane ~ ➜ logout
Connection to cluster1-controlplane closed.

student-node ~ ➜
student-node ~ ➜ kubectl get nodes
NAME                    STATUS   ROLES           AGE    VERSION
cluster1-controlplane   Ready    control-plane   100m   v1.24.0
cluster1-node01         Ready    <none>          99m    v1.24.0
> Log into cluster1-controlplane.
student-node ~ ➜ ssh cluster1-controlplane
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

cluster1-controlplane ~ ➜
student-node ~ ➜ ssh cluster2-controlplane
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.
Last login: Sat May 27 11:31:10 2023 from 192.12.169.22

cluster2-controlplane ~ ➜
student-node ~ ➜ ssh 192.12.169.15
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

etcd-server ~ ➜
student-node ~ ➜ ssh 192.13.240.15
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.
Last login: Sat May 27 12:20:19 2023 from 192.13.240.21

etcd-server ~ ➜
> Since the restore is performed on the etcd-server itself, use 127.0.0.1 as the endpoint, use the default certificates, and use /var/lib/etcd-data-new as the data path.
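A rough sketch of that restore under those constraints. Only the endpoint (127.0.0.1) and the new data directory come from the note above; the certificate paths, snapshot filename, and systemd unit name are assumptions that must be checked on the actual etcd-server.

```shell
# Talk to the local member over 127.0.0.1 with the default cert
# locations (paths below are typical for an external etcd and are
# assumptions; verify them in the etcd service definition)
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/pki/ca.pem \
  --cert=/etc/etcd/pki/etcd.pem \
  --key=/etc/etcd/pki/etcd-key.pem \
  member list

# Restore the snapshot into the new data directory from the note
# (snapshot path is hypothetical; use the actual backup file)
ETCDCTL_API=3 etcdctl snapshot restore /root/etcd-backup.db \
  --data-dir /var/lib/etcd-data-new

# Give the etcd user ownership of the new directory, point the
# service's --data-dir at it, then restart etcd
chown -R etcd:etcd /var/lib/etcd-data-new
vi /etc/systemd/system/etcd.service   # set --data-dir=/var/lib/etcd-data-new
systemctl daemon-reload
systemctl restart etcd
```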
★ You need to upgrade the cluster. Users accessing the applications must not be affected, and you cannot provision new VMs. Which approach will you use to upgrade the cluster?
-> Upgrade one node at a time, moving the workloads on each worker node to the other nodes first
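That strategy — drain, upgrade, uncordon, one node at a time — looks roughly like this (the node name is illustrative):

```shell
# Evict the node's pods so they get rescheduled onto other nodes;
# this also cordons the node (marks it unschedulable)
kubectl drain node01 --ignore-daemonsets

# ...upgrade the kubelet/kubeadm packages on node01 here...

# Make the node schedulable again, then repeat for the next node
kubectl uncordon node01
```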
★ What is the latest stable version of Kubernetes right now?
-> v1.27.2
controlplane ~ ➜ kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.25.0
[upgrade/versions] kubeadm version: v1.25.0
I0519 23:10:12.713401   17274 version.go:256] remote version is much newer: v1.27.2; falling back to: stable-1.25
[upgrade/versions] Target version: v1.25.10
[upgrade/versions] Latest version in the v1.25 series: v1.25.10
★ With the currently installed version of the kubeadm tool, what is the latest version you can upgrade to?
-> v1.25.10
controlplane ~ ➜ kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.25.0
[upgrade/versions] kubeadm version: v1.25.0
I0519 23:10:12.713401   17274 version.go:256] remote version is much newer: v1.27.2; falling back to: stable-1.25
[upgrade/versions] Target version: v1.25.10
[upgrade/versions] Latest version in the v1.25 series: v1.25.10

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     2 x v1.25.0   v1.25.10

Upgrade to the latest version in the v1.25 series:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.25.0   v1.25.10
kube-controller-manager   v1.25.0   v1.25.10
kube-scheduler            v1.25.0   v1.25.10
kube-proxy                v1.25.0   v1.25.10
CoreDNS                   v1.9.3    v1.9.3
etcd                      3.5.4-0   3.5.4-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.25.10

Note: Before you can perform this upgrade, you have to update kubeadm to v1.25.10.
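As the plan output notes, kubeadm itself must be updated to v1.25.10 before the upgrade can be applied. On an apt-based control plane the sequence is roughly the following (the `-00` package suffix follows the old apt.kubernetes.io repo convention and may differ on other setups):

```shell
# 1. Update the kubeadm package first
apt-get update
apt-get install -y kubeadm=1.25.10-00

# 2. Apply the control-plane upgrade the plan suggested
kubeadm upgrade apply v1.25.10

# 3. Then upgrade kubelet/kubectl and restart the kubelet
apt-get install -y kubelet=1.25.10-00 kubectl=1.25.10-00
systemctl daemon-reload
systemctl restart kubelet
```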