
 

 

2023.10.03

I finally took the CKA exam I had been preparing for since last year and earned the certification.

I registered during the 2022 Cyber Monday sale with a 50% discount: $197.50 instead of the list price of $395.

It was not as hard as I had expected.

 

 

  •  Registration site

https://training.linuxfoundation.org/certification/certified-kubernetes-administrator-cka/

 

 

 

 

 

 

Study resources

1. Mumshad's lectures

2. Kubernetes official documentation

3. KodeKloud (Mumshad's hands-on labs)

4. Killer.sh (two sessions are included with exam registration; it is harder than the real exam, so keep re-solving it up to the last day)

 

 

 

A few small but handy tips!

1. Learn basic jsonpath usage — easy points for little effort (see the examples after this list)

2. Get reasonably familiar with the kubectl cheat sheet

 - https://kubernetes.io/docs/reference/kubectl/cheatsheet/

3. Use bookmarks into the Kubernetes docs to grab YAML templates quickly

4. Always switch to the correct context at the start of the exam — always, always, always! (see the examples after this list)

5. The remote exam screen is quite small, so use a high-resolution monitor

6. The exam environment has improved a lot and is faster and smoother than you might expect
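
For reference, a couple of minimal examples for tips 1 and 4 (the context name is a placeholder; the actual names are given in each question):

kubectl get nodes -o jsonpath='{.items[*].metadata.name}'
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeletVersion}{"\n"}{end}'

kubectl config use-context <context-given-in-the-question>
kubectl config current-context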

 

 

Questions that appeared on the exam (only the parts I remember...)

1. Print and save information about the currently available Nodes (Node name, Taint=NoSchedule, list of schedulable nodes)

2. Print and save the running Pods that carry a specific Label

3. Adjust the scale of a StatefulSet

4. Adjust the scale of a Deployment and update the image of its Pods

5. Upgrade only the control plane node of the cluster (kubeadm, kubelet, kubectl)

6. Run two containers in a single Pod (sidecar, emptyDir) — see the sketch after this list

7. Simple creation tasks: creating a Deployment, creating Pods, and so on

8. Creating a Role, RoleBinding, and ServiceAccount for a user and binding them together

9. Setting up a ClusterRole and ClusterRoleBinding

10. Storage: creating a PV and PVC and attaching them to a Pod

11. The highest-scoring question: Node troubleshooting
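
For question 6, a rough sketch of what such a two-container Pod could look like (names, images, and paths are my own placeholders, not the actual exam spec):

apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "while true; do date >> /var/log/app.log; sleep 5; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "tail -f /var/log/app.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log
  volumes:
  - name: shared-logs
    emptyDir: {}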

 

 

 

Next goal: earn the CKAD by the end of November!!

Let's go!!

 

 

 


 

2023.06.06

★ How many clusterroles are defined in the cluster?

-> 69

controlplane ~ ➜ kubectl get clusterrole --no-headers | wc -l
69
controlplane ~ ➜

 

 

How many clusterrolebindings exist in the cluster?

-> 54

controlplane ~ ➜ kubectl get clusterrolebindings --no-headers | wc -l
54
controlplane ~ ➜

 

 

Which namespace does the cluster-admin clusterrole belong to?

-> Cluster roles are cluster-wide; they are not part of any namespace.

controlplane ~ ➜ kubectl describe clusterrole cluster-admin Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: ​​Resources Non-Resource URLs Resource Names Verbs ​​--------- ----------------- -------------- ----- ​​*.* [] [] [*] ​​​​​​​​​​​​​[*] [] [*] controlplane ~ ➜

 

 

Which user/group is the cluster-admin role bound to?

-> Group: system:masters

controlplane ~ ➜ kubectl describe clusterrolebinding cluster-admin Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: ​​Kind: ClusterRole ​​Name: cluster-admin Subjects: ​​Kind Name Namespace ​​---- ---- --------- ​​Group system:masters controlplane ~ ➜

 

 

What level of permissions does the cluster-admin role grant?

-> It can perform any action on any resource in the cluster.

controlplane ~ ➜ kubectl describe clusterrole cluster-admin Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: ​​Resources Non-Resource URLs Resource Names Verbs ​​--------- ----------------- -------------- ----- ​​*.* [] [] [*] ​​​​​​​​​​​​​[*] [] [*] controlplane ~ ➜

 

 

A new user, michelle, has joined the team. Create the ClusterRole and ClusterRoleBinding she needs to access nodes.

-> Dump the existing cluster-admin role to a YAML file as a starting point

controlplane ~ ➜ kubectl get clusterrole cluster-admin -o yaml > michelle.yaml
controlplane ~ ➜

-> Check the copied YAML file

controlplane ~ ➜ cat michelle.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: ​​annotations: ​​​​rbac.authorization.kubernetes.io/autoupdate: "true" ​​creationTimestamp: "2023-06-06T07:08:22Z" ​​labels: ​​​​kubernetes.io/bootstrapping: rbac-defaults ​​name: cluster-admin ​​resourceVersion: "68" ​​uid: 0d93b435-10ba-475e-ae77-d46510f93d75 rules: - apiGroups: ​​- '*' ​​resources: ​​- '*' ​​verbs: ​​- '*' - nonResourceURLs: ​​- '*' ​​verbs: ​​- '*'

-> Edit the copied YAML file

controlplane ~ ➜ cat michelle.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-admin
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "watch", "list", "create", "delete"]
controlplane ~ ➜

-> Create the new clusterrole

controlplane ~ ➜ kubectl create -f michelle.yaml
clusterrole.rbac.authorization.k8s.io/node-admin created
controlplane ~ ➜

-> Copy an existing clusterrolebinding as a template

controlplane ~ ➜ kubectl get clusterrolebinding system:basic-user -o yaml > michelle-binding.yaml
controlplane ~ ➜

-> Check the contents

controlplane ~ ➜ cat michelle-binding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: ​​annotations: ​​​​rbac.authorization.kubernetes.io/autoupdate: "true" ​​creationTimestamp: "2023-06-06T07:08:24Z" ​​labels: ​​​​kubernetes.io/bootstrapping: rbac-defaults ​​name: system:basic-user ​​resourceVersion: "138" ​​uid: 8ec82b9d-2758-4347-a0d2-25ac08eb17b6 roleRef: ​​apiGroup: rbac.authorization.k8s.io ​​kind: ClusterRole ​​name: system:basic-user subjects: - apiGroup: rbac.authorization.k8s.io ​​kind: Group ​​name: system:authenticated controlplane ~ ➜

-> Edit the contents

controlplane ~ ➜ cat michelle-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: michelle-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: michelle
controlplane ~ ➜

-> Create the clusterrolebinding

controlplane ~ ➜ kubectl create -f michelle-binding.yaml
clusterrolebinding.rbac.authorization.k8s.io/michelle-binding created
controlplane ~ ➜
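
The same result can also be reached imperatively, and the access can then be verified with kubectl auth can-i (these commands are my own addition, not part of the original lab output):

kubectl create clusterrole node-admin --verb=get,watch,list,create,delete --resource=nodes
kubectl create clusterrolebinding michelle-binding --clusterrole=node-admin --user=michelle
kubectl auth can-i list nodes --as michelle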

 

 

Michelle's responsibilities are growing, and she is now in charge of storage as well. Create the ClusterRole and ClusterRoleBinding she needs to access storage.

-> Create the storage-admin role

controlplane ~ ➜ kubectl create clusterrole storage-admin --resource=persistentvolumes,storageclasses --verb=get,list,create,delete,watch
clusterrole.rbac.authorization.k8s.io/storage-admin created
controlplane ~ ➜

-> Check the role

controlplane ~ ➜ kubectl describe clusterrole storage-admin Name: storage-admin Labels: <none> Annotations: <none> PolicyRule: ​​Resources Non-Resource URLs Resource Names Verbs ​​--------- ----------------- -------------- ----- ​​persistentvolumes [] [] [get list create delete watch] ​​storageclasses.storage.k8s.io [] [] [get list create delete watch] controlplane ~ ➜

-> Create the michelle-storage-admin binding

controlplane ~ ➜ kubectl create clusterrolebinding michelle-storage-admin --user=michelle --clusterrole=storage-admin
clusterrolebinding.rbac.authorization.k8s.io/michelle-storage-admin created
controlplane ~ ➜

-> Check the binding

controlplane ~ ➜ kubectl describe clusterrolebindings michelle-storage-admin Name: michelle-storage-admin Labels: <none> Annotations: <none> Role: ​​Kind: ClusterRole ​​Name: storage-admin Subjects: ​​Kind Name Namespace ​​---- ---- --------- ​​User michelle controlplane ~ ➜

 


 

 

2023.06.06

★ Identify the authorization modes configured on the kube-apiserver.

->  Node, RBAC

controlplane ~ ➜ kubectl get pods -A NAMESPACE NAME READY STATUS RESTARTS AGE blue blue-app 1/1 Running 0 3m50s blue dark-blue-app 1/1 Running 0 3m50s default red-84c985b67c-ncs4w 1/1 Running 0 3m51s default red-84c985b67c-whbgx 1/1 Running 0 3m51s kube-flannel kube-flannel-ds-fmzvv 1/1 Running 0 7m12s kube-system coredns-787d4945fb-p9td5 1/1 Running 0 7m11s kube-system coredns-787d4945fb-qqzl2 1/1 Running 0 7m12s kube-system etcd-controlplane 1/1 Running 0 7m22s kube-system kube-apiserver-controlplane 1/1 Running 0 7m24s kube-system kube-controller-manager-controlplane 1/1 Running 0 7m27s kube-system kube-proxy-r8dbm 1/1 Running 0 7m12s kube-system kube-scheduler-controlplane 1/1 Running 0 7m22s controlplane ~ ➜ controlplane ~ ➜ kubectl describe pods -n kube-system kube-apiserver-controlplane | grep authorization ​​​​​​--authorization-mode=Node,RBAC controlplane ~ ➜

 

 

★ How many roles exist in the default namespace?

->  0

controlplane ~ ➜ kubectl get role -n default
No resources found in default namespace.
controlplane ~ ➜

 

 

★ How many roles exist across all namespaces in total?

-> 12

controlplane ~ ➜ kubectl get role -A NAMESPACE NAME CREATED AT blue developer 2023-06-06T05:54:21Z kube-public kubeadm:bootstrap-signer-clusterinfo 2023-06-06T05:50:46Z kube-public system:controller:bootstrap-signer 2023-06-06T05:50:45Z kube-system extension-apiserver-authentication-reader 2023-06-06T05:50:44Z kube-system kube-proxy 2023-06-06T05:50:48Z kube-system kubeadm:kubelet-config 2023-06-06T05:50:45Z kube-system kubeadm:nodes-kubeadm-config 2023-06-06T05:50:45Z kube-system system::leader-locking-kube-controller-manager 2023-06-06T05:50:45Z kube-system system::leader-locking-kube-scheduler 2023-06-06T05:50:45Z kube-system system:controller:bootstrap-signer 2023-06-06T05:50:44Z kube-system system:controller:cloud-provider 2023-06-06T05:50:44Z kube-system system:controller:token-cleaner 2023-06-06T05:50:45Z controlplane ~ ➜

 

 

★ Which resources is the kube-proxy role in the kube-system namespace granted access to?

->  configmaps

controlplane ~ ➜ kubectl describe role -n kube-system kube-proxy Name: kube-proxy Labels: <none> Annotations: <none> PolicyRule: ​​Resources Non-Resource URLs Resource Names Verbs ​​--------- ----------------- -------------- ----- ​​configmaps [] [kube-proxy] [get] controlplane ~ ➜

 

★ Which actions can the kube-proxy role perform on configmaps?

-> get

-> The kube-proxy role can only get the details of the configmap object named kube-proxy.

controlplane ~ ➜ kubectl describe role -n kube-system kube-proxy Name: kube-proxy Labels: <none> Annotations: <none> PolicyRule: ​​Resources Non-Resource URLs Resource Names Verbs ​​--------- ----------------- -------------- ----- ​​configmaps [] [kube-proxy] [get] controlplane ~ ➜

 

 

★ Which account is the kube-proxy role assigned to?

-> Group:system:bootstrappers:kubeadm:default-node-token  

controlplane ~ ➜ kubectl describe rolebindings -n kube-system kube-proxy Name: kube-proxy Labels: <none> Annotations: <none> Role: ​​Kind: Role ​​Name: kube-proxy Subjects: ​​Kind Name Namespace ​​---- ---- --------- ​​Group system:bootstrappers:kubeadm:default-node-token controlplane ~ ➜

 

 

★ Create the role and rolebinding required for the dev-user to create, list, and delete pods in the default namespace.

-> Create the role

controlplane ~ ➜ kubectl create role developer --namespace=default --verb=list,create,delete --resource=pods
role.rbac.authorization.k8s.io/developer created
controlplane ~ ➜

-> Create the rolebinding

controlplane ~ ➜ kubectl create rolebinding dev-user-binding --namespace=default --role=developer --user=dev-user
rolebinding.rbac.authorization.k8s.io/dev-user-binding created
controlplane ~ ➜

-> Confirm creation

controlplane ~ ➜ kubectl get role NAME CREATED AT developer 2023-06-06T06:23:29Z controlplane ~ ➜ controlplane ~ ➜ kubectl get rolebindings NAME ROLE AGE dev-user-binding Role/developer 63s controlplane ~ ➜

-> Check the details

controlplane ~ ➜ kubectl describe role developer Name: developer Labels: <none> Annotations: <none> PolicyRule: ​​Resources Non-Resource URLs Resource Names Verbs ​​--------- ----------------- -------------- ----- ​​pods [] [] [list create delete] controlplane ~ ➜ controlplane ~ ➜ kubectl describe rolebindings dev-user-binding Name: dev-user-binding Labels: <none> Annotations: <none> Role: ​​Kind: Role ​​Name: developer Subjects: ​​Kind Name Namespace ​​---- ---- --------- ​​User dev-user controlplane ~ ➜
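
To double-check that the permissions behave as intended, kubectl auth can-i with impersonation can be used (my own addition, not part of the original lab output):

kubectl auth can-i list pods --namespace=default --as dev-user
kubectl auth can-i delete nodes --as dev-user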

 

 

★ Bind the developer role to the dark-blue-app pod.

-> The Resource Names field currently points to blue-app, so it has to be changed.

controlplane ~ ➜ kubectl describe role -n blue developer Name: developer Labels: <none> Annotations: <none> PolicyRule: ​​Resources Non-Resource URLs Resource Names Verbs ​​--------- ----------------- -------------- ----- ​​pods [] [blue-app] [get watch create delete] controlplane ~ ➜

-> Update the pod name

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2023-06-06T05:54:21Z"
  name: developer
  namespace: blue
  resourceVersion: "3780"
  uid: cf0b171d-870f-4cf8-a585-33d9294ccf95
rules:
- apiGroups:
  - ""
  resourceNames:
  - dark-blue-app
  resources:
  - pods
  verbs:
  - get
  - watch
  - create
  - delete

-> Verify

controlplane ~ ➜ kubectl edit role -n blue developer role.rbac.authorization.k8s.io/developer edited controlplane ~ ➜ controlplane ~ ➜ controlplane ~ ➜ kubectl describe role -n blue Name: developer Labels: <none> Annotations: <none> PolicyRule: ​​Resources Non-Resource URLs Resource Names Verbs ​​--------- ----------------- -------------- ----- ​​pods [] [dark-blue-app] [get watch create delete] controlplane ~ ➜

 

 

★ Add a new rule to the existing developer role to grant dev-user permission to create deployments in the blue namespace.

-> Append a new apiGroups block to the existing role, as shown below

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2023-06-06T05:54:21Z"
  name: developer
  namespace: blue
  resourceVersion: "4827"
  uid: cf0b171d-870f-4cf8-a585-33d9294ccf95
rules:
- apiGroups:
  - apps
  resourceNames:
  - dark-blue-app
  resources:
  - pods
  verbs:
  - get
  - watch
  - create
  - delete
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - create

-> Edit completed

controlplane ~ ➜ kubectl edit role -n blue developer
role.rbac.authorization.k8s.io/developer edited
controlplane ~ ➜

- Check the role details

controlplane ~ ➜ kubectl describe role -n blue developer Name: developer Labels: <none> Annotations: <none> PolicyRule: ​​Resources Non-Resource URLs Resource Names Verbs ​​--------- ----------------- -------------- ----- ​​deployments.apps [] [] [create] ​​pods.apps [] [dark-blue-app] [get watch create delete] controlplane ~ ➜
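
Again, a quick impersonation check confirms the new rule (my own addition):

kubectl auth can-i create deployments --namespace=blue --as dev-user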

 


 

 

2023.05.30

★ How many clusters are defined in the kubeconfig file?

-> 1

controlplane ~ ➜ kubectl config view apiVersion: v1 clusters: - cluster: ​​​​certificate-authority-data: DATA+OMITTED ​​​​server: https://controlplane:6443 ​​name: kubernetes contexts: - context: ​​​​cluster: kubernetes ​​​​user: kubernetes-admin ​​name: kubernetes-admin@kubernetes current-context: kubernetes-admin@kubernetes kind: Config preferences: {} users: - name: kubernetes-admin ​​user: ​​​​client-certificate-data: DATA+OMITTED ​​​​client-key-data: DATA+OMITTED controlplane ~ ➜

 

 

★ How many users are defined in the kubeconfig file?

-> 1

controlplane ~ ➜ kubectl config view apiVersion: v1 clusters: - cluster: ​​​​certificate-authority-data: DATA+OMITTED ​​​​server: https://controlplane:6443 ​​name: kubernetes contexts: - context: ​​​​cluster: kubernetes ​​​​user: kubernetes-admin ​​name: kubernetes-admin@kubernetes current-context: kubernetes-admin@kubernetes kind: Config preferences: {} users: - name: kubernetes-admin ​​user: ​​​​client-certificate-data: DATA+OMITTED ​​​​client-key-data: DATA+OMITTED controlplane ~ ➜

 

 

 

★ How many contexts are defined in the kubeconfig file?

-> 1

controlplane ~ ➜ kubectl config view apiVersion: v1 clusters: - cluster: ​​​​certificate-authority-data: DATA+OMITTED ​​​​server: https://controlplane:6443 ​​name: kubernetes contexts: - context: ​​​​cluster: kubernetes ​​​​user: kubernetes-admin ​​name: kubernetes-admin@kubernetes current-context: kubernetes-admin@kubernetes kind: Config preferences: {} users: - name: kubernetes-admin ​​user: ​​​​client-certificate-data: DATA+OMITTED ​​​​client-key-data: DATA+OMITTED controlplane ~ ➜

 

 

★ What is the user name configured in the context?

-> kubernetes-admin

controlplane ~ ➜ kubectl config view apiVersion: v1 clusters: - cluster: ​​​​certificate-authority-data: DATA+OMITTED ​​​​server: https://controlplane:6443 ​​name: kubernetes contexts: - context: ​​​​cluster: kubernetes ​​​​user: kubernetes-admin ​​name: kubernetes-admin@kubernetes current-context: kubernetes-admin@kubernetes kind: Config preferences: {} users: - name: kubernetes-admin ​​user: ​​​​client-certificate-data: DATA+OMITTED ​​​​client-key-data: DATA+OMITTED controlplane ~ ➜

 

 

★ What is the cluster name configured in the context?

-> kubernetes

controlplane ~ kubectl config view apiVersion: v1 clusters: - cluster: ​​​​certificate-authority-data: DATA+OMITTED ​​​​server: https://controlplane:6443 ​​name: kubernetes contexts: - context: ​​​​cluster: kubernetes ​​​​user: kubernetes-admin ​​name: kubernetes-admin@kubernetes current-context: kubernetes-admin@kubernetes kind: Config preferences: {} users: - name: kubernetes-admin ​​user: ​​​​client-certificate-data: DATA+OMITTED ​​​​client-key-data: DATA+OMITTED controlplane ~

 

 

★ How many clusters are configured in my-kube-config?

-> 4

★ How many contexts are configured in my-kube-config?

-> 4

★ In my-kube-config, which user is configured for the research context?

-> dev-user

★ In my-kube-config, what is the certificate file name for aws-user?

-> aws-user.crt

controlplane ~ ➜ kubectl config view --kubeconfig my-kube-config apiVersion: v1 clusters: - cluster: ​​​​certificate-authority: /etc/kubernetes/pki/ca.crt ​​​​server: https://controlplane:6443 ​​name: development - cluster: ​​​​certificate-authority: /etc/kubernetes/pki/ca.crt ​​​​server: https://controlplane:6443 ​​name: kubernetes-on-aws - cluster: ​​​​certificate-authority: /etc/kubernetes/pki/ca.crt ​​​​server: https://controlplane:6443 ​​name: production - cluster: ​​​​certificate-authority: /etc/kubernetes/pki/ca.crt ​​​​server: https://controlplane:6443 ​​name: test-cluster-1 contexts: - context: ​​​​cluster: kubernetes-on-aws ​​​​user: aws-user ​​name: aws-user@kubernetes-on-aws - context: ​​​​cluster: test-cluster-1 ​​​​user: dev-user ​​name: research - context: ​​​​cluster: development ​​​​user: test-user ​​name: test-user@development - context: ​​​​cluster: production ​​​​user: test-user ​​name: test-user@production current-context: test-user@development kind: Config preferences: {} users: - name: aws-user ​​user: ​​​​client-certificate: /etc/kubernetes/pki/users/aws-user/aws-user.crt ​​​​client-key: /etc/kubernetes/pki/users/aws-user/aws-user.key - name: dev-user ​​user: ​​​​client-certificate: /etc/kubernetes/pki/users/dev-user/developer-user.crt ​​​​client-key: /etc/kubernetes/pki/users/dev-user/dev-user.key - name: test-user ​​user: ​​​​client-certificate: /etc/kubernetes/pki/users/test-user/test-user.crt ​​​​client-key: /etc/kubernetes/pki/users/test-user/test-user.key controlplane ~ ➜

 

 

★ What is the current context set to in the my-kube-config file?

-> test-user@development

controlplane ~ ➜ kubectl config current-context --kubeconfig my-kube-config
test-user@development
controlplane ~ ➜

 

 

★ You want to access test-cluster-1 as dev-user. Set the current context to the correct one.

-> Switch to the research context

controlplane ~ ➜ kubectl config --kubeconfig=/root/my-kube-config use-context research
Switched to context "research".
controlplane ~ ➜

-> Check the current context

controlplane ~ ➜ kubectl config --kubeconfig=/root/my-kube-config current-context
research
controlplane ~ ➜

 

 

 

★ Make the my-kube-config file the default kubeconfig.

-> Move it over the ~/.kube/config file.

controlplane ➜ mv my-kube-config /root/.kube/config

-> Verify.

controlplane ~ ➜ kubectl config view apiVersion: v1 clusters: - cluster: ​​​​certificate-authority: /etc/kubernetes/pki/ca.crt ​​​​server: https://controlplane:6443 ​​name: development - cluster: ​​​​certificate-authority: /etc/kubernetes/pki/ca.crt ​​​​server: https://controlplane:6443 ​​name: kubernetes-on-aws - cluster: ​​​​certificate-authority: /etc/kubernetes/pki/ca.crt ​​​​server: https://controlplane:6443 ​​name: production - cluster: ​​​​certificate-authority: /etc/kubernetes/pki/ca.crt ​​​​server: https://controlplane:6443 ​​name: test-cluster-1 contexts: - context: ​​​​cluster: kubernetes-on-aws ​​​​user: aws-user ​​name: aws-user@kubernetes-on-aws - context: ​​​​cluster: test-cluster-1 ​​​​user: dev-user ​​name: research - context: ​​​​cluster: development ​​​​user: test-user ​​name: test-user@development - context: ​​​​cluster: production ​​​​user: test-user ​​name: test-user@production current-context: research kind: Config preferences: {} users: - name: aws-user ​​user: ​​​​client-certificate: /etc/kubernetes/pki/users/aws-user/aws-user.crt ​​​​client-key: /etc/kubernetes/pki/users/aws-user/aws-user.key - name: dev-user ​​user: ​​​​client-certificate: /etc/kubernetes/pki/users/dev-user/developer-user.crt ​​​​client-key: /etc/kubernetes/pki/users/dev-user/dev-user.key - name: test-user ​​user: ​​​​client-certificate: /etc/kubernetes/pki/users/test-user/test-user.crt ​​​​client-key: /etc/kubernetes/pki/users/test-user/test-user.key controlplane ~ ➜
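
As a side note, instead of moving the file, the KUBECONFIG environment variable can also point kubectl at a non-default config for the current shell session (my own addition, assuming the same file path):

export KUBECONFIG=/root/my-kube-config
kubectl config current-context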

 

 

★ With the current context set to research, you are trying to access the cluster, but something seems wrong. Identify and fix the issue.

-> Listing pods fails with an error (the client certificate cannot be read)

controlplane ~ ➜ kubectl get pods
error: unable to read client-cert /etc/kubernetes/pki/users/dev-user/developer-user.crt for dev-user due to open /etc/kubernetes/pki/users/dev-user/developer-user.crt: no such file or directory
controlplane ~ ✖

-> Look at the config and check the certificate/key paths under the users section

controlplane ~ ✖ kubectl config view users: - name: aws-user ​​user: ​​​​client-certificate: /etc/kubernetes/pki/users/aws-user/aws-user.crt ​​​​client-key: /etc/kubernetes/pki/users/aws-user/aws-user.key - name: dev-user ​​user: ​​​​client-certificate: /etc/kubernetes/pki/users/dev-user/developer-user.crt ​​​​client-key: /etc/kubernetes/pki/users/dev-user/dev-user.key - name: test-user ​​user: ​​​​client-certificate: /etc/kubernetes/pki/users/test-user/test-user.crt ​​​​client-key: /etc/kubernetes/pki/users/test-user/test-user.key controlplane ~ ➜

-> Fix the client-certificate path: /etc/kubernetes/pki/users/dev-user/developer-user.crt should be dev-user.crt

controlplane ~ ➜ kubectl config view users: - name: aws-user ​​user: ​​​​client-certificate: /etc/kubernetes/pki/users/aws-user/aws-user.crt ​​​​client-key: /etc/kubernetes/pki/users/aws-user/aws-user.key - name: dev-user ​​user: ​​​​client-certificate: /etc/kubernetes/pki/users/dev-user/dev-user.crt ​​​​client-key: /etc/kubernetes/pki/users/dev-user/dev-user.key - name: test-user ​​user: ​​​​client-certificate: /etc/kubernetes/pki/users/test-user/test-user.crt ​​​​client-key: /etc/kubernetes/pki/users/test-user/test-user.key controlplane ~ ➜

-> Check the pods

controlplane ~ ➜ kubectl get pods -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-flannel kube-flannel-ds-2mr4z 1/1 Running 0 54m kube-system coredns-787d4945fb-bxctq 1/1 Running 0 54m kube-system coredns-787d4945fb-r7hz5 1/1 Running 0 54m kube-system etcd-controlplane 1/1 Running 0 54m kube-system kube-apiserver-controlplane 1/1 Running 0 54m kube-system kube-controller-manager-controlplane 1/1 Running 0 54m kube-system kube-proxy-f27wr 1/1 Running 0 54m kube-system kube-scheduler-controlplane 1/1 Running 0 54m controlplane ~ ➜
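
Incidentally, instead of editing the kubeconfig by hand, the same fix could be applied with kubectl config set-credentials (a sketch, assuming the file names shown above):

kubectl config set-credentials dev-user \
  --client-certificate=/etc/kubernetes/pki/users/dev-user/dev-user.crt \
  --client-key=/etc/kubernetes/pki/users/dev-user/dev-user.key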

 

 

 

 



2023.05.30

★ A new person has joined. To give them access to the cluster, create a CertificateSigningRequest object named akshay using the contents of the akshay.csr file.

-> Encode the CSR file as a base64 string.

controlplane ~ ➜ cat akshay.csr | base64 -w 0 LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0dZV3R6YUdGNU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXcyeXNFaXUwKzJVMVhzbUtPMUx6WmE4ekp5ZlgxMXhNL3ZlZXVyYmtqcmpFCnNSSEhVei9zWTlRMXA2TUh1U2xjcUFSTU1OVnYvNU9EOUJnZmdlekxIZFluS0dnSGJubDVkZTdud2FsamRDU2UKZ0Z5Vmovamh0L210cFBlc1ZTcU1xRjh2U2dHS2ZoVTRrWG5Fc3BxeXQwREdIRTVQM3NaQ2Vua2cxU3NEajZmagpnK1pvRzUzKzZncnBRSmQzdm1XTDhIN0hhL2xBVXhEa3BRUW9kNGU5REdLeEVJVzFDSE5vcUFTaVRtdWo0d0lyCmRhSWJJMnVKSm1VWGdJN0dPRjd3MkdsZktjRG90VmVzSk5RcFNtVDFQT1JCRS9BQnVCZXY1eGFsVCt5aUNNWHQKWXY1MmZWSkVFN2c5bHZ2SnA2cjMrSTRSbG5iU2Fnek5xL0tiaXNBZDlRSURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRFQ4emt4STVPTDIrbHp2T1VsU085UkZ1SGJPMEtEbjhrZkFLdk5LcUxYSFN1VlgrZ2dpClNDNGl0a0pWRCtBVVVJbmhmY2gyU3V3V0I2OTV4bERlRHd1WW0rK0ExY1Ztc3V1VEs3cXVlRkhsaDFpUXR3cUwKTGE5NU4zcHZyUUcyWC9lazhEOC93T0Z4bDF3WDdXakJiWC92RnMzaFBQNzViZVJkbHVZUG13RnZ5UWhRK3lyYQp0SVEwWXdwUUxnQUJQV0VObEtFZUpWeHZxVGtwNHMzWXczVEZ3WThNdUxrSEU3MVFWaDhyZUlTQUVWUGxWdHUzCnhyZ0dOTzgwdDFDN2cxUEJEUWpqZWNEQnFuQm52RHhYNFF1a0xjalpzNHVVTzhubW1lZWdWZm5LQTl5UEcvbk4KdG92STRLRUwvUE5CbSt0UHYvclhqdzl1Zy9kbkQ3V2tkeEU9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo= controlplane ~ ➜

-> Put the encoded value into a YAML file and save it.

controlplane ~ ➜ vi akshay-csr.yaml controlplane ~ ➜ cat akshay-csr.yaml --- apiVersion: certificates.k8s.io/v1 kind: CertificateSigningRequest metadata: ​​name: akshay spec: ​​groups: ​​- system:authenticated ​​request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0dZV3R6YUdGNU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQXcyeXNFaXUwKzJVMVhzbUtPMUx6WmE4ekp5ZlgxMXhNL3ZlZXVyYmtqcmpFCnNSSEhVei9zWTlRMXA2TUh1U2xjcUFSTU1OVnYvNU9EOUJnZmdlekxIZFluS0dnSGJubDVkZTdud2FsamRDU2UKZ0Z5Vmovamh0L210cFBlc1ZTcU1xRjh2U2dHS2ZoVTRrWG5Fc3BxeXQwREdIRTVQM3NaQ2Vua2cxU3NEajZmagpnK1pvRzUzKzZncnBRSmQzdm1XTDhIN0hhL2xBVXhEa3BRUW9kNGU5REdLeEVJVzFDSE5vcUFTaVRtdWo0d0lyCmRhSWJJMnVKSm1VWGdJN0dPRjd3MkdsZktjRG90VmVzSk5RcFNtVDFQT1JCRS9BQnVCZXY1eGFsVCt5aUNNWHQKWXY1MmZWSkVFN2c5bHZ2SnA2cjMrSTRSbG5iU2Fnek5xL0tiaXNBZDlRSURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRFQ4emt4STVPTDIrbHp2T1VsU085UkZ1SGJPMEtEbjhrZkFLdk5LcUxYSFN1VlgrZ2dpClNDNGl0a0pWRCtBVVVJbmhmY2gyU3V3V0I2OTV4bERlRHd1WW0rK0ExY1Ztc3V1VEs3cXVlRkhsaDFpUXR3cUwKTGE5NU4zcHZyUUcyWC9lazhEOC93T0Z4bDF3WDdXakJiWC92RnMzaFBQNzViZVJkbHVZUG13RnZ5UWhRK3lyYQp0SVEwWXdwUUxnQUJQV0VObEtFZUpWeHZxVGtwNHMzWXczVEZ3WThNdUxrSEU3MVFWaDhyZUlTQUVWUGxWdHUzCnhyZ0dOTzgwdDFDN2cxUEJEUWpqZWNEQnFuQm52RHhYNFF1a0xjalpzNHVVTzhubW1lZWdWZm5LQTl5UEcvbk4KdG92STRLRUwvUE5CbSt0UHYvclhqdzl1Zy9kbkQ3V2tkeEU9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo= ​​signerName: kubernetes.io/kube-apiserver-client ​​usages: ​​- client auth controlplane ~ ➜

-> Create the object from the saved YAML file.

controlplane ~ ➜ kubectl apply -f akshay-csr.yaml
certificatesigningrequest.certificates.k8s.io/akshay created
controlplane ~ ➜

 

 

★ What is the status of the newly created certificate signing request object?

-> Pending

controlplane ~ ➜ kubectl get csr NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION akshay 118s kubernetes.io/kube-apiserver-client kubernetes-admin <none> Pending csr-gwpnt 18m kubernetes.io/kube-apiserver-client-kubelet system:node:controlplane <none> Approved,Issued controlplane ~ ➜

 

 

★ Approve the CSR request.

controlplane ~ ➜ kubectl certificate approve akshay certificatesigningrequest.certificates.k8s.io/akshay approved controlplane ~ ➜ controlplane ~ ➜ kubectl get csr NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION akshay 4m19s kubernetes.io/kube-apiserver-client kubernetes-admin <none> Approved,Issued csr-gwpnt 20m kubernetes.io/kube-apiserver-client-kubelet system:node:controlplane <none> Approved,Issued controlplane ~ ➜

 

 

★ A new CSR approval request has come in. Which group is this CSR requesting access to?

-> Name: agent-smith

controlplane ~ ➜ kubectl get csr NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION agent-smith 15s kubernetes.io/kube-apiserver-client agent-x <none> Pending akshay 5m35s kubernetes.io/kube-apiserver-client kubernetes-admin <none> Approved,Issued csr-gwpnt 21m kubernetes.io/kube-apiserver-client-kubelet system:node:controlplane <none> Approved,Issued controlplane ~ ➜

-> Group: system:masters

controlplane ~ ➜ kubectl get csr agent-smith -o yaml apiVersion: certificates.k8s.io/v1 kind: CertificateSigningRequest metadata: ​​creationTimestamp: "2023-05-30T13:13:51Z" ​​name: agent-smith ​​resourceVersion: "2115" ​​uid: b1aad5d7-ec17-468a-b157-b36bf328ed60 spec: ​​groups: ​​- system:masters ​​- system:authenticated ​​request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1dEQ0NBVUFDQVFBd0V6RVJNQThHQTFVRUF3d0libVYzTFhWelpYSXdnZ0VpTUEwR0NTcUdTSWIzRFFFQgpBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRE8wV0pXK0RYc0FKU0lyanBObzV2UklCcGxuemcrNnhjOStVVndrS2kwCkxmQzI3dCsxZUVuT041TXVxOTlOZXZtTUVPbnJEVU8vdGh5VnFQMncyWE5JRFJYall5RjQwRmJtRCs1eld5Q0sKeTNCaWhoQjkzTUo3T3FsM1VUdlo4VEVMcXlhRGtuUmwvanYvU3hnWGtvazBBQlVUcFdNeDRCcFNpS2IwVSt0RQpJRjVueEF0dE1Wa0RQUTdOYmVaUkc0M2IrUVdsVkdSL3o2RFdPZkpuYmZlek90YUF5ZEdMVFpGQy93VHB6NTJrCkVjQ1hBd3FDaGpCTGt6MkJIUFI0Sjg5RDZYYjhrMzlwdTZqcHluZ1Y2dVAwdEliT3pwcU52MFkwcWRFWnB3bXcKajJxRUwraFpFV2trRno4MGxOTnR5VDVMeE1xRU5EQ25JZ3dDNEdaaVJHYnJBZ01CQUFHZ0FEQU5CZ2txaGtpRwo5dzBCQVFzRkFBT0NBUUVBUzlpUzZDMXV4VHVmNUJCWVNVN1FGUUhVemFsTnhBZFlzYU9SUlFOd0had0hxR2k0CmhPSzRhMnp5TnlpNDRPT2lqeWFENnRVVzhEU3hrcjhCTEs4S2czc3JSRXRKcWw1ckxaeTlMUlZyc0pnaEQ0Z1kKUDlOTCthRFJTeFJPVlNxQmFCMm5XZVlwTTVjSjVURjUzbGVzTlNOTUxRMisrUk1uakRRSjdqdVBFaWM4L2RoawpXcjJFVU02VWF3enlrcmRISW13VHYybWxNWTBSK0ROdFYxWWllKzBIOS9ZRWx0K0ZTR2poNUw1WVV2STFEcWl5CjRsM0UveTNxTDcxV2ZBY3VIM09zVnBVVW5RSVNNZFFzMHFXQ3NiRTU2Q0M1RGhQR1pJcFVibktVcEF3a2ErOEUKdndRMDdqRytocGtueG11RkFlWHhnVXdvZEFMYUo3anUvVERJY3c9PQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K ​​signerName: kubernetes.io/kube-apiserver-client ​​usages: ​​- digital signature ​​- key encipherment ​​- server auth ​​username: agent-x status: {} controlplane ~ ➜

 

 

★ Deny the CSR.

-> kubectl certificate deny agent-smith

controlplane ~ ➜ kubectl certificate deny agent-smith certificatesigningrequest.certificates.k8s.io/agent-smith denied controlplane ~ ➜ kubectl get csr NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION agent-smith 6m21s kubernetes.io/kube-apiserver-client agent-x <none> Denied akshay 11m kubernetes.io/kube-apiserver-client kubernetes-admin <none> Approved,Issued csr-gwpnt 27m kubernetes.io/kube-apiserver-client-kubelet system:node:controlplane <none> Approved,Issued controlplane ~ ➜

 

 

★ Delete the CSR.

-> kubectl delete csr agent-smith

controlplane ~ ➜ kubectl delete csr agent-smith certificatesigningrequest.certificates.k8s.io "agent-smith" deleted controlplane ~ ➜ kubectl get csr NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION akshay 13m kubernetes.io/kube-apiserver-client kubernetes-admin <none> Approved,Issued csr-gwpnt 29m kubernetes.io/kube-apiserver-client-kubelet system:node:controlplane <none> Approved,Issued controlplane ~ ➜

 

 

2023.05.29

★ Find the location of the certificate file used by the kube-apiserver

1. Check the kube-apiserver.yaml file

-> --tls-cert-file=/etc/kubernetes/pki/apiserver.crt

controlplane ~ ➜ cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep tls
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
controlplane ~ ➜

2. Check the kube-apiserver pod

-> --tls-cert-file=/etc/kubernetes/pki/apiserver.crt

controlplane ~ ➜ kubectl -n kube-system describe pods kube-apiserver-controlplane | grep tls
      --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
      --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
controlplane ~ ➜

 

 

★ Find the location of the etcd client certificate file used by the kube-apiserver

-> --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt

controlplane ~ ➜ cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep etcd
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
controlplane ~ ➜

 

 

★ Find the location of the kubelet client key file used by the kube-apiserver

-> --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key

controlplane ~ ➜ cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep kubelet
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
controlplane ~ ➜

 

 

★ Find the location of the certificate file used by the etcd server

-> --cert-file=/etc/kubernetes/pki/etcd/server.crt

controlplane ~ ➜ cat /etc/kubernetes/manifests/etcd.yaml | grep cert ​​​​- --cert-file=/etc/kubernetes/pki/etcd/server.crt ​​​​- --client-cert-auth=true ​​​​- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt ​​​​- --peer-client-cert-auth=true ​​​​​​name: etcd-certs ​​​​name: etcd-certs controlplane ~ ➜

 

 

★ Find the location of the CA certificate file the etcd server trusts

-> --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

controlplane ~ ➜ cat /etc/kubernetes/manifests/etcd.yaml | grep ca
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
  priorityClassName: system-node-critical
controlplane ~ ➜

 

 

★ What is the CN configured on the kube-apiserver certificate?

-> Subject: CN = kube-apiserver

controlplane ~ ➜ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text Certificate: ​​​​Data: ​​​​​​​​Version: 3 (0x2) ​​​​​​​​Serial Number: 8483000120492181273 (0x75b9abaa2b884719) ​​​​​​​​Signature Algorithm: sha256WithRSAEncryption ​​​​​​​​Issuer: CN = kubernetes ​​​​​​​​Validity ​​​​​​​​​​​​Not Before: May 29 11:43:33 2023 GMT ​​​​​​​​​​​​Not After : May 28 11:43:33 2024 GMT ​​​​​​​​Subject: CN = kube-apiserver ​​​​​​​​Subject Public Key Info: ​​​​​​​​​​​​Public Key Algorithm: rsaEncryption ​​​​​​​​​​​​​​​​RSA Public-Key: (2048 bit) ...

 

 

★ What is the name of the CA that issued the kube-apiserver certificate?

-> Issuer: CN = kubernetes

controlplane ~ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text Certificate: ​​​​Data: ​​​​​​​​Version: 3 (0x2) ​​​​​​​​Serial Number: 8483000120492181273 (0x75b9abaa2b884719) ​​​​​​​​Signature Algorithm: sha256WithRSAEncryption ​​​​​​​​Issuer: CN = kubernetes ​​​​​​​​Validity ​​​​​​​​​​​​Not Before: May 29 11:43:33 2023 GMT ​​​​​​​​​​​​Not After : May 28 11:43:33 2024 GMT ​​​​​​​​Subject: CN = kube-apiserver ​​​​​​​​Subject Public Key Info: ​​​​​​​​​​​​Public Key Algorithm: rsaEncryption ​​​​​​​​​​​​​​​​RSA Public-Key: (2048 bit) ...

 

 

★ What are the alternate names configured on the kube-apiserver certificate?

-> DNS:controlplane, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.26.249.9

controlplane ~ ➜ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text Certificate: ​​​​Data: ​​​​​​​​Version: 3 (0x2) ​​​​​​​​Serial Number: 8483000120492181273 (0x75b9abaa2b884719) ​​​​​​​​Signature Algorithm: sha256WithRSAEncryption ​​​​​​​​Issuer: CN = kubernetes ​​​​​​​​Validity ​​​​​​​​​​​​Not Before: May 29 11:43:33 2023 GMT ​​​​​​​​​​​​Not After : May 28 11:43:33 2024 GMT ​​​​​​​​Subject: CN = kube-apiserver ​​​​​​​​Subject Public Key Info: ​​​​​​​​​​​​Public Key Algorithm: rsaEncryption ​​​​​​​​​​​​​​​​RSA Public-Key: (2048 bit) ​​​​​​​​​​​​​​​​Modulus: ​​​​​​​​​​​​​​​​​​​​00:a3:2f:ff:75:ed:b2:38:74:01:f9:b1:41:51:aa: ​​​​​​​​​​​​​​​​​​​​f5:bb:9a:39:02:46:c2:5b:05:b1:0e:8f:75:9b:46: ​​​​​​​​​​​​​​​​​​​​18:a5:35:52:2f:2d:22:3b:fe:37:e3:ea:98:32:c5: ​​​​​​​​​​​​​​​​​​​​79:b4:2d:1b:f2:67:cd:f6:7d:4e:fa:e8:a0:69:b4: ​​​​​​​​​​​​​​​​​​​​4b:c8:25:46:20:4b:ad:69:dd:fa:63:56:b4:5c:4f: ​​​​​​​​​​​​​​​​​​​​ce:b7:28:bb:43:de:59:5f:c6:e7:c7:16:08:11:cf: ​​​​​​​​​​​​​​​​​​​​28:b2:4a:7f:20:74:3d:f4:53:6a:b6:33:37:25:98: ​​​​​​​​​​​​​​​​​​​​3e:a7:02:56:da:1b:75:7a:39:bd:0a:31:d5:26:cb: ​​​​​​​​​​​​​​​​​​​​30:8b:3d:bf:a5:58:48:8c:a8:5d:b4:eb:51:0d:72: ​​​​​​​​​​​​​​​​​​​​52:32:85:60:0d:56:2f:46:3c:65:90:4a:9b:a3:01: ​​​​​​​​​​​​​​​​​​​​b3:d9:01:b2:d9:ea:70:68:38:49:d5:1a:29:9f:52: ​​​​​​​​​​​​​​​​​​​​b8:54:72:71:0c:4a:88:4b:73:63:6f:05:a0:b6:23: ​​​​​​​​​​​​​​​​​​​​03:31:12:be:c3:cf:6c:b7:2b:e6:4e:50:a1:1b:7f: ​​​​​​​​​​​​​​​​​​​​ab:2a:ba:5f:92:16:3d:4c:ac:d8:02:11:78:8b:bf: ​​​​​​​​​​​​​​​​​​​​4e:43:3b:e5:0c:57:fb:6f:8a:81:ef:51:7e:a3:92: ​​​​​​​​​​​​​​​​​​​​2a:de:2b:96:ae:95:2e:dc:e3:97:ce:c7:af:8d:42: ​​​​​​​​​​​​​​​​​​​​67:2c:6a:3a:fa:fa:67:79:d2:14:52:47:eb:65:ca: ​​​​​​​​​​​​​​​​​​​​53:af ​​​​​​​​​​​​​​​​Exponent: 65537 (0x10001) ​​​​​​​​X509v3 extensions: ​​​​​​​​​​​​X509v3 Key Usage: critical ​​​​​​​​​​​​​​​​Digital Signature, Key Encipherment ​​​​​​​​​​​​X509v3 Extended Key Usage: ​​​​​​​​​​​​​​​​TLS Web Server Authentication ​​​​​​​​​​​​X509v3 Basic Constraints: critical ​​​​​​​​​​​​​​​​CA:FALSE ​​​​​​​​​​​​X509v3 Authority Key Identifier: ​​​​​​​​​​​​​​​​keyid:AB:7D:E2:A1:2C:F0:E0:27:53:52:72:D8:C9:46:76:09:F8:77:0D:63 ​​​​​​​​​​​​X509v3 Subject Alternative Name: ​​​​​​​​​​​​​​​​DNS:controlplane, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.26.249.9
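
A quicker way to pull out just the SAN section (the -noout flag and grep pattern are my own addition):

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"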

 

 

★ What is the CN configured on the etcd server certificate?

-> Subject: CN = controlplane

controlplane ~ ➜ openssl x509 -in /etc/kubernetes/pki/etcd/server.crt -text Certificate: ​​​​Data: ​​​​​​​​Version: 3 (0x2) ​​​​​​​​Serial Number: 253720388574558331 (0x38565596142b87b) ​​​​​​​​Signature Algorithm: sha256WithRSAEncryption ​​​​​​​​Issuer: CN = etcd-ca ​​​​​​​​Validity ​​​​​​​​​​​​Not Before: May 29 11:43:34 2023 GMT ​​​​​​​​​​​​Not After : May 28 11:43:34 2024 GMT ​​​​​​​​Subject: CN = controlplane ​​​​​​​​Subject Public Key Info: ​​​​​​​​​​​​Public Key Algorithm: rsaEncryption ​​​​​​​​​​​​​​​​RSA Public-Key: (2048 bit) ...

 

★ For how long is the etcd server certificate valid from its issue date?

-> 1 year

controlplane ~ openssl x509 -in /etc/kubernetes/pki/etcd/server.crt -text Certificate: ​​​​Data: ​​​​​​​​Version: 3 (0x2) ​​​​​​​​Serial Number: 253720388574558331 (0x38565596142b87b) ​​​​​​​​Signature Algorithm: sha256WithRSAEncryption ​​​​​​​​Issuer: CN = etcd-ca ​​​​​​​​Validity ​​​​​​​​​​​​Not Before: May 29 11:43:34 2023 GMT ​​​​​​​​​​​​Not After : May 28 11:43:34 2024 GMT ​​​​​​​​Subject: CN = controlplane ​​​​​​​​Subject Public Key Info: ​​​​​​​​​​​​Public Key Algorithm: rsaEncryption ​​​​​​​​​​​​​​​​RSA Public-Key: (2048 bit) ...

 

 

★ For how long is the root CA certificate valid from its issue date?

->  10 years

controlplane ~ ➜ openssl x509 -in /etc/kubernetes/pki/ca.crt -text Certificate: ​​​​Data: ​​​​​​​​Version: 3 (0x2) ​​​​​​​​Serial Number: 0 (0x0) ​​​​​​​​Signature Algorithm: sha256WithRSAEncryption ​​​​​​​​Issuer: CN = kubernetes ​​​​​​​​Validity ​​​​​​​​​​​​Not Before: May 29 11:43:33 2023 GMT ​​​​​​​​​​​​Not After : May 26 11:43:33 2033 GMT ​​​​​​​​Subject: CN = kubernetes ​​​​​​​​Subject Public Key Info: ​​​​​​​​​​​​Public Key Algorithm: rsaEncryption ​​​​​​​​​​​​​​​​RSA Public-Key: (2048 bit) ...

 

 

★ kubectl commands are not working. Check the etcd server and fix what is wrong.

-> Looking at the etcd static pod YAML, the cert-file path points to a file named server-certificate.crt.

controlplane ~ ➜ cat /etc/kubernetes/manifests/etcd.yaml | grep cert-file
    - --cert-file=/etc/kubernetes/pki/etcd/server-certificate.crt
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
controlplane ~ ➜

-> Checking the .crt files in that directory shows the actual file name is different.

controlplane ~ ➜ ls /etc/kubernetes/pki/etcd/server* | grep .crt
/etc/kubernetes/pki/etcd/server.crt
controlplane ~ ➜

-> Fix the path in the manifest and wait; the kube-apiserver comes back up normally.

controlplane ~ ➜ vi /etc/kubernetes/manifests/etcd.yaml
controlplane ~ ➜ cat /etc/kubernetes/manifests/etcd.yaml | grep cert-file
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
controlplane ~ ➜

 

 

★ kubectl commands are not working. Check the apiserver's logs and solve the problem.

-> Check the container state with crictl ps -a | grep kube-apiserver

controlplane ~ ➜ crictl ps -a | grep kube-apiserver
9bd19c0102ca6   a31e1d84401e6   36 seconds ago   Exited   kube-apiserver   5   5413bff6f15be   kube-apiserver-controlplane
controlplane ~ ➜

-> The logs show a problem with the etcd CA certificate.

controlplane ~ ➜ crictl logs --tail=5 9bd19c0102ca6
  "BalancerAttributes": null,
  "Type": 0,
  "Metadata": null
}. Err: connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority"
E0529 13:03:01.408411       1 run.go:74] "command failed" err="context deadline exceeded"
controlplane ~ ➜

-> The etcd-cafile path in kube-apiserver.yaml is wrong; fix it.

controlplane ~ ➜ cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep etcd ​​​​- --etcd-cafile=/etc/kubernetes/pki/ca.crt ​​​​- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt ​​​​- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key ​​​​- --etcd-servers=https://127.0.0.1:2379 controlplane ~ ➜ cd /etc/kubernetes/pki/etcd/ controlplane kubernetes/pki/etcd ➜ ls -al total 40 drwxr-xr-x 2 root root 4096 May 29 08:44 . drwxr-xr-x 3 root root 4096 May 29 08:44 .. -rw-r--r-- 1 root root 1086 May 29 08:44 ca.crt -rw------- 1 root root 1675 May 29 08:44 ca.key -rw-r--r-- 1 root root 1159 May 29 08:44 healthcheck-client.crt -rw------- 1 root root 1679 May 29 08:44 healthcheck-client.key -rw-r--r-- 1 root root 1208 May 29 08:44 peer.crt -rw------- 1 root root 1679 May 29 08:44 peer.key -rw-r--r-- 1 root root 1208 May 29 08:44 server.crt -rw------- 1 root root 1675 May 29 08:44 server.key controlplane kubernetes/pki/etcd ➜

-> --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt

controlplane ~ ➜ cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep etcd
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
controlplane ~ ➜
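
After fixing the manifest it can take a minute or two for the static pod to be recreated; something like the following can be used to watch it come back (my own addition):

watch crictl ps
kubectl get pods -n kube-system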

 

 

 

 

 

 

 

 

 

 


 

 

 

2023.05.24

★ There are multiple clusters. Check how many nodes each of them has.

 > 2 clusters in total

student-node ~ ➜ kubectl config view apiVersion: v1 clusters: - cluster: ​​​​certificate-authority-data: DATA+OMITTED ​​​​server: https://cluster1-controlplane:6443 ​​name: cluster1 - cluster: ​​​​certificate-authority-data: DATA+OMITTED ​​​​server: https://192.12.169.3:6443 ​​name: cluster2 contexts: - context: ​​​​cluster: cluster1 ​​​​user: cluster1 ​​name: cluster1 - context: ​​​​cluster: cluster2 ​​​​user: cluster2 ​​name: cluster2 current-context: cluster1 kind: Config preferences: {} users: - name: cluster1 ​​user: ​​​​client-certificate-data: REDACTED ​​​​client-key-data: REDACTED - name: cluster2 ​​user: ​​​​client-certificate-data: REDACTED ​​​​client-key-data: REDACTED student-node ~ ➜ student-node ~ ➜ kubectl config use-context cluster cluster1 cluster2 student-node ~ ➜ kubectl config use-context cluster1 Switched to context "cluster1".

 > Confirm the 2 nodes in cluster1

student-node ~ ➜ kubectl get nodes NAME STATUS ROLES AGE VERSION cluster1-controlplane Ready control-plane 39m v1.24.0 cluster1-node01 Ready <none> 39m v1.24.0

 > Confirm the 2 nodes in cluster2

student-node ~ ➜ kubectl config use-context cluster2 Switched to context "cluster2". student-node ~ ➜ student-node ~ ➜ kubectl get nodes NAME STATUS ROLES AGE VERSION cluster2-controlplane Ready control-plane 43m v1.24.0 cluster2-node01 Ready <none> 42m v1.24.0 student-node ~ ➜

 

 

★ There are multiple clusters. SSH into one of their nodes.

student-node ~ ➜ kubectl get nodes NAME STATUS ROLES AGE VERSION cluster1-controlplane Ready control-plane 44m v1.24.0 cluster1-node01 Ready <none> 44m v1.24.0 student-node ~ ➜ ssh cluster1-controlplane Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64) ​* Documentation: https://help.ubuntu.com ​* Management: https://landscape.canonical.com ​* Support: https://ubuntu.com/advantage This system has been minimized by removing packages and content that are not required on a system that users do not log into. To restore this content, you can run the 'unminimize' command. cluster1-controlplane ~ ➜ cluster1-controlplane ~ ➜ logout Connection to cluster1-controlplane closed. student-node ~ ➜

 

 

★ Check the ETCD used by cluster1.

 > First switch to the cluster1 context.

student-node ~ ➜ kubectl config use-context cluster1
Switched to context "cluster1".
student-node ~ ➜

> Check the controlplane node.

student-node ~ ➜ kubectl get nodes NAME STATUS ROLES AGE VERSION cluster1-controlplane Ready control-plane 100m v1.24.0 cluster1-node01 Ready <none> 99m v1.24.0 student-node ~ ➜

> SSH into cluster1-controlplane.

student-node ~ ➜ ssh cluster1-controlplane Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64) ​* Documentation: https://help.ubuntu.com ​* Management: https://landscape.canonical.com ​* Support: https://ubuntu.com/advantage This system has been minimized by removing packages and content that are not required on a system that users do not log into. To restore this content, you can run the 'unminimize' command. cluster1-controlplane ~ ➜

> Check whether an etcd pod is running.

cluster1-controlplane ~ ➜ kubectl get pods -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-6d4b75cb6d-bfgs9 1/1 Running 0 100m kube-system coredns-6d4b75cb6d-lftbq 1/1 Running 0 100m kube-system etcd-cluster1-controlplane 1/1 Running 0 101m kube-system kube-apiserver-cluster1-controlplane 1/1 Running 0 100m kube-system kube-controller-manager-cluster1-controlplane 1/1 Running 0 100m kube-system kube-proxy-45kft 1/1 Running 0 100m kube-system kube-proxy-qmxkh 1/1 Running 0 100m kube-system kube-scheduler-cluster1-controlplane 1/1 Running 0 100m kube-system weave-net-fwvfd 2/2 Running 0 100m kube-system weave-net-h9tg4 2/2 Running 1 (100m ago) 100m cluster1-controlplane ~ ➜

 

 

★ Check the ETCD used by cluster2.

> Switch to the cluster2 context.

student-node ~ ➜ kubectl config use-context cluster2
Switched to context "cluster2".
student-node ~ ➜

> Check whether there is a pod with etcd in its name in the kube-system namespace of cluster2.

student-node ~ ➜ kubectl get pods -n kube-system | grep etcd
student-node ~ ➜

> SSH into cluster2-controlplane.

student-node ~ ➜ ssh cluster2-controlplane Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64) ​* Documentation: https://help.ubuntu.com ​* Management: https://landscape.canonical.com ​* Support: https://ubuntu.com/advantage This system has been minimized by removing packages and content that are not required on a system that users do not log into. To restore this content, you can run the 'unminimize' command. Last login: Sat May 27 11:31:10 2023 from 192.12.169.22 cluster2-controlplane ~ ➜

> Check whether anything with etcd in its name exists under the manifests directory

cluster2-controlplane ~ ➜ ls /etc/kubernetes/manifests/ | grep -i etcd
cluster2-controlplane ~ ✖

> Check the processes on cluster2-controlplane; the kube-apiserver is pointing at an external etcd server.

cluster2-controlplane ~ ➜ ps -ef | grep etcd root 1754 1380 0 09:41 ? 00:06:24 kube-apiserver --advertise-address=192.12.169.3 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem --etcd-servers=https://192.12.169.15:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key root 12885 11985 0 11:38 pts/0 00:00:00 grep etcd cluster2-controlplane ~ ➜

> The kube-apiserver pod description also shows the etcd paths.

cluster2-controlplane ~ ➜ kubectl -n kube-system describe pods kube-apiserver-cluster2-controlplane Name: kube-apiserver-cluster2-controlplane Namespace: kube-system Priority: 2000001000 Priority Class Name: system-node-critical Node: cluster2-controlplane/192.12.169.3 Start Time: Sat, 27 May 2023 09:42:04 +0000 Labels: component=kube-apiserver ​​​​​​​​​​​​​​​​​​​​​​tier=control-plane Annotations: kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.12.169.3:6443 ​​​​​​​​​​​​​​​​​​​​​​kubernetes.io/config.hash: 227d76fd58a21baab84e0eee669ac726 ​​​​​​​​​​​​​​​​​​​​​​kubernetes.io/config.mirror: 227d76fd58a21baab84e0eee669ac726 ​​​​​​​​​​​​​​​​​​​​​​kubernetes.io/config.seen: 2023-05-27T09:42:02.804610708Z ​​​​​​​​​​​​​​​​​​​​​​kubernetes.io/config.source: file ​​​​​​​​​​​​​​​​​​​​​​seccomp.security.alpha.kubernetes.io/pod: runtime/default Status: Running IP: 192.12.169.3 IPs: ​​IP: 192.12.169.3 Controlled By: Node/cluster2-controlplane Containers: ​​kube-apiserver: ​​​​Container ID: containerd://2f32b634619597cd782c9253de5a40154d160681168ced52481ffc333276a0dc ​​​​Image: k8s.gcr.io/kube-apiserver:v1.24.0 ​​​​Image ID: k8s.gcr.io/kube-apiserver@sha256:a04522b882e919de6141b47d72393fb01226c78e7388400f966198222558c955 ​​​​Port: <none> ​​​​Host Port: <none> ​​​​Command: ​​​​​​kube-apiserver ​​​​​​--advertise-address=192.12.169.3 ​​​​​​--allow-privileged=true ​​​​​​--authorization-mode=Node,RBAC ​​​​​​--client-ca-file=/etc/kubernetes/pki/ca.crt ​​​​​​--enable-admission-plugins=NodeRestriction ​​​​​​--enable-bootstrap-token-auth=true ​​​​​​--etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem ​​​​​​--etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem ​​​​​​--etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem ​​​​​​--etcd-servers=https://192.12.169.15:2379

 

 

★ Find the path where the internal (stacked) ETCD stores its data.

> /var/lib/etcd

student-node ~ ➜ kubectl -n kube-system describe pods etcd-cluster1-controlplane Name: etcd-cluster1-controlplane Namespace: kube-system Priority: 2000001000 Priority Class Name: system-node-critical Node: cluster1-controlplane/192.12.169.24 Start Time: Sat, 27 May 2023 09:42:42 +0000 Labels: component=etcd ​​​​​​​​​​​​​​​​​​​​​​tier=control-plane Annotations: kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.12.169.24:2379 ​​​​​​​​​​​​​​​​​​​​​​kubernetes.io/config.hash: e4db54d1749cdfee9674a1e9e140fd0b ​​​​​​​​​​​​​​​​​​​​​​kubernetes.io/config.mirror: e4db54d1749cdfee9674a1e9e140fd0b ​​​​​​​​​​​​​​​​​​​​​​kubernetes.io/config.seen: 2023-05-27T09:42:25.092806611Z ​​​​​​​​​​​​​​​​​​​​​​kubernetes.io/config.source: file ​​​​​​​​​​​​​​​​​​​​​​seccomp.security.alpha.kubernetes.io/pod: runtime/default Status: Running IP: 192.12.169.24 IPs: ​​IP: 192.12.169.24 Controlled By: Node/cluster1-controlplane Containers: ​​etcd: ... Volumes: ​​etcd-certs: ​​​​Type: HostPath (bare host directory volume) ​​​​Path: /etc/kubernetes/pki/etcd ​​​​HostPathType: DirectoryOrCreate ​​etcd-data: ​​​​Type: HostPath (bare host directory volume) ​​​​Path: /var/lib/etcd ​​​​HostPathType: DirectoryOrCreate QoS Class: Burstable Node-Selectors: <none> Tolerations: :NoExecute op=Exists Events: <none>

 

★ Find the path where the externally connected ETCD stores its data.

 > SSH into the cluster2-controlplane node and check the processes

 > Note the external endpoint: etcd-servers=https://192.12.169.15:2379

cluster2-controlplane ~ ➜ ps -ef | grep etcd root 1754 1380 0 09:41 ? 00:07:01 kube-apiserver --advertise-address=192.12.169.3 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem --etcd-servers=https://192.12.169.15:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key root 14390 14207 0 11:50 pts/0 00:00:00 grep etcd cluster2-controlplane ~ ➜

> SSH into the external etcd-server

student-node ~ ➜ ssh 192.12.169.15 Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64) ​* Documentation: https://help.ubuntu.com ​* Management: https://landscape.canonical.com ​* Support: https://ubuntu.com/advantage This system has been minimized by removing packages and content that are not required on a system that users do not log into. To restore this content, you can run the 'unminimize' command. etcd-server ~ ➜

> Check the processes on the etcd-server

> data-dir=/var/lib/etcd-data

etcd-server ~ ➜ ps -ef | grep etcd etcd 917 1 0 10:21 ? 00:01:25 /usr/local/bin/etcd --name etcd-server --data-dir=/var/lib/etcd-data --cert-file=/etc/etcd/pki/etcd.pem --key-file=/etc/etcd/pki/etcd-key.pem --peer-cert-file=/etc/etcd/pki/etcd.pem --peer-key-file=/etc/etcd/pki/etcd-key.pem --trusted-ca-file=/etc/etcd/pki/ca.pem --peer-trusted-ca-file=/etc/etcd/pki/ca.pem --peer-client-cert-auth --client-cert-auth --initial-advertise-peer-urls https://192.12.169.15:2380 --listen-peer-urls https://192.12.169.15:2380 --advertise-client-urls https://192.12.169.15:2379 --listen-client-urls https://192.12.169.15:2379,https://127.0.0.1:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster etcd-server=https://192.12.169.15:2380 --initial-cluster-state new root 1170 1005 0 11:51 pts/0 00:00:00 grep etcd etcd-server ~ ➜

 

 

★ How many nodes are part of the external etcd-server cluster?

> 1

etcd-server ~ ➜ ETCDCTL_API=3 etcdctl \
> --endpoints=https://127.0.0.1:2379 \
> --cacert=/etc/etcd/pki/ca.pem \
> --cert=/etc/etcd/pki/etcd.pem \
> --key=/etc/etcd/pki/etcd-key.pem \
> member list
c810de83f8ff6c49, started, etcd-server, https://192.12.169.15:2380, https://192.12.169.15:2379, false
etcd-server ~ ➜

 

 

 

★ Back up the etcd of cluster1.

> Switch to cluster1

student-node ~ ➜ kubectl config use-context cluster1
Switched to context "cluster1".
student-node ~ ➜

> Check the endpoint the etcd pod advertises; the backup is taken against it.

student-node ~ ➜ kubectl describe pods -n kube-system etcd-cluster1-controlplane | grep advertise-client-urls
Annotations:  kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.12.169.24:2379
      --advertise-client-urls=https://192.12.169.24:2379
student-node ~ ➜

> Check the certificate paths the etcd pod uses; they are needed for the backup.

student-node ~ ➜ kubectl describe pods -n kube-system etcd-cluster1-controlplane | grep pki
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      /etc/kubernetes/pki/etcd from etcd-certs (rw)
    Path: /etc/kubernetes/pki/etcd
student-node ~ ➜

> Create the snapshot using the information collected above.

cluster1-controlplane ~ ➜ ETCDCTL_API=3 etcdctl --endpoints=https://192.12.169.24:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/cluster1.db Snapshot saved at /opt/cluster1.db cluster1-controlplane ~ ➜ cluster1-controlplane ~ ➜ ls -al /opt/ total 2096 drwxr-xr-x 1 root root 4096 May 27 12:08 . drwxr-xr-x 1 root root 4096 May 27 09:42 .. -rw-r--r-- 1 root root 2117664 May 27 12:08 cluster1.db drwxr-xr-x 1 root root 4096 Dec 20 10:08 cni drwx--x--x 4 root root 4096 Dec 20 10:09 containerd cluster1-controlplane ~ ➜

> Exit the node and copy the snapshot out to the student node.

student-node ~ ✖ scp cluster1-controlplane:/opt/cluster1.db /opt cluster1.db 100% 2068KB 83.8MB/s 00:00 student-node ~ ➜ student-node ~ ➜ ls -al /opt total 2088 drwxr-xr-x 1 root root 4096 May 27 12:10 . drwxr-xr-x 1 root root 4096 May 27 09:41 .. -rw-r--r-- 1 root root 2117664 May 27 12:10 cluster1.db drwxr-xr-x 1 root root 4096 May 27 09:41 .init student-node ~ ➜
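
Before relying on the snapshot, its integrity can be checked with etcdctl (my own addition, assuming the same file path):

ETCDCTL_API=3 etcdctl snapshot status /opt/cluster1.db --write-out=table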

 

 

★ Restore the external etcd of cluster2 from the snapshot you have.

> Switch to cluster2

student-node ~ ➜ kubectl config use-context cluster2
Switched to context "cluster2".
student-node ~ ➜

> Copy the snapshot file to /root on the external etcd-server

student-node ~ ➜ scp /opt/cluster2.db etcd-server:/root
cluster2.db                                   100% 2088KB 130.2MB/s   00:00
student-node ~ ➜

> SSH into the external etcd-server

student-node ~ ➜ ssh 192.13.240.15 Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1105-gcp x86_64) ​* Documentation: https://help.ubuntu.com ​* Management: https://landscape.canonical.com ​* Support: https://ubuntu.com/advantage This system has been minimized by removing packages and content that are not required on a system that users do not log into. To restore this content, you can run the 'unminimize' command. Last login: Sat May 27 12:20:19 2023 from 192.13.240.21 etcd-server ~ ➜

> Since the restore runs on the etcd-server itself, use 127.0.0.1 as the endpoint, the default certificates, and /var/lib/etcd-data-new as the data directory

etcd-server ~ ➜ ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/etcd.pem --key=/etc/etcd/pki/etcd-key.pem snapshot restore /root/cluster2.db --data-dir /var/lib/etcd-data-new {"level":"info","ts":1685190548.319989,"caller":"snapshot/v3_snapshot.go:296","msg":"restoring snapshot","path":"/root/cluster2.db","wal-dir":"/var/lib/etcd-data-new/member/wal","data-dir":"/var/lib/etcd-data-new","snap-dir":"/var/lib/etcd-data-new/member/snap"} {"level":"info","ts":1685190548.3370929,"caller":"mvcc/kvstore.go:388","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":6336} {"level":"info","ts":1685190548.3447402,"caller":"membership/cluster.go:392","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"0","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]} {"level":"info","ts":1685190548.3543217,"caller":"snapshot/v3_snapshot.go:309","msg":"restored snapshot","path":"/root/cluster2.db","wal-dir":"/var/lib/etcd-data-new/member/wal","data-dir":"/var/lib/etcd-data-new","snap-dir":"/var/lib/etcd-data-new/member/snap"} etcd-server ~ ➜

> In the etcd.service file, change the data-dir to /var/lib/etcd-data-new

etcd-server ~ ➜ vi /etc/systemd/system/etcd.service
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target

[Service]
User=etcd
Type=notify
ExecStart=/usr/local/bin/etcd \
  --name etcd-server \
  --data-dir=/var/lib/etcd-data \
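> After the edit, the ExecStart block should point at the new directory; a sketch of the changed part only (every other flag stays exactly as it was):

[Service]
User=etcd
Type=notify
ExecStart=/usr/local/bin/etcd \
  --name etcd-server \
  --data-dir=/var/lib/etcd-data-new \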

> Make sure the permissions on the new directory are correct (the etcd user must own it):

etcd-server ~ ➜ chown -R etcd:etcd /var/lib/etcd-data-new/

etcd-server ~ ➜ ls -ld /var/lib/etcd-data-new/
drwx------ 3 etcd etcd 4096 May 27 12:29 /var/lib/etcd-data-new/

etcd-server ~ ➜

> Restart the etcd daemon.

etcd-server ~ ➜ systemctl daemon-reload

etcd-server ~ ➜ systemctl restart etcd

> Finally, it is a good idea to restart the control plane components (e.g. kube-scheduler, kube-controller-manager, kubelet) so they do not keep relying on stale data; one possible way is sketched below.
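> A sketch of one common approach, assuming the control plane node of cluster2 is named cluster2-controlplane as in this lab (the scheduler and controller-manager are static pods, so deleting them simply makes the kubelet recreate them):

# from the student node: recreate the scheduler and controller-manager pods
kubectl -n kube-system delete pod kube-scheduler-cluster2-controlplane kube-controller-manager-cluster2-controlplane

# on the control plane node itself: restart the kubelet
ssh cluster2-controlplane
systemctl restart kubelet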

 

 

 

 

 

 

 


 

 

2023.05.20

★ What version of ETCD is running in the cluster?

-> etcd-version : 3.5.6

controlplane ~ ➜ kubectl -n kube-system logs etcd-controlplane | grep -i 'etcd-version' {"level":"info","ts":"2023-05-20T05:14:33.291Z","caller":"embed/etcd.go:306","msg":"starting an etcd server","etcd-version":"3.5.6","git-sha":"cecbe35ce","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":36,"max-cpu-available":36,"member-initialized":false,"name":"controlplane","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.6.237.3:2380"],"listen-peer-urls":["https://192.6.237.3:2380"],"advertise-client-urls":["https://192.6.237.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.6.237.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"controlplane=https://192.6.237.3:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} controlplane ~ ➜
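> The same answer can also be read off the image tag of the etcd static pod, which is usually faster than searching the logs (the tag looks like 3.5.6-0, where the part before the dash is the etcd version):

controlplane ~ ➜ kubectl -n kube-system describe pod etcd-controlplane | grep Image: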

 

 

★ At what address can the ETCD cluster be reached from the controlplane node?

 ->  https://127.0.0.1:2379

controlplane ~ ➜ kubectl -n kube-system describe pod etcd-controlplane | grep -i 'listen-client-url'
      --listen-client-urls=https://127.0.0.1:2379,https://192.6.237.3:2379

controlplane ~ ➜

 

 

★ Where is the ETCD server certificate file located?

 -> --cert-file=/etc/kubernetes/pki/etcd/server.crt

controlplane ~ ➜ kubectl -n kube-system describe pod etcd-controlplane | grep -i 'cert-file'
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt

controlplane ~ ➜

 

 

★ Where is the ETCD CA certificate file located?

 -> --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

controlplane ~ ➜ kubectl -n kube-system describe pod etcd-controlplane | grep -i 'ca-file'
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

controlplane ~ ➜

 

 

★ The cluster's master node is scheduled to be rebooted tonight. No problems are expected, but you must take the necessary backups. Take a snapshot of the ETCD database using the built-in snapshot feature.

ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 \
 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
 --cert=/etc/kubernetes/pki/etcd/server.crt \
 --key=/etc/kubernetes/pki/etcd/server.key \
 snapshot save /opt/snapshot-pre-boot.db

controlplane ~ ➜ ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 \ > --cacert=/etc/kubernetes/pki/etcd/ca.crt \ > --cert=/etc/kubernetes/pki/etcd/server.crt \ > --key=/etc/kubernetes/pki/etcd/server.key \ > snapshot save /opt/snapshot-pre-boot.db Snapshot saved at /opt/snapshot-pre-boot.db controlplane ~ ➜ ls /opt/ cni containerd snapshot-pre-boot.db controlplane ~ ➜

 

 

★ After the reboot, the master node came back online, but the applications are not accessible. Check the state of the applications in the cluster. What is wrong?

- No deployments exist
- No services exist
- No pods exist
 -> All of the above (can be confirmed with the quick check below)
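> A quick way to confirm all three at once; in this state the default namespace only returns the built-in kubernetes service:

controlplane ~ ➜ kubectl get deployments,services,pods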

 

 

 

★ Restore the cluster to its original state using the backup file.

controlplane ~ ➜ ETCDCTL_API=3 etcdctl --data-dir /var/lib/etcd-from-backup \
> snapshot restore /opt/snapshot-pre-boot.db
2023-05-20 02:06:58.280313 I | mvcc: restore compact to 2461
2023-05-20 02:06:58.287347 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
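> The restore only writes the data into /var/lib/etcd-from-backup; etcd itself is still reading from the old directory. On a kubeadm cluster the usual follow-up is to point the etcd static pod at the restored directory, after which the kubelet recreates the etcd pod on its own. A sketch of that edit:

controlplane ~ ➜ vi /etc/kubernetes/manifests/etcd.yaml
# under volumes:, change the etcd-data hostPath
#   from  path: /var/lib/etcd
#   to    path: /var/lib/etcd-from-backup

# watch the etcd pod (and then the applications) come back
controlplane ~ ➜ kubectl -n kube-system get pods --watch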

 

 


2023.05.20

★ Check the cluster version

 -> v1.25.0

controlplane ~ ➜ kubectl get nodes
NAME           STATUS   ROLES           AGE    VERSION
controlplane   Ready    control-plane   116m   v1.25.0
node01         Ready    <none>          116m   v1.25.0

controlplane ~ ➜

 

 

★ Which nodes are worker nodes?

 -> There are no taints on either node, so both can act as worker nodes

controlplane ~ ➜ kubectl describe nodes controlplane | grep Taints
Taints:             <none>

controlplane ~ ➜ kubectl describe nodes node01 | grep Taints
Taints:             <none>

 

 

★ On which nodes do the deployed pods get scheduled?

 -> node01, controlplane

controlplane ~ ➜ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES blue-5db6db69f7-94bd2 1/1 Running 0 14m 10.244.1.4 node01 <none> <none> blue-5db6db69f7-d4ckd 1/1 Running 0 14m 10.244.1.2 node01 <none> <none> blue-5db6db69f7-fgrvp 1/1 Running 0 14m 10.244.1.3 node01 <none> <none> blue-5db6db69f7-fkvdv 1/1 Running 0 14m 10.244.0.5 controlplane <none> <none> blue-5db6db69f7-kcnkl 1/1 Running 0 14m 10.244.0.4 controlplane <none> <none>

 

 

★ You need to upgrade the cluster. Users accessing the applications must not be affected, and you cannot provision new VMs. Which approach will you use to upgrade the cluster?

 -> Upgrade one node at a time while moving the workloads onto the other node

 

 

★ What is the latest stable version of Kubernetes currently available?

 -> v1.27.2

controlplane ~ ➜ kubeadm upgrade plan [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration from the cluster... [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' [preflight] Running pre-flight checks. [upgrade] Running cluster health checks [upgrade] Fetching available versions to upgrade to [upgrade/versions] Cluster version: v1.25.0 [upgrade/versions] kubeadm version: v1.25.0 I0519 23:10:12.713401 17274 version.go:256] remote version is much newer: v1.27.2; falling back to: stable-1.25 [upgrade/versions] Target version: v1.25.10 [upgrade/versions] Latest version in the v1.25 series: v1.25.10

 

 

★ What is the latest version you can upgrade to with the currently installed version of the kubeadm tool?

 -> v1.25.10

controlplane ~ ➜ kubeadm upgrade plan [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration from the cluster... [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' [preflight] Running pre-flight checks. [upgrade] Running cluster health checks [upgrade] Fetching available versions to upgrade to [upgrade/versions] Cluster version: v1.25.0 [upgrade/versions] kubeadm version: v1.25.0 I0519 23:10:12.713401 17274 version.go:256] remote version is much newer: v1.27.2; falling back to: stable-1.25 [upgrade/versions] Target version: v1.25.10 [upgrade/versions] Latest version in the v1.25 series: v1.25.10 Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': COMPONENT CURRENT TARGET kubelet 2 x v1.25.0 v1.25.10 Upgrade to the latest version in the v1.25 series: COMPONENT CURRENT TARGET kube-apiserver v1.25.0 v1.25.10 kube-controller-manager v1.25.0 v1.25.10 kube-scheduler v1.25.0 v1.25.10 kube-proxy v1.25.0 v1.25.10 CoreDNS v1.9.3 v1.9.3 etcd 3.5.4-0 3.5.4-0 You can now apply the upgrade by executing the following command: ​​​​​​​​kubeadm upgrade apply v1.25.10 Note: Before you can perform this upgrade, you have to update kubeadm to v1.25.10.

 

 

 

★ We are going to upgrade the controlplane node. Drain the workloads from the control plane node and mark it as unschedulable.

controlplane ~ ✖ kubectl drain controlplane --ignore-daemonsets node/controlplane already cordoned Warning: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-qljgv, kube-system/kube-proxy-xkrtl evicting pod kube-system/coredns-565d847f94-8ldgw evicting pod default/blue-5db6db69f7-jhn5p evicting pod default/blue-5db6db69f7-6pkjn evicting pod kube-system/coredns-565d847f94-2bbdc pod/blue-5db6db69f7-jhn5p evicted pod/blue-5db6db69f7-6pkjn evicted pod/coredns-565d847f94-2bbdc evicted pod/coredns-565d847f94-8ldgw evicted node/controlplane drained controlplane ~ ➜

 

 

★ Upgrade the controlplane node

controlplane ~ ➜ apt update Get:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B] Hit:2 https://download.docker.com/linux/ubuntu focal InRelease ... controlplane ~ ➜ apt-get install kubeadm=1.26.0-00 Reading package lists... Done Building dependency tree Reading state information... Done ... controlplane ~ ➜ kubeadm upgrade apply v1.26.0 [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration from the cluster... ... controlplane ~ ➜ kubectl get nodes NAME STATUS ROLES AGE VERSION controlplane Ready,SchedulingDisabled control-plane 85m v1.26.0 node01 Ready <none> 84m v1.25.0 controlplane ~ ➜
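> The output above is trimmed; the full sequence on the control plane node looks roughly like this (a sketch using the 1.26.0-00 package versions from this lab — kubelet and kubectl also have to be upgraded after kubeadm upgrade apply):

# upgrade kubeadm first, then apply the control plane upgrade
apt-get update
apt-get install kubeadm=1.26.0-00
kubeadm upgrade apply v1.26.0

# then upgrade kubelet/kubectl and restart the kubelet
apt-get install kubelet=1.26.0-00 kubectl=1.26.0-00
systemctl daemon-reload
systemctl restart kubelet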

 

 

★ Mark the controlplane node as schedulable again.

controlplane ~ ✖ kubectl uncordon controlplane
node/controlplane uncordoned

 

★ We are going to upgrade node01. Drain its workloads and mark it as unschedulable.

 -> Draining the node moves the pods that were running on it to other nodes.

controlplane ~ ➜ kubectl drain node01 --ignore-daemonsets node/node01 cordoned Warning: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-2stnt, kube-system/kube-proxy-n5hmd evicting pod kube-system/coredns-787d4945fb-lqjjv evicting pod default/blue-5db6db69f7-225hb evicting pod default/blue-5db6db69f7-c7ptb evicting pod default/blue-5db6db69f7-gkbz4 evicting pod default/blue-5db6db69f7-r4rvt evicting pod default/blue-5db6db69f7-tv2p9 evicting pod kube-system/coredns-787d4945fb-kr6xw pod/blue-5db6db69f7-gkbz4 evicted pod/blue-5db6db69f7-tv2p9 evicted pod/blue-5db6db69f7-r4rvt evicted pod/blue-5db6db69f7-225hb evicted pod/blue-5db6db69f7-c7ptb evicted pod/coredns-787d4945fb-kr6xw evicted pod/coredns-787d4945fb-lqjjv evicted node/node01 drained controlplane ~ ➜ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES blue-5db6db69f7-5sld2 1/1 Running 0 17s 10.244.0.7 controlplane <none> <none> blue-5db6db69f7-9c4w2 1/1 Running 0 17s 10.244.0.12 controlplane <none> <none> blue-5db6db69f7-9mhbr 1/1 Running 0 17s 10.244.0.9 controlplane <none> <none> blue-5db6db69f7-j7tgx 1/1 Running 0 17s 10.244.0.11 controlplane <none> <none> blue-5db6db69f7-p98g5 1/1 Running 0 17s 10.244.0.10 controlplane <none> <none> controlplane ~ ➜

 

 

★ Upgrade node01

controlplane ~ ➜ ssh node01 root@node01 ~ ➜ apt-get update Get:2 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB] Get:3 https://download.docker.com/linux/ubuntu focal InRelease [57.7 kB] Get:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8993 B] ... root@node01 ~ ➜ apt-get install kubeadm=1.26.0-00 Reading package lists... Done Building dependency tree Reading state information... Done ... oot@node01 ~ ➜ kubeadm upgrade node [upgrade] Reading configuration from the cluster... [upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' [preflight] Running pre-flight checks ... root@node01 ~ ➜ apt-get install kubelet=1.26.0-00 Reading package lists... Done Building dependency tree Reading state information... Done ... root@node01 ~ ➜ systemctl daemon-reload root@node01 ~ ➜ systemctl restart kubelet.service controlplane ~ ➜ kubectl get node NAME STATUS ROLES AGE VERSION controlplane Ready control-plane 118m v1.26.0 node01 NotReady,SchedulingDisabled <none> 117m v1.26.0 controlplane ~ ➜
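> node01 can show NotReady for a moment right after the kubelet restart; before uncordoning it, confirm from the controlplane node that it reports Ready again:

# wait until node01 is Ready at v1.26.0
controlplane ~ ➜ kubectl get nodes --watch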

 

 

★ Mark node01 as schedulable again.

controlplane ~ ➜ kubectl uncordon node01
node/node01 uncordoned

 


2023.05.17

★ Check the number of nodes

controlplane ~ ➜ kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   17m   v1.26.0
node01         Ready    <none>          17m   v1.26.0

controlplane ~ ➜

 

 

★ Find the deployments in the default namespace

controlplane ~ ➜ kubectl get deployments.apps --namespace default
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
blue   3/3     3            3           33s

controlplane ~ ➜

 

 

★ node01 has to be taken out of service for maintenance. Drain the application workloads from the node and mark it as unschedulable.

controlplane ~ ➜ kubectl drain node01 --ignore-daemonsets node/node01 cordoned Warning: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-bjcqz, kube-system/kube-proxy-l6ln5 evicting pod default/blue-987f68cb5-jt66v evicting pod default/blue-987f68cb5-hw99q pod/blue-987f68cb5-hw99q evicted pod/blue-987f68cb5-jt66v evicted node/node01 drained controlplane ~ ➜

 

★ Maintenance on node01 is finished. Configure node01 so it can be scheduled again.

controlplane ~ ➜ kubectl uncordon node01
node/node01 uncordoned

controlplane ~ ➜

 

 

★ node01 was made schedulable again with uncordon, but no pods are being created on it. Why?

 -> Running uncordon on a node does not automatically move existing pods back onto it.

 -> Newly created pods, however, can be placed on node01 again (see the sketch below).
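> For example, creating new replicas of the existing blue deployment gives the scheduler something to place, and node01 is now a candidate again (a sketch; the replica count is only an example):

controlplane ~ ➜ kubectl scale deployment blue --replicas=5
controlplane ~ ➜ kubectl get pods -o wide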

 

 

★ Trying to drain node01 again produces an error. Why?

 -> A standalone pod (hr-app), not managed by any controller, has been scheduled on node01.
-> In that case the drain command refuses to run. To force-drain the node you now have to use the --force option.

controlplane ~ ➜ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES blue-987f68cb5-gbbq8 1/1 Running 0 11m 10.244.0.5 controlplane <none> <none> blue-987f68cb5-gwvqs 1/1 Running 0 11m 10.244.0.4 controlplane <none> <none> blue-987f68cb5-njps9 1/1 Running 0 11m 10.244.0.6 controlplane <none> <none> hr-app 1/1 Running 0 3m29s 10.244.1.4 node01 <none> <none> controlplane ~ ➜

 

★ What happens to the hr-app pod if node01 is force-drained?

 -> It gets deleted, and since no controller manages it, it is gone for good ~

controlplane ~ ➜ kubectl drain node01 --ignore-daemonsets --force node/node01 already cordoned Warning: deleting Pods that declare no controller: default/hr-app; ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-xhg7x, kube-system/kube-proxy-96vtg evicting pod default/hr-app pod/hr-app evicted node/node01 drained controlplane ~ ➜ controlplane ~ ➜ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES blue-987f68cb5-gbbq8 1/1 Running 0 15m 10.244.0.5 controlplane <none> <none> blue-987f68cb5-gwvqs 1/1 Running 0 15m 10.244.0.4 controlplane <none> <none> blue-987f68cb5-njps9 1/1 Running 0 15m 10.244.0.6 controlplane <none> <none> controlplane ~ ➜

 

★ The hr-app pod is very important, so after recovering it you must make sure it does not get deployed onto the risky node01 node again.

controlplane ~ ➜ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES blue-987f68cb5-gbbq8 1/1 Running 0 16m 10.244.0.5 controlplane <none> <none> blue-987f68cb5-gwvqs 1/1 Running 0 16m 10.244.0.4 controlplane <none> <none> blue-987f68cb5-njps9 1/1 Running 0 16m 10.244.0.6 controlplane <none> <none> hr-app-66c4c9c67f-78hm7 1/1 Running 0 47s 10.244.1.5 node01 <none> <none> controlplane ~ ➜ controlplane ~ ➜ kubectl cordon node01 node/node01 cordoned controlplane ~ ➜
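> cordon only marks the node as unschedulable; the hr-app replica that is already running on node01 stays where it is. A quick check:

# node01 should show Ready,SchedulingDisabled while hr-app keeps running on it
controlplane ~ ➜ kubectl get nodes
controlplane ~ ➜ kubectl get pods -o wide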

 

 
