Project author: allir

Project description: kubernetes-the-hard-way scripted
Language: Shell
Repository: git://github.com/allir/kubernetes-the-scripted-way.git
Created: 2020-03-26T13:58:11Z
Project community: https://github.com/allir/kubernetes-the-scripted-way

License:

kubernetes-the-scripted-way

Kubernetes-the-hard-way (KTHW) on Vagrant… Scripted

Requirements

  • VirtualBox
  • Vagrant

Installing Requirements

macOS

Using Homebrew

  brew cask install virtualbox virtualbox-extension-pack
  brew cask install vagrant
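
Note: on newer Homebrew releases the standalone brew cask command has been removed in favor of brew install --cask. If the commands above fail, the equivalent is:

  brew install --cask virtualbox virtualbox-extension-pack
  brew install --cask vagrant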

Usage

Provisioning with Vagrant

The default setup provisions one load balancer node, three control-plane nodes, and two worker nodes.

To stand up a new environment:

  vagrant up

Connect to a node with vagrant ssh <node>. Example:

  vagrant ssh control-plane-1
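
To see which machines Vagrant manages and their current state, vagrant status can be used. The output below is only a sketch; the machine names assume the default setup described above (the load balancer's name in particular may differ):

  vagrant status
  ## Current machine states:
  ##
  ## loadbalancer              running (virtualbox)
  ## control-plane-1           running (virtualbox)
  ## control-plane-2           running (virtualbox)
  ## control-plane-3           running (virtualbox)
  ## worker-1                  running (virtualbox)
  ## worker-2                  running (virtualbox)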

Kubernetes setup

  1. Set up PKI, Certificates, and Kubeconfigs

    # On control-plane-1 run the setup script
    /vagrant/scripts/k8s-01-setup.sh
  2. Set up the Control Plane (control-plane nodes)

    # On ALL of the control-plane nodes run the control-plane setup script
    /vagrant/scripts/k8s-02-control-plane.sh
  3. Set up kube-apiserver RBAC, Node Bootstrapping, Networking, and Cluster DNS resources

    # On control-plane-1 run the resources setup script
    /vagrant/scripts/k8s-03-resources.sh
  4. Set up Worker Nodes

    There are two ways to set up the worker nodes: with manually created certificates and configuration, or with TLS bootstrapping to generate the certificates automatically.

    The required certificates are created during step 1, so either approach will work. You can provision all the worker nodes the same way or each one differently.

    NOTE: This should also be run on the control-plane nodes so that they are visible in the cluster and join the cluster networking.

    a. Manually created certificates

    # On one or more control-plane & worker nodes run the worker setup script
    /vagrant/scripts/k8s-04a-workers.sh

    b. TLS Bootstrap

    # On one or more control-plane & worker nodes run the worker bootstrap setup script
    /vagrant/scripts/k8s-04b-workers-tls.sh
    # On control-plane-1 check the worker node certificate request and approve it
    # (a sketch for approving several pending requests at once follows this list)
    kubectl get csr
    ## OUTPUT:
    ## NAME        AGE   REQUESTOR              CONDITION
    ## csr-95bv6   20s   system:node:worker-2   Pending
    kubectl certificate approve csr-95bv6
  5. Verification

    Let’s check the health of the etcd cluster, the control plane, and the worker nodes and their components.

    # On control-plane-1
    # ETCD member list
    sudo ETCDCTL_API=3 etcdctl member list --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/etcd-server.pem --key=/etc/etcd/etcd-server-key.pem
    ## 45bf9ccad8d8900a, started, control-plane-2, https://192.168.5.12:2380, https://192.168.5.12:2379, false
    ## 54a5796a6803f252, started, control-plane-1, https://192.168.5.11:2380, https://192.168.5.11:2379, false
    ## da27c13c21936c01, started, control-plane-3, https://192.168.5.13:2380, https://192.168.5.13:2379, false
    # ETCD endpoint health
    sudo ETCDCTL_API=3 etcdctl endpoint health --endpoints=https://192.168.5.11:2379,https://192.168.5.12:2379,https://192.168.5.13:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/etcd-server.pem --key=/etc/etcd/etcd-server-key.pem
    ## https://192.168.5.11:2379 is healthy: successfully committed proposal: took = 11.698581ms
    ## https://192.168.5.13:2379 is healthy: successfully committed proposal: took = 12.404629ms
    ## https://192.168.5.12:2379 is healthy: successfully committed proposal: took = 17.80096ms
    # Control Plane components
    kubectl get componentstatuses
    ## NAME                 STATUS    MESSAGE             ERROR
    ## controller-manager   Healthy   ok
    ## scheduler            Healthy   ok
    ## etcd-2               Healthy   {"health":"true"}
    ## etcd-0               Healthy   {"health":"true"}
    ## etcd-1               Healthy   {"health":"true"}
    # Node status
    kubectl get nodes
    ## NAME       STATUS   ROLES    AGE     VERSION
    ## worker-1   Ready    <none>   4m20s   v1.18.0
    ## worker-2   Ready    <none>   4m21s   v1.18.0
    # Check version via loadbalancer
    curl --cacert ca.pem https://192.168.5.30:6443/version
    ## {
    ##   "major": "1",
    ##   "minor": "18",
    ##   "gitVersion": "v1.18.0",
    ##   "gitCommit": "9e991415386e4cf155a24b1da15becaa390438d8",
    ##   "gitTreeState": "clean",
    ##   "buildDate": "2020-03-25T14:50:46Z",
    ##   "goVersion": "go1.13.8",
    ##   "compiler": "gc",
    ##   "platform": "linux/amd64"
    ## }
    # Test cluster DNS
    kubectl run dnsutils --image="gcr.io/kubernetes-e2e-test-images/dnsutils:1.3" --command -- sleep 4800
    ## pod/dnsutils created
    kubectl exec dnsutils -- nslookup kubernetes.default
    ## Server:    10.96.0.10
    ## Address:   10.96.0.10#53
    ## Name:      kubernetes.default.svc.cluster.local
    ## Address:   10.96.0.1
    kubectl delete pod dnsutils
    ## pod "dnsutils" deleted
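
If several nodes were joined via TLS bootstrapping (step 4b), each one produces its own pending CSR. A minimal sketch for approving them all at once using plain kubectl (not part of the repository's scripts); on a fresh cluster every CSR is a node bootstrap request, so blanket approval is safe here:

  # On control-plane-1: approve every CertificateSigningRequest in one go
  kubectl certificate approve $(kubectl get csr -o jsonpath='{.items[*].metadata.name}')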

Smoke Tests

Let’s set up an NGINX deployment and service as a smoke test. This can be run from the control-plane-1 node or by using the admin.kubeconfig found in the repository folder after provisioning.
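
For the second option, a minimal sketch of pointing kubectl at that kubeconfig from the host; this assumes kubectl is installed on the host and that the commands are run from the repository folder:

  kubectl --kubeconfig=admin.kubeconfig get nodes
  # ...or export it for the rest of the shell session
  export KUBECONFIG=$PWD/admin.kubeconfig
  kubectl get nodes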

  kubectl create deployment nginx --image=nginx
  ## deployment.apps/nginx created
  kubectl scale deployment nginx --replicas=3
  ## deployment.apps/nginx scaled
  kubectl expose deployment nginx --port=80 --target-port=80 --type NodePort
  ## service/nginx exposed
  kubectl get service nginx -o yaml | sed -E "s/nodePort\:.*/nodePort: 30080/" | kubectl apply -f -
  ## Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
  ## service/nginx configured
  kubectl get pod,deployment,service
  ## NAME                        READY   STATUS    RESTARTS   AGE
  ## pod/nginx-f89759699-7lr85   1/1     Running   0          3m37s
  ## pod/nginx-f89759699-gn97b   1/1     Running   0          3m30s
  ## pod/nginx-f89759699-l5bjt   1/1     Running   0          3m30s
  ##
  ## NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
  ## deployment.apps/nginx   3/3     3            3           3m37s
  ##
  ## NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
  ## service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        43m
  ## service/nginx        NodePort    10.96.0.67   <none>        80:30080/TCP   104s
  curl http://worker-1:30080 && curl http://worker-2:30080
  ## <!DOCTYPE html>
  ## <html>
  ## <head>
  ## <title>Welcome to nginx!</title>
  ## <style>
  ## body {
  ##   width: 35em;
  ##   margin: 0 auto;
  ##   font-family: Tahoma, Verdana, Arial, sans-serif;
  ## }
  ## </style>
  ## </head>
  ## <body>
  ## <h1>Welcome to nginx!</h1>
  ## <p>If you see this page, the nginx web server is successfully installed and
  ## working. Further configuration is required.</p>
  ##
  ## <p>For online documentation and support please refer to
  ## <a href="http://nginx.org/">nginx.org</a>.<br/>
  ## Commercial support is available at
  ## <a href="http://nginx.com/">nginx.com</a>.</p>
  ##
  ## <p><em>Thank you for using nginx.</em></p>
  ## </body>
  ## </html>
  ## ...
  # Let's generate some logs and then check logging. This verifies kube-apiserver to kubelet RBAC permissions.
  for (( i=0; i<50; ++i)); do
    curl http://worker-1:30080 &>/dev/null && curl http://worker-2:30080 &>/dev/null
  done
  kubectl logs deployment/nginx
  ## Found 3 pods, using pod/nginx-f89759699-7lr85
  ## 10.32.0.1 - - [26/Mar/2020:13:48:07 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.58.0" "-"
  ## 10.32.0.1 - - [26/Mar/2020:13:55:38 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.58.0" "-"
  ## 10.44.0.0 - - [26/Mar/2020:13:55:38 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.58.0" "-"
  ## ...
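
Optionally, remove the smoke-test resources when done; these are plain kubectl commands rather than part of the repository's scripts:

  kubectl delete service nginx
  ## service "nginx" deleted
  kubectl delete deployment nginx
  ## deployment.apps "nginx" deleted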

Conclusion

Awesome!

Cleanup

Destroy the machines and clean up temporary files from the repository.

  vagrant destroy -f
  git clean -xf
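
To preview what git clean would remove before running it, a dry run can be done first (standard Git behavior, not specific to this repository):

  git clean -nx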