Project author: vutkin

Project description: Kubernetes multinode cluster quickstart using Vagrant

Project URL: git://github.com/vutkin/k8s-quickstart.git
Created: 2019-09-17T18:49:55Z
Project community: https://github.com/vutkin/k8s-quickstart

Reference: https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/

Operations

Vagrant CLI

  1. vagrant up
  2. vagrant ssh k8s-master
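
Once the VMs are up, a quick sanity check is to list the nodes from the master. This is a minimal sketch; it assumes the provisioning playbook left a working kubeconfig for the vagrant user on k8s-master, as in the referenced blog post.

# Run from the host: confirm all nodes registered and are Ready
vagrant ssh k8s-master -c "kubectl get nodes -o wide"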

Dashboard

Certs for dashboard

  1. mkdir certs
  2. openssl req -nodes -newkey rsa:2048 -keyout certs/dashboard.key -out certs/dashboard.csr -subj "/CN=kubernetes-dashboard"
  3. openssl x509 -req -sha256 -days 365 -in certs/dashboard.csr -signkey certs/dashboard.key -out certs/dashboard.crt
  4. kubectl create ns kubernetes-dashboard
  5. kubectl create secret generic kubernetes-dashboard-certs --from-file=certs -n kubernetes-dashboard
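
Optionally, the generated certificate and the resulting secret can be inspected before deploying the dashboard:

# Show the subject and validity window of the self-signed certificate
openssl x509 -in certs/dashboard.crt -noout -subject -dates
# Confirm the secret carries dashboard.crt and dashboard.key
kubectl describe secret kubernetes-dashboard-certs -n kubernetes-dashboard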

Deploy dashboard

  1. kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
  2. kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
  3. kubectl get svc kubernetes-dashboard -n kubernetes-dashboard -o=jsonpath='{.spec.ports[0].nodePort}{"\n"}'
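
It can help to wait for the dashboard rollout to finish before requesting a token; a small check, assuming the deployment name used by the manifest above:

kubectl -n kubernetes-dashboard rollout status deploy/kubernetes-dashboard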
Create an admin-user service account, bind it to the cluster-admin role, and read back its login token:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF

cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
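
On newer clusters (Kubernetes v1.24 and later) token secrets are no longer created automatically for service accounts, so the describe step above comes back empty; a short-lived token can be requested instead:

# Requires kubectl >= 1.24
kubectl -n kube-system create token admin-user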

Deploy nginx

  1. kubectl create ns test
  2. kubectl create deployment nginx --image=nginx -n test
  3. kubectl create service nodeport nginx --tcp=80:80 -n test
  4. kubectl get svc nginx -n test -o=jsonpath='{.spec.ports[0].nodePort}{"\n"}'
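
To verify the service from the host, curl any node's IP on the reported NodePort; <node-ip> below is a placeholder for one of the cluster nodes:

NODEPORT=$(kubectl get svc nginx -n test -o=jsonpath='{.spec.ports[0].nodePort}')
curl http://<node-ip>:${NODEPORT}/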

Deploy a hello, world app

  1. kubectl create ns test
  2. kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080 -n test
  3. kubectl expose deployment web --target-port=8080 --type=NodePort -n test
  4. kubectl get svc web -n test -o=jsonpath='{.spec.ports[0].nodePort}{"\n"}'
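
Note that on recent kubectl versions kubectl run creates a bare Pod rather than a Deployment, so the expose step above would fail; an equivalent alternative in that case is:

kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 -n test
kubectl expose deployment web --port=8080 --target-port=8080 --type=NodePort -n test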

Ingress

Create an Ingress for the web service in the test namespace. The heredoc delimiter is quoted so that the shell does not expand the $1 in the rewrite annotation:

cat <<'EOF' | kubectl create -f -
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  namespace: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /(.+)
        backend:
          serviceName: web
          servicePort: 8080
EOF
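
Assuming an ingress-nginx controller serves this host (the rewrite-target annotation above is specific to ingress-nginx), the rule can be tested from the host by sending the matching Host header to the controller's HTTP NodePort; values in angle brackets are placeholders:

curl -H "Host: hello-world.info" http://<node-ip>:<ingress-nodeport>/anything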

Known issues

Flannel minutiae

The VMs in my initial setup had their first NIC connected to the VirtualBox NAT network and their second NIC connected to a host-only network. As a result, pod IPs were unreachable across nodes: flannel picks the first NIC by default, and in my case that was the NAT interface, over which the machines could not talk to each other, only reach the internet, so the pod network never worked across nodes. An easy way to find out which NIC flannel uses is to look at flannel's logs:

$ kubectl get pod --all-namespaces | grep flannel
kube-system   kube-flannel-ds-j587n   1/1   Running   2   3h
kube-system   kube-flannel-ds-p6sm7   1/1   Running   1   3h
kube-system   kube-flannel-ds-xn27c   1/1   Running   1   3h
$ kubectl logs -f kube-flannel-ds-j587n -n kube-system
I0622 14:17:37.841808 1 main.go:487] Using interface with name enp0s3 and address 10.0.2.2

In the output above you can see that the flannel pod picked the NAT interface, which will not work. I fixed it as follows:

  1. Delete the flannel daemon set:

     kubectl delete ds -l app=flannel -n kube-system

  2. Remove the flannel.1 virtual NIC on each node:

     ip link delete flannel.1

  3. Recreate the flannel daemon set with a tweak: open the flannel-<...>.yaml manifest and append --iface and eth1 (or whatever the name of the NIC connected to the host-only network is) to the kube-flannel container's arguments:

     args:
     - --ip-masq
     - --kube-subnet-mgr
     - --iface
     - eth1

  4. Finally, restart kubelet and docker on all nodes:

     systemctl stop kubelet && systemctl restart docker && systemctl start kubelet

The solution was found in this reference article: https://medium.com/@ErrInDam/taming-kubernetes-for-fun-and-profit-60a1d7b353de
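
After the daemon set comes back up, the same log line can be used to confirm that flannel now binds the host-only NIC:

kubectl -n kube-system logs -l app=flannel | grep "Using interface"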

Ingress controllers

  1. kubectl apply -f https://raw.githubusercontent.com/haproxytech/kubernetes-ingress/master/deploy/haproxy-ingress.yaml
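
The upstream manifest creates its own namespace (haproxy-controller at the time this was written); a quick check that the controller and its default backend came up:

kubectl get pods -n haproxy-controller -o wide
kubectl get svc -n haproxy-controller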

Assign to master node

nodeSelector:
  node-role.kubernetes.io/master: ""
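
For example, to pin the HAProxy ingress controller to the master node, the selector can be merged into its deployment; a toleration for the default kubeadm master taint is usually required as well. The deployment and namespace names below are assumptions based on the upstream manifest, so adjust them to match your cluster:

kubectl -n haproxy-controller patch deployment haproxy-ingress --type merge -p '{
  "spec": {"template": {"spec": {
    "nodeSelector": {"node-role.kubernetes.io/master": ""},
    "tolerations": [{"key": "node-role.kubernetes.io/master", "operator": "Exists", "effect": "NoSchedule"}]
  }}}
}'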

Controlling your cluster from machines other than the control-plane node

  1. scp root@<master ip>:/etc/kubernetes/admin.conf .
  2. kubectl --kubeconfig ./admin.conf get nodes
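
Alternatively, make the copied file the default kubeconfig for the current shell:

export KUBECONFIG=$PWD/admin.conf
kubectl get nodes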

Proxying API Server to localhost

  1. kubectl --kubeconfig ./admin.conf proxy
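
With the proxy running (port 8001 by default), the API server and the dashboard become reachable on localhost; the dashboard URL below follows the standard service-proxy path for the kubernetes-dashboard service deployed earlier:

# API server version through the local proxy
curl http://localhost:8001/version
# Dashboard UI (open in a browser):
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/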