Project author: ronin13

Project description:
Zookeeper operator
Language: Go
Repository: git://github.com/ronin13/zookeeper-operator.git
Created: 2019-03-02T14:30:59Z
Project community: https://github.com/ronin13/zookeeper-operator

License:


Overview

zookeeper-operator implements an operator for Zookeeper 3.4.x based on operator-sdk
and controller-runtime.
It supports creating a ZooKeeper cluster from a spec that specifies the number of nodes and the size of the data directory.
Persistence is achieved through the ‘standard’ storage class and per-node PersistentVolumeClaims.
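
For example, once the cluster from the Start section below is up, the per-node claims can be listed directly; the PVC names here match the cleanup step later in this README, and 'standard' is the storage class named above:

  $ kubectl get storageclass standard
  $ kubectl get pvc zoos-zoos-0 zoos-zoos-1 zoos-zoos-2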

Scaling up and down through rolling updates is also supported. The Dockerfile used to build the ZooKeeper
image also resides in the same repository.
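
A minimal sketch of such a custom resource, assuming the wnohang.net/v1alpha1 API group shown in the describe output below and a lower-cased `nodes` spec field inferred from that output; the shipped example in deploy/crds/wnohang_v1alpha1_zookeeper_cr.yaml is the authoritative version, and the data-directory size field is omitted here because its exact name is not shown in this README:

  $ cat <<EOF | kubectl create -f -
  # Sketch only: spec field names are inferred, not copied from the CRD source.
  apiVersion: wnohang.net/v1alpha1
  kind: Zookeeper
  metadata:
    name: zoos
  spec:
    nodes: 3
  EOF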

Start

  # Make sure [operator-sdk](https://github.com/operator-framework/operator-sdk) is installed.
  # Refer to the operator-sdk docs for details and for operator-sdk's own dependencies.
  $ mkdir -p $GOPATH/src/github.com/operator-framework
  $ cd $GOPATH/src/github.com/operator-framework
  $ git clone https://github.com/operator-framework/operator-sdk
  $ cd operator-sdk
  $ git checkout master
  $ make dep
  $ make install
  $ mkdir -p $GOPATH/src/github.com/ronin13/
  $ cd $GOPATH/src/github.com/ronin13/
  $ git clone https://github.com/ronin13/zookeeper-operator
  $ cd zookeeper-operator
  # Make sure kubectl points to a running k8s cluster (or minikube for a start).
  $ minikube start
  # Build the ZooKeeper docker image.
  $ eval $(minikube docker-env)
  $ make -C docker-zookeeper build
  # Create accounts for RBAC.
  $ kubectl create -f deploy/service_account.yaml
  $ kubectl create -f deploy/role.yaml
  $ kubectl create -f deploy/role_binding.yaml
  # Deploy the zookeeper CRD.
  $ kubectl create -f deploy/crds/wnohang_v1alpha1_zookeeper_crd.yaml
  # Deploy the zookeeper custom resource (default name is zoos, with 3 nodes).
  $ kubectl create -f deploy/crds/wnohang_v1alpha1_zookeeper_cr.yaml
  # Finally, deploy the operator.
  $ kubectl create -f deploy/operator.yaml
  # See if the cluster is running.
  $ kubectl get pods -l app=zoos
  NAME     READY   STATUS    RESTARTS   AGE
  zoos-0   0/1     Running   0          6s
  zoos-1   0/1     Running   0          6s
  zoos-2   0/1     Running   0          6s
  $ kubectl describe zookeeper.wnohang.net/zoos
  Name:         zoos
  Namespace:    default
  Labels:       <none>
  Annotations:  <none>
  API Version:  wnohang.net/v1alpha1
  Kind:         Zookeeper
  Metadata:
    Creation Timestamp:  2019-04-01T22:16:41Z
    Generation:          1
    Resource Version:    184847
    Self Link:           /apis/wnohang.net/v1alpha1/namespaces/default/zookeepers/zoos
    UID:                 d39f11f4-54cb-11e9-8160-08002759e00a
  Spec:
    Nodes:  3
  Events:   <none>
  # Verify that it is running.
  $ for x in 0 1 2; do kubectl exec zoos-$x -- sh -c "echo -n $x' '; echo mntr | nc localhost 2181 | grep zk_server_state"; done
  0 zk_server_state follower
  1 zk_server_state follower
  2 zk_server_state leader
  # Cleanup.
  $ kubectl delete -R -f deploy/
  $ kubectl delete statefulsets/zoos pvc/zoos-zoos-{0,1,2}
  # Scaling up/down
  $ kubectl edit zookeeper.wnohang.net/zoos
  # Bump the number of nodes in the spec and save; the operator performs a rolling update (a non-interactive sketch follows below).
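
The same scaling change can also be made non-interactively. This is a sketch that assumes the spec field is serialised as lower-case `nodes` (suggested by the `Nodes: 3` line under `Spec:` in the describe output above); if the patch is rejected, check the field name in deploy/crds/wnohang_v1alpha1_zookeeper_cr.yaml.

  # Scale the cluster to 5 nodes and watch the rolling update (field name assumed).
  $ kubectl patch zookeeper.wnohang.net/zoos --type=merge -p '{"spec":{"nodes":5}}'
  $ kubectl get pods -l app=zoos -w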