Project author: ethersphere

Project description: Swarm on Kubernetes
Language: HCL
Repository: git://github.com/ethersphere/swarm-kubernetes.git
Created: 2018-11-26T12:59:06Z
Community: https://github.com/ethersphere/swarm-kubernetes

License:

Swarm on Kubernetes

This document is targeted at developers who want to create a Kubernetes environment with Swarm and Geth applications running on it.

Users of this setup should follow the USER-GUIDE.md

Note that this setup is currently AWS specific.

Table of Contents

  1. Kubernetes service
  2. Add Kubernetes Dashboard to cluster
  3. Bootstrap auxiliary services
  4. Ethersphere Helm Charts
  5. Cluster monitoring with Prometheus and Grafana

Kubernetes service

Terraform EKS playbooks

Terraform playbooks for an AWS EKS cluster (AWS Managed Kubernetes Service) with all related AWS resources (VPC, launch configurations, auto-scaling groups, security groups, etc.).

  1. Update values in backend.tf-sample, and rename it to backend.tf.

  2. Update users in outputs.tf-sample, and rename it to outputs.tf. The sample users arn:aws:iam::123456789012:user/alice and arn:aws:iam::123456789012:user/bob are added as admins for your Kubernetes environment.

     mapUsers: |
       - userarn: arn:aws:iam::123456789012:user/alice
         username: alice
         groups:
           - system:masters
       - userarn: arn:aws:iam::123456789012:user/bob
         username: bob
         groups:
           - system:masters
  3. Review and update variables.tf. Note that this setup is running on AWS spot instances that could be terminated by AWS at any time. You will have to amend the Terraform scripts if you want to run on-demand instances.

  4. Initialise Terraform and create the infrastructure. These Terraform EKS templates are heavily influenced by https://github.com/codefresh-io/eks-installer

     terraform init
     terraform plan
     terraform apply

Update kubeconfig

    export CLUSTER_NAME=your-cluster-name
    ./generate_terraform_outputs.sh
    aws eks update-kubeconfig --name $CLUSTER_NAME
    kubectl apply -f ./outputs/config-map-aws-auth.yaml
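After applying the aws-auth ConfigMap, a quick sanity check (a sketch, assuming kubectl is now pointed at the new cluster) confirms that the worker nodes have registered:

```shell
# The current context should be the freshly created EKS cluster.
kubectl config current-context

# Worker nodes should appear and eventually report STATUS "Ready"
# once the aws-auth ConfigMap has been applied.
kubectl get nodes
```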

Add Kubernetes Dashboard to cluster

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
    kubectl apply -f eks-admin-and-cluster-role-binding.yaml
    echo "http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"
    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
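The dashboard URL printed above is only reachable through the API server proxy, so start one first (a sketch; port 8001 is kubectl's default proxy port):

```shell
# Start a local proxy to the Kubernetes API server (listens on 127.0.0.1:8001).
kubectl proxy &

# Open the dashboard in a browser (use xdg-open on Linux, open on macOS)
# and log in with the eks-admin token printed by the describe-secret command above.
xdg-open "http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"
```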

Bootstrap auxiliary services

Run bootstrap-cluster-services.sh in order to:

  1. Set up storage classes
  2. Add local persistent volumes to every i3.large instance in the cluster
  3. Install Tiller on kube-system to manage system-wide components
  4. Install the NGINX ingress controller
  5. Install cert-manager and certificate issuers
  6. Install the logging stack

    ./bootstrapping/bootstrap-cluster-services.sh
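After the script finishes, a quick way to check that the components listed above came up (a sketch; exact pod names depend on the chart versions installed):

```shell
# Tiller, the ingress controller and the logging stack run as pods in
# kube-system; everything should eventually reach the Running state.
kubectl -n kube-system get pods

# The storage classes created in step 1 should be listed here.
kubectl get storageclass
```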

Ethersphere Helm Charts

Requirements

You’ll need access to a k8s cluster and the following binaries on your system:

Using the Helm charts

To be able to use the Ethersphere Helm charts, you need to load them from our registry first:

    helm repo add ethersphere-charts https://raw.githubusercontent.com/ethersphere/helm-charts-artifacts/master/
    helm repo list
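With the repository added, you can list the charts it provides (helm v2 syntax, matching the Tiller-based setup used in this guide):

```shell
# Refresh the local chart cache, then search the new repository.
helm repo update
helm search ethersphere-charts/
```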

Create a namespace and deploy Tiller

Tiller is the server portion of Helm and runs inside your Kubernetes cluster.

We need to create a dedicated k8s namespace and deploy Tiller there with proper RBAC,
so that Tiller does not have full control over our k8s cluster.

This can be done as follows:

    export NAMESPACE=your-namespace
    # Create the namespace
    kubectl create namespace $NAMESPACE
    # Apply Tiller Role Based Access Controls to your namespace only
    kubectl -n $NAMESPACE apply -f tiller.rbac.yaml
    # Start Tiller in your namespace
    helm init --service-account tiller --tiller-namespace $NAMESPACE
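To verify that Tiller is up before deploying any charts (a sketch; tiller-deploy is the deployment name that helm init creates):

```shell
# Wait for the Tiller deployment in your namespace to become ready...
kubectl -n $NAMESPACE rollout status deployment/tiller-deploy

# ...then confirm the Helm client can reach it and versions match.
helm version --tiller-namespace $NAMESPACE
```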

Deploy your chart

Check out some examples of how to deploy your charts:

    # Deploy the geth chart with default values
    helm --tiller-namespace=$NAMESPACE \
      --namespace=$NAMESPACE \
      --name=geth install ethersphere-charts/geth

    # Deploy the geth chart by providing your own custom-values.yaml file.
    # This will override the default values.
    helm --tiller-namespace=$NAMESPACE \
      --namespace=$NAMESPACE \
      --name=geth install ethersphere-charts/geth \
      -f custom-values.yaml
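Once installed, the release can be inspected, upgraded, or removed with the standard helm v2 lifecycle commands (a sketch, using the release name geth from the examples above):

```shell
# Show the deployed resources and notes for the release.
helm --tiller-namespace=$NAMESPACE status geth

# Apply changed values to the running release.
helm --tiller-namespace=$NAMESPACE upgrade geth ethersphere-charts/geth \
  -f custom-values.yaml

# Delete the release and free its name for reuse.
helm --tiller-namespace=$NAMESPACE delete --purge geth
```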

Cluster monitoring with Prometheus and Grafana

Based on https://sysdig.com/blog/kubernetes-monitoring-prometheus-operator-part3/

    git clone https://github.com/coreos/prometheus-operator.git
    git clone https://github.com/mateobur/prometheus-monitoring-guide.git
    kubectl create -f prometheus-operator/contrib/kube-prometheus/manifests/
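kube-prometheus installs its components into the monitoring namespace; to reach the bundled Grafana locally you can port-forward to its service (a sketch; verify the actual service name with kubectl -n monitoring get svc):

```shell
# Wait for the monitoring stack to come up.
kubectl -n monitoring get pods

# Forward local port 3000 to the in-cluster Grafana service,
# then browse to http://localhost:3000.
kubectl -n monitoring port-forward svc/grafana 3000:3000
```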