# Swarm on Kubernetes
This document is targeted at developers who want to create a Kubernetes environment with Swarm and Geth applications running on it.
Users of this setup should follow `USER-GUIDE.md`.
Note that this setup is currently AWS specific.
The setup consists of Terraform playbooks for an AWS EKS cluster (EKS is the AWS managed Kubernetes service) together with all related AWS resources (VPC, launch configurations, auto-scaling groups, security groups, etc.).
Update the values in `backend.tf-sample`, and rename it to `backend.tf`.
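For orientation, a Terraform backend configuration usually points at an S3 bucket for shared state. The sketch below assumes an S3 backend with placeholder bucket, key, and region values; it may not match this repository's `backend.tf-sample` exactly:

```sh
# Sketch only: assumes an S3 state backend; replace bucket, key and region
# with your own values (equivalent to editing and renaming backend.tf-sample).
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "swarm-eks/terraform.tfstate"
    region = "us-east-1"
  }
}
EOF
```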
Update the users in `outputs.tf-sample`, and rename it to `outputs.tf`. The sample users `arn:aws:iam::123456789012:user/alice` and `arn:aws:iam::123456789012:user/bob` are added as admins for your Kubernetes environment:
```yaml
mapUsers: |
  - userarn: arn:aws:iam::123456789012:user/alice
    username: alice
    groups:
      - system:masters
  - userarn: arn:aws:iam::123456789012:user/bob
    username: bob
    groups:
      - system:masters
```
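To find the ARN of your own IAM user for `outputs.tf`, you can ask the AWS CLI (assuming your credentials are already configured):

```sh
# Prints the ARN of the currently configured AWS identity
aws sts get-caller-identity --query Arn --output text
```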
Review and update `variables.tf`. Note that this setup runs on AWS spot instances, which AWS may terminate at any time; you will have to amend the Terraform scripts if you want to run on-demand instances instead.
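If you prefer not to edit `variables.tf` in place, Terraform also accepts variable overrides on the command line. The variable names below are purely hypothetical; use the ones actually declared in `variables.tf`:

```sh
# Hypothetical variable names -- check variables.tf for the real ones
terraform plan \
  -var 'cluster-name=my-swarm-cluster' \
  -var 'spot-price=0.10'
```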
Initialise Terraform and create the infrastructure. These Terraform EKS templates are heavily influenced by https://github.com/codefresh-io/eks-installer:

```sh
terraform init
terraform plan
terraform apply
```
Generate the Terraform outputs, point `kubectl` at the new cluster, and apply the AWS auth ConfigMap:

```sh
export CLUSTER_NAME=your-cluster-name
./generate_terraform_outputs.sh
aws eks update-kubeconfig --name $CLUSTER_NAME
kubectl apply -f ./outputs/config-map-aws-auth.yaml
```
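Applying the aws-auth ConfigMap is what allows the worker nodes to register with the cluster; once it's in place, the nodes should show up:

```sh
# Nodes should appear and eventually report STATUS=Ready
kubectl get nodes
```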
Deploy the Kubernetes dashboard and an admin service account, then access the dashboard through `kubectl proxy`. The `describe secret` command prints the token you use to log in:

```sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl apply -f eks-admin-and-cluster-role-binding.yaml

# The dashboard URL is only reachable while kubectl proxy is running
kubectl proxy &
echo "http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"

# Print the admin token for the dashboard login
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
```
Run `bootstrap-cluster-services.sh` to bootstrap the cluster services:

```sh
./bootstrapping/bootstrap-cluster-services.sh
```
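After the bootstrap script finishes, it's worth checking that the services it deployed are coming up:

```sh
# All pods should eventually reach Running or Completed
kubectl get pods --all-namespaces
```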
You'll need access to a k8s cluster and a few binaries on your system; the steps below use at least `kubectl` and `helm` (plus `aws` and `terraform` for the infrastructure part).
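As a quick sanity check that the client binaries are installed and on your `PATH`:

```sh
kubectl version --client
helm version --client   # Helm 2; checks the client without contacting Tiller
aws --version
terraform version
```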
To be able to use the Ethersphere Helm charts, you first need to add our chart repository:

```sh
helm repo add ethersphere-charts https://raw.githubusercontent.com/ethersphere/helm-charts-artifacts/master/
helm repo list
```
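Once the repository is added, you can list the charts it provides. With Helm 2, `helm search` matches on the repository name prefix:

```sh
# List all charts available from the ethersphere-charts repository
helm search ethersphere-charts/
```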
Tiller is the server portion of Helm and runs inside your Kubernetes cluster. To avoid giving Tiller full control over the whole cluster, we create a dedicated k8s namespace and deploy Tiller there with RBAC rules scoped to that namespace:
```sh
export NAMESPACE=your-namespace

# Create the namespace
kubectl create namespace $NAMESPACE

# Apply the Tiller Role-Based Access Control (RBAC) rules to your namespace only
kubectl -n $NAMESPACE apply -f tiller.rbac.yaml

# Start Tiller in your namespace
helm init --service-account tiller --tiller-namespace $NAMESPACE
```
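For reference, a namespace-scoped Tiller policy along the lines of `tiller.rbac.yaml` typically consists of a service account plus a Role and RoleBinding restricted to the namespace. This is only a sketch following the standard Helm 2 pattern, not necessarily the exact contents of the file in this repository:

```sh
# Sketch of a namespace-scoped Tiller RBAC policy -- the actual
# tiller.rbac.yaml in this repository may differ.
kubectl -n $NAMESPACE apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: $NAMESPACE
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: $NAMESPACE
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: $NAMESPACE
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: $NAMESPACE
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager
EOF

# Verify that Tiller is up and reachable in the namespace
helm version --tiller-namespace $NAMESPACE
```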
Check out these examples of how to deploy the charts:
```sh
# Deploy the geth chart with default values
helm --tiller-namespace=$NAMESPACE \
     --namespace=$NAMESPACE \
     --name=geth install ethersphere-charts/geth

# Deploy the geth chart by providing your own custom-values.yaml file.
# This overrides the default values.
helm --tiller-namespace=$NAMESPACE \
     --namespace=$NAMESPACE \
     --name=geth install ethersphere-charts/geth \
     -f custom-values.yaml
```
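After installing a chart, you can verify the release and the pods it created:

```sh
# List releases managed by this Tiller instance
helm --tiller-namespace=$NAMESPACE list

# Inspect the pods the chart created
kubectl -n $NAMESPACE get pods
```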
The monitoring setup is based on https://sysdig.com/blog/kubernetes-monitoring-prometheus-operator-part3/:

```sh
git clone https://github.com/coreos/prometheus-operator.git
git clone https://github.com/mateobur/prometheus-monitoring-guide.git
kubectl create -f prometheus-operator/contrib/kube-prometheus/manifests/
```
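kube-prometheus deploys its components into the `monitoring` namespace by default. Assuming the default service names (`prometheus-k8s` is the kube-prometheus default and may differ in other setups), you can reach the Prometheus UI via a port-forward:

```sh
# Wait for the monitoring stack to come up
kubectl -n monitoring get pods

# Forward the Prometheus UI to http://localhost:9090
kubectl -n monitoring port-forward svc/prometheus-k8s 9090
```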