Project author: cytopia

Project description:
This role renders an arbitrary number of Jinja2 templates and deploys them to, or removes them from, Kubernetes clusters.
Primary language: Python
Project URL: git://github.com/cytopia/ansible-role-k8s.git
Created: 2018-08-07T07:15:42Z
Project community: https://github.com/cytopia/ansible-role-k8s

License: MIT License

Download


Ansible role: K8s

This role renders an arbitrary number of Jinja2 templates and deploys them
to, or removes them from, one or multiple Kubernetes clusters.
Additionally, this role offers a dry-run[1]
for Kubernetes deployments by doing a line-by-line diff between local templates and the
currently deployed templates.


Table of Contents

  1. Requirements
  2. Role variables
    1. Template variables
    2. Authentication variables
    3. Available list item keys
  3. Dry-run
    1. How does it work
    2. Particularities
    3. How does it look
  4. Examples
    1. Usage of variables
    2. Usage of tags per item
    3. Usage of context per item
  5. Testing
  6. License

Requirements

In order to use this role you need to meet the following software requirements:

Role variables

The K8s role offers the following variables.

Template variables

| Variable     | Type   | Description                                                                                         |
|--------------|--------|-----------------------------------------------------------------------------------------------------|
| `k8s_create` | list   | If set to any value, only deployments to create are executed.                                       |
| `k8s_remove` | list   | If set to any value, only deployments to remove are executed.                                       |
| `k8s_tag`    | string | Only deployments (create or remove) which have this tag specified in their definition are executed. |
| `k8s_force`  | bool   | Force the deployment. The existing object will be replaced.                                         |

Authentication variables

Each of the following values can also be set per item and will then take precedence over the
global values listed below:

| Variable          | Type   | Description                                               |
|-------------------|--------|-----------------------------------------------------------|
| `k8s_context`     | string | Global cluster context                                    |
| `k8s_host`        | string | The Kubernetes API hostname                               |
| `k8s_api_key`     | string | API key/token to authenticate against the cluster         |
| `k8s_ssl_ca_cert` | string | Certificate authority to authenticate against the cluster |
| `k8s_cert_file`   | string | Client certificate to authenticate against the cluster    |
| `k8s_key_file`    | string | Client key to authenticate against the cluster            |
| `k8s_username`    | string | Username to authenticate against the cluster              |
| `k8s_password`    | string | Password to authenticate against the cluster              |
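The per-item override rule described above can be sketched in a few lines of Python. This is an illustration only, not the role's actual implementation; the `resolve_auth` helper and the sample values are hypothetical, while the key names mirror the role's `k8s_*` variables.

```python
# Sketch (not the role's code): item-level keys take precedence over
# the global k8s_* authentication variables.

GLOBAL_AUTH = {
    "context": "minikube",   # k8s_context
    "host": None,            # k8s_host
    "username": "admin",     # k8s_username
}

def resolve_auth(item, global_auth):
    """Merge global auth values with per-item overrides."""
    merged = dict(global_auth)
    for key, value in item.items():
        # Only keys that exist globally can be overridden per item
        if key in global_auth and value is not None:
            merged[key] = value
    return merged

item = {"template": "path/to/pod3.yml.j2", "context": "dev-cluster"}
print(resolve_auth(item, GLOBAL_AUTH)["context"])  # dev-cluster
```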

Available list item keys

The only required item key is template, everything else is optional.

```yaml
# Specify a list of templates to remove
# Runs before k8s_templates_create
# Has the same arguments as k8s_templates_create
k8s_templates_remove: []

# Specify a list of templates to deploy
k8s_templates_create:
  - template:     # <str> Path to jinja2 template to deploy
    tag:          # <str> Tag this template (used by k8s_tag to only deploy this list item)
    tags:         # <list> List of tags (mutually exclusive with tag)
      - tag1
      - tag2
    context:      # <str> Overwrites k8s_context for this item
    host:         # <str> Overwrites k8s_host for this item
    api_key:      # <str> Overwrites k8s_api_key for this item
    ssl_ca_cert:  # <str> Overwrites k8s_ssl_ca_cert for this item
    cert_file:    # <str> Overwrites k8s_cert_file for this item
    key_file:     # <str> Overwrites k8s_key_file for this item
    username:     # <str> Overwrites k8s_username for this item
    password:     # <str> Overwrites k8s_password for this item
```

Dry-run

The dry-run does not test whether the templates to be deployed will actually work; it simply adds
a diff output similar to git diff. With this you can see any changes your local
template would introduce compared to what is currently deployed.

How does it work

At a very high level, the dry-run works as follows:


  1. Read the currently deployed template from Kubernetes via kubectl
  2. Render the local Jinja2 template
  3. Diff both templates in human-readable YAML format and add the result to Ansible's diff output
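The core of step 3 can be sketched with Python's standard library alone. The role itself has its own diff logic; the manifests below and the use of `difflib` are purely illustrative of the before/after comparison.

```python
# Sketch: unified diff of a deployed manifest vs. a locally rendered one,
# similar in spirit to `git diff`. The manifests are illustrative only.
import difflib

deployed = """\
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    jenkins-x.io/created-by: Jenkins X
  name: jenkins
"""

local = """\
apiVersion: v1
kind: Namespace
metadata:
  name: jenkins
"""

diff_text = "".join(difflib.unified_diff(
    deployed.splitlines(keepends=True),
    local.splitlines(keepends=True),
    fromfile="before",
    tofile="after",
))
print(diff_text)
```

Lines present only remotely show up prefixed with `-`, lines only in the local template with `+`, which is exactly the shape of the output shown further below.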

Particularities

Kubernetes automatically adds a lot of default options to its deployed templates if no value
has been specified for them in your template. This would make the diff output unusable, as local and
deployed templates would always show differences.

To overcome this problem, the K8s role offers a dictionary definition for all Kubernetes kinds
that defines keys to ignore on the remote side.

This ignore part is still work in progress as I did not have the chance to compare all available
deployment kinds. The current ignore implementation can be seen in vars/main.yml.
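The ignore idea can be sketched as follows. The actual ignore lists live in the role's vars/main.yml; the `strip_ignored` helper, the key paths, and the sample manifest here are hypothetical illustrations of the concept.

```python
# Sketch: before diffing, drop keys that Kubernetes adds by default on
# the remote side, so they don't pollute the diff. Illustration only.

IGNORE_REMOTE = {"Namespace": ["metadata.creationTimestamp", "status"]}

def strip_ignored(manifest, ignored_paths):
    """Remove dotted key paths from a manifest dict (in place)."""
    for path in ignored_paths:
        parts = path.split(".")
        node = manifest
        for part in parts[:-1]:
            node = node.get(part, {})
        node.pop(parts[-1], None)
    return manifest

remote = {
    "kind": "Namespace",
    "metadata": {"name": "frontend", "creationTimestamp": "2018-08-07"},
    "status": {"phase": "Active"},
}
cleaned = strip_ignored(remote, IGNORE_REMOTE["Namespace"])
print(cleaned)  # server-side defaults removed; now comparable to the local template
```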

How does it look

For dry-run it is recommended to use the --diff option so that you can actually see the changes.

```shell
$ ansible-playbook playbook-k8s.yml -i inventories/dev/hosts --check --diff
```
```diff
TASK [k8s : [my-kubernetes-cluster.k8s.local] diff: namespace.yml.j2] *******************
--- before
+++ after
@@ -1,8 +1,6 @@
 apiVersion: v1
 kind: Namespace
 metadata:
-  annotations:
-    jenkins-x.io/created-by: Jenkins X
   labels:
     name: jenkinks
   name: jenkinks

changed: [kubernetes]

TASK [k8s : [my-kubernetes-cluster.k8s.local] diff: metrics-server-dpl.yml.j2] **********
--- before
+++ after
@@ -1,7 +1,6 @@
 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
-  annotations: {}
   labels:
     k8s-app: metrics-server
   name: metrics-server
@@ -10,10 +9,6 @@
   selector:
     matchLabels:
       k8s-app: metrics-server
-  strategy:
-    rollingUpdate:
-      maxSurge: '1'
-      maxUnavailable: '1'
   template:
     metadata:
       labels:

changed: [kubernetes]
```

Examples

For all examples below, we will use the following Ansible playbook:

playbook.yml

```yaml
---
- hosts: all
  roles:
    - k8s
  tags:
    - k8s
```

1. Usage of variables

Required files:

create-k8s-namespace.yml.j2

```yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: {{ my_namespace }}
  labels:
    name: {{ my_namespace }}
```

group_vars/all.yml

```yaml
---
# Custom variables for usage in templates
my_namespace: frontend

# Role variables
k8s_templates_create:
  - template: path/to/create-k8s-namespace.yml.j2
```

How to execute:

```shell
# Deploy namespace
$ ansible-playbook playbook.yml

# Overwrite namespace name
$ ansible-playbook playbook.yml -e my_namespace=backend
```

2. Usage of tags per item

Required files:

group_vars/all.yml

```yaml
---
k8s_templates_create:
  - template: path/to/pod1.yml.j2
    tag: stage1
  - template: path/to/pod2.yml.j2
    tags:
      - pod
      - stage2

k8s_templates_remove:
  - template: path/to/ds1.yml.j2
    tag: stage1
  - template: path/to/ds2.yml.j2
    tags:
      - pod
      - stage2
```

How to execute:

```shell
# Remove and deploy all files
$ ansible-playbook playbook.yml

# Only deploy files
$ ansible-playbook playbook.yml -e k8s_create=1

# Only deploy files with tag stage1
$ ansible-playbook playbook.yml -e k8s_create=1 -e k8s_tag=stage1
```
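The tag selection above follows a simple rule: an item runs when its `tag` equals `k8s_tag`, or when `k8s_tag` appears in its `tags` list. A small sketch of that rule (the `matches_tag` helper is hypothetical; the role implements this in its task logic):

```python
# Sketch: how k8s_tag can select items from k8s_templates_create /
# k8s_templates_remove. Illustration only, not the role's code.

def matches_tag(item, k8s_tag):
    if k8s_tag is None:
        return True  # no filter given: every item runs
    if item.get("tag") == k8s_tag:
        return True
    return k8s_tag in item.get("tags", [])

templates = [
    {"template": "path/to/pod1.yml.j2", "tag": "stage1"},
    {"template": "path/to/pod2.yml.j2", "tags": ["pod", "stage2"]},
]

selected = [t["template"] for t in templates if matches_tag(t, "stage1")]
print(selected)  # only pod1 carries the stage1 tag
```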

3. Usage of context per item

Required files:

group_vars/all.yml

```yaml
---
# The context is global for all deployment files
k8s_context: minikube

k8s_templates_create:
  - template: path/to/pod1.yml.j2
  - template: path/to/pod2.yml.j2
  # The next item uses a different context (takes precedence over the global context)
  - template: path/to/pod3.yml.j2
    context: dev-cluster
```

How to execute:

```shell
# IMPORTANT:
# When a context is attached to an item (as with pod3.yml.j2),
# it takes precedence over any globally specified context.
# So this example deploys everything into the cluster specified by the global
# context, except pod3.yml.j2, which will always go into dev-cluster.

# Deploy everything into minikube (pod3.yml.j2 will however be deployed into dev-cluster)
$ ansible-playbook playbook.yml -e k8s_create=1

# Deploy everything into a different cluster (pod3.yml.j2 will however be deployed into dev-cluster)
$ ansible-playbook playbook.yml -e k8s_create=1 -e k8s_context=prod-cluster
```

Testing

Requirements

In order to run the tests you need to meet the following software requirements:

Run tests

```shell
# Lint the source files
make lint

# Run integration tests with the default Ansible version
make test

# Run integration tests with a custom Ansible version
make test ANSIBLE_VERSION=2.6
```

License

MIT License

Copyright (c) 2018 cytopia