Project author: developer-guy

Project description: OPA Gatekeeper vs Kyverno

Project address: git://github.com/developer-guy/policy-as-code-war.git
Created: 2021-02-25T07:18:02Z
Project community: https://github.com/developer-guy/policy-as-code-war


policy_as_code_war

Introduction

In this guide, we are going to demonstrate what OPA Gatekeeper and Kyverno are, what the differences between them are, and how to set them up and use them in a Kubernetes cluster with a hands-on demo.

So, if you are interested in any of these topics, keep reading; there are lots of good details in the following sections 💪.

Let’s start by defining what the Policy-as-Code concept is.

Prerequisites

  • minikube v1.17.1
  • kubectl v1.20.2

What is Policy-as-Code?

Similar to the concept of Infrastructure-as-Code (IaC) and the benefits you get from codifying your infrastructure setup using software development practices, Policy-as-Code (PaC) is the codification of your policies.

PaC is the idea of writing code in a high-level language to manage and automate policies. By representing policies as code in text files, proven software development best practices can be adopted such as version control, automated testing, and automated deployment.

The policies you want to enforce come from your organization’s established guidelines or agreed-upon conventions, and best practices within the industry. It could also be derived from tribal knowledge that has accumulated over the years within your operations and development teams.

PaC is very general, so it can be applied to any environment where you want to manage and enforce policies. If you want to apply it to the Kubernetes world, however, two tools have become very important: OPA Gatekeeper and Kyverno.

Let’s continue with the description of these tools.

What is OPA Gatekeeper?

Before moving on to the description of OPA Gatekeeper, we should first explain what OPA (Open Policy Agent) is.

OPA is an open-source, general-purpose policy engine that can be used to enforce policies on various types of software systems like microservices, CI/CD pipelines, gateways, Kubernetes, etc. OPA was developed by Styra and is currently a part of the CNCF.
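
To make the "general-purpose" part concrete, below is a minimal sketch of OPA used completely outside of Kubernetes: a tiny Rego rule evaluated locally with the opa CLI. The package name, input fields, and file names are made up for illustration.

  # authz.rego -- a hypothetical authorization rule, nothing Kubernetes-specific
  package httpapi.authz

  # deny by default
  default allow = false

  # allow a user to read only their own salary record
  allow {
      input.method == "GET"
      input.path == ["salary", input.user]
  }

  # Evaluate the rule locally against a sample input document
  $ echo '{"method": "GET", "path": ["salary", "alice"], "user": "alice"}' > input.json
  $ opa eval --data authz.rego --input input.json "data.httpapi.authz.allow"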

The OPA Gatekeeper is the policy controller for Kubernetes. More technically, it is a customizable Kubernetes Admission Webhook that helps enforce policies and strengthen governance.

The important thing to notice is that the use of OPA is not tied to Kubernetes alone. OPA Gatekeeper, on the other hand, is built specifically for the Kubernetes admission control use case of OPA.

What is Kyverno?

Kyverno is a policy engine designed for Kubernetes. With Kyverno, policies are managed as Kubernetes resources and no new language is required to write policies. This allows using familiar tools such as kubectl, git, and kustomize to manage policies. Kyverno policies can validate, mutate, and generate Kubernetes resources. The Kyverno CLI can be used to test policies and validate resources as part of a CI/CD pipeline. Kyverno is also open source and a CNCF Sandbox project.
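
For example, the CLI can evaluate a policy against a manifest locally, without a cluster, which is what makes it useful in CI. A minimal sketch, where require-labels.yaml and deployment.yaml are hypothetical file names:

  # Sketch: run a Kyverno policy against a resource file locally (file names are assumptions)
  $ kyverno apply ./require-labels.yaml --resource ./deployment.yaml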

What are the differences between OPA Gatekeeper and Kyverno?

Let’s summarize the differences in a table.

| Features/Capabilities                        | Gatekeeper | Kyverno |
|----------------------------------------------|------------|---------|
| Validation                                   | ✓          | ✓       |
| Mutation                                     | ✓*         | ✓       |
| Generation                                   | X          | ✓       |
| Policy as native resources                   | ✓          | ✓       |
| Metrics exposed                              | ✓          | ✓       |
| OpenAPI validation schema (kubectl explain)  | X          | ✓       |
| High Availability                            | ✓          | ✓       |
| API object lookup                            | ✓          | ✓*      |
| CLI with test ability                        | ✓**        | ✓       |
| Policy audit ability                         | ✓          | ✓       |

* Alpha status
** Separate CLI

Credit: https://neonmirrors.net/post/2021-02/kubernetes-policy-comparison-opa-gatekeeper-vs-kyverno/

In my opinion, the biggest advantages of Kyverno are that there is no need to learn another policy language and that it provides an OpenAPI validation schema we can use via the kubectl explain command. On the other hand, OPA Gatekeeper has lots of tooling developed around the Rego language to help us write and test our policies, such as conftest and konstraint, and this is a big plus in my opinion: these are the tools we can use to implement a Policy-as-Code pipeline. Another advantage of OPA Gatekeeper is that there are lots of libraries with ready-to-use policies written for us, such as gatekeeper-library, konstraint-examples and raspbernetes-policies.
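
As an illustration of such a pipeline step, here is a minimal conftest sketch. The rule content and file names are assumptions; the convention of a main package with deny rules under a policy/ directory is conftest's default.

  # policy/deployment.rego -- a hypothetical conftest rule
  package main

  deny[msg] {
      input.kind == "Deployment"
      not input.spec.template.spec.securityContext.runAsNonRoot
      msg := "Deployments must set runAsNonRoot: true"
  }

  # Run the check against a manifest, e.g. in a CI job
  $ conftest test deployment.yaml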

Hands On

I created two separate folders for the OPA Gatekeeper and Kyverno resources. We are going to start with the OPA Gatekeeper project first.

There are various ways to install OPA Gatekeeper, but in this section we are going to use a plain YAML manifest. In order to do that, we first need to start a local Kubernetes cluster using minikube. We are going to use two different minikube profiles, one for OPA Gatekeeper and one for Kyverno, which will result in two separate Kubernetes clusters.

  $ minikube start -p opa-gatekeeper
  😄  [opa-gatekeeper] minikube v1.17.1 on Darwin 10.15.7
      Using the hyperkit driver based on user configuration
  👍  Starting control plane node opa-gatekeeper in cluster opa-gatekeeper
  🔥  Creating hyperkit VM (CPUs=3, Memory=8192MB, Disk=20000MB) ...
  🌐  Found network options:
      no_proxy=127.0.0.1,localhost
  🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
      env NO_PROXY=127.0.0.1,localhost
      Generating certificates and keys ...
      Booting up control plane ...
      Configuring RBAC rules ...
  🔎  Verifying Kubernetes components...
  🌟  Enabled addons: storage-provisioner, default-storageclass
  🏄  Done! kubectl is now configured to use "opa-gatekeeper" cluster and "default" namespace by default
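
The opa-gatekeeper/deploy.yaml applied in the next step ships with the repository; it is presumably the upstream all-in-one Gatekeeper manifest, which you could also fetch yourself along these lines (the release branch below is an assumption):

  # Sketch: download the upstream Gatekeeper deployment manifest (branch name is an assumption)
  $ curl -Lo opa-gatekeeper/deploy.yaml \
      https://raw.githubusercontent.com/open-policy-agent/gatekeeper/release-3.3/deploy/gatekeeper.yaml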

Let’s apply the manifest.

  $ kubectl apply -f opa-gatekeeper/deploy.yaml
  namespace/gatekeeper-system created
  Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
  customresourcedefinition.apiextensions.k8s.io/configs.config.gatekeeper.sh created
  customresourcedefinition.apiextensions.k8s.io/constraintpodstatuses.status.gatekeeper.sh created
  customresourcedefinition.apiextensions.k8s.io/constrainttemplatepodstatuses.status.gatekeeper.sh created
  customresourcedefinition.apiextensions.k8s.io/constrainttemplates.templates.gatekeeper.sh created
  serviceaccount/gatekeeper-admin created
  podsecuritypolicy.policy/gatekeeper-admin created
  role.rbac.authorization.k8s.io/gatekeeper-manager-role created
  clusterrole.rbac.authorization.k8s.io/gatekeeper-manager-role created
  rolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created
  clusterrolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created
  secret/gatekeeper-webhook-server-cert created
  service/gatekeeper-webhook-service created
  deployment.apps/gatekeeper-audit created
  deployment.apps/gatekeeper-controller-manager created
  Warning: admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
  validatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-validating-webhook-configuration created

You should notice that a bunch of CRDs were created. These let us define and enforce policies through ConstraintTemplate resources, which describe both the Rego that enforces the constraint and the schema of the constraint.

In this section, we are going to enforce a policy that validates required labels on resources: if the required label exists, the request is approved; if not, it is rejected.

Let’s look at the ConstraintTemplate that we are going to apply.

  apiVersion: templates.gatekeeper.sh/v1beta1
  kind: ConstraintTemplate
  metadata:
    name: k8srequiredlabels
  spec:
    crd:
      spec:
        names:
          kind: K8sRequiredLabels
        validation:
          # Schema for the `parameters` field
          openAPIV3Schema:
            properties:
              labels:
                type: array
                items: string
    targets:
      - target: admission.k8s.gatekeeper.sh
        rego: |
          package k8srequiredlabels

          violation[{"msg": msg, "details": {"missing_labels": missing}}] {
            provided := {label | input.review.object.metadata.labels[label]}
            required := {label | label := input.parameters.labels[_]}
            missing := required - provided
            count(missing) > 0
            msg := sprintf("you must provide labels: %v", [missing])
          }

You should notice that the policy we define in the Rego language is placed under the .spec.targets[].rego section. Once we apply this to the cluster, a K8sRequiredLabels CRD will be created, and by creating instances of that Custom Resource we define our policy context, meaning which resources we want to apply the policy to.

Let’s apply it.

  $ kubectl apply -f opa-gatekeeper/k8srequiredlabels-constraint-template.yaml
  constrainttemplate.templates.gatekeeper.sh/k8srequiredlabels created

  $ kubectl get customresourcedefinitions.apiextensions.k8s.io
  Found existing alias for "kubectl". You should use: "k"
  NAME                                                  CREATED AT
  configs.config.gatekeeper.sh                          2021-02-25T09:06:10Z
  constraintpodstatuses.status.gatekeeper.sh            2021-02-25T09:06:10Z
  constrainttemplatepodstatuses.status.gatekeeper.sh    2021-02-25T09:06:10Z
  constrainttemplates.templates.gatekeeper.sh           2021-02-25T09:06:10Z
  k8srequiredlabels.constraints.gatekeeper.sh           2021-02-25T09:19:39Z

As you can see, the k8srequiredlabels CRD has been created. Let’s define a K8sRequiredLabels constraint and apply it too.

  apiVersion: constraints.gatekeeper.sh/v1beta1
  kind: K8sRequiredLabels
  metadata:
    name: ns-must-have-gk
  spec:
    match:
      kinds:
        - apiGroups: [""]
          kinds: ["Namespace"]
    parameters:
      labels: ["gatekeeper"]

You should notice that we’ll enforce the policy on the Namespace resource, and the label key that must be present on the Namespace is gatekeeper.

  $ kubectl apply -f opa-gatekeeper/k8srequiredlabels-constraint.yaml
  k8srequiredlabels.constraints.gatekeeper.sh/ns-must-have-gk created

Let’s test it by creating an invalid namespace and then a valid one.
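
The repository’s invalid-namespace.yaml and valid-namespace.yaml are not shown in this guide; minimal sketches of what they would contain look like this (the metadata is assumed, only the gatekeeper label key matters to the constraint):

  # opa-gatekeeper/invalid-namespace.yaml (sketch) -- missing the required "gatekeeper" label
  apiVersion: v1
  kind: Namespace
  metadata:
    name: invalid-namespace

  # opa-gatekeeper/valid-namespace.yaml (sketch) -- carries the required label key
  apiVersion: v1
  kind: Namespace
  metadata:
    name: valid-namespace
    labels:
      gatekeeper: "true"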

  $ kubectl apply -f opa-gatekeeper/invalid-namespace.yaml
  Found existing alias for "kubectl apply -f". You should use: "kaf"
  Error from server ([denied by ns-must-have-gk] you must provide labels: {"gatekeeper"}): error when creating "opa-gatekeeper/invalid-namespace.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [denied by ns-must-have-gk] you must provide labels: {"gatekeeper"}

  $ kubectl apply -f opa-gatekeeper/valid-namespace.yaml
  Found existing alias for "kubectl apply -f". You should use: "kaf"
  namespace/valid-namespace created

Tadaaaa, it worked 🎉🎉🎉🎉

Let’s move on to Kyverno. Again, there are various ways to install it onto Kubernetes; in this case, we are going to use Helm. We said that we’d start another minikube cluster with a different profile, so let’s start with that.

  $ minikube start -p kyverno
  😄  [kyverno] minikube v1.17.1 on Darwin 10.15.7
      Using the hyperkit driver based on user configuration
  👍  Starting control plane node kyverno in cluster kyverno
  🔥  Creating hyperkit VM (CPUs=3, Memory=8192MB, Disk=20000MB) ...
  🌐  Found network options:
      no_proxy=127.0.0.1,localhost
  🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
      env NO_PROXY=127.0.0.1,localhost
      Generating certificates and keys ...
      Booting up control plane ...
      Configuring RBAC rules ...
  🔎  Verifying Kubernetes components...
  🌟  Enabled addons: storage-provisioner, default-storageclass
  🏄  Done! kubectl is now configured to use "kyverno" cluster and "default" namespace by default

  $ minikube profile list
  |----------------|-----------|---------|---------------|------|---------|---------|-------|
  |    Profile     | VM Driver | Runtime |      IP       | Port | Version | Status  | Nodes |
  |----------------|-----------|---------|---------------|------|---------|---------|-------|
  | kyverno        | hyperkit  | docker  | 192.168.64.17 | 8443 | v1.20.2 | Running |     1 |
  | minikube       | hyperkit  | docker  | 192.168.64.15 | 8443 | v1.20.2 | Stopped |     1 |
  | opa-gatekeeper | hyperkit  | docker  | 192.168.64.16 | 8443 | v1.20.2 | Running |     1 |
  |----------------|-----------|---------|---------------|------|---------|---------|-------|

Let’s install it by using Helm.

  $ helm repo add kyverno https://kyverno.github.io/kyverno/
  "kyverno" has been added to your repositories

  $ helm repo update
  Hang tight while we grab the latest from your chart repositories...
  ...Successfully got an update from the "kyverno" chart repository
  ...Successfully got an update from the "nats" chart repository
  ...Successfully got an update from the "falcosecurity" chart repository
  ...Successfully got an update from the "openfaas" chart repository
  ...Successfully got an update from the "stable" chart repository
  Update Complete. Happy Helming!⎈

  $ helm install kyverno --namespace kyverno kyverno/kyverno --create-namespace
  NAME: kyverno
  LAST DEPLOYED: Thu Feb 25 13:16:21 2021
  NAMESPACE: kyverno
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
  NOTES:
  Thank you for installing kyverno 😀
  Your release is named kyverno.
  We have installed the "default" profile of Pod Security Standards and set them in audit mode.
  Visit https://kyverno.io/policies/ to find more sample policies.

Let’s look at the Custom Resource Definitions list.

  $ kubectl get customresourcedefinitions.apiextensions.k8s.io
  Found existing alias for "kubectl". You should use: "k"
  NAME                                      CREATED AT
  clusterpolicies.kyverno.io                2021-02-25T10:16:16Z
  clusterpolicyreports.wgpolicyk8s.io       2021-02-25T10:16:16Z
  clusterreportchangerequests.kyverno.io    2021-02-25T10:16:16Z
  generaterequests.kyverno.io               2021-02-25T10:16:16Z
  policies.kyverno.io                       2021-02-25T10:16:16Z
  policyreports.wgpolicyk8s.io              2021-02-25T10:16:16Z
  reportchangerequests.kyverno.io           2021-02-25T10:16:16Z

We can also use the kubectl explain command to easily get information about a resource from its OpenAPI schema.

  $ kubectl explain policies
  KIND:     Policy
  VERSION:  kyverno.io/v1

  DESCRIPTION:
       Policy declares validation, mutation, and generation behaviors for matching
       resources. See: https://kyverno.io/docs/writing-policies/ for more
       information.

  FIELDS:
     apiVersion   <string>
       APIVersion defines the versioned schema of this representation of an
       object. Servers should convert recognized schemas to the latest internal
       value, and may reject unrecognized values. More info:
       https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

     kind <string>
       Kind is a string value representing the REST resource this object
       represents. Servers may infer this from the endpoint the client submits
       requests to. Cannot be updated. In CamelCase. More info:
       https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

     metadata     <Object>
       Standard object's metadata. More info:
       https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

     spec <Object> -required-
       Spec defines policy behaviors and contains one or more rules.

     status       <Object>
       Status contains policy runtime information.

Let’s look at our first policy definition. In this case, we are using the validation feature of Kyverno.

  apiVersion: kyverno.io/v1
  kind: ClusterPolicy
  metadata:
    name: require-labels
  spec:
    validationFailureAction: enforce
    rules:
      - name: check-for-labels
        match:
          resources:
            kinds:
              - Pod
        validate:
          message: "label `app.kubernetes.io/name` is required"
          pattern:
            metadata:
              labels:
                app.kubernetes.io/name: "?*"

You should notice that we are enforcing a required-label policy on the Pod resource, and that we define policies using a native Kyverno Custom Resource called ClusterPolicy.

Let’s apply it.

  $ kubectl apply -f kyverno/validating/requirelabels-clusterpolicy.yaml
  clusterpolicy.kyverno.io/require-labels created

Let’s test it by creating a Deployment that violates the policy.
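
A minimal sketch of what kyverno/validating/invalid-deployment.yaml would look like, assuming the only problem is the missing app.kubernetes.io/name label:

  # kyverno/validating/invalid-deployment.yaml (sketch) -- no app.kubernetes.io/name label anywhere
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
          - name: nginx
            image: nginx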

  $ kubectl apply -f kyverno/validating/invalid-deployment.yaml
  Found existing alias for "kubectl apply -f". You should use: "kaf"
  Error from server: error when creating "kyverno/validating/invalid-deployment.yaml": admission webhook "validate.kyverno.svc" denied the request:
  resource Deployment/default/nginx was blocked due to the following policies

  require-labels:
    autogen-check-for-labels: 'validation error: label `app.kubernetes.io/name` is required. Rule autogen-check-for-labels failed at path /spec/template/metadata/labels/app.kubernetes.io/name/'

Let’s apply a valid one.
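
Judging by the pod/nginx created output below, the valid manifest actually creates a Pod; a minimal sketch that satisfies the policy would be:

  # kyverno/validating/valid-deployment.yaml (sketch) -- a Pod carrying the required label
  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx
    labels:
      app.kubernetes.io/name: nginx
  spec:
    containers:
      - name: nginx
        image: nginx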

  $ kubectl apply -f kyverno/validating/valid-deployment.yaml
  pod/nginx created

  $ kubectl get pods
  NAME    READY   STATUS              RESTARTS   AGE
  nginx   0/1     ContainerCreating   0          6s

Tadaaaa, it worked 🎉🎉🎉🎉
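
Since the Helm chart also installed the wgpolicyk8s.io report CRDs we listed earlier, policy results can additionally be inspected as regular Kubernetes resources, for example:

  # List Kyverno policy reports across all namespaces
  $ kubectl get policyreports.wgpolicyk8s.io --all-namespaces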

References