Project author: mittwald

Project description: Varnish Reverse Proxy on Kubernetes
Primary language: Go
Project address: git://github.com/mittwald/kube-httpcache.git
Created: 2018-10-16T06:51:53Z
Project community: https://github.com/mittwald/kube-httpcache

License: MIT License

Varnish on Kubernetes

This repository contains a controller that allows you to operate a Varnish cache on Kubernetes.


:warning: COMPATIBILITY NOTICE: As of version v0.3, the image name of this project was renamed from quay.io/spaces/kube-httpcache to quay.io/mittwald/kube-httpcache. The old image will remain available (for the time being), but only the new image name will receive any updates. Please remember to adjust the image name when upgrading.


How it works

This controller is not intended as a replacement for a regular ingress controller. Instead, it is meant to sit between your regular ingress controller and your application’s service.

    ┌─────────┐       ┌─────────┐       ┌─────────────┐
    | Ingress | ----> | Varnish | ----> | Application |
    └─────────┘       └─────────┘       └─────────────┘

The Varnish controller needs the following prerequisites to run:

  • A Go-template that will be used to generate a VCL configuration file
  • An application Kubernetes service that will be used as backend for the Varnish controller
  • A Varnish Kubernetes service that will be used as frontend for the Varnish controller
  • If RBAC is enabled in your cluster, you’ll need a ServiceAccount with a role that grants WATCH access to the endpoints resource in the respective namespace

After starting, the Varnish controller will watch the configured Varnish service’s endpoints and application service’s endpoints; on startup and whenever these change, it will use the supplied VCL template to generate a new Varnish configuration and load this configuration at runtime.

The controller does not ship with any default configuration; the upstream connection and advanced features like load balancing are possible, but need to be configured in the VCL template that you supply.

High-Availability mode

The controller can run in high-availability mode using multiple Varnish and application pods.

                   ┌─────────┐
                   │ Ingress │
                   └────┬────┘
                        │
                   ┌────┴────┐
                   │ Service │
                   └───┬┬────┘
                   ┌───┘└───┐
    ┌──────────────┴──┐  ┌──┴──────────────┐
    │    Varnish 1    ├──┤    Varnish 2    │
    │   Signaller 1   ├──┤   Signaller 2   │
    └──────────┬┬─────┘  └─────┬┬──────────┘
               │└──────────┐   ││
               │┌──────────┼───┘│
               ││          └───┐│
    ┌──────────┴┴─────┐  ┌─────┴┴──────────┐
    │  Application 1  │  │  Application 2  │
    └─────────────────┘  └─────────────────┘

The Signaller component supports broadcasting PURGE and BAN requests to all Varnish nodes.

Getting started

Create a VCL template


:warning: NOTE: The current implementation (supplying a VCL template as ConfigMap) may still be subject to change. Future implementations might for example use a Kubernetes Custom Resource for the entire configuration set.


Start by creating a ConfigMap that contains a VCL template:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: vcl-template
    data:
      default.vcl.tmpl: |
        vcl 4.0;

        import std;
        import directors;

        // ".Frontends" is a slice that contains all known Varnish instances
        // (as selected by the service specified by -frontend-service).
        // The backend name needs to be the Pod name, since this value is compared
        // to the server identity ("server.identity" [1]) later.
        //
        // [1]: https://varnish-cache.org/docs/6.4/reference/vcl.html#local-server-remote-and-client
        {{ range .Frontends }}
        backend {{ .Name }} {
          .host = "{{ .Host }}";
          .port = "{{ .Port }}";
        }
        {{- end }}

        backend fe-primary {
          .host = "{{ .PrimaryFrontend.Host }}";
          .port = "{{ .PrimaryFrontend.Port }}";
        }

        {{ range .Backends }}
        backend be-{{ .Name }} {
          .host = "{{ .Host }}";
          .port = "{{ .Port }}";
        }
        {{- end }}

        backend be-primary {
          .host = "{{ .PrimaryBackend.Host }}";
          .port = "{{ .PrimaryBackend.Port }}";
        }

        acl purgers {
          "127.0.0.1";
          "localhost";
          "::1";
          {{- range .Frontends }}
          "{{ .Host }}";
          {{- end }}
          {{- range .Backends }}
          "{{ .Host }}";
          {{- end }}
        }

        sub vcl_init {
          new cluster = directors.hash();

          {{ range .Frontends -}}
          cluster.add_backend({{ .Name }}, 1);
          {{ end }}

          new lb = directors.round_robin();

          {{ range .Backends -}}
          lb.add_backend(be-{{ .Name }});
          {{ end }}
        }

        sub vcl_recv
        {
          # Set backend hint for non-cacheable objects.
          set req.backend_hint = lb.backend();

          # ...

          # Routing logic. Pass a request to an appropriate Varnish node.
          # See https://info.varnish-software.com/blog/creating-self-routing-varnish-cluster for more info.
          unset req.http.x-cache;
          set req.backend_hint = cluster.backend(req.url);
          set req.http.x-shard = req.backend_hint;
          if (req.http.x-shard != server.identity) {
            return(pass);
          }
          set req.backend_hint = lb.backend();

          # ...

          return(hash);
        }

        # ...

Environment variables can be used in the template: {{ .Env.ENVVAR }} is replaced with the value of the environment variable ENVVAR. This can be used, for example, to set the Host header for an external service.
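
For illustration, a small fragment of such a template (a sketch only; EXTERNAL_HOST is a hypothetical environment variable that would have to be set on the kube-httpcache container):

    sub vcl_recv {
        # EXTERNAL_HOST is a hypothetical variable; the template engine
        # substitutes its value before the VCL is loaded into Varnish.
        set req.http.host = "{{ .Env.EXTERNAL_HOST }}";
    }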

Create a Secret

Create a Secret object that contains the secret for the Varnish administration port:

    $ kubectl create secret generic varnish-secret --from-literal=secret=$(head -c32 /dev/urandom | base64)

[Optional] Configure RBAC roles

If RBAC is enabled in your cluster, you will need to create a ServiceAccount with a respective Role.

    $ kubectl create serviceaccount kube-httpcache
    $ kubectl apply -f https://raw.githubusercontent.com/mittwald/kube-httpcache/master/deploy/kubernetes/rbac.yaml
    $ kubectl create rolebinding kube-httpcache --clusterrole=kube-httpcache --serviceaccount=kube-httpcache
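
For reference, the ClusterRole referenced by the rolebinding above essentially grants watch access on the endpoints resource. A minimal sketch (the rbac.yaml shipped in the repository is authoritative):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: kube-httpcache
    rules:
    # Sketch only: the controller needs to watch the endpoints of the
    # frontend and backend services (see the prerequisites above).
    - apiGroups: [""]
      resources: ["endpoints"]
      verbs: ["get", "list", "watch"]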

Deploy Varnish

  1. Create a StatefulSet for the Varnish controller:

      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: cache-statefulset
        labels:
          app: cache
      spec:
        serviceName: cache-service
        replicas: 2
        updateStrategy:
          type: RollingUpdate
        selector:
          matchLabels:
            app: cache
        template:
          metadata:
            labels:
              app: cache
          spec:
            containers:
            - name: cache
              image: quay.io/mittwald/kube-httpcache:stable
              imagePullPolicy: Always
              args:
              - -admin-addr=0.0.0.0
              - -admin-port=6083
              - -signaller-enable
              - -signaller-port=8090
              - -frontend-watch
              - -frontend-namespace=$(NAMESPACE)
              - -frontend-service=frontend-service
              - -frontend-port=8080
              - -backend-watch
              - -backend-namespace=$(NAMESPACE)
              - -backend-service=backend-service
              - -varnish-secret-file=/etc/varnish/k8s-secret/secret
              - -varnish-vcl-template=/etc/varnish/tmpl/default.vcl.tmpl
              - -varnish-storage=malloc,128M
              env:
              - name: NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
              volumeMounts:
              - name: template
                mountPath: /etc/varnish/tmpl
              - name: secret
                mountPath: /etc/varnish/k8s-secret
              ports:
              - containerPort: 8080
                name: http
              - containerPort: 8090
                name: signaller
            serviceAccountName: kube-httpcache # when using RBAC
            restartPolicy: Always
            volumes:
            - name: template
              configMap:
                name: vcl-template
            - name: secret
              secret:
                secretName: varnish-secret

    NOTE: Using a StatefulSet is particularly important when using a stateful, self-routed Varnish cluster. Otherwise, you could also use a Deployment resource, instead.

  2. Create a service for the Varnish controller:

      apiVersion: v1
      kind: Service
      metadata:
        name: cache-service
        labels:
          app: cache
      spec:
        ports:
        - name: "http"
          port: 80
          targetPort: http
        - name: "signaller"
          port: 8090
          targetPort: signaller
        selector:
          app: cache
  3. Create an Ingress to forward requests to the cache service. Typically, you should only need an Ingress for the Service’s http port, not for the signaller port (if for some reason you do need to expose the signaller, make sure to implement proper access controls). A minimal example is sketched below.
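
For illustration, a minimal Ingress sketch that forwards traffic to the cache-service defined above (host and path are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: cache-ingress
    spec:
      rules:
      - host: www.example.com   # placeholder hostname
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cache-service
                port:
                  name: http   # only the http port is exposed, not the signaller port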

Logging

Logging uses glog.
Detailed logging, e.g. for troubleshooting, can be activated by passing the command-line parameter -v7 (where 7 is the requested logging level).
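
For example, appended to the container args of the StatefulSet shown above (a sketch; the other flags stay unchanged):

    args:
      # ... kube-httpcache flags as in the StatefulSet above ...
      - -v7   # glog verbosity level 7 for detailed troubleshooting output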

Detailed how-tos

Using the built-in signaller component

The signaller component is responsible for broadcasting HTTP requests to all nodes of a Varnish cluster. This is useful in HA cluster setups, when BAN or PURGE requests should be broadcast across the entire cluster.

To broadcast a BAN or PURGE request to all Varnish endpoints, run one of the following commands, respectively:

    $ curl -H "X-Url: /path" -X BAN http://cache-service:8090
    $ curl -H "X-Host: www.example.com" -X PURGE http://cache-service:8090/path

When running from outside the cluster, you can use kubectl port-forward to forward the signaller port to your local machine (and then send your requests to http://localhost:8090):

    $ kubectl port-forward service/cache-service 8090:8090
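
With the port-forward in place, the same broadcast requests can then be issued locally, for example:

    $ curl -H "X-Url: /path" -X BAN http://localhost:8090
    $ curl -H "X-Host: www.example.com" -X PURGE http://localhost:8090/path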

NOTE: The specific headers required for PURGE/BAN requests depend on your Varnish configuration. For example, the X-Host header is provided for convenience, because the signaller listens on a different URL than Varnish itself. You still need to handle such headers in your VCL:

    sub vcl_recv {
        # ...

        # Purge logic
        if (req.method == "PURGE") {
            if (client.ip !~ purgers) {
                return (synth(403, "Not allowed."));
            }
            if (req.http.X-Host) {
                set req.http.host = req.http.X-Host;
            }
            return (purge);
        }

        # Ban logic
        if (req.method == "BAN") {
            if (client.ip !~ purgers) {
                return (synth(403, "Not allowed."));
            }
            if (req.http.Cache-Tags) {
                ban("obj.http.Cache-Tags ~ " + req.http.Cache-Tags);
                return (synth(200, "Ban added " + req.http.host));
            }
            if (req.http.X-Url) {
                ban("obj.http.X-Url == " + req.http.X-Url);
                return (synth(200, "Ban added " + req.http.host));
            }
            return (synth(403, "Cache-Tags or X-Url header missing."));
        }

        # ...
    }

Proxying to external services


NOTE: Native support for ExternalName services is a requested feature. Have a look at #39 if you’re willing to help out.


In some cases, you might want to cache content from a cluster-external resource. To do so, create a new Kubernetes Service of type ExternalName for your backend:

    apiVersion: v1
    kind: Service
    metadata:
      name: external-service
      namespace: default
    spec:
      type: ExternalName
      externalName: external-service.example

In your VCL template, you can then simply use this service as a static backend (since there are no dynamic endpoints, you do not need to iterate over .Backends in the template):

    kind: ConfigMap
    apiVersion: v1
    metadata: # [...]
    data:
      default.vcl.tmpl: |
        vcl 4.0;

        {{ range .Frontends }}
        backend {{ .Name }} {
          .host = "{{ .Host }}";
          .port = "{{ .Port }}";
        }
        {{- end }}

        backend backend {
          .host = "external-service.svc";
        }

        // ...

When starting kube-httpcache, remember to set the --backend-watch=false flag to disable watching the (non-existent) backend endpoints.
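
In terms of the StatefulSet shown earlier, this is only a change to the container args (a sketch; the remaining flags stay as before):

    args:
      # ... other kube-httpcache flags as in the StatefulSet above ...
      - -backend-watch=false   # the backend is static in the VCL, so there are no endpoints to watch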

Helm Chart installation

You can use the Helm chart to rollout an instance of kube-httpcache:

    $ helm repo add mittwald https://helm.mittwald.de
    $ helm install -f your-values.yaml kube-httpcache mittwald/kube-httpcache

For possible values, have a look at the comments in the provided values.yaml file. Take special note that you’ll most likely have to overwrite the vclTemplate value with your own VCL configuration file.
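
A minimal your-values.yaml sketch (see the chart’s values.yaml for the full list of options; the VCL content below is only a placeholder):

    vclTemplate: |
      vcl 4.0;

      import std;
      import directors;

      # ... your full VCL template, e.g. the one from the ConfigMap example above ...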

Ensure that the backend services you define have a port named http:

    apiVersion: v1
    kind: Service
    metadata:
      name: backend-service
    spec:
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: 8080
      type: ClusterIP

An Ingress then points to the kube-httpcache service, which caches
your backend service:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      rules:
      - host: www.example.com
        http:
          paths:
          - backend:
              service:
                name: kube-httpcache
                port:
                  number: 80
            path: /
            pathType: Prefix

Look at the vclTemplate property in chart/values.yaml to define your own Varnish cluster rules, or use extraVolume to load an additional file via an initContainer if your ruleset is very large.

Developer notes

Build the Docker image locally

A Dockerfile for building the container image yourself is located in build/package/docker. Invoke docker build as follows:

    $ docker build -t $IMAGE_NAME -f build/package/docker/Dockerfile .