Project author: telepresenceio

Project description:
Local development against a remote Kubernetes or OpenShift cluster
Primary language: Python
Repository: git://github.com/telepresenceio/telepresence.git
Created: 2017-02-23T14:07:34Z
Community: https://github.com/telepresenceio/telepresence

License: Apache License 2.0



Telepresence: fast, efficient local development for Kubernetes microservices


Telepresence gives developers infinite scale development environments for Kubernetes.

Key benefits

With Telepresence:

  • You run your services locally, using your favorite IDE and other tools
  • Your workstation is connected to the cluster and can access its services

This gives developers:

  • A fast local dev loop, with no waiting for a container build / push / deploy
  • Ability to use their favorite local tools (IDE, debugger, etc.)
  • Ability to run large-scale applications that can’t run locally

Quick Start

A few quick ways to start using Telepresence:

  • Telepresence Quick Start: Quick Start
  • Install Telepresence: Install
  • Contributor’s Guide: Guide
  • Meetings: Check out our community meeting schedule for opportunities to interact with Telepresence developers

Walkthrough

Install an interceptable service:

Start with an empty cluster:

  $ kubectl create deploy hello --image=registry.k8s.io/echoserver:1.4
  deployment.apps/hello created
  $ kubectl expose deploy hello --port 80 --target-port 8080
  service/hello exposed
  $ kubectl get ns,svc,deploy,po
  NAME                        STATUS   AGE
  namespace/kube-system       Active   53m
  namespace/default           Active   53m
  namespace/kube-public       Active   53m
  namespace/kube-node-lease   Active   53m
  NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
  service/kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP   53m
  service/hello        ClusterIP   10.43.73.112   <none>        80/TCP    2m
  NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/hello   1/1     1            1           2m
  NAME                        READY   STATUS    RESTARTS   AGE
  pod/hello-9954f98bf-6p2k9   1/1     Running   0          2m15s

Check telepresence version

  $ telepresence version
  OSS Client : v2.17.0
  Root Daemon: not running
  User Daemon: not running

Set up the Traffic Manager in the cluster

Install Traffic Manager in your cluster. By default, it will reside in the ambassador namespace:

  $ telepresence helm install
  Traffic Manager installed successfully
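
As an optional sanity check, you can confirm with plain kubectl that the traffic-manager deployment in the ambassador namespace becomes ready (output omitted here, since it depends on your cluster and chart version):

  $ kubectl -n ambassador get deploy traffic-manager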

Establish a connection to the cluster (outbound traffic)

Let telepresence connect:

  $ telepresence connect
  Launching Telepresence Root Daemon
  Launching Telepresence User Daemon
  Connected to context default, namespace default (https://35.232.104.64)

A session is now active and outbound connections will be routed to the cluster. In other words, your laptop is logically “inside”
a namespace in the cluster.
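
You can inspect the connection and daemon state at any time with the status subcommand (its output is not shown here, as it varies by client version and cluster):

  $ telepresence status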

Since telepresence connected to the default namespace, all services in that namespace can now be reached directly
by their name. You can of course also use namespaced names, e.g. curl hello.default.

  $ curl hello
  CLIENT VALUES:
  client_address=10.244.0.87
  command=GET
  real path=/
  query=nil
  request_version=1.1
  request_uri=http://hello:8080/
  SERVER VALUES:
  server_version=nginx: 1.10.0 - lua: 10001
  HEADERS RECEIVED:
  accept=*/*
  host=hello
  user-agent=curl/8.0.1
  BODY:
  -no body in request-

Intercept the service, i.e. redirect traffic destined for it to our laptop (inbound traffic)

Add an intercept for the hello deployment on port 9000. Here, we also start a service listening on that port:

  $ telepresence intercept hello --port 9000 -- python3 -m http.server 9000
  Using Deployment hello
  intercepted
  Intercept name         : hello
  State                  : ACTIVE
  Workload kind          : Deployment
  Destination            : 127.0.0.1:9000
  Service Port Identifier: 80
  Volume Mount Point     : /tmp/telfs-524630891
  Intercepting           : all TCP connections
  Serving HTTP on 0.0.0.0 port 9000 (http://0.0.0.0:9000/) ...

The python3 -m http.server process is now started on port 9000 and will run until terminated with <ctrl>-C. Access it from a browser using http://hello/ or use curl from another terminal. With curl, it presents an HTML listing of the directory where the server was started. Something like:

  $ curl hello
  <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
  <html>
  <head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  <title>Directory listing for /</title>
  </head>
  <body>
  <h1>Directory listing for /</h1>
  <hr>
  <ul>
  <li><a href="file1.txt">file1.txt</a></li>
  <li><a href="file2.txt">file2.txt</a></li>
  </ul>
  <hr>
  </body>
  </html>

Observe that the python service reports that it’s being accessed:

  127.0.0.1 - - [16/Jun/2022 11:39:20] "GET / HTTP/1.1" 200 -
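
You can also verify the intercept from another terminal with the list subcommand; it should report the hello workload as intercepted (the exact output varies by version):

  $ telepresence list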

Clean-up and close daemon processes

End the service with <ctrl>-C and then try curl hello or http://hello again. The intercept is gone, and the echo service responds as normal.
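
If you instead create an intercept without a trailing command (so it stays active until you remove it), you can end it explicitly with the leave subcommand, e.g.:

  $ telepresence leave hello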

Now end the session too. Your desktop no longer has access to the cluster internals.

  $ telepresence quit
  Disconnected
  $ curl hello
  curl: (6) Could not resolve host: hello

The telepresence daemons are still running in the background, which is harmless. You’ll need to stop them before you
upgrade telepresence. That’s done by passing the option -s (stop all local telepresence daemons) to the
quit command.

  $ telepresence quit -s
  Telepresence Daemons quitting...done

What got installed in the cluster?

Telepresence installs the Traffic Manager in your cluster if it is not already present. This deployment remains unless you uninstall it.

Telepresence injects the Traffic Agent as an additional container into the pods of the workload you intercept, and will optionally install
an init-container to route traffic through the agent (the init-container is only injected when the service is headless or uses a numerical
targetPort). The modifications persist unless you uninstall them.
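
If you prefer the agent to be injected before the first intercept, the injection can be requested with the same annotation that appears in the pod description below. A minimal sketch using kubectl patch on the Deployment's pod template (behavior may differ slightly between Telepresence versions):

  $ kubectl patch deploy hello -p \
      '{"spec":{"template":{"metadata":{"annotations":{"telepresence.io/inject-traffic-agent":"enabled"}}}}}'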

At first glance, we can see that the deployment is installed …

  $ kubectl get svc,deploy,pod
  service/kubernetes   ClusterIP   10.43.0.1      <none>   443/TCP   7d22h
  service/hello        ClusterIP   10.43.145.57   <none>   80/TCP    13m
  NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/hello   1/1     1            1           13m
  NAME                         READY   STATUS    RESTARTS   AGE
  pod/hello-774455b6f5-6x6vs   2/2     Running   0          10m

… and that the traffic-manager is installed in the “ambassador” namespace.

  $ kubectl -n ambassador get svc,deploy,pod
  NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
  service/traffic-manager   ClusterIP   None           <none>        8081/TCP   17m
  service/agent-injector    ClusterIP   10.43.72.154   <none>        443/TCP    17m
  NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/traffic-manager   1/1     1            1           17m
  NAME                                  READY   STATUS    RESTARTS   AGE
  pod/traffic-manager-dcd4cc64f-6v5bp   1/1     Running   0          17m

The traffic-agent is installed too, in the hello pod, here together with an init-container because the service uses a numerical
targetPort.

  $ kubectl describe pod hello-774455b6f5-6x6vs
  Name:             hello-75b7c6d484-9r4xd
  Namespace:        default
  Priority:         0
  Service Account:  default
  Node:             kind-control-plane/192.168.96.2
  Start Time:       Sun, 07 Jan 2024 01:01:33 +0100
  Labels:           app=hello
                    pod-template-hash=75b7c6d484
                    telepresence.io/workloadEnabled=true
                    telepresence.io/workloadName=hello
  Annotations:      telepresence.io/inject-traffic-agent: enabled
                    telepresence.io/restartedAt: 2024-01-07T00:01:33Z
  Status:           Running
  IP:               10.244.0.89
  IPs:
    IP:  10.244.0.89
  Controlled By:  ReplicaSet/hello-75b7c6d484
  Init Containers:
    tel-agent-init:
      Container ID:  containerd://4acdf45992980e2796f0eb79fb41afb1a57808d108eb14a355cb390ccc764571
      Image:         docker.io/datawire/tel2:2.17.0
      Image ID:      docker.io/datawire/tel2@sha256:e18aed6e7bd3c15cb5a99161c164e0303d20156af68ef138faca98dc2c5754a7
      Port:          <none>
      Host Port:     <none>
      Args:
        agent-init
      State:          Terminated
        Reason:       Completed
        Exit Code:    0
        Started:      Sun, 07 Jan 2024 01:01:34 +0100
        Finished:     Sun, 07 Jan 2024 01:01:34 +0100
      Ready:          True
      Restart Count:  0
      Environment:    <none>
      Mounts:
        /etc/traffic-agent from traffic-config (rw)
        /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svf4h (ro)
  Containers:
    echoserver:
      Container ID:   containerd://577e140545f3106c90078e687e0db3661db815062084bb0c9f6b2d0b4f949308
      Image:          registry.k8s.io/echoserver:1.4
      Image ID:       sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9
      Port:           <none>
      Host Port:      <none>
      State:          Running
        Started:      Sun, 07 Jan 2024 01:01:34 +0100
      Ready:          True
      Restart Count:  0
      Environment:    <none>
      Mounts:
        /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svf4h (ro)
    traffic-agent:
      Container ID:  containerd://17558b4711903f4cb580c5afafa169d314a7deaf33faa749f59d3a2f8eed80a9
      Image:         docker.io/datawire/tel2:2.17.0
      Image ID:      docker.io/datawire/tel2@sha256:e18aed6e7bd3c15cb5a99161c164e0303d20156af68ef138faca98dc2c5754a7
      Port:          9900/TCP
      Host Port:     0/TCP
      Args:
        agent
      State:          Running
        Started:      Sun, 07 Jan 2024 01:01:34 +0100
      Ready:          True
      Restart Count:  0
      Readiness:      exec [/bin/stat /tmp/agent/ready] delay=0s timeout=1s period=10s #success=1 #failure=3
      Environment:
        _TEL_AGENT_POD_IP:      (v1:status.podIP)
        _TEL_AGENT_NAME:        hello-75b7c6d484-9r4xd (v1:metadata.name)
        A_TELEPRESENCE_MOUNTS:  /var/run/secrets/kubernetes.io/serviceaccount
      Mounts:
        /etc/traffic-agent from traffic-config (rw)
        /tel_app_exports from export-volume (rw)
        /tel_app_mounts/echoserver/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svf4h (ro)
        /tel_pod_info from traffic-annotations (rw)
        /tmp from tel-agent-tmp (rw)
        /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svf4h (ro)
  Conditions:
    Type              Status
    Initialized       True
    Ready             True
    ContainersReady   True
    PodScheduled      True
  Volumes:
    kube-api-access-svf4h:
      Type:                    Projected (a volume that contains injected data from multiple sources)
      TokenExpirationSeconds:  3607
      ConfigMapName:           kube-root-ca.crt
      ConfigMapOptional:       <nil>
      DownwardAPI:             true
    traffic-annotations:
      Type:  DownwardAPI (a volume populated by information about the pod)
      Items:
        metadata.annotations -> annotations
    traffic-config:
      Type:      ConfigMap (a volume populated by a ConfigMap)
      Name:      telepresence-agents
      Optional:  false
    export-volume:
      Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
      Medium:
      SizeLimit:  <unset>
    tel-agent-tmp:
      Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
      Medium:
      SizeLimit:  <unset>
  QoS Class:           BestEffort
  Node-Selectors:      <none>
  Tolerations:         node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                       node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
  Events:
    Type    Reason     Age    From               Message
    ----    ------     ----   ----               -------
    Normal  Scheduled  7m40s  default-scheduler  Successfully assigned default/hello-75b7c6d484-9r4xd to kind-control-plane
    Normal  Pulled     7m40s  kubelet            Container image "docker.io/datawire/tel2:2.17.0" already present on machine
    Normal  Created    7m40s  kubelet            Created container tel-agent-init
    Normal  Started    7m39s  kubelet            Started container tel-agent-init
    Normal  Pulled     7m39s  kubelet            Container image "registry.k8s.io/echoserver:1.4" already present on machine
    Normal  Created    7m39s  kubelet            Created container echoserver
    Normal  Started    7m39s  kubelet            Started container echoserver
    Normal  Pulled     7m39s  kubelet            Container image "docker.io/datawire/tel2:2.17.0" already present on machine
    Normal  Created    7m39s  kubelet            Created container traffic-agent
    Normal  Started    7m39s  kubelet            Started container traffic-agent

In the telepresence-agents configmap, Telepresence keeps track of all possible intercepts for the containers that have an agent installed.

  $ kubectl describe configmap telepresence-agents
  Name:         telepresence-agents
  Namespace:    default
  Labels:       app.kubernetes.io/created-by=traffic-manager
                app.kubernetes.io/name=telepresence-agents
                app.kubernetes.io/version=2.17.0
  Annotations:  <none>

  Data
  ====
  hello:
  ----
  agentImage: localhost:5000/tel2:2.17.0
  agentName: hello
  containers:
  - Mounts: null
    envPrefix: A_
    intercepts:
    - agentPort: 9900
      containerPort: 8080
      protocol: TCP
      serviceName: hello
      servicePort: 80
      serviceUID: 68a4ecd7-0a12-44e2-9293-dc16fb205621
      targetPortNumeric: true
    mountPoint: /tel_app_mounts/echoserver
    name: echoserver
  logLevel: debug
  managerHost: traffic-manager.ambassador
  managerPort: 8081
  namespace: default
  pullPolicy: IfNotPresent
  workloadKind: Deployment
  workloadName: hello

  BinaryData
  ====
  Events:  <none>

Uninstalling

You can uninstall the traffic-agent from specific deployments or from all deployments, or you can uninstall everything, in which
case the traffic-manager and all traffic-agents are removed.

  $ telepresence helm uninstall

will remove everything that was automatically installed by telepresence from the cluster.

  $ telepresence uninstall hello

will remove the traffic-agent and the configmap entry.

Troubleshooting

The telepresence background processes, the daemon and the connector, both produce log files that can be very helpful when problems are
encountered. The files are named daemon.log and connector.log. The location of the logs differs depending on the platform:

  • macOS ~/Library/Logs/telepresence
  • Linux ~/.cache/telepresence/logs
  • Windows "%USERPROFILE%\AppData\Local\logs"
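
For example, on Linux you can follow the connector log while reproducing a problem. Recent clients also ship a gather-logs subcommand that bundles the client-side logs into a single archive; treat its availability and options as an assumption to verify against your installed version:

  $ tail -F ~/.cache/telepresence/logs/connector.log
  $ telepresence gather-logs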

How it works

When Telepresence 2 connects to a Kubernetes cluster, it

  1. Ensures Traffic Manager is installed in the cluster.
  2. Looks for the relevant subnets in the Kubernetes cluster.
  3. Creates a Virtual Network Interface (VIF).
  4. Assigns the cluster’s subnets to the VIF.
  5. Binds itself to the VIF and starts routing traffic to the traffic-manager, or to a traffic-agent if one is present.
  6. Starts listening for, and serving, DNS requests, passing a selected portion of them to the traffic-manager or traffic-agent.

When a locally running application makes a network request to a service in the cluster, Telepresence resolves the name to an address within the cluster.
The operating system then sees that the TUN device has an address in the same subnet as the outgoing packets and sends them to tel0.
Telepresence sits on the other side of tel0, picks up the packets, and injects them into the cluster through a gRPC connection to the Traffic Manager.
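
While a session is active, you can observe this setup on Linux with standard iproute2 tools (the interface name tel0 matches the description above; naming and tooling differ on macOS and Windows):

  $ ip addr show tel0
  $ ip route | grep tel0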

Troubleshooting

Visit the troubleshooting section in the Telepresence documentation for more advice:
Troubleshooting

Or discuss with the community in the CNCF Slack in the #telepresence-oss channel.