Project author: kubekit99

Project description: Argo POC
Primary language: Smarty
Project URL: git://github.com/kubekit99/argo-ohm.git
Created: 2019-01-29T15:45:13Z
Project community: https://github.com/kubekit99/argo-ohm

License: Apache License 2.0



argo-ohm

This repo contains a proof of concept for deploying the OpenStack-Helm Keystone
chart using Argo workflows.



Using Argo workflows allows us to run multiple tasks simultaneously, and begin
new tasks the moment that all dependencies are met. This dramatically speeds up
deployment time.
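As a sketch (names here are illustrative, not taken from this repo), the dependency graph is expressed with an Argo DAG template, where each task starts as soon as its dependencies succeed:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: keystone-deploy-
spec:
  entrypoint: deploy
  templates:
  - name: deploy
    dag:
      tasks:
      - name: db-init              # runs in parallel with rabbit-init
        template: run-step
      - name: rabbit-init
        template: run-step
      - name: db-sync              # starts the moment db-init succeeds
        dependencies: [db-init]
        template: run-step
  - name: run-step
    container:
      image: alpine:3.8
      command: [echo, done]
```

Tasks with no dependencies run simultaneously; `db-sync` is scheduled as soon as `db-init` completes, which is what shortens the overall deployment time.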



Installation of kubernetes

TBD

Installation of argo itself

  1. helm fetch argo/argo
  2. tar xvf argo.xxx.tgz
  3. helm install --name argo --namespace argo .

Installation of keystone and associated services

You need to deploy mariadb, memcached and rabbitmq first, using the
helm charts available under openstack-helm-infra.
If you don't have a local helm repository server, ensure
that the helmtoolkit.tgz file is available in each chart, under charts/helmtoolkit.tgz.

  1. cd ../mariadb/
  2. helm install --name mariadb --namespace openstack .
  3. cd ../memcached/
  4. helm install --name memcached --namespace openstack .
  5. cd ../rabbitmq/
  6. helm install --name rabbitmq --namespace openstack .
  7. kubectl get all -n openstack

argo cli

Deployment

  1. cd keystone-argo-cli
  2. argo submit -n openstack wf-mariadb.yaml
  3. argo submit -n openstack wf-memcached.yaml
  4. argo submit -n openstack wf-rabbitmq.yaml
  5. argo submit -n openstack wf-keystone-api.yaml
  6. argo submit -n openstack wf-keystone-bootstrap.yaml
  1. argo get wf-keystone-api -n openstack

  Name:            wf-keystone-api
  Namespace:       openstack
  ServiceAccount:  keystone-api
  Status:          Succeeded
  Created:         Tue Jan 29 17:19:45 -0600 (54 seconds ago)
  Started:         Tue Jan 29 17:19:45 -0600 (54 seconds ago)
  Finished:        Tue Jan 29 17:20:38 -0600 (1 second ago)
  Duration:        53 seconds

  STEP                                    PODNAME                     DURATION  MESSAGE
  wf-keystone-api
  ├---✔ svc-memcached                     wf-keystone-api-2517001386  4s
  ├---✔ svc-mariadb                       wf-keystone-api-1718994872  5s
  ├---✔ wf-keystone-db-sync
  | ├---✔ svc-mariadb                     wf-keystone-api-3144285078  3s
  | ├---✔ wf-keystone-db-init
  | | ├---✔ svc-mariadb                   wf-keystone-api-1703586597  2s
  | | └---✔ job-keystone-db-init          wf-keystone-api-3354426817  3s
  | ├---✔ job-keystone-credential-setup   wf-keystone-api-3324764547  3s
  | ├---✔ job-keystone-fernet-setup       wf-keystone-api-2605329163  2s
  | ├---✔ wf-keystone-rabbit-init
  | | ├---✔ svc-rabbitmq                  wf-keystone-api-823436806   3s
  | | └---✔ job-keystone-rabbit-init      wf-keystone-api-1346041812  3s
  | └---✔ job-keystone-db-sync            wf-keystone-api-1562708993  2s
  ├---✔ job-keystone-credential-setup     wf-keystone-api-196303985   3s
  ├---✔ job-keystone-fernet-setup         wf-keystone-api-740083775   3s
  ├---✔ wf-keystone-rabbit-init
  | ├---✔ svc-rabbitmq                    wf-keystone-api-2519204236  2s
  | └---✔ job-keystone-rabbit-init        wf-keystone-api-2526730454  3s
  └---✔ svc-keystone-api                  wf-keystone-api-1282227320  2s
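Judging from the executor logs at the end of this document, the svc-* steps are Argo resource templates that "get" a Service and succeed once it exists. The mariadb gate presumably looks roughly like this (reconstructed from the logs, not copied from wf-mariadb.yaml):

```yaml
- name: svc-mariadb
  resource:
    action: get                               # just read the resource
    successCondition: metadata.name == mariadb  # succeed once the Service exists
    manifest: |
      apiVersion: v1
      kind: Service
      metadata:
        name: mariadb
```

The executor runs `kubectl get` against the manifest and waits until the success condition evaluates true, which is exactly the pattern visible in the svc-mariadb / svc-rabbitmq / svc-memcached log lines below.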

Note 1

The serviceAccountName field in the workflow is important. Each component of openstack-helm creates its own standalone serviceAccount,
and I could not figure out how to achieve the same level of granularity within a single workflow.
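For reference, the field in question sits at the top level of the Workflow spec, so one service account applies to every pod the workflow creates (illustrative sketch):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: wf-keystone-api
spec:
  # One account for the whole workflow; the per-job service accounts
  # that openstack-helm creates have no obvious per-step equivalent here.
  serviceAccountName: keystone-api
  entrypoint: main
```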

Note 2

Not having access to the templating language basically voids the benefit of helm-toolkit. For instance, the dependency
references expressed in the original helm chart are lost.

Note 3

This deployment approach is much more centralized. You no longer let kubernetes do its thing, i.e. an initcontainer
retrying until its dependency is resolved. We need to find the right balance between the two approaches.
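For contrast, the decentralized openstack-helm pattern relies on a kubernetes-entrypoint init container that blocks and retries until its dependencies exist; roughly like this (image tag and dependency list are illustrative):

```yaml
initContainers:
- name: init
  image: quay.io/stackanetes/kubernetes-entrypoint:v0.3.1
  command: [kubernetes-entrypoint]
  env:
  # kubernetes-entrypoint polls until each namespace:service pair resolves
  - name: DEPENDENCY_SERVICE
    value: "openstack:mariadb,openstack:memcached,openstack:rabbitmq"
```

Here each pod waits for its own dependencies, so kubernetes keeps control of scheduling; the Argo approach moves that knowledge into one central workflow.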

Note 4

The Argo UI is to some extent not as good as the "argo cli", and seems much slower.

Installation using helm chart

Deployment

Argo uses a CRD pattern, so it can be controlled through kubectl without the argo cli... which therefore lets us use helm and its associated templating language.

  1. cd keystone-argo-helm
  2. helm install --name armadalike-keystone --namespace openstack .

Note 1

Not much of the workflow has been templatized in the templates directory.

Note 2

The workflow 'keystone-argo-helm/templates/wf-keystone.yaml' is an attempt to simulate an airship-armada-like workflow/chart group.
One workflow (the equivalent of a chart group) basically waits for another workflow (the equivalent of a helm chart) to complete.
There are still issues with the serviceAccount. The interesting aspect is that, as with armada, we still have access to helm templating.
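One way a "chart group" workflow can wait on a "chart" workflow is an Argo resource template that gets the child Workflow object and succeeds or fails on its phase; a sketch, not necessarily the exact contents of wf-keystone.yaml:

```yaml
- name: wait-for-mariadb
  resource:
    action: get
    successCondition: status.phase == Succeeded   # child workflow finished
    failureCondition: status.phase == Failed      # propagate child failure
    manifest: |
      apiVersion: argoproj.io/v1alpha1
      kind: Workflow
      metadata:
        name: wf-mariadb
```

Because Workflows are themselves CRD objects, the same wait-on-resource trick used for Services applies to whole workflows, which is what makes the chart-group simulation possible.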

Combining new kubernetes-endpoint and workflow

Deployment

  1. cd kubernetesendpoint-argo-poc1
  2. helm install --name argo-poc1 --namespace openstack .

Notes

TBD

Removing jobs in keystone helm chart and replacing them with argo steps

Development

  • Brute-force and ugly copy/paste of the keystone helm chart.
  • helm template . --namespace openstack -x templates/jobs-xxx.yaml > templates/steps/_xxx.yaml
  • Use include in wf-keystone-api.yaml, replace "jobs" with "containers", and include "steps/xxx.yaml".
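The net effect of the conversion is that a Job's pod template becomes the body of a workflow step; schematically (the partial name "keystone.steps.db_init" is made up for illustration):

```yaml
# templates/wf-keystone-api.yaml (fragment)
- name: job-keystone-db-init
  container:
    # body extracted from the former templates/jobs-xxx.yaml Job
    {{- include "keystone.steps.db_init" . | indent 4 }}
```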

Deployment for debugging

Ensure that the "good" keystone helm chart has been run first. This procedure is only necessary to understand
how the workflow works.

  1. cd kubernetesendpoint-argo-poc2
  2. # helm install --name argo-poc2 --namespace openstack .
  3. helm template . --namespace openstack -x templates/wf-keystone-api.yaml > debugging.yaml
  4. argo submit -f debugging.yaml -n openstack
  5. argo get wf-keystone-api -n openstack

Deployment

  1. cd ../argo-ohm/kubernetesendpoint-argo-poc
  2. cd kubernetesendpoint-argo-poc2/
  3. helm install --name keystone --namespace openstack .

Notes

Notes 1

  • The volume handling at the top of the workflow looks kind of strange. Can't the volumes be put on each container?
  • Gradually run "git rm templates/jobs-xxx.yaml" as those jobs are converted to steps.
  • wf-roles.yaml is kind of ugly and contains all the roles that used to be created by the individual jobs.
  • wf-keystone-sa is "role-binded" to all the roles that used to be created by the individual jobs. We should be able to simplify that.

Notes 2

Still need to include the following steps

  • _domain_manage
  • _bootstrap

Notes 3

  1. kubectl get all -n openstack

  NAME                                             READY   STATUS      RESTARTS   AGE
  pod/keystone-api-697d4bb54d-l8x4d                1/1     Running     0          15m
  pod/mariadb-ingress-6766c8566-ddp8r              1/1     Running     0          18m
  pod/mariadb-ingress-error-pages-8b9fd8dd-xnz25   1/1     Running     0          18m
  pod/mariadb-server-0                             1/1     Running     0          18m
  pod/memcached-memcached-5bc79f976c-rxwns         1/1     Running     0          17m
  pod/rabbitmq-rabbitmq-0                          1/1     Running     0          17m
  pod/wf-keystone-api-1273692324                   0/2     Completed   0          15m
  pod/wf-keystone-api-1526467896                   0/1     Completed   0          15m
  pod/wf-keystone-api-1632446228                   0/1     Completed   0          15m
  pod/wf-keystone-api-1843436742                   0/2     Completed   0          15m
  pod/wf-keystone-api-2693579121                   0/2     Completed   0          15m
  pod/wf-keystone-api-3460448251                   0/1     Completed   0          15m
  pod/wf-keystone-api-4103002611                   0/1     Completed   0          15m
  pod/wf-keystone-api-61866709                     0/2     Completed   0          15m
  pod/wf-keystone-api-935314636                    0/2     Completed   0          15m

  NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                        AGE
  service/keystone                      ClusterIP   10.96.158.41     <none>        80/TCP,443/TCP                 15m
  service/keystone-api                  ClusterIP   10.99.48.219     <none>        5000/TCP                       15m
  service/mariadb                       ClusterIP   10.97.250.24     <none>        3306/TCP                       18m
  service/mariadb-discovery             ClusterIP   None             <none>        3306/TCP,4567/TCP              18m
  service/mariadb-ingress-error-pages   ClusterIP   None             <none>        80/TCP                         18m
  service/mariadb-server                ClusterIP   10.104.100.199   <none>        3306/TCP                       18m
  service/memcached                     ClusterIP   10.101.43.34     <none>        11211/TCP                      17m
  service/rabbitmq                      ClusterIP   10.108.163.238   <none>        5672/TCP,25672/TCP,15672/TCP   17m
  service/rabbitmq-dsv-7b1733           ClusterIP   None             <none>        5672/TCP,25672/TCP,15672/TCP   17m
  service/rabbitmq-mgr-7b1733           ClusterIP   10.97.48.153     <none>        80/TCP,443/TCP                 17m

  NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/keystone-api                  1/1     1            1           15m
  deployment.apps/mariadb-ingress               1/1     1            1           18m
  deployment.apps/mariadb-ingress-error-pages   1/1     1            1           18m
  deployment.apps/memcached-memcached           1/1     1            1           17m

  NAME                                                   DESIRED   CURRENT   READY   AGE
  replicaset.apps/keystone-api-697d4bb54d                1         1         1       15m
  replicaset.apps/mariadb-ingress-6766c8566              1         1         1       18m
  replicaset.apps/mariadb-ingress-error-pages-8b9fd8dd   1         1         1       18m
  replicaset.apps/memcached-memcached-5bc79f976c         1         1         1       17m

  NAME                                 READY   AGE
  statefulset.apps/mariadb-server      1/1     18m
  statefulset.apps/rabbitmq-rabbitmq   1/1     17m

  NAME                                         SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
  cronjob.batch/keystone-credential-rotate     0 0 1 * *      False     0        <none>          15m
  cronjob.batch/keystone-fernet-rotate         0 */12 * * *   False     0        <none>          15m

Notes 4

  1. argo list -n openstack

  NAME              STATUS      AGE   DURATION
  wf-keystone-api   Succeeded   16m   32s

  2. argo get wf-keystone-api -n openstack

  Name:            wf-keystone-api
  Namespace:       openstack
  ServiceAccount:  wf-keystone-sa
  Status:          Succeeded
  Created:         Thu Jan 31 14:08:11 -0600 (16 minutes ago)
  Started:         Thu Jan 31 14:08:11 -0600 (16 minutes ago)
  Finished:        Thu Jan 31 14:08:43 -0600 (16 minutes ago)
  Duration:        32 seconds

  STEP                                 PODNAME                     DURATION  MESSAGE
  wf-keystone-api
  ├-✔ job-keystone-credential-setup    wf-keystone-api-1273692324  9s
  ├-✔ job-keystone-fernet-setup        wf-keystone-api-2693579121  9s
  ├-✔ svc-mariadb                      wf-keystone-api-4103002611  3s
  ├-✔ svc-memcached                    wf-keystone-api-1526467896  5s
  ├-✔ svc-rabbitmq                     wf-keystone-api-3460448251  4s
  ├-✔ job-keystone-db-init             wf-keystone-api-935314636   6s
  ├-✔ job-keystone-rabbit-init         wf-keystone-api-1843436742  6s
  ├-✔ job-keystone-db-sync             wf-keystone-api-61866709    16s
  └-✔ svc-keystone-api                 wf-keystone-api-1632446228  2s

Cleanup

  1. helm delete --purge keystone
  2. helm delete --purge mariadb
  3. helm delete --purge memcached
  4. helm delete --purge rabbitmq
  5. kubectl delete configmap mariadb-mariadb-mariadb-ingress -n openstack
  6. kubectl delete configmap mariadb-mariadb-state -n openstack
  7. kubectl delete namespace openstack

Conclusion

TBD

Logs

  1. argo logs wf-keystone-api -n openstack -w
  2. svc-mariadb: time="2019-01-31T20:08:13Z" level=info msg="Creating a docker executor"
  3. svc-mariadb: time="2019-01-31T20:08:13Z" level=info msg="Executor (version: v2.2.1, build_date: 2018-10-11T16:27:29Z) initialized with template:\narchiveLocation: {}\ninputs: {}\nmetadata: {}\nname: svc-mariadb\noutputs: {}\nresource:\n action: get\n manifest: |\n apiVersion: v1\n kind: Service\n metadata:\n name: mariadb\n successCondition: metadata.name == mariadb\n"
  4. svc-mariadb: time="2019-01-31T20:08:13Z" level=info msg="Loading manifest to /tmp/manifest.yaml"
  5. svc-mariadb: time="2019-01-31T20:08:13Z" level=info msg="kubectl get -f /tmp/manifest.yaml -o name"
  6. svc-rabbitmq: time="2019-01-31T20:08:14Z" level=info msg="Creating a docker executor"
  7. svc-rabbitmq: time="2019-01-31T20:08:14Z" level=info msg="Executor (version: v2.2.1, build_date: 2018-10-11T16:27:29Z) initialized with template:\narchiveLocation: {}\ninputs: {}\nmetadata: {}\nname: svc-rabbitmq\noutputs: {}\nresource:\n action: get\n manifest: |\n apiVersion: v1\n kind: Service\n metadata:\n name: rabbitmq\n successCondition: metadata.name == rabbitmq\n"
  8. svc-rabbitmq: time="2019-01-31T20:08:14Z" level=info msg="Loading manifest to /tmp/manifest.yaml"
  9. svc-rabbitmq: time="2019-01-31T20:08:14Z" level=info msg="kubectl get -f /tmp/manifest.yaml -o name"
  10. svc-mariadb: time="2019-01-31T20:08:14Z" level=info msg=service/mariadb
  11. svc-mariadb: time="2019-01-31T20:08:14Z" level=info msg="Waiting for conditions: metadata.name==mariadb"
  12. svc-mariadb: time="2019-01-31T20:08:14Z" level=info msg="kubectl get service/mariadb -w -o json"
  13. svc-mariadb: time="2019-01-31T20:08:14Z" level=info msg="{\"apiVersion\": \"v1\",\"kind\": \"Service\",\"metadata\": {\"creationTimestamp\":\"2019-01-31T20:06:00Z\",\"labels\": {\"application\": \"mariadb\",\"component\": \"ingress\",\"release_group\": \"mariadb\"},\"name\": \"mariadb\",\"namespace\": \"openstack\",\"resourceVersion\": \"370256\",\"selfLink\": \"/api/v1/namespaces/openstack/services/mariadb\",\"uid\": \"a1173585-2593-11e9-b736-0800272e6982\"},\"spec\": {\"clusterIP\": \"10.97.250.24\",\"ports\": [{\"name\": \"mysql\",\"port\": 3306,\"protocol\": \"TCP\",\"targetPort\": 3306}],\"selector\": {\"application\": \"mariadb\",\"component\": \"ingress\",\"release_group\": \"mariadb\"},\"sessionAffinity\": \"None\",\"type\": \"ClusterIP\"},\"status\": {\"loadBalancer\": {}}}"
  14. svc-mariadb: time="2019-01-31T20:08:14Z" level=info msg="success condition '{metadata.name == [mariadb]}' evaluated true"
  15. svc-mariadb: time="2019-01-31T20:08:14Z" level=info msg="1/1 success conditions matched"
  16. svc-mariadb: time="2019-01-31T20:08:14Z" level=info msg="Returning from successful wait for resource service/mariadb"
  17. svc-mariadb: time="2019-01-31T20:08:14Z" level=info msg="No output parameters"
  18. svc-memcached: time="2019-01-31T20:08:14Z" level=info msg="Creating a docker executor"
  19. svc-memcached: time="2019-01-31T20:08:14Z" level=info msg="Executor (version: v2.2.1, build_date: 2018-10-11T16:27:29Z) initialized with template:\narchiveLocation: {}\ninputs: {}\nmetadata: {}\nname: svc-memcached\noutputs: {}\nresource:\n action: get\n manifest: |\n apiVersion: v1\n kind: Service\n metadata:\n name: memcached\n successCondition: metadata.name == memcached\n"
  20. svc-memcached: time="2019-01-31T20:08:14Z" level=info msg="Loading manifest to /tmp/manifest.yaml"
  21. svc-memcached: time="2019-01-31T20:08:14Z" level=info msg="kubectl get -f /tmp/manifest.yaml -o name"
  22. svc-rabbitmq: time="2019-01-31T20:08:15Z" level=info msg=service/rabbitmq
  23. svc-rabbitmq: time="2019-01-31T20:08:15Z" level=info msg="Waiting for conditions: metadata.name==rabbitmq"
  24. svc-rabbitmq: time="2019-01-31T20:08:15Z" level=info msg="kubectl get service/rabbitmq -w -o json"
  25. svc-rabbitmq: time="2019-01-31T20:08:15Z" level=info msg="{\"apiVersion\": \"v1\",\"kind\": \"Service\",\"metadata\": {\"creationTimestamp\":\"2019-01-31T20:06:24Z\",\"name\": \"rabbitmq\",\"namespace\": \"openstack\",\"resourceVersion\": \"370420\",\"selfLink\": \"/api/v1/namespaces/openstack/services/rabbitmq\",\"uid\": \"af96906d-2593-11e9-b736-0800272e6982\"},\"spec\": {\"clusterIP\": \"10.108.163.238\",\"ports\": [{\"name\": \"amqp\",\"port\": 5672,\"protocol\": \"TCP\",\"targetPort\": 5672},{\"name\": \"clustering\",\"port\": 25672,\"protocol\": \"TCP\",\"targetPort\": 25672},{\"name\": \"http\",\"port\": 15672,\"protocol\": \"TCP\",\"targetPort\": 15672}],\"selector\": {\"application\": \"rabbitmq\",\"component\": \"server\",\"release_group\": \"rabbitmq\"},\"sessionAffinity\": \"None\",\"type\": \"ClusterIP\"},\"status\": {\"loadBalancer\":{}}}"
  26. svc-rabbitmq: time="2019-01-31T20:08:15Z" level=info msg="success condition '{metadata.name == [rabbitmq]}' evaluated true"
  27. svc-rabbitmq: time="2019-01-31T20:08:15Z" level=info msg="1/1 success conditions matched"
  28. svc-rabbitmq: time="2019-01-31T20:08:15Z" level=info msg="Returning from successful wait for resource service/rabbitmq"
  29. svc-rabbitmq: time="2019-01-31T20:08:15Z" level=info msg="No output parameters"
  30. svc-memcached: time="2019-01-31T20:08:15Z" level=info msg=service/memcached
  31. svc-memcached: time="2019-01-31T20:08:15Z" level=info msg="Waiting for conditions: metadata.name==memcached"
  32. svc-memcached: time="2019-01-31T20:08:15Z" level=info msg="kubectl get service/memcached -w -o json"
  33. svc-memcached: time="2019-01-31T20:08:15Z" level=info msg="{\"apiVersion\": \"v1\",\"kind\": \"Service\",\"metadata\": {\"creationTimestamp\":\"2019-01-31T20:06:12Z\",\"name\": \"memcached\",\"namespace\": \"openstack\",\"resourceVersion\": \"370353\",\"selfLink\": \"/api/v1/namespaces/openstack/services/memcached\",\"uid\": \"a81f59f7-2593-11e9-b736-0800272e6982\"},\"spec\": {\"clusterIP\": \"10.101.43.34\",\"ports\": [{\"port\": 11211,\"protocol\": \"TCP\",\"targetPort\": 11211}],\"selector\": {\"application\": \"memcached\",\"component\": \"server\",\"release_group\": \"memcached\"},\"sessionAffinity\": \"ClientIP\",\"sessionAffinityConfig\": {\"clientIP\": {\"timeoutSeconds\": 10800}},\"type\": \"ClusterIP\"},\"status\": {\"loadBalancer\": {}}}"
  34. svc-memcached: time="2019-01-31T20:08:15Z" level=info msg="success condition '{metadata.name == [memcached]}' evaluated true"
  35. svc-memcached: time="2019-01-31T20:08:15Z" level=info msg="1/1 success conditions matched"
  36. svc-memcached: time="2019-01-31T20:08:15Z" level=info msg="Returning from successful wait for resource service/memcached"
  37. svc-memcached: time="2019-01-31T20:08:15Z" level=info msg="No output parameters"
  38. job-keystone-fernet-setup: 2019-01-31 20:08:16.192 - INFO - Executing 'keystone-manage fernet_setup --keystone-user=keystone --keystone-group=keystone' command.
  39. job-keystone-credential-setup: 2019-01-31 20:08:16.607 - INFO - Executing 'keystone-manage credential_setup --keystone-user=keystone --keystone-group=keystone' command.
  40. job-keystone-rabbit-init: Managing: User: keystone
  41. job-keystone-rabbit-init: user declared
  42. job-keystone-rabbit-init: Managing: vHost: keystone
  43. job-keystone-rabbit-init: vhost declared
  44. job-keystone-rabbit-init: Managing: Permissions: keystone on keystone
  45. job-keystone-rabbit-init: permission declared
  46. job-keystone-rabbit-init: Applying additional configuration
  47. job-keystone-rabbit-init: Imported definitions for rabbitmq.openstack.svc.cluster.local from "/tmp/rmq_definitions.json"
  48. job-keystone-db-init: 2019-01-31 20:08:20,295 - OpenStack-Helm DB Init - INFO - Got DB root connection
  49. job-keystone-db-init: 2019-01-31 20:08:20,296 - OpenStack-Helm DB Init - INFO - Using /etc/keystone/keystone.conf as db config source
  50. job-keystone-db-init: 2019-01-31 20:08:20,296 - OpenStack-Helm DB Init - INFO - Trying to load db config from database:connection
  51. job-keystone-db-init: 2019-01-31 20:08:20,296 - OpenStack-Helm DB Init - INFO - Got config from /etc/keystone/keystone.conf
  52. job-keystone-fernet-setup: 2019-01-31 20:08:20,315.315 9 WARNING keystone.common.fernet_utils [-] key_repository is world readable: /etc/keystone/fernet-keys/
  53. job-keystone-fernet-setup: 2019-01-31 20:08:20,316.316 9 INFO keystone.common.fernet_utils [-] Created a new temporary key: /etc/keystone/fernet-keys/0.tmp
  54. job-keystone-fernet-setup: 2019-01-31 20:08:20,316.316 9 INFO keystone.common.fernet_utils [-] Become a valid new key: /etc/keystone/fernet-keys/0
  55. job-keystone-fernet-setup: 2019-01-31 20:08:20,316.316 9 INFO keystone.common.fernet_utils [-] Starting key rotation with 1 key files: ['/etc/keystone/fernet-keys/0']
  56. job-keystone-fernet-setup: 2019-01-31 20:08:20,317.317 9 INFO keystone.common.fernet_utils [-] Created a new temporary key: /etc/keystone/fernet-keys/0.tmp
  57. job-keystone-fernet-setup: 2019-01-31 20:08:20,317.317 9 INFO keystone.common.fernet_utils [-] Current primary key is: 0
  58. job-keystone-fernet-setup: 2019-01-31 20:08:20,318.318 9 INFO keystone.common.fernet_utils [-] Next primary key will be: 1
  59. job-keystone-fernet-setup: 2019-01-31 20:08:20,318.318 9 INFO keystone.common.fernet_utils [-] Promoted key 0 to be the primary: 1
  60. job-keystone-fernet-setup: 2019-01-31 20:08:20,320.320 9 INFO keystone.common.fernet_utils [-] Become a valid new key: /etc/keystone/fernet-keys/0
  61. job-keystone-fernet-setup: 2019-01-31 20:08:20.429 - INFO - Updating data for 'keystone-fernet-keys' secret.
  62. job-keystone-fernet-setup: 2019-01-31 20:08:20.477 - INFO - 2 fernet keys have been placed to secret 'keystone-fernet-keys'
  63. job-keystone-fernet-setup: 2019-01-31 20:08:20.477 - INFO - Fernet keys generation has been completed
  64. job-keystone-db-init: 2019-01-31 20:08:20,507 - OpenStack-Helm DB Init - INFO - Tested connection to DB @ mariadb.openstack.svc.cluster.local:3306 as root
  65. job-keystone-db-init: 2019-01-31 20:08:20,508 - OpenStack-Helm DB Init - INFO - Got user db config
  66. job-keystone-db-init: 2019-01-31 20:08:20,533 - OpenStack-Helm DB Init - INFO - Created database keystone
  67. job-keystone-db-init: 2019-01-31 20:08:20,549 - OpenStack-Helm DB Init - INFO - Created user keystone for keystone
  68. job-keystone-db-init: 2019-01-31 20:08:20,565 - OpenStack-Helm DB Init - INFO - Tested connection to DB @ mariadb.openstack.svc.cluster.local:3306/keystone as keystone
  69. job-keystone-db-init: 2019-01-31 20:08:20,565 - OpenStack-Helm DB Init - INFO - Finished DB Management
  70. job-keystone-credential-setup: 2019-01-31 20:08:20,660.660 8 WARNING keystone.common.fernet_utils [-] key_repository is world readable: /etc/keystone/credential-keys/
  71. job-keystone-credential-setup: 2019-01-31 20:08:20,661.661 8 INFO keystone.common.fernet_utils [-] Created a new temporary key: /etc/keystone/credential-keys/0.tmp
  72. job-keystone-credential-setup: 2019-01-31 20:08:20,661.661 8 INFO keystone.common.fernet_utils [-] Become a valid new key: /etc/keystone/credential-keys/0
  73. job-keystone-credential-setup: 2019-01-31 20:08:20,661.661 8 INFO keystone.common.fernet_utils [-] Starting key rotation with 1 key files: ['/etc/keystone/credential-keys/0']
  74. job-keystone-credential-setup: 2019-01-31 20:08:20,661.661 8 INFO keystone.common.fernet_utils [-] Created a new temporary key: /etc/keystone/credential-keys/0.tmp
  75. job-keystone-credential-setup: 2019-01-31 20:08:20,661.661 8 INFO keystone.common.fernet_utils [-] Current primary key is: 0
  76. job-keystone-credential-setup: 2019-01-31 20:08:20,662.662 8 INFO keystone.common.fernet_utils [-] Next primary key will be: 1
  77. job-keystone-credential-setup: 2019-01-31 20:08:20,662.662 8 INFO keystone.common.fernet_utils [-] Promoted key 0 to be the primary: 1
  78. job-keystone-credential-setup: 2019-01-31 20:08:20,662.662 8 INFO keystone.common.fernet_utils [-] Become a valid new key: /etc/keystone/credential-keys/0
  79. job-keystone-credential-setup: 2019-01-31 20:08:20.741 - INFO - Updating data for 'keystone-credential-keys' secret.
  80. job-keystone-credential-setup: 2019-01-31 20:08:20.760 - INFO - 2 fernet keys have been placed to secret 'keystone-credential-keys'
  81. job-keystone-credential-setup: 2019-01-31 20:08:20.760 - INFO - Credential keys generation has been completed
  82. job-keystone-db-sync: + keystone-manage --config-file=/etc/keystone/keystone.conf db_sync
  83. job-keystone-db-sync: + keystone-manage --config-file=/etc/keystone/keystone.conf bootstrap --bootstrap-username admin --bootstrap-password password --bootstrap-project-name admin --bootstrap-admin-url http://keystone.default.svc.cluster.local:80/v3 --bootstrap-public-url http://keystone.default.svc.cluster.local:80/v3 --bootstrap-internal-url http://keystone-api.default.svc.cluster.local:5000/v3 --bootstrap-region-id RegionOne
  84. job-keystone-db-sync: 2019-01-31 20:08:39,685.685 10 INFO keystone.cmd.cli [req-8f5cd894-5bb9-44ed-906e-6992c39eba31 - - - - -] Created domain default
  85. job-keystone-db-sync: 2019-01-31 20:08:39,710.710 10 INFO keystone.cmd.cli [req-8f5cd894-5bb9-44ed-906e-6992c39eba31 - - - - -] Created project admin
  86. job-keystone-db-sync: 2019-01-31 20:08:39,710.710 10 WARNING keystone.identity.core [req-8f5cd894-5bb9-44ed-906e-6992c39eba31 - - - - -] Unable to locate domain config directory: /etc/keystonedomains
  87. job-keystone-db-sync: 2019-01-31 20:08:39,766.766 10 INFO keystone.cmd.cli [req-8f5cd894-5bb9-44ed-906e-6992c39eba31 - - - - -] Created user admin
  88. job-keystone-db-sync: 2019-01-31 20:08:39,778.778 10 INFO keystone.cmd.cli [req-8f5cd894-5bb9-44ed-906e-6992c39eba31 - - - - -] Created role admin
  89. job-keystone-db-sync: 2019-01-31 20:08:39,791.791 10 INFO keystone.cmd.cli [req-8f5cd894-5bb9-44ed-906e-6992c39eba31 - - - - -] Granted admin on admin to user admin.
  90. job-keystone-db-sync: 2019-01-31 20:08:39,808.808 10 INFO keystone.cmd.cli [req-8f5cd894-5bb9-44ed-906e-6992c39eba31 - - - - -] Created region RegionOne
  91. job-keystone-db-sync: 2019-01-31 20:08:39,847.847 10 INFO keystone.cmd.cli [req-8f5cd894-5bb9-44ed-906e-6992c39eba31 - - - - -] Created admin endpoint http://keystone.default.svc.cluster.local:80/v3
  92. job-keystone-db-sync: 2019-01-31 20:08:39,858.858 10 INFO keystone.cmd.cli [req-8f5cd894-5bb9-44ed-906e-6992c39eba31 - - - - -] Created internal endpoint http://keystone-api.default.svc.cluster.local:5000/v3
  93. job-keystone-db-sync: 2019-01-31 20:08:39,881.881 10 INFO keystone.cmd.cli [req-8f5cd894-5bb9-44ed-906e-6992c39eba31 - - - - -] Created public endpoint http://keystone.default.svc.cluster.local:80/v3
  94. job-keystone-db-sync: 2019-01-31 20:08:39,886.886 10 INFO keystone.assignment.core [req-8f5cd894-5bb9-44ed-906e-6992c39eba31 - - - - -] Creating the default role 9fe2ff9ee4384b1894a90878d3e92bab because it does not exist.
  95. job-keystone-db-sync: + exec python /tmp/endpoint-update.py
  96. job-keystone-db-sync: 2019-01-31 20:08:40,152 - OpenStack-Helm Keystone Endpoint management - INFO - Using /etc/keystone/keystone.conf as db config source
  97. job-keystone-db-sync: 2019-01-31 20:08:40,152 - OpenStack-Helm Keystone Endpoint management - INFO - Trying to load db config from database:connection
  98. job-keystone-db-sync: 2019-01-31 20:08:40,152 - OpenStack-Helm Keystone Endpoint management - INFO - Got config from /etc/keystone/keystone.conf
  99. job-keystone-db-sync: 2019-01-31 20:08:40,205 - OpenStack-Helm Keystone Endpoint management - INFO - endpoint (admin): http://keystone.default.svc.cluster.local:80/v3
  100. job-keystone-db-sync: 2019-01-31 20:08:40,205 - OpenStack-Helm Keystone Endpoint management - INFO - endpoint (internal): http://keystone-api.default.svc.cluster.local:5000/v3
  101. job-keystone-db-sync: 2019-01-31 20:08:40,205 - OpenStack-Helm Keystone Endpoint management - INFO - endpoint (public): http://keystone.default.svc.cluster.local:80/v3
  102. job-keystone-db-sync: 2019-01-31 20:08:40,205 - OpenStack-Helm Keystone Endpoint management - INFO - Finished Endpoint Management
  103. svc-keystone-api: time="2019-01-31T20:08:42Z" level=info msg="Creating a docker executor"
  104. svc-keystone-api: time="2019-01-31T20:08:42Z" level=info msg="Executor (version: v2.2.1, build_date: 2018-10-11T16:27:29Z) initialized with template:\narchiveLocation: {}\ninputs: {}\nmetadata: {}\nname: svc-keystone-api\noutputs: {}\nresource:\n action: get\n manifest: |\n apiVersion: v1\n kind: Service\n metadata:\n name: keystone-api\n successCondition: metadata.name == keystone-api\n"
  105. svc-keystone-api: time="2019-01-31T20:08:42Z" level=info msg="Loading manifest to /tmp/manifest.yaml"
  106. svc-keystone-api: time="2019-01-31T20:08:42Z" level=info msg="kubectl get -f /tmp/manifest.yaml -o name"
  107. svc-keystone-api: time="2019-01-31T20:08:42Z" level=info msg=service/keystone-api
  108. svc-keystone-api: time="2019-01-31T20:08:42Z" level=info msg="Waiting for conditions: metadata.name==keystone-api"
  109. svc-keystone-api: time="2019-01-31T20:08:42Z" level=info msg="kubectl get service/keystone-api -w -o json"
  110. svc-keystone-api: time="2019-01-31T20:08:42Z" level=info msg="{\"apiVersion\": \"v1\",\"kind\": \"Service\",\"metadata\": {\"creationTimestamp\": \"2019-01-31T20:08:11Z\",\"name\": \"keystone-api\",\"namespace\": \"openstack\",\"resourceVersion\": \"370696\",\"selfLink\": \"/api/v1/namespaces/openstack/services/keystone-api\",\"uid\": \"eefd0ce5-2593-11e9-b736-0800272e6982\"},\"spec\": {\"clusterIP\": \"10.99.48.219\",\"ports\": [{\"name\": \"ks-pub\",\"port\": 5000,\"protocol\": \"TCP\",\"targetPort\": 5000}],\"selector\": {\"application\": \"keystone\",\"component\": \"api\",\"release_group\": \"keystone\"},\"sessionAffinity\": \"None\",\"type\": \"ClusterIP\"},\"status\": {\"loadBalancer\": {}}}"
  111. svc-keystone-api: time="2019-01-31T20:08:42Z" level=info msg="success condition '{metadata.name == [keystone-api]}' evaluated true"
  112. svc-keystone-api: time="2019-01-31T20:08:42Z" level=info msg="1/1 success conditions matched"
  113. svc-keystone-api: time="2019-01-31T20:08:42Z" level=info msg="Returning from successful wait for resource service/keystone-api"
  114. svc-keystone-api: time="2019-01-31T20:08:42Z" level=info msg="No output parameters"