Local and Production Setups
The setup is split into three configuration scenarios, each of which has
a dedicated directory of the same name. The directory structure is as follows:
jenkins_ci
: setup for Jenkins CI

nginx
: setup and configuration for the webserver (i.e. nginx)

dev
: deployment setup for local development

staging
: deployment setup for the staging environment

prod
: deployment setup used in production

Each deployment setup is composed of infrastructure components and
the actual microservices. A utility script with the name run*.sh can
be found in the directory of each setup.
Top Level Components:
Nginx is used as a reverse proxy for each component.
- nginx/docker-compose.yml
- nginx/nginx.conf
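For orientation, a reverse-proxy entry in nginx/nginx.conf might look roughly like the following sketch. The upstream name, container host name, and ports are assumptions for illustration, not taken from the actual configuration:

```nginx
# Hypothetical sketch of a reverse-proxy entry; names and ports are assumed.
events {}

http {
    upstream gateway {
        server gateway-proxy:80;   # assumed container name and port
    }

    server {
        listen 80;

        location / {
            proxy_pass http://gateway;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```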
fab deploy logs -H <username>@<host>
(Fabric)

Jenkins is used for continuous integration.
- jenkins_ci/docker-compose.yml
- jenkins_ci/Dockerfile
prod
In general, the platform is split into two different kinds of components: (1) infrastructure components (directory infra) and (2) microservice components (directory services).
These components are part of the virtual network with the name nimbleinfraprod_default. More information can be found by executing docker network inspect nimbleinfraprod_default on the Docker host.
- prod/infra/docker-compose-marmotta.yml
- prod/keycloak/docker-compose-prod.yml
- prod/elk-prod/docker-compose-elk.yml
The definition can be found in prod/services/docker-compose-prod.yml, which consists of the following components:
- Config Server
- Service Discovery
- Gateway Proxy
- Hystrix Dashboard (not used at the moment)
Definition and configuration of the deployment can be found in prod/services/docker-compose-prod.yml, which defines the following services:
Configuration is done via environment variables, which are defined in prod/services/env_vars. Secrets are stored in prod/services/env_vars-prod (this file is adapted on the hosting machine).
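Such environment files are typically wired into the compose definition via the env_file key. The following is only a hypothetical sketch of what one service entry in prod/services/docker-compose-prod.yml might look like; the image name is an assumption:

```yaml
# Hypothetical excerpt; the image name is assumed for illustration.
services:
  identity-service:
    image: nimbleplatform/identity-service:latest
    env_file:
      - env_vars        # shared, non-secret configuration
      - env_vars-prod   # secrets, adapted on the hosting machine
```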
A small utility script, run-prod.sh, provides the following functionalities:
run-prod.sh infra
: starts all infrastructure components

run-prod.sh keycloak
: starts the Keycloak container

run-prod.sh marmotta
: starts the Marmotta container

run-prod.sh elk
: starts all ELK components

run-prod.sh services
: starts all services (note: make sure the infrastructure is set up appropriately)

run-prod.sh infra-logs
: prints logs of all infrastructure components to stdout

run-prod.sh services-logs
: prints logs of all services to stdout

run-prod.sh restart-single <serviceID>
: restarts a single service

staging
not yet active
This section provides detailed information on how to set up a local development deployment using Docker. Required files are located in the dev
directory.
cd dev
Recommended System Requirements (for Docker)
Minimum System Requirements (for Docker)
A utility script called run-dev.sh
provides the following main commands:
run-dev.sh infrastructure
: starts all microservice infrastructure components

run-dev.sh services
: starts all NIMBLE core services (note: make sure the infrastructure is running before)

run-dev.sh start
: starts infrastructure and services (not recommended for the first run)

run-dev.sh stop
: stops all services

It is recommended to start the infrastructure and the services in separate terminals for easier debugging.
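Scripts like this commonly dispatch their subcommands via a case statement. The following is only a minimal sketch of how run-dev.sh might be structured; for illustration it echoes the docker-compose command it would run instead of executing it, and the compose file paths are assumptions:

```shell
#!/bin/sh
# Hypothetical sketch of a run-dev.sh-style dispatcher. It only prints the
# command it would run (dry-run); compose file paths are assumed.
run_dev() {
    case "$1" in
        infrastructure)
            echo "docker-compose -f infra/docker-compose.yml up -d" ;;
        services)
            echo "docker-compose -f services/docker-compose.yml up -d" ;;
        start)
            run_dev infrastructure
            run_dev services ;;
        stop)
            echo "docker-compose -f services/docker-compose.yml down" ;;
        *)
            echo "usage: run-dev.sh {infrastructure|services|start|stop}" >&2
            return 1 ;;
    esac
}

run_dev infrastructure
```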
./run-dev.sh infrastructure
: log output will be shown in the terminal
Before continuing to start the services, check the infrastructure components as follows:

docker ps should show 9 new containers up and running:
```
$ docker ps --format 'table {{.Names}}\t{{.Ports}}'
NAMES                           PORTS
nimbleinfra_gateway-proxy_1     0.0.0.0:80->80/tcp
nimbleinfra_service-discovery_1 0.0.0.0:8761->8761/tcp
nimbleinfra_keycloak_1          0.0.0.0:8080->8080/tcp, 0.0.0.0:8443->8443/tcp
nimbleinfra_kafka_1             0.0.0.0:9092->9092/tcp
nimbleinfra_keycloak-db_1       5432/tcp
nimbleinfra_config-server_1     0.0.0.0:8888->8888/tcp
nimbleinfra_zookeeper_1         2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp
nimbleinfra_maildev_1           25/tcp, 0.0.0.0:8025->80/tcp
nimbleinfra_solr_1              0.0.0.0:8983->8983/tcp
```
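This check can be scripted. The sketch below verifies a few of the expected container names against a newline-separated name list, as produced by docker ps --format '{{.Names}}'; the helper function and the dry-run input are illustrative, not part of the repository:

```shell
#!/bin/sh
# Sketch: report any expected infrastructure container that is not in the
# given list of running container names (one name per line).
check_containers() {
    missing=0
    for name in nimbleinfra_gateway-proxy_1 nimbleinfra_service-discovery_1 \
                nimbleinfra_keycloak_1 nimbleinfra_config-server_1; do
        if ! printf '%s\n' "$1" | grep -qx "$name"; then
            echo "missing: $name"
            missing=1
        fi
    done
    return $missing
}

# Example input, using a partial list of names from the output above.
# On a real host this would be: running=$(docker ps --format '{{.Names}}')
running="nimbleinfra_gateway-proxy_1
nimbleinfra_service-discovery_1
nimbleinfra_keycloak_1
nimbleinfra_config-server_1"
check_containers "$running" && echo "all expected containers are running"
```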
In case of port binding errors, the default port mappings shown above can be adapted to the local system in infra/docker-compose.yml.
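When remapping, only the host-side (left-hand) port of a mapping needs to change; the container port stays the same. A hypothetical excerpt, assuming host port 8080 is already taken:

```yaml
# Hypothetical excerpt from infra/docker-compose.yml
services:
  keycloak:
    ports:
      - "18080:8080"   # was "8080:8080"; only the host port changed
```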
The infrastructure services can be tested with the following HTTP requests:

- nimbleinfra_config-server_1
- nimbleinfra_service-discovery_1 (only "gateway-proxy" is listed in the beginning)
- nimbleinfra_gateway-proxy_1
- nimbleinfra_keycloak_1 (login with user admin and password password)
./run-dev.sh services
: log output will be shown in the terminal
docker ps should show an additional 16 containers up and running:
```
$ docker ps --format 'table {{.Names}}\t{{.Ports}}'
NAMES                                        PORTS
nimbleservices_business-process-service_1    0.0.0.0:8085->8085/tcp
nimbleservices_catalog-service-srdc_1        0.0.0.0:10095->8095/tcp
nimbleservices_identity-service_1            0.0.0.0:9096->9096/tcp
nimbleservices_trust-service_1               9096/tcp, 0.0.0.0:9098->9098/tcp
nimbleservices_marmotta_1                    0.0.0.0:8082->8080/tcp
nimbleservices_ubl-db_1                      0.0.0.0:5436->5432/tcp
nimbleservices_camunda-db_1                  0.0.0.0:5435->5432/tcp
nimbleservices_identity-service-db_1         0.0.0.0:5433->5432/tcp
nimbleservices_frontend-service_1            0.0.0.0:8081->8080/tcp
nimbleservices_business-process-service-db_1 0.0.0.0:5434->5432/tcp
nimbleservices_trust-service-db_1            5432/tcp
nimbleservices_frontend-service-sidecar_1    0.0.0.0:9097->9097/tcp
nimbleservices_marmotta-db_1                 0.0.0.0:5437->5432/tcp
nimbleservices_category-db_1                 5432/tcp
nimbleservices_sync-db_1                     5432/tcp
nimbleservices_binary-content-db_1           0.0.0.0:5438->5432/tcp
nimbleservices_indexing-service_1            0.0.0.0:9101->8080/tcp
...
```
Port mappings can be adapted in services/docker-compose.yml
.
Once the services are up, they should show up in the Eureka Service Discovery. Depending on the available resources, this may take a while.
Once they are all up, the services can be tested via the NIMBLE frontend at: