Project author: tiadobatima

Project description: Cloud Infrastructure Builder
Language: Python
Repository: git://github.com/tiadobatima/gpwm.git
Created: 2017-11-08T22:20:23Z
Community: https://github.com/tiadobatima/gpwm

License: Other


GPWM Project

What’s GPWM?

GPWM are the initials of Gwynedd Purves Wynn-Aubrey Meredith.

For the few who don’t yet know, Major GPW Meredith, of the Seventh Heavy
Battery of the Royal Australian Artillery was the veteran commander of the
Australians against the Emus in the bloody Emu War
of 1932.

Here we honor his courage, sacrifice, and life-story by aptly naming an
infrastructure-as-code DSL wrapper tool after his legacy.

The great GPW Meredith, Australian Hero

Evil Emu - Enemy Of The State

Major GPW Meredith… Father, patriot and true hero. Lest we forget!

Infrastructure as Code

The idea behind this is to allow for small, reusable, independent, and
readable infrastructure building blocks that different teams
(networking/security/application) can own without affecting others, and to
allow microservices in different cloud provider accounts and environments to
be created just by modifying a set of values given to a template. Three main
components make up the system:

  • Consumables: A YAML-like template rendered through Python’s Mako engine,
    resulting in a CloudFormation template.
  • Stacks (input values): A YAML file representing the values that will be fed
    to a template (similar to the --cli-input-json option in the AWS CLI).
  • Script: Interpolates the input values of a stack with a consumable and
    executes an action with the resulting rendered stack: create, delete,
    render, etc.
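To illustrate the interpolation step, here is a minimal stdlib sketch. It is
not the tool’s actual code: the stack values and the template fragment are
made up, and Python’s string.Template only mimics the simple ${var} case that
Mako and Jinja handle far more generally.

```python
from string import Template

# Hypothetical stack values, as they might appear in a stack file
stack_values = {"team": "demo", "environment": "dev", "cidr": "10.0.0.0/16"}

# A tiny consumable fragment; Mako's ${var} syntax happens to look like
# string.Template's, so this shows only the substitution step
consumable = Template(
    "Description: VPC stack for ${team}-${environment}\n"
    "CidrBlock: ${cidr}\n"
)

# The script's job, in miniature: interpolate values into the template
rendered = consumable.substitute(stack_values)
print(rendered)
```

The real engine also supports loops, inline code, and includes; this sketch
only shows why the stack file and the consumable are kept separate.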

Cloudformation, GCP Deployment Manager, Azure Resource Manager, and pretty much
any infrastructure DSL out there have a few small deficiencies that make it
somewhat hard to build reusable, concise, and easy-to-read templates, for
example:

  • No “for loops”.
  • No high-level data structures like dictionaries or objects as variables.
  • No ability to run custom code locally.
  • Exports/Imports for linking stacks impose a hard dependency between stacks.

To address these shortcomings, the tool first runs a higher-level templating
engine (Mako or Jinja) to provide richer, more featureful text processing on
top of the provider’s DSL, before compiling into the provider’s native
DSL and sending the resulting stack to the cloud provider.
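To make the “for loops” point concrete, the pure-Python sketch below (stdlib
only, with illustrative variable names) shows the expansion a template loop
performs at render time: one subnet resource block emitted per availability
zone, something the provider DSLs above cannot express natively.

```python
# Illustrative input, mirroring the nat_availability_zones parameter
# used in the AWS example later in this document
nat_availability_zones = [
    {"name": "a", "cidr": "10.0.0.0/28"},
    {"name": "b", "cidr": "10.0.0.16/28"},
]

# What a template "% for" loop effectively does: emit one YAML resource
# block per list element, before the provider ever sees the file
blocks = []
for az in nat_availability_zones:
    blocks.append(
        f'SubnetAZ{az["name"]}:\n'
        f'  Type: "AWS::EC2::Subnet"\n'
        f'  Properties:\n'
        f'    CidrBlock: {az["cidr"]}\n'
    )
rendered = "\n".join(blocks)
print(rendered)
```

Adding a third availability zone then means appending one dictionary to the
list, not copy-pasting another resource block.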

This is not an abstraction layer like Terraform. The resulting template is a
native AWS Cloudformation stack, a GCP deployment, or an ARM deployment.

Stack Types

For documentation specific to the provider of choice:

Examples

Usage

  pip install gpwm

  # Configure authentication with AWS (assuming a CLI profile already exists)
  export AWS_DEFAULT_PROFILE=some-profile
  export AWS_DEFAULT_REGION=us-west-2

  # Getting help
  python3 gpwm.py --help

  # Specify a build ID
  export BUILD_ID="SOME_BUILD_ID"  # for example "$(date -u +'%F-%H-%M-%S')-$BITBUCKET_COMMIT"

  # Prints a rendered stack on the screen
  python3 gpwm.py render aws/stacks/vpc-training-dev.mako
  python3 gpwm.py render google/deployments/instance.mako

  # Creates the stack/deployment with the cloud provider
  python3 gpwm.py create aws/stacks/vpc-training-dev.mako
  python3 gpwm.py create google/deployments/instance.mako

  # Deletes the stack/deployment from the cloud provider
  python3 gpwm.py delete aws/stacks/vpc-training-dev.mako
  python3 gpwm.py delete google/deployments/instance.mako

  # Updates an existing stack/deployment in the cloud provider
  python3 gpwm.py update aws/stacks/vpc-training-dev.mako
  python3 gpwm.py update google/deployments/instance.mako

  # Updates a stack with review (change set) - AWS only
  python3 gpwm.py update aws/stacks/vpc-training-dev.mako -r

  # The template path/url specified in the stack/deployment file
  # will be prepended with GPWM_TEMPLATE_URL_PREFIX (if set).
  # This can be used to enforce the use of company-certified templates,
  # e.g. when used with a deployment pipeline tool such as Jenkins
  export GPWM_TEMPLATE_URL_PREFIX=s3://my-s3-bucket/subfolder
  python3 gpwm.py create aws/stacks/vpc-training-dev.mako

  # Stack files can be fed via stdin (the -t option must be used).
  # Very handy when another tool is creating the stack file on the fly
  cat my-stack.txt | python3 gpwm.py create -t jinja -
  some-script.sh | python3 gpwm.py create -t jinja -

AWS

Stack

  <%
  stack_type = "vpc"
  team = "demo"
  environment = "dev"
  %>
  StackName: ${stack_type}-${team}-${environment}
  TemplateBody: examples/consumables/network/vpc.mako
  Parameters:
    team: ${team}
    environment: ${environment}
    cidr: 10.0.0.0/16
    nat_availability_zones:
      - {"name": "a", "cidr": "10.0.0.0/28"}
      - {"name": "b", "cidr": "10.0.0.16/28"}
  Tags:
    type: ${stack_type}
    team: ${team}
    environment: ${environment}

Template

  ##
  ## Owner: networking
  ##
  ## Dependencies: None
  ##
  ## Parameters:
  ## - team (required): The team owning the stack
  ## - environment (required): The environment the stack is running in (dev, prod, etc)
  ## - cidr (required): The CIDR for the VPC
  ## - nat_availability_zones (required): A list of dictionaries representing the CIDR and
  ##   availability zone for the default NAT gateways:
  ##   - name (required): Availability zone name ("a", "b", "c"...)
  ##   - cidr (required): The CIDR for the NAT gateway's subnet
  ##
  AWSTemplateFormatVersion: "2010-09-09"
  Description: VPC stack for ${team}-${environment}
  Resources:
    VPC:
      Type: "AWS::EC2::VPC"
      Metadata:
        Name: ${team}-${environment}
      Properties:
        CidrBlock: ${cidr}
        EnableDnsSupport: true
        EnableDnsHostnames: true
        InstanceTenancy: default
        Tags:
          - {Key: team, Value: ${team}}
          - {Key: version, Value: ${environment}}
          - {Key: Name, Value: ${team}-${environment}}
    InternetGateway:
      Type: "AWS::EC2::InternetGateway"
      Properties:
        Tags:
          - {Key: team, Value: ${team}}
          - {Key: type, Value: ${environment}}
          - {Key: Name, Value: ${team}-${environment}}
    VPCGatewayAttachment:
      Type: "AWS::EC2::VPCGatewayAttachment"
      Properties:
        VpcId: {Ref: VPC}
        InternetGatewayId: {Ref: InternetGateway}
    RouteTablePublic:
      Type: "AWS::EC2::RouteTable"
      Properties:
        VpcId: {Ref: VPC}
        Tags:
          - {Key: team, Value: ${team}}
          - {Key: type, Value: ${environment}}
          - {Key: Name, Value: ${team}-${environment}-public}
    Route:
      Type: "AWS::EC2::Route"
      DependsOn: VPCGatewayAttachment
      Properties:
        RouteTableId: {Ref: RouteTablePublic}
        DestinationCidrBlock: 0.0.0.0/0
        GatewayId: {Ref: InternetGateway}
  % for az in nat_availability_zones:
    SubnetAZ${az["name"]}:
      Type: "AWS::EC2::Subnet"
      Properties:
        VpcId: {Ref: VPC}
        CidrBlock: ${az["cidr"]}
        AvailabilityZone: {"Fn::Sub": "<%text>$</%text>{AWS::Region}${az["name"]}"}
        MapPublicIpOnLaunch: false
        Tags:
          - {Key: team, Value: ${team}}
          - {Key: type, Value: ${environment}}
          - {Key: Name, Value: ${team}-${environment}-nat-${az["name"]}-public}
    RouteTableAssociationAZ${az["name"]}:
      Type: "AWS::EC2::SubnetRouteTableAssociation"
      Properties:
        SubnetId: {Ref: SubnetAZ${az["name"]}}
        RouteTableId: {Ref: RouteTablePublic}
    RouteTablePrivateAZ${az["name"]}:
      Type: "AWS::EC2::RouteTable"
      Properties:
        VpcId: {Ref: VPC}
        Tags:
          - {Key: team, Value: ${team}}
          - {Key: environment, Value: nat}
          - {Key: Name, Value: ${team}-${environment}-private-${az["name"]}}
    EIPNATAZ${az["name"]}:
      Type: "AWS::EC2::EIP"
      Properties:
        Domain: vpc
    NatGatewayAZ${az["name"]}:
      Type: "AWS::EC2::NatGateway"
      Properties:
        AllocationId: {"Fn::GetAtt": [EIPNATAZ${az["name"]}, AllocationId]}
        SubnetId: {Ref: SubnetAZ${az["name"]}}
    RoutePrivateAZ${az["name"]}:
      Type: "AWS::EC2::Route"
      Properties:
        RouteTableId: {Ref: RouteTablePrivateAZ${az["name"]}}
        DestinationCidrBlock: 0.0.0.0/0
        NatGatewayId: {Ref: NatGatewayAZ${az["name"]}}
  % endfor

Why should I use this tool?

Amazon popularized the concept of “infrastructure as code” by providing a
declarative, standardized way for their users to describe what their
infrastructure should look like. Now, most reputable cloud providers offer
their own versions of templated, declarative resource managers.

In this context, stacks or deployments are text files that are processed
through a templating engine of choice and must result in a YAML file after
processing.
As of now, Mako and Jinja are supported, with raw JSON and YAML on the
roadmap; though if using the latter two, there’s really no reason to use this
tool: just use the provider’s own CLI/SDK.

These stacks are meant to represent resources in the cloud provider of choice,
either:

  • Via the cloud provider’s own declarative language, which describes the
    final state of the infrastructure
  • Via commands or API calls used to get to that final state, i.e. the
    procedural approach. The procedural approach should be used only when
    there’s really no other way of managing resources declaratively

At this point, these types of stacks/deployments are supported:

Regardless of the provider, the design principles for this tool are:

  • Never abstract or dumb down the cloud provider’s native resource manager
    DSL; only enhance it
  • Use and operation of the tool should be easy for mere mortals
  • Never race the provider for features. We are going to lose
  • Simplicity, flexibility, and reusability: allow for small, focused
    stack/deployment building blocks that can be reused and loosely connected
    with other deployments, without having to deploy a massive, tightly coupled
    group of resources
  • Never mix concepts and constructs from different cloud providers, e.g. an
    AWS VPC is not the same as an Azure vnet. Related to the abstraction point
    above: for maximum flexibility and efficiency, we want to be able to tune
    every knob of a resource, and this can only be done if we treat every
    resource natively
  • Never assume that we know better than the cloud provider how their
    infrastructure should be managed
  • The code should avoid the more complex and obscure Python idioms, so people
    can understand and change the code more easily.

With the guidelines above, the tool always attempts to preserve the cloud
provider’s look-and-feel in the syntax and constructs of the stack/deployment
files: an AWS Cloudformation stack looks like Cloudformation; an ARM
deployment file looks like ARM, etc. To illustrate: in AWS, the variable and
section names used in a CFN stack are UpperCamelCase; in Azure they are
lowerCamelCase; and GCP uses snake_case (Python-like).
This tool tries hard to keep that spirit, so as not to throw off users already
familiar with a particular cloud provider’s DSL.

Why a higher level template engine? And why Mako as default?

A text templating engine extends the functionality of a CFN template or any
other text file by allowing “for loops”, the use of more complex data types like
dictionaries and objects, and overall better readability by not having to deal
with CFN’s hard-to-read intrinsic functions. On top of that, Mako allows for
inline python code right inside the template.
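As an example of the kind of inline code a Mako template can run at render
time, the stdlib sketch below derives per-AZ NAT subnet CIDRs from the VPC
CIDR instead of hard-coding them. The variable names are illustrative, not
part of the tool; inside a real template the same logic would sit in a `<% %>`
block.

```python
import ipaddress

# Derive the first two /28 subnets from the VPC CIDR; this is the kind
# of computation an inline Python block in a Mako template could do
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc_cidr.subnets(new_prefix=28))[:2]

# Build the same shape as the nat_availability_zones parameter used
# in the AWS example earlier in this document
nat_availability_zones = [
    {"name": name, "cidr": str(net)}
    for name, net in zip(("a", "b"), subnets)
]
print(nat_availability_zones)
```

Provider DSLs have no equivalent of this: the arithmetic happens locally,
before the rendered template is sent to the cloud.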

Mako is the default because it’s very easy to embed simple blocks of Python
code inside the template, making it a very powerful tool. To simplify the
lives of folks familiar with Ansible, Saltstack, and others, Jinja is also
supported, though be warned that it’s just not as flexible as Mako.

Development

Follow the guidelines on the development page.

Contacts

  • Gustavo Baratto: gbaratto AT gmail