Project author: pwasiewi

Project description:
Packer, Vagrant for Proxmox 5 beta
Primary language: Shell
Repository: git://github.com/pwasiewi/packer-proxmoxbeta.git
Created: 2017-04-26T06:25:36Z
Project community: https://github.com/pwasiewi/packer-proxmoxbeta

License: Other



packer-proxmoxbeta

CircleCI

Packer template to build Proxmox server images.

Vagrant images are available from the 42n4 account on Vagrant Cloud.
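If you only want to use the prebuilt box rather than build your own, you can fetch it directly from Vagrant Cloud. A minimal sketch, assuming the box is published under the 42n4 account as 42n4/proxmoxbeta (check the account page for the exact name):

  vagrant box add 42n4/proxmoxbeta   # download the prebuilt box (name assumed)
  vagrant box list                   # confirm it is available locally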

Building Images

To build images, simply run:

  1. git clone https://github.com/pwasiewi/packer-proxmoxbeta
  2. cd packer-proxmoxbeta
  3. export VAGRANT_CLOUD_TOKEN=<your token from https://app.vagrantup.com/settings/security>
  4. packer build -only=virtualbox-iso template.json

If you want to build only the VirtualBox, VMware, or QEMU image, use one of the following (a quick template check is sketched after the list); currently only the VirtualBox build works with Ceph.

  1. packer build -only=virtualbox-iso template.json
  2. packer build -only=vmware-iso template.json
  3. packer build -only=qemu template.json
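Before running any of these builds, it can help to sanity-check the template and see which builders it defines. A quick sketch using standard Packer subcommands:

  packer validate template.json   # syntax and configuration check
  packer inspect template.json    # list the variables, builders and provisioners in the template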

Setting up the Proxmox cluster (START FOR BEGINNERS!)

Next (or START HERE and use my ready-made image 42n4/proxmoxbeta instead of building your own),
run the following in a new directory to bring up a 3-server cluster:

  1. #vagrant destroy -f #remove ALL previous instances
  2. #vagrant box update #update this box in order to have my newest image
  3. mkdir vProxmox && cd vProxmox
  4. wget https://raw.githubusercontent.com/pwasiewi/packer-proxmoxbeta/master/Vagrantfile.3hosts -O Vagrantfile
  5. sed -i 's/192.168.0/192.168.<your local net number>/g' Vagrantfile
  6. sed -i 's/enp0s31f6/eth0/g' Vagrantfile # replace 'enp0s31f6' with your host bridge interface name (here: eth0)
  7. #on MS Windows, VBoxManage.exe list bridgedifs prints the bridge names, e.g.
  8. #:bridge => "Intel(R) Ethernet Connection (2) I219-V",
  9. vagrant up
  10. vagrant ssh server1
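Once vagrant up finishes, you can confirm from the host that all three machines are running before touching the cluster. A small sketch using standard Vagrant commands:

  vagrant status                             # server1, server2 and server3 should be "running"
  for i in server1 server2 server3; do
    vagrant ssh $i -c "hostname && uptime"   # quick reachability check for each node
  done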

On MS Windows (see https://www.sitepoint.com/getting-started-vagrant-windows/), use PuTTY after converting the Vagrant OpenSSH key to a PuTTY key with puttygen.

Screen

Log in to the server1 root account

  1. sudo su -

and execute:

  1. va_hosts4ssh server #password: packer
  2. pvecm create kluster
  3. sleep 5
  4. for i in server2 server3; do ssh $i "pvecm add server1"; done
  5. for i in server3 server2; do ssh $i "reboot"; done
  6. reboot
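Once all three nodes have rebooted and you are back in a root shell on server1 (next steps), you can confirm the cluster formed correctly with standard pvecm subcommands:

  pvecm status    # quorum information; three nodes expected
  pvecm nodes     # server1, server2 and server3 should all be listed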

vagrant ssh server1

Log in again:

  1. sudo su -
  2. ae "apt-get update"
  3. ae "apt-get install -y ceph"
  4. pveceph init --network 192.168.<YOUR_NET>.0/24 #CHANGE TO YOUR NET
  5. for i in server1 server2 server3; do ssh $i "pveceph createmon"; done
  6. for i in server1 server2 server3; do ssh $i "ceph-disk zap /dev/sdb" && ssh $i "pveceph createosd /dev/sdb" && ssh $i "partprobe /dev/sdb1"; done
  7. cd /etc/pve/priv/
  8. mkdir ceph
  9. cp /etc/ceph/ceph.client.admin.keyring ceph/rbd.keyring
  10. ceph -s #ceph should be online
  11. ceph osd lspools #look at the pools!
  12. ceph osd pool create rbd 128 #create pool if not present
  13. ceph osd pool set rbd size 2 #replica number
  14. ceph osd pool set rbd min_size 1 #min replica number after e.g. server failure
  15. ceph osd pool application enable rbd rbd
  16. rbd pool init rbd
  17. #Proxmox GUI in a host browser: https://192.168.<YOUR_NET>.71:8006
  18. #in the GUI add RBD storage named ceph4vm with monitor hosts: 192.168.<YOUR_NET>.71 192.168.<YOUR_NET>.72 192.168.<YOUR_NET>.73 #CHANGE TO YOUR NET (a CLI sketch follows this list)
  19. #it should be added automatically
  20. #cp /etc/ceph/ceph.client.admin.keyring ceph/ceph4vm.keyring
  21. #net configs corrected: vmbr0 moved to the second NIC
  22. #the first NIC is dedicated to Vagrant's internal NAT communication
  23. cd
  24. ae "rm -f ~/interfaces && cp /usr/local/bin/va_interfaces ~/interfaces"
  25. for i in server1 server2 server3; do ssh $i "sed -i 's/192.168.2.71/'`grep $i /etc/hosts | awk '{ print $1}'`'/g' ~/interfaces && cat ~/interfaces"; done && \
  26. ae "rm -f /etc/network/interfaces && cp ~/interfaces /etc/network/interfaces" && \
  27. ae "cat /etc/network/interfaces"
  28. for i in server3 server2; do ssh $i "reboot"; done && reboot
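As an alternative to the GUI storage step above (item 18), the storage can also be added from a root shell on server1. A hedged sketch using pvesm; the storage name, pool and monitor addresses match the steps above, but adjust <YOUR_NET> and verify the option names against your Proxmox VE version:

  pvesm add rbd ceph4vm \
    --monhost "192.168.<YOUR_NET>.71 192.168.<YOUR_NET>.72 192.168.<YOUR_NET>.73" \
    --pool rbd --content images   # keyring is expected at /etc/pve/priv/ceph/ceph4vm.keyring (item 20)
  pvesm status                    # ceph4vm should show up as an active storage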

vagrant ssh server1

After the reboot, check that all servers and their Ceph OSDs are up.
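A quick way to do that from a root shell on server1, using standard Proxmox and Ceph commands:

  pvecm status      # the cluster should be quorate with three nodes
  ceph -s           # overall Ceph health
  ceph osd tree     # every OSD should be reported as "up"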

Exit with vagrant halt, but be aware that vagrant halt -f can lose your files.

Next time: vagrant up && vagrant ssh server1

  1. #after vagrant up, again correct the net configs (removing vagrant public network settings)
  2. sudo su -
  3. ae "rm -f ~/interfaces && cp /usr/local/bin/va_interfaces ~/interfaces"
  4. for i in server1 server2 server3; do ssh $i "sed -i 's/192.168.2.71/'`grep $i /etc/hosts | awk '{ print $1}'`'/g' ~/interfaces && cat ~/interfaces"; done && \
  5. ae "rm -f /etc/network/interfaces && cp ~/interfaces /etc/network/interfaces" && \
  6. ae "cat /etc/network/interfaces"
  7. for i in server3 server2; do ssh $i "reboot"; done && reboot

After the reboot, check again that all servers and their Ceph OSDs are up. Reset them until they all are.

Release setup

Vagrant images are released to Vagrant Cloud by CircleCI.
The setup instructions are as follows:

  1. Sign up
  2. Get API token
  3. Create a new build configuration at Vagrant Cloud and generate a token.
  4. Create project at Circle CI
  5. Add Vagrant environment variables to the Circle CI project (a verification sketch follows these steps):

    1. $ VAGRANT_CLOUD_TOKEN={{ your vagrant api token here }}
    2. $ CIRCLE_USERNAME={{ your circle ci username here }}
    3. $ CIRCLE_PROJECT={{ your circle ci project here }}
    4. $ CIRCLE_TOKEN={{ your circle ci token here }}
    5. $ CIRCLE_ENVVARENDPOINT="https://circleci.com/api/v1/project/$CIRCLE_USERNAME/$CIRCLE_PROJECT/envvar?circle-token=$CIRCLE_TOKEN"
    6. $ json="{\"name\":\"VAGRANT_CLOUD_TOKEN\",\"value\":\"$VAGRANT_CLOUD_TOKEN\"}"
    7. $ curl -X POST -H "Content-Type: application/json" -H "Accept: application/json" -d "$json" "$CIRCLE_ENVVARENDPOINT"
  6. Edit circle.yml
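To confirm the variable actually reached CircleCI, you can query the same endpoint. A sketch that assumes the v1 envvar endpoint above also accepts GET requests to list a project's variables:

  curl -H "Accept: application/json" "$CIRCLE_ENVVARENDPOINT"   # the response should include VAGRANT_CLOUD_TOKEN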

License

CC0

Dedicated to the public domain; no rights reserved.