Project author: chgeuer

Project description:
The walkthrough describes how to distribute locally (on-prem) created VM images to one or more Azure subscriptions and data centers.
Language: Shell
Repository: git://github.com/chgeuer/azure-opensuse-packer-distribution.git


Distribute images

This walkthrough describes how to distribute locally (on-prem) created VM images to one or more Azure subscriptions and data centers.

  • The production VMs will use ‘managed disks’, therefore we need ‘managed images’ in the target subscriptions and data centers.
  • For this sample, I locally create VM images (VHD files) using HashiCorp Packer against Hyper-V. The local OS to be installed is openSUSE.
  • Some scripts below are designed to be executed on a Windows host, particularly
    • Running packer build against Hyper-V
    • Running the Convert-VHD cmdlet, to convert the dynamic-size .vhdx file into a fixed-size .vhd file.
  • All other commands can be executed on Windows, Windows Subsystem for Linux, Linux, or macOS, as long as the az command-line utility is installed.
    • Nevertheless, all the variable escaping, string interpolation, etc. below assumes execution in a bash shell.

Overall architecture

  1. Upload the image to a main storage account in a management subscription
  2. Copy the image to the desired data center and subscription. For example, if customer 1 needs the image in datacenter A and B, and customer 2 needs the image in datacenter A, there would be three transfers out of the management storage.
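The fan-out in step 2 can be sketched as a loop over (customer, datacenter) pairs. The names below are purely illustrative, and the echo line stands in for the actual copy call:

```shell
# Illustrative fan-out: each (customer, datacenter) pair needs its own
# transfer out of the management storage account. All names are hypothetical.
transfers=("customer1:datacenterA" "customer1:datacenterB" "customer2:datacenterA")
for t in "${transfers[@]}"; do
  customer="${t%%:*}"     # part before the colon
  datacenter="${t##*:}"   # part after the colon
  # In the real script, this would be an 'az storage blob copy start'
  # into the customer's storage account in that datacenter.
  echo "transfer image -> ${customer} in ${datacenter}"
done
```

With two customers and two datacenters as described above, this yields the three transfers out of the management storage.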

Image flow

Alternative distribution

If there are many datacenters and customers, an alternative approach would be to distribute the image within the management subscription into management storage accounts in each datacenter, and then ‘locally’ copy it over to all customers.
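A hedged sketch of this two-stage scheme, with hypothetical datacenter and customer names; again the echo lines stand in for the actual copy calls:

```shell
# Stage 1: replicate once per datacenter (cross-datacenter egress paid once).
# Stage 2: copy locally to each customer (intra-datacenter, no egress).
# All names below are hypothetical.
datacenters=("datacenterA" "datacenterB")
declare -A customersIn=([datacenterA]="customer1 customer2" [datacenterB]="customer1")
for dc in "${datacenters[@]}"; do
  echo "stage 1: management -> management storage in ${dc}"
  for customer in ${customersIn[$dc]}; do
    echo "stage 2: ${dc} management storage -> ${customer}"
  done
done
```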

Advantages

  • Faster provisioning times for new customers (because the images are already in the right datacenter)
  • Egress is paid only once per image and datacenter, instead of once per image, datacenter, and customer.

Disadvantage

  • Higher complexity in infrastructure and copy scripts

Image flow 2

Build a VHD locally

In order to have a VHD I can distribute, I used packer on Hyper-V:

Download ISO image

  export openSuseVersion=42.3
  export imageLocation="https://download.opensuse.org/distribution/leap/${openSuseVersion}/iso/openSUSE-Leap-${openSuseVersion}-DVD-x86_64.iso"
  curl --get --location --output "openSUSE-Leap-${openSuseVersion}-DVD-x86_64.iso" --url "${imageLocation}"

Install packer, if needed

  go get github.com/mitchellh/packer

Use packer to install and build openSUSE in a local Hyper-V installation

TODO: Right now, the image is not yet fully prepared according to the Azure guide "Prepare a SLES or openSUSE virtual machine for Azure". Some initial steps are executed in scripts/setup_azure.sh.

  REM turn on packer logging
  set PACKER_LOG=1
  REM run packer against local Hyper-V
  packer build packer-hyper-v.json
  • The installation is configured through the http/autoinst.xml file. The file structure is defined in the AutoYaST documentation.
  • The packer build run creates a .vhdx file in output-hyperv-iso\Virtual Hard Disks\packer-hyperv-iso.vhdx.

Convert .vhdx to fixed-size .vhd

  Convert-VHD `
      -Path "output-hyperv-iso\Virtual Hard Disks\packer-hyperv-iso.vhdx" `
      -DestinationPath "output-hyperv-iso\Virtual Hard Disks\packer-hyperv-iso.vhd" `
      -VHDType Fixed

Upload and distribute the existing VHD from our build server to Azure

Variables we need

  export managementSubscriptionId="724467b5-bee4-484b-bf13-d6a5505d2b51"
  export demoPrefix="hecdemo"
  export managementResourceGroup="${demoPrefix}management"
  export imageIngestDataCenter="westeurope"
  export imageIngestStorageAccountName="${demoPrefix}imageingest"
  export imageIngestStorageContainerName="imagedistribution"
  export imageLocalFile="output-hyperv-iso/Virtual Hard Disks/packer-hyperv-iso.vhd"
  export imageBlobName="2017-12-06-opensuse-image.vhd"
  export productionSubscriptionId="706df49f-998b-40ec-aed3-7f0ce9c67759"
  export productionDataCenter="northeurope"
  export productionImageResourceGroup="${demoPrefix}production"
  export productionImageIngestStorageAccountName="${demoPrefix}prodimages"

Select the management subscription

  az account set \
      --subscription "${managementSubscriptionId}"

Create the management resource group

  az group create \
      --name "${managementResourceGroup}" \
      --location "${imageIngestDataCenter}"

Create the storage account where images are uploaded to

  az storage account create \
      --name "${imageIngestStorageAccountName}" \
      --resource-group "${managementResourceGroup}" \
      --location "${imageIngestDataCenter}" \
      --https-only true \
      --kind Storage \
      --sku Standard_RAGRS

Fetch storage account key

  export imageIngestStorageAccountKey=$(az storage account keys list \
      --resource-group "${managementResourceGroup}" \
      --account-name "${imageIngestStorageAccountName}" \
      --query "[?contains(keyName,'key1')].[value]" \
      -o tsv)

Create the storage container where images are uploaded

  az storage container create \
      --account-name "${imageIngestStorageAccountName}" \
      --account-key "${imageIngestStorageAccountKey}" \
      --name "${imageIngestStorageContainerName}" \
      --public-access off

Upload the image to the distribution point

  az storage blob upload \
      --type page \
      --account-name "${imageIngestStorageAccountName}" \
      --account-key "${imageIngestStorageAccountKey}" \
      --container-name "${imageIngestStorageContainerName}" \
      --file "${imageLocalFile}" \
      --name "${imageBlobName}"

Select the production subscription

  az account set \
      --subscription "${productionSubscriptionId}"

Create the production image resource group

  az group create \
      --name "${productionImageResourceGroup}" \
      --location "${productionDataCenter}"

Create the production image storage account where images are copied to

  az storage account create \
      --name "${productionImageIngestStorageAccountName}" \
      --resource-group "${productionImageResourceGroup}" \
      --location "${productionDataCenter}" \
      --https-only true \
      --kind Storage \
      --sku Premium_LRS

Fetch storage account key for the production storage account

  export productionImageIngestStorageAccountKey=$(az storage account keys list \
      --resource-group "${productionImageResourceGroup}" \
      --account-name "${productionImageIngestStorageAccountName}" \
      --query "[?contains(keyName,'key1')].[value]" \
      -o tsv)

Create the storage container where images are copied to

  az storage container create \
      --account-name "${productionImageIngestStorageAccountName}" \
      --account-key "${productionImageIngestStorageAccountKey}" \
      --name "${imageIngestStorageContainerName}" \
      --public-access off

Trigger the copy operation

  az storage blob copy start \
      --source-account-name "${imageIngestStorageAccountName}" \
      --source-account-key "${imageIngestStorageAccountKey}" \
      --source-container "${imageIngestStorageContainerName}" \
      --source-blob "${imageBlobName}" \
      --account-name "${productionImageIngestStorageAccountName}" \
      --account-key "${productionImageIngestStorageAccountKey}" \
      --destination-container "${imageIngestStorageContainerName}" \
      --destination-blob "${imageBlobName}"

Track the copy operation’s status

Once the destination storage account has received the call to start the copy operation, it pulls the data from the source storage account. Calling az storage blob show retrieves the destination blob’s properties, amongst which you find the copy.status and copy.progress values. A "status": "pending" lets you know it is not yet finished. A "progress": "3370123264/4294967808" tells you how many bytes of the total have already been transferred.

  statusJson=$(az storage blob show \
      --account-name "${productionImageIngestStorageAccountName}" \
      --account-key "${productionImageIngestStorageAccountKey}" \
      --container-name "${imageIngestStorageContainerName}" \
      --name "${imageBlobName}")
  echo "${statusJson}" | jq ".properties.copy.status"
  echo "${statusJson}" | jq ".properties.copy.progress"
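To block until the copy completes, one could poll copy.status in a loop. This is only a sketch: it assumes the variables from the previous steps are still set, and it wraps the az call in a small helper, using --query with tsv output so jq is not needed here:

```shell
# Sketch: poll the destination blob until the server-side copy leaves
# the 'pending' state. Uses the same variables as the previous steps.
get_copy_status() {
  az storage blob show \
    --account-name "${productionImageIngestStorageAccountName}" \
    --account-key "${productionImageIngestStorageAccountKey}" \
    --container-name "${imageIngestStorageContainerName}" \
    --name "${imageBlobName}" \
    --query "properties.copy.status" \
    -o tsv
}

until [ "$(get_copy_status)" != "pending" ]; do
  sleep 30   # poll interval; adjust as needed
done
```

After the loop exits, check the final status once more; "success" means the blob is ready for image creation.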

Create a managed image in the production subscription

Before creating an image, wait until the copy operation has finished successfully.

  export productionImageIngestUrl=$(az storage blob url \
      --protocol "https" \
      --account-name "${productionImageIngestStorageAccountName}" \
      --account-key "${productionImageIngestStorageAccountKey}" \
      --container-name "${imageIngestStorageContainerName}" \
      --name "${imageBlobName}" \
      -o tsv)
  az image create \
      --name "${imageBlobName}" \
      --resource-group "${productionImageResourceGroup}" \
      --location "${productionDataCenter}" \
      --source "${productionImageIngestUrl}" \
      --os-type Linux
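Once the managed image exists, a production VM could be created from it. The VM name and admin user below are hypothetical; the command is staged in an array here rather than executed, so it can be reviewed and then run explicitly:

```shell
# Hypothetical follow-up: boot a production VM from the managed image.
# The vm01 name and azureuser account are illustrative assumptions.
vmCreateCmd=(az vm create
  --resource-group "${productionImageResourceGroup}"
  --name "${demoPrefix}vm01"
  --location "${productionDataCenter}"
  --image "${imageBlobName}"
  --admin-username azureuser
  --generate-ssh-keys)

# Run it once the image exists:
# "${vmCreateCmd[@]}"
```

Because the VM uses the managed image, it gets a managed OS disk, which is the reason the images were distributed as managed images in the first place.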