Often you may require a unique custom build/release agent with a specific set of tools.
A good example is a dynamic Ansible agent that manages post-deployment configuration, which helps minimise configuration drift. This part of a release is also not time-critical, so we can afford to spend a little time downloading a Docker image if it is not already cached.
This article demonstrates how to dynamically spawn a Docker container during your release pipeline to apply configuration with Ansible. It also shows how to use an Ansible dynamic inventory to detect Azure virtual machine scale set (VMSS) instances – in the past you would have to resort to hacks built on Facter.
You will require:
- A Docker image with Ansible – you can use mine as a starting point: https://github.com/Romiko/DockerUbuntuDev
The image is hosted on Docker Hub as romiko/ansible:latest (see the reference at the bottom of this page)
- A self-hosted Azure DevOps agent (Linux)
- Docker installed on the self-hosted agent
- Docker configured to expose the Docker socket to containers:

```shell
docker run -v /var/run/docker.sock:/var/run/docker.sock -d --name some_container some_image
```
Configure a CLI Task in your release pipeline.
```yaml
variables:
  env: 'dev'

steps:
- task: AzureCLI@2
  displayName: 'Azure CLI Ansible'
  inputs:
    azureSubscription: 'RangerRom'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      set -x
      docker run --rm -v $(System.DefaultWorkingDirectory)/myproject/config:/playbooks/ romiko/ansible:latest \
      "cd /playbooks/ansible; ansible-playbook --version; az login --service-principal --username $servicePrincipalId --password $servicePrincipalKey --tenant $tenantId; az account set --subscription $subscription; ansible-playbook my-playbook.yaml -i inventory_$(env)_azure_rm.yml --extra-vars \"ansible_ssh_pass=$(clientpassword)\""
    addSpnToEnvironment: true
    workingDirectory: '$(System.DefaultWorkingDirectory)/myproject/config/ansible'
```
In the above the code that is causing a SIBLING container to spawn on the self-hosted devops agent is:
```shell
docker run --rm -v $(System.DefaultWorkingDirectory)/myproject/config:/playbooks/ romiko/ansible:latest \
  "<command to execute inside the container>"
```
Here a mount occurs: the config folder from the repository is mounted into the Docker container. Everything after the \ is executed inside the container. So in the above:
- The container runs as a sibling of the agent, not a child
- A bash shell is started inside it
- The container mounts a /playbooks folder containing the source code from the build artifacts
- It connects to Azure
- It runs an Ansible playbook
- The playbook finds all virtual machine scale sets in a resource group matching a name pattern
- It applies configuration by setting Logstash to auto-reload its config files when they change
- It applies configuration by copying files
The above is used to deploy configuration to an Azure virtual machine scale set. Ansible has a feature called dynamic inventory, and we will leverage it to detect all active nodes/instances in a VMSS.
The structure of ansible is as follows:
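A plausible layout for the config folder, inferred from the paths used in this article (the playbook references `../pipelines/`, so the pipelines folder sits next to the ansible folder; names not shown elsewhere are assumptions):

```
config/
├── ansible/
│   ├── inventory_dev_azure_rm.yml
│   ├── my-playbook.yaml
│   └── group_vars/
│       └── logstash_hosts.yml
└── pipelines/
    ├── conf.d/
    └── register/
```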
Ansible Dynamic Inventory
So let's see how Ansible can be used to detect all running instances in an Azure virtual machine scale set.
The inventory file below (inventory_dev_azure_rm.yml for the dev environment) detects any VMSS cluster in the resource group rom-dev-elk-stack that has logstash in its name:
```yaml
plugin: azure_rm
include_vmss_resource_groups:
  - rom-dev-elk-stack
conditional_groups:
  logstash_hosts: "'logstash' in name"
auth_source: auto
```
logstash_hosts.yml (Ensure this lives in a group_vars folder)
Now I can configure SSH using either a username/password or SSH keys.
```yaml
---
ansible_connection: ssh
ansible_ssh_user: logstash
```
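If you prefer keys over the password passed via --extra-vars, the same group_vars file can point at a private key instead; the key path below is an assumption for illustration:

```yaml
---
ansible_connection: ssh
ansible_ssh_user: logstash
# Key path on the agent/container is an assumed example location
ansible_ssh_private_key_file: ~/.ssh/logstash_vmss
```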
Below, Ansible performs some configuration tasks for me on a Logstash pipeline (upstream/downstream architecture).
```yaml
- name: Logstash auto reloads check interval
  lineinfile:
    path: /etc/logstash/logstash.yml
    regexp: '^config\.reload\.interval'
    line: "config.reload.interval: 30s"
  become: true
  notify:
    - restart_service

- name: Copy pipeline configs
  copy:
    src: ../pipelines/conf.d/
    dest: /etc/logstash/conf.d/
    owner: logstash
    group: logstash
  become: true

- name: Copy pipeline settings
  copy:
    src: ../pipelines/register/
    dest: /etc/logstash/
    owner: logstash
    group: logstash
  become: true
```
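The notify above references a restart_service handler that is not shown in the article; a minimal sketch of what it could look like (the service name logstash is an assumption based on context):

```yaml
handlers:
  - name: restart_service
    service:
      name: logstash   # assumed service name, adjust to your VMSS image
      state: restarted
    become: true
```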
To improve security, replace the user/password Ansible login with an SSH key pair.
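As a sketch, you could generate a dedicated key pair on the agent and add the public key to the scale set's admin user; the key directory, file name, and comment below are assumptions, not from the original pipeline:

```shell
# Generate a dedicated ed25519 key pair for Ansible -> VMSS SSH access.
# KEY_DIR, file name, and comment are illustrative assumptions.
KEY_DIR="${KEY_DIR:-$HOME/.ssh}"
mkdir -p "$KEY_DIR"
ssh-keygen -t ed25519 -f "$KEY_DIR/logstash_vmss" -N "" -C "ansible@release-agent"

# Print the public key to register on the VMSS admin user.
cat "$KEY_DIR/logstash_vmss.pub"
```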
To read more about Docker socket mount points, check out the reference below.
Thanks to Shawn Wang and Ducas Francis for the inspirations on Docker Socket.