Introduction
"How do we run Ansible in AWS?"
AWS has native support for running Ansible playbooks. Well, it has some support for it anyway!
Normally we would install Ansible on a single control node, as explained here.
However, in AWS we can flip the script and install Ansible on every instance and rely on a built-in feature in AWS State Manager to configure it.
This allows us to avoid maintaining an extra control node while leveraging AWS Systems Manager State Manager features (that's a mouthful!).
The Normal Way
The AWS Way
EC2 Role
First up, permissions! They're important.
Each of the EC2 instances we would like to manage will need a role attached that has the following policies:
- Managed policy: arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
- Custom S3 policy giving S3 read/write access
The bucket permissions are very important, as this bucket will contain our Ansible playbooks and our logs/errors.
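As a sketch, the custom S3 policy could look like the following. The bucket name is taken from the example below, and the policy file name is just an illustration — adjust both to your environment before attaching the policy to the role with `aws iam put-role-policy`.

```shell
# Write the custom S3 policy to a local file (bucket name from the example).
cat > ansible-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my_ansible_bucket",
        "arn:aws:s3:::my_ansible_bucket/*"
      ]
    }
  ]
}
EOF

# Sanity-check the JSON locally before attaching it to the role.
python3 -m json.tool ansible-s3-policy.json > /dev/null && echo "policy OK"

# Then attach it (role name is hypothetical):
# aws iam put-role-policy --role-name my-ansible-ec2-role \
#   --policy-name AnsibleS3Access --policy-document file://ansible-s3-policy.json
```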
Add playbook to S3
The example bucket will be called my_ansible_bucket. This is where we will store our Ansible playbooks.
Our example playbook will be called my_ansible_playbook.yaml. I will borrow an example playbook from this blog post.
---
- name: Ansible create user demo
  hosts: demoservers
  remote_user: ubuntu
  tasks:
    - name: Add the user 'demo1'
      ansible.builtin.user:
        name: demo1
    - name: Set authorized keys
      ansible.posix.authorized_key:
        user: demo1
        state: present
        key: "ssh-ed25519 INSERT_SOME_SSH_USER_HERE demo@example.com"
This playbook should be uploaded to the bucket for your EC2 instances to read later.
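Assuming the AWS CLI is configured with credentials that can write to the bucket, the upload is a one-liner:

```shell
# Upload the playbook to the example bucket and confirm it landed.
aws s3 cp my_ansible_playbook.yaml s3://my_ansible_bucket/
aws s3 ls s3://my_ansible_bucket/my_ansible_playbook.yaml
```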
SSM Send Command
The first, and less interesting, way to run Ansible playbooks is by utilising the SSM send-command function:
aws ssm send-command --document-name "AWS-RunAnsiblePlaybook" \
--instance-ids "i-0923-my-instance-id" \
--output-s3-bucket-name my_ansible_bucket \
--output-s3-key-prefix output \
--output-s3-region eu-west-2 \
--max-errors 1 \
--parameters '{"extravars":["SSM=True"],"check":["False"],"playbookurl":["s3://my_ansible_bucket/my_ansible_playbook.yaml"]}' \
--timeout-seconds 600 \
--region eu-west-2
This is the approach recommended in the AWS blog post. It is useful for running one-off commands and for testing that everything works as expected.
State Manager
A nicer approach is to store the send-command configuration we just created inside State Manager instead.
This will allow us to run our predefined Ansible playbooks from the console or from the CLI in our pipelines.
Terraform Example
Here is some example Terraform code for setting up the State Manager association. You will need to replace the bucket and instance IDs.
resource "aws_ssm_association" "my_ssm_association" {
  name             = "AWS-RunAnsiblePlaybook"
  association_name = "MyAnsiblePlaybook"

  parameters = {
    check          = "False"
    extravars      = "SSM=True"
    playbookurl    = "s3://${aws_s3_bucket.my_ansible_bucket.bucket}/my_ansible_playbook.yaml"
    timeoutSeconds = "3600"
  }

  output_location {
    s3_bucket_name = aws_s3_bucket.my_ansible_bucket.bucket
    s3_key_prefix  = "output"
  }

  targets {
    key    = "InstanceIds"
    values = [aws_instance.my_ec2_instance.id]
  }
}
Applying Association
To run the Ansible playbook once, you can either use the AWS console or run this command:
# Retrieve the association id
SM_ASSOC_NAME=MyAnsiblePlaybook
SM_ASSOC_ID=$(aws ssm list-associations --region eu-west-2 \
  | jq --raw-output '.Associations[] | select(.AssociationName=="'"${SM_ASSOC_NAME}"'") | .AssociationId')
# Apply the association
aws ssm start-associations-once --association-ids "$SM_ASSOC_ID" --region eu-west-2
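The jq filter is the fiddly part, so here it is run against a mocked list-associations response to show what it extracts. The file name and association IDs below are invented; only the JSON shape mirrors what the AWS CLI returns.

```shell
# Mocked list-associations response (values are made up).
cat > sample-associations.json <<'EOF'
{
  "Associations": [
    { "AssociationName": "SomeOtherAssoc",    "AssociationId": "aaaaaaaa-1111" },
    { "AssociationName": "MyAnsiblePlaybook", "AssociationId": "bbbbbbbb-2222" }
  ]
}
EOF

# Select the association by name and print its id.
SM_ASSOC_ID=$(jq --raw-output \
  '.Associations[] | select(.AssociationName=="MyAnsiblePlaybook") | .AssociationId' \
  sample-associations.json)
echo "$SM_ASSOC_ID"   # bbbbbbbb-2222
```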
If there are any errors, they will be reported in our Ansible S3 bucket.
Summary
Hopefully this gives you a good overview of the different ways of using Ansible to configure your EC2 instances. As always, there's no one-size-fits-all solution, so pick the approach that best fits your use case.