
Introduction
SSH access to a container is generally not recommended, as containers are meant to be ephemeral and stateless. However, there are cases where SSH access may be necessary, such as for debugging or troubleshooting.
The purpose of this article is to give you a quick reference point for how to connect to different containers running in different clusters.
SSH into a Docker Container
- Find the name of the container you want to SSH into:
docker ps
- You can now execute a shell session on the container by running the following command:
docker exec -it <container-name> /bin/bash
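For example, assuming the container appears in docker ps under the hypothetical name my-app, and the image is a minimal one that does not ship bash, you can fall back to /bin/sh:
# my-app is a placeholder container name; use /bin/sh when the image has no bash
docker exec -it my-app /bin/sh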
SSH into a Docker Compose Container
- Find the name of the container you want to SSH into:
docker-compose ps
- You can now execute a shell session on the container by running the following command:
docker-compose exec <service-name> /bin/bash
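For example, with a hypothetical service named web. On newer Docker installations the Compose V2 plugin form (docker compose, with a space) accepts the same arguments:
# "web" is a placeholder service name from docker-compose.yml
docker compose exec web /bin/bash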
SSH into an ECS Container
- Configure the AWS CLI with credentials that have the appropriate permissions.
- Log into the AWS console and find the cluster, task, and container details for the container you would like to SSH into (a CLI alternative is sketched after this list).
- You can now execute a shell session on the container by running the following command:
aws ecs execute-command \
--cluster <cluster-name> \
--task <task-id> \
--container <container-name> \
--command "/bin/bash" \
--interactive \
--region eu-west-2
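If you prefer to stay in the terminal, you can look up the task ID with the AWS CLI instead of the console. The cluster and service names below are placeholders:
# List running tasks for a hypothetical service to find the task ID
aws ecs list-tasks \
--cluster my-cluster \
--service-name my-service \
--region eu-west-2
Note that ECS Exec must be enabled on the task (for example by passing --enable-execute-command when the service or task is launched) before execute-command will work.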
SSH into a Kubernetes Container
- Authenticate kubectl against the cluster; this varies per cloud provider. For example, in AWS EKS you can update your kubeconfig with the AWS CLI (see the sketch at the end of this list).
- Find the name of the pod you want to SSH into:
kubectl get pods
- You can now execute a shell session on the container by running the following command:
kubectl exec -it <pod-name> -c <container-name> -- /bin/bash
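As an example of the authentication step, on AWS EKS you can generate a kubeconfig entry with the AWS CLI; the cluster name and region below are placeholders:
# Add or update a kubeconfig entry for a hypothetical EKS cluster
aws eks update-kubeconfig --name my-cluster --region eu-west-2
As with the other environments, if the image does not include bash, swap /bin/bash for /bin/sh in the exec command.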
Summary
Hopefully this gives you a quick reminder of how to connect to Docker containers in different environments.
As someone who works across a large number of different clusters, I find this quick reference sheet invaluable in my day-to-day work, particularly when debugging containers that are not behaving as expected.