List of the most important commands in Docker

Docker containers have gone from a niche technology to a mandatory part of our development environments. We sometimes spend an incredible amount of time debugging or fighting the very tool that is supposed to increase our productivity, and with each new wave of technology we have to absorb a great number of changes.

Surely many of you have spent a day or two trying to configure a Docker cluster, or chasing a piece of code that keeps failing to load inside a Docker container. Most developers spend a lot of time on configuration, and hunting bugs can easily outweigh the time spent developing new functionality itself. This is especially true when you are working in a new environment, or one that has not yet reached maturity.



The less fortunate among us do not have stable environments with perfect CI/CD pipelines; this article is for those who fall into that category. The information in it comes from real experience: like you, I have spent my days debugging. Think of it as a companion to the official Docker documentation, focused on the most common commands you will use daily when working with Docker.

A more detailed list of optional flags and arguments can be found in the Docker manual. Be aware that, depending on how your Docker system is configured, you may need to prefix each docker command with sudo.

Tip: Each Docker command has built-in documentation that is worth learning how to use. Typing docker run --help, for example, prints the helper documentation for that command.
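
The same --help flag works with every Docker subcommand, which is handy whenever you forget a flag. A couple of examples (the output is omitted here, since it varies between Docker versions):

$ docker run --help
$ docker build --help
$ docker exec --help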



I hope this guide helps you navigate the often confusing process of debugging and working with Docker. As you read, pay attention to the explanations that accompany each command.

Building Docker images


$ docker build \
    --build-arg ARTIFACTORY_USERNAME=timothy.mugayi \
    --build-arg ARTIFACTORY_SECRET_TOKEN=AP284233QnYX9Ckrdr7pUEY1F \
    --build-arg LICENSE_URL='https://source.com/license.txt' \
    --no-cache -t helloworld:latest .

This builds a Docker image with optional build arguments. By default Docker caches the results of the first build and reuses the layers (added via the RUN instructions in the Dockerfile) for subsequent builds, which makes them faster. If you don't want caching, add the --no-cache argument as in the example above.
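
For the build arguments to have any effect, the Dockerfile has to declare them with ARG. A minimal, purely illustrative sketch (this is not the article's actual Dockerfile):

# Dockerfile (illustrative)
FROM python:3.8-slim
ARG ARTIFACTORY_USERNAME
ARG ARTIFACTORY_SECRET_TOKEN
ARG LICENSE_URL
# Values passed with --build-arg are available here as ordinary variables
RUN echo "Building for $ARTIFACTORY_USERNAME, license taken from $LICENSE_URL"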

Note: Docker commands can reference a container either by name or by ID. Wherever <CONTAINER> appears below, you can substitute either one.

Starting Docker containers


$ docker start <CONTAINER>

Starts an existing container; this assumes the container has already been created.

$ docker stop <CONTAINER>

Stop the existing Docker container.

$ docker stop $(docker container ls -aq)

If you have multiple Docker containers running and want to stop them all, pass docker stop the full list of their IDs, as in the command above.

$ docker exec -ti <CONTAINER> [COMMAND]

Executes a shell command inside a specific container.
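
For example, to open an interactive shell inside a running container, or to run a one-off command in it (my_container and /app are placeholder names):

$ docker exec -ti my_container /bin/bash
$ docker exec my_container ls -la /app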

$ docker run -ti --name <CONTAINER> <IMAGE> [COMMAND]

There is a clear distinction between docker run and docker start. run basically does two things: 1) creates a new container from an image and 2) starts that container. If you ever need to re-run a stopped or failed container, use docker start.

$ docker run -ti --rm --name <CONTAINER> <IMAGE> [COMMAND]

This is an interesting variant for one-off work: it creates and starts a container, runs a command inside it, and then removes the container once the command finishes.

docker run -d <IMAGE>:<IMAGE_TAG>

Usage: 
   docker run -d helloworld:latest

If you want the container to run detached – i.e. as a daemon (background process), as on Linux – add -d to the docker run command.

$ docker pause <CONTAINER>

Suspend all running processes inside a particular container.

$ docker ps -a

This command lists all Docker containers, including ones that have already exited, together with the images they were created from. Once you have identified the image you want to run, use a command like the one below, making sure to substitute the image ID shown in the docker ps -a output.

$ sudo docker run \
    -e AWS_DEFAULT_REGION=us-east-1 \
    -e INPUT_QUEUE_URL="https://sqs.us-east-1.amazonaws.com/my_input_sqs_queue.fifo" \
    -e REDIS_ENDPOINT="redis.dfasdf.0001.cache.amazonaws.com:8000" \
    -e ENV=dev \
    -e DJANGO_SETTINGS_MODULE=engine.settings \
    -e REDIS_HOST="cmgadsfv7avlq.us-east-1.redis.amazonaws.com" \
    -e REDIS_PORT=5439 \
    -e REDIS_USER=hello \
    -e REDIS_PASSWORD='trasdf**#0ynpXkzg' \
    {image ID}

This command demonstrates how to run a Docker image with multiple environment variables passed as arguments. The \ character here acts as a line continuation, letting you split a long command across several lines.

$ docker run -p <host_public_port>:<container_port> <IMAGE>

If you ever have to expose a container's ports, remember that port forwarding is configured with the -p argument on the run command. Here host_public_port is the port on your host machine to which Docker should forward container_port. For additional ports, add further -p arguments, as shown below.

$ docker run -p <host_public_port1>:<container_port1> -p <host_public_port2>:<container_port2> <IMAGE>
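
As a concrete illustration, the following maps hypothetical host ports 8080 and 8443 to a web server's HTTP and HTTPS ports, using the public nginx image purely as an example:

$ docker run -d -p 8080:80 -p 8443:443 nginx:latest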

Docker container debugging

Use docker ps to get the names of existing containers that are currently running.

$ docker history <IMAGE> 

example usage:

$ docker history my_image_name

Displays the history of a particular image. This is useful when you want a detailed idea of how a Docker image came to be. Let's take a short detour here, because a little background is needed to fully understand what this command shows; the official documentation on it is rather sparse.

Docker images are built from layers, which are their basic building blocks. Each container consists of an image plus a writable layer on top, which you can think of as its mutable state. Underneath sit read-only layers; these layers (also called intermediate images) are generated as the instructions in the Dockerfile are executed during the image build.

If your Dockerfile contains FROM, RUN and/or COPY instructions and you build the image, each RUN instruction creates a layer with its own image ID. That image/layer then appears in docker history with this ID and its creation date, and subsequent instructions generate similar records. The CREATED BY column will roughly match the corresponding line in the Dockerfile.
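
As a rough sketch (this Dockerfile is hypothetical, not taken from the article), each of the instructions below would show up as its own row in docker history:

# Dockerfile (illustrative)
FROM python:3.8-slim
COPY . /app
RUN pip install -r /app/requirements.txt
CMD ["python", "/app/main.py"]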

$ docker images

A list of all the images currently stored on the machine.

$ docker inspect <IMAGE|CONTAINER ID>

Docker inspect displays low-level information about a particular Docker object. The data stored in this object can be very useful for debugging, for example when cross-checking Docker mount points.

Note that this command returns two kinds of answers: image-level details and container-level details. Here are some of the things you can get from it:

  • Container ID and the timestamp of its creation.
  • Current status (useful for finding out whether the container is stopped and, if so, why).
  • Information about the Docker image, file system and volume bindings, and connections.
  • Environment variables – for example, command line parameters passed to the container.
  • Network configuration: IP address and gateway, plus secondary IPv4 and IPv6 addresses.
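
In practice you often want just one of these fields rather than the full JSON dump; the --format flag with a Go template does that. For example (my_container is a placeholder name):

$ docker inspect --format '{{.State.Status}}' my_container
$ docker inspect --format '{{.NetworkSettings.IPAddress}}' my_container
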
$ docker version

Display the Docker version, including the client and server version currently installed on the machine.

Yes, you read that right: Docker is a client-server application. The daemon (a long-running background service) is the server, and the CLI is one of many clients. The Docker daemon exposes a REST API through which various tools can interact with it.
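
You can talk to that REST API directly over the daemon's Unix socket, which is a quick way to confirm the daemon is up. This assumes the default socket path and a curl build with Unix socket support:

$ curl --unix-socket /var/run/docker.sock http://localhost/version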

$ docker version
Client: Docker Engine - Community
 Version: 19.03.5
 API version: 1.40
 Go version: go1.12.12
 Git commit: 633a0ea
 Built: Wed Nov 13 07:22:34 2019
 OS/Arch: darwin/amd64
 Experimental: false

Server: Docker Engine - Community
 Engine:
  Version: 19.03.5
  API version: 1.40 (minimum version 1.12)
  Go version: go1.12.12
  Git commit: 633a0ea
  Built: Wed Nov 13 07:29:19 2019
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: v1.2.10
  GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version: 1.0.0-rc8+dev
  GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5a481657
 docker-init:
  Version: 0.18.0
  GitCommit: fec3683

Docker in AWS ECS

There are times when you need to log into a running Docker container to debug or cross-check the correct configuration.

Use docker exec -it <container ID> /bin/bash to get a shell inside a container. You will also sometimes need to find out which Docker container failed to run, for example when you are using an AWS ECS cluster and receive a vague error message like the one below.

Honestly, there are many possible reasons for this, including: 1) code problems, e.g. an uncaught exception was thrown and the Docker container never started; 2) you have run out of disk space, if your ECS cluster runs on EC2 instances rather than Fargate; 3) your otherwise healthy Docker container has exhausted the available EC2 memory.

Essential container in task exited

Run the command below to identify the last container that exited or died. Omit the sudo prefix if your account doesn't require it. Once you get the data back, use it to restart the container and identify the cause of the failure.

$ sudo docker ps -a --filter status=dead --filter status=exited --last 1
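
Once you have the ID of the dead container, the usual next step is to look at its logs and exit state (the container ID below is a placeholder):

$ docker logs 1a2b3c4d5e6f
$ docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' 1a2b3c4d5e6f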

If in doubt, restart the Docker service

.

$ sudo service docker stop
$ sudo service docker start

# On a Mac you can use the Docker desktop utility or, alternatively, run:
$ killall Docker && open /Applications/Docker.app

No explanation is required here.

Cleaning up Docker images

$ docker system prune

Docker takes a conservative approach to cleaning unused objects such as images, containers, volumes and networks.

These objects are not removed unless you explicitly ask Docker to do so, and left alone they can soon take up a lot of disk space. It is therefore worth running docker system prune periodically to clean up unused objects.
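
By default prune only touches dangling images and stopped containers. If you also want to remove all unused images and unused volumes, the -a and --volumes flags do that; use them with care, since the cleanup is aggressive:

$ docker system prune -a --volumes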

$ docker kill <CONTAINER>

Kill a running container.

$ docker kill $(docker ps -q)

Kill all the containers that are running.

$ docker rm <CONTAINER>

Remove a specific container that is not currently running. If the image exists in the remote registry, it will not be affected.

$ docker rm $(docker ps -a -q)

Remove all containers that are not currently running.

$ docker logs my_container

Get access to the container logs (useful for debugging).
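
Two commonly used variations: follow the log output live, or show only the most recent lines (my_container is a placeholder name):

$ docker logs -f my_container
$ docker logs --tail 100 my_container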

Loading Docker images from a remote registry


Docker Hub

Docker Hub is a Docker service for finding and sharing container images.

If you want to pull images from there to your local machine, it's as easy as running docker run followed by the image path. The command below demonstrates how to pull and run the stable rocker/verse image.

$ docker run --rm -p 8787:8787 rocker/verse

Docker will first check whether this image is available on your local machine. If not, it will start downloading it from the Docker Hub repository. This is the default behavior.

$ docker pull rocker/verse

If you just want to pull the image without running a container, docker pull will do.

$ docker login --username={DOCKERHUB_USERNAME}

To log into the Docker Hub, you can run the above command, which prompts you to enter your password.

Custom Docker registries


$ docker login your.docker.host.com
Username: foo
Password: ********
Email: user@myemail.com

If you pull an image from a custom Docker registry that requires authentication, the docker login command shown above lets you log in to any Docker registry. Note that this approach creates an entry in ~/.docker/config.json; edit ~/.docker/config.json if you need to change the stored login details.
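
For reference, the stored entry looks roughly like this; the registry host below is a placeholder and the auth value is simply the base64-encoded username:password pair (here foo:password):

{
  "auths": {
    "your.docker.host.com": {
      "auth": "Zm9vOnBhc3N3b3Jk"
    }
  }
}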

Amazon Elastic Container Registry

The Elastic Container Registry (ECR) is a fully managed Docker container registry that lets developers store, manage and deploy Docker container images. Amazon ECS works seamlessly with Amazon ECR. If you need to pull images from an ECR registry, follow these instructions.

You first need to configure the AWS CLI with an IAM user's access key ID and secret access key.

Amazon ECR requires that the IAM user's keys be granted the ecr:GetAuthorizationToken permission through an IAM policy; only then will you be able to log in and retrieve images. Alternatively, you can use the Amazon ECR Docker Credential Helper utility. Below is an approach that uses the AWS CLI, assuming all permissions are configured correctly.
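
As a sketch, an IAM policy that allows pulling images might grant the following actions; this is illustrative only, and in a real setup you would scope the resources to your own repositories:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}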

$ aws ecr list-images --repository-name=twitter-data-engine-core
$ aws ecr describe-images --repository-name=twitter-data-engine-core
$ aws ecr get-login --region us-east-1 --no-include-email

The get-login command generates a long docker login command. Copy and execute it; you must be authenticated before you can pull images from AWS ECR.

$ docker login -u AWS -p {YOUR_TEMPORARY_TOKEN}
$ docker pull 723123836077.dkr.ecr.us-east-1.amazonaws.com/twitter-data-engine-core:build-9

Exporting and importing Docker images

$ docker save your_docker_image:latest > /usr/local/your_docker_image.tar
$ docker load < /usr/local/your_docker_image.tar

If you need to export a Docker image to disk and load it back later, the commands above will do that.

Exporting to a file is useful when you need to move images from one machine to another through some intermediary other than a Docker registry. Some environments block registry access for security reasons, which can prevent you from migrating images between registries, so this command is very useful, although often unfairly underrated.
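
A common variation is to stream the image straight to another machine over SSH instead of writing a tar file to disk first (the remote host here is a placeholder):

$ docker save your_docker_image:latest | ssh user@remote-host 'docker load'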

