Quick Docker Tutorial to Run a Python Script
This is a starting point for learning Docker from scratch, where practice takes priority over theory. The objective of this article is NOT to cover every aspect of Docker; there are already many more complete and in-depth courses. We'll cover the essential basics so you can start using Docker in your software development projects, going step by step until we run a Python script inside a Docker container.
I'm going to assume you've installed Docker on your machine. If not, you can follow the official instructions at docs.docker.com/get-docker.
When running:
docker --version
You should see something similar to:
Docker version 28.4.0, build d8eb465
The version and build may vary; the point is to verify that you can run Docker commands from the terminal.
Let's begin!
What is Docker and why use it?
Docker is the most popular containerization platform. It lets you package an application together with everything it needs to run, so you can launch your code on any operating system that supports Docker, without worrying about dependencies, versions or environment configuration. Once configured, anyone on your team can launch the application identically.
You shouldn't confuse it with a virtual machine. Although they may seem similar at first glance, Docker containers are isolated processes that use the host operating system's kernel, while virtual machines emulate a complete operating system. This difference is key as it explains their lightweight nature and performance similar to a native application.
Therefore, we achieve:
- Portability: Runs on any system with Docker installed
- Security: Each process has its own environment
- Efficiency: Similar, or equal, to running natively
- Consistency: The environment is identical in development, testing and production
Now that we understand what Docker is and its benefits, let's see how it works.
Docker Architecture
The main components are:
- Docker Engine: The engine that runs the containers. It's responsible for creating, running and managing containers.
- Docker Client: Command-line interface (CLI) to interact with Docker Engine.
- Docker Daemon: Background process that manages images and containers.
When you work with Docker, you mainly interact with the Docker Client, which sends commands to the Docker Daemon to execute actions.
When managing your applications with Docker Client, you'll continuously manage 2 fundamental elements:
- Image: Immutable template containing code and dependencies. Think of it as a "class".
- Container: Running instance of an image. It would be the "object" created from the class.
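The image/container analogy above can be sketched in Python terms (purely illustrative, not how Docker is implemented):

```python
class Image:
    """Immutable template: code plus dependencies."""
    def __init__(self):
        # Each instantiation is like `docker run`: a fresh container
        self.state = {}

# Two containers from the same image: same starting point, independent state
container_a = Image()
container_b = Image()
container_a.state["visits"] = 1
print(container_b.state)  # prints: {} — the other container is unaffected
```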
You now know the basics about Docker. Let's get practical!
Docker CLI Basic Commands
It's time to learn the basic commands.
Running your first container
The "hello world" of Docker is:
docker run hello-world
It will download a small image and run a container that prints a welcome message.
If everything went well, we can move to something more advanced.
# Will download the Debian image and open a bash terminal inside the container
docker run -it debian bash
# Check the version
cat /etc/debian_version
# Exit the container
exit
We're now able to run containers. We're closer to being able to launch our Python script.
Container management
Let's look at some useful commands for managing containers. Start the Debian container again and, in a new terminal, execute:
# List active containers
docker ps
# List all containers, including stopped ones
docker ps -a
# Stop a container
docker stop <container_id>
# Remove a container
docker rm <container_id>
# Remove all stopped containers
docker container prune
Execution modes
When we started the Debian container, you may have wondered what the -it flag means. It's the execution mode, or how we interact with the container:
- Interactive (-it): To work inside the container. Allows opening a terminal and executing commands manually.
- Detached (-d): Runs in the background. Ideal for services or applications that must always be active but without direct interaction.
# Interactive mode
docker run --rm -it eggplanter/sh-tetris:v2.1.0
# Detached mode: -p 3000:80 maps port 3000 on the host to port 80 in the container
docker run -d -p 3000:80 excalidraw/excalidraw
Open http://localhost:3000 in your browser to draw with Excalidraw.
And how do you stop a container running in detached mode? Easy: recall the management commands:
docker ps
docker stop <container_id>
Logs and debugging
If you need to see what's happening inside a container, logs are your best friend.
# View logs of a container
docker logs <container_id>
# Follow logs in real-time
docker logs -f <container_id>
Executing commands inside containers
Suppose we have a Debian container running in detached mode (for example, started with docker run -dit debian bash) and we want to use curl inside it to make an HTTP request.
You have 2 options:
# Open an interactive shell
docker exec -it <container_id> bash
# And inside (the official debian image doesn't ship curl, so install it first)
apt-get update && apt-get install -y curl
curl rate.sx/btc
# Or execute it in a single command (once curl is installed)
docker exec <container_id> curl rate.sx/btc
Working with Images
We're using different images (debian, hello-world, excalidraw...), but where are those images? How do they work?
You can search and download them from Docker Hub, the largest public repository of Docker images.
You can also use the CLI.
docker search python
Each image has a name and a tag that indicates the version or variant. If you don't specify a tag, Docker will use latest by default.
Let's enter the Python prompt using the official image:
docker run -it python:alpine python
The alpine tag is a lightweight variant based on Alpine Linux, ideal for reducing image size. However, smaller isn't automatically better: Alpine images omit many common libraries and tools, and their performance isn't necessarily superior. Each image variant is optimized for different use cases.
Sharing data between host and container: Bind mounts
Suppose we have a Python script on our computer and we want to run it inside a container.
script.py contains the following code:
print("Hello from the Docker container!")
To run this script inside a Python container, we can use many strategies in Docker. The simplest is to use a bind mount to share the file between the host and the container.
docker run --rm -v $(pwd)/script.py:/app/script.py python:alpine python /app/script.py
The key is the -v flag. The colon separates the host path (left) from the path inside the container (right). In this case, we're mounting $(pwd)/script.py to /app/script.py inside the container. $(pwd) is shell command substitution: it runs the pwd command and expands to the current directory, i.e. where we executed the command. --rm tells Docker to delete the container automatically after it finishes running.
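A quick aside on $(pwd): it's shell command substitution, not Docker syntax. The shell runs pwd and splices its output into the command line before docker ever sees it. A minimal demonstration, independent of Docker:

```shell
# The shell expands $(pwd) to the current directory before the command runs
cd /tmp
echo "$(pwd)/script.py"   # prints /tmp/script.py
```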
And voilà! You should see the message:
Hello from the Docker container!
Congratulations! You've run your first Python script inside a Docker container.
We've launched a very simple example. What would happen if our script had external dependencies? Or if we wanted to configure it to run automatically without an intricate docker run command? We need to create our own Docker images.
Creating Your Own Images
Dockerfile files are the standard way to define how to build a custom Docker image.
An example for what we've done so far would be:
FROM python:alpine
WORKDIR /app
COPY script.py .
CMD ["python", "script.py"]
We would build the image with:
docker build -t my-python-script:v1 .
Don't forget the dot at the end! It tells Docker to use the current directory as the build context.
And we would run it with:
docker run --rm my-python-script:v1
It's the same as before, but now we have a reusable image. We could share it with other colleagues or deploy it in production.
You can use the following basic instructions in a Dockerfile:
- FROM: Defines the base image
- RUN: Executes commands during the build (e.g., installing packages)
- COPY: Copies files from the host into the image
- ADD: Similar to COPY but with additional capabilities (e.g., extracting local tar archives)
- CMD: Default command (or default arguments) when starting the container
- ENTRYPOINT: Main command, not overridden by arguments passed to docker run
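The difference between CMD and ENTRYPOINT is easiest to see when they're combined. A hedged sketch (it assumes a hypothetical script.py that accepts a command-line argument):

```dockerfile
FROM python:alpine
WORKDIR /app
COPY script.py .
# ENTRYPOINT always runs; CMD supplies default arguments that
# `docker run my-image some-other-arg` can override
ENTRYPOINT ["python", "script.py"]
CMD ["--default-arg"]
```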
Now let's make it a bit more complicated.
We create a file called requirements.txt with the dependencies:
names
It's a library that generates random names.
We modify our script.py to use it:
import names
print(names.get_full_name())
We modify the Dockerfile to install the dependencies:
FROM python:alpine
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY script.py .
CMD ["python", "script.py"]
We build and run again:
docker build -t my-python-script:v2 .
And we launch:
docker run --rm my-python-script:v2
You should see a random name generated by the names library.
You've managed to create a container to run your Python script with dependencies. Good job!
What files would we upload to the repository or pass to a colleague?
- Dockerfile: The file with the instructions to build the image.
- script.py: The Python script we want to run.
- requirements.txt: The dependencies needed for the script.
- README.md: Optionally, a documentation file with instructions to build and run the image.
But not the built Docker image itself, which is a heavy binary artifact: each person can build it locally from the Dockerfile.
Extra: Environment variables
It's not always a good idea to hardcode configuration inside the code or the Dockerfile: for example, passwords, tokens or URLs that change depending on the environment (development, testing, production). That's why Docker natively supports environment variables.
The most common is to use a .env file:
DATABASE_PASSWORD=mysecretpassword
API_KEY=abcdef
And we use it when running the container:
docker run --rm --env-file .env my-python-script:v2
These variables are only visible inside the container. In Python you can access them using the os module:
import os
db_password = os.getenv("DATABASE_PASSWORD")
api_key = os.getenv("API_KEY")
print(f"DB Password: {db_password}, API Key: {api_key}")
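One common refinement: os.getenv accepts a default value, handy when a variable may be missing in some environments. A small sketch (the variable names are illustrative, and setting os.environ here just simulates what --env-file injects):

```python
import os

# Simulate what --env-file injects into the container's environment
os.environ["API_KEY"] = "abcdef"
os.environ.pop("TIMEOUT", None)  # make sure TIMEOUT is unset for the demo

api_key = os.getenv("API_KEY")        # "abcdef"
timeout = os.getenv("TIMEOUT", "30")  # unset, so the default "30" is used
print(api_key, timeout)  # prints: abcdef 30
```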
Happy containerizing!
This work is under an Attribution-NonCommercial-NoDerivatives 4.0 International license.