Docker
Dockerizing the FastAPI Application in VS Code
Introduction to Docker
Docker is a containerization platform that allows you to package an application and its dependencies into a standardized unit called a container. Containers are lightweight, portable, and isolated, making it easy to develop, package, and deploy applications consistently across different environments. Docker has become a fundamental technology in modern software development and operations.
Key Concepts in Docker:
Container: A container is a standalone, executable package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools. Containers run in isolated environments, ensuring consistency and reproducibility.
Docker Image: A Docker image is a read-only template used to create containers. Images contain a snapshot of a file system and application code, along with configuration settings. Images can be versioned and shared through container registries like Docker Hub.
Dockerfile: A Dockerfile is a text file that contains instructions for building a Docker image. It specifies the base image, application code, and dependencies, allowing you to define the environment in which your application runs.
Docker Container Registry: A container registry is a repository for storing and distributing Docker images. Docker Hub is a popular public registry, but organizations often use private registries for security and control.
Benefits of Docker in a Kubernetes Context
Kubernetes, as a container orchestration platform, leverages Docker containers to manage and automate the deployment, scaling, and management of containerized applications. Here are the key benefits of Docker in a Kubernetes context:
Consistency: Docker containers ensure that applications run consistently across development, testing, and production environments. This consistency minimizes the "it works on my machine" problem.
Portability: Docker containers are highly portable, allowing you to package an application and its dependencies into a single unit that can run on any Kubernetes cluster, regardless of the underlying infrastructure.
Resource Efficiency: Containers share the host OS kernel, which reduces overhead compared to traditional virtualization. This leads to better resource utilization and allows for running multiple containers on the same host.
Isolation: Docker containers provide process and file system isolation, ensuring that applications do not interfere with each other. This isolation enhances security and stability.
Scalability: Kubernetes can easily scale containers up or down based on demand, ensuring that resources are used efficiently and applications remain responsive.
Version Control: Docker images can be versioned, making it easy to roll back to previous versions of an application if issues arise. Kubernetes supports deploying specific image versions.
Efficient Updates: Kubernetes supports rolling updates, enabling you to update applications without downtime. Docker's layered image approach allows for efficient image updates, downloading only the changed layers.
Resource Management: Kubernetes can manage resource constraints and quality of service (QoS) for containers, ensuring that applications receive the appropriate amount of CPU and memory resources.
Continuous Integration and Continuous Deployment (CI/CD): Docker and Kubernetes integrate well with CI/CD pipelines, allowing for automated testing and deployment of containerized applications.
In summary, Docker is a containerization platform that provides consistency, portability, and isolation for applications. When used with Kubernetes, Docker containers become the building blocks for scalable, efficient, and manageable containerized applications in modern DevOps and cloud-native environments.
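To make the connection concrete, here is a minimal, hypothetical Kubernetes Deployment manifest (the names and image tag are illustrative assumptions, not part of this project) showing how Kubernetes references a versioned Docker image and a replica count it can scale:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-api
spec:
  replicas: 2                   # Kubernetes scales this number up or down
  selector:
    matchLabels:
      app: hello-world-api
  template:
    metadata:
      labels:
        app: hello-world-api
    spec:
      containers:
        - name: hello-world-api
          image: dev-ops-hello-worldapp:1.0   # a versioned Docker image
          ports:
            - containerPort: 80
```

Changing the `image` tag and re-applying the manifest is what drives a rolling update: Kubernetes replaces containers gradually, pulling only the image layers that changed.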
Writing a Dockerfile for the FastAPI application
Create a file named "Dockerfile" in the root of the project with the following:
Dockerfile:
# Use an official Python runtime as a parent image
FROM python:3.9-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Document that the container listens on port 80
EXPOSE 80
# Define environment variable
ENV NAME=World
# Run the FastAPI application with Uvicorn when the container launches
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
Here's a brief explanation of each section of the Dockerfile:
FROM python:3.9-slim: This specifies the base image for the container. In this case, it's the slim variant of the official Python 3.9 image, which keeps the final image small.
WORKDIR /app: Sets the working directory within the container to /app.
COPY . /app: Copies the contents of your current directory (where the Dockerfile is located) to the /app directory in the container.
RUN pip install --no-cache-dir -r requirements.txt: Installs any Python dependencies listed in the requirements.txt file. Make sure you have a requirements.txt file with your FastAPI application's dependencies in the same directory as the Dockerfile.
EXPOSE 80: Documents that the application inside the container listens on port 80. Note that EXPOSE does not publish the port by itself; you publish it at run time with docker run -p.
ENV NAME=World: Sets an environment variable using the recommended key=value form. This is optional and can be used to configure your application if needed.
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]: Specifies the command to run when the container starts. It launches the FastAPI application with Uvicorn; the app.main:app target refers to the app object defined in app/main.py.
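The pip install step above assumes a requirements.txt file sitting next to the Dockerfile. A minimal sketch for this application might look like the following (unpinned here for brevity; in practice you would pin exact versions for reproducible builds):

```text
fastapi
uvicorn[standard]
```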
Building and testing the Docker image
To build a Docker image using this Dockerfile, open your terminal in VS Code, navigate to the directory containing the Dockerfile and your FastAPI application code, and run:
docker build -t dev-ops-hello-worldapp .
This command will build a Docker image with the tag "dev-ops-hello-worldapp". You can replace "dev-ops-hello-worldapp" with a different tag name if you prefer.
To run the image after you have built it:
docker run -p 80:80 dev-ops-hello-worldapp
This starts a container from the image and maps port 80 on the local host to port 80 inside the container.
To check the running application:
docker ps
This will give you the list of running containers.
To access the running application, open a browser to http://localhost
You should see the response {"message":"Hello, World!"}