
Thursday

MySQL with Docker

Running a MySQL database in a Docker container is straightforward. Here are the steps:


Pull the Official MySQL Image:

The official MySQL image is available on Docker Hub. You can choose the version you want (e.g., MySQL 8.0): docker pull mysql:8.0


Create a Docker Volume (Optional):

To persist your database data, create a Docker volume or bind mount. Otherwise, the data will be lost when the container is removed or recreated.

Example using a volume: docker volume create mysql-data


Run the MySQL Container:

Use the following command to start a MySQL container: docker run --name my-mysql -e MYSQL_ROOT_PASSWORD=secret -v mysql-data:/var/lib/mysql -d mysql:8.0

Replace secret with your desired root password.
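If you prefer a bind mount over a named volume, point the same -v flag at a host directory instead. A minimal sketch, assuming a local ./mysql-data directory:

```bash
# Bind-mount a host directory instead of the named volume (./mysql-data is just an example path)
mkdir -p ./mysql-data
docker run --name my-mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v "$(pwd)/mysql-data:/var/lib/mysql" \
  -d mysql:8.0
```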


The MySQL first-run routine will take a few seconds to complete.


Check if the database is up by running: docker logs my-mysql


Look for a line that says “ready for connections.”
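If you are scripting the setup, you can also poll the server until it answers instead of reading the logs by hand. A small sketch, reusing the container name and root password from above:

```bash
# Poll the server until mysqladmin can reach it
until docker exec my-mysql mysqladmin -uroot -psecret --silent ping; do
  echo "Waiting for MySQL..."
  sleep 2
done
echo "MySQL is ready."
```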


Access MySQL Shell:

To interact with MySQL, run the mysql client inside the container with docker exec: docker exec -it my-mysql mysql -u root -p


Enter the root password when prompted.


To import an SQL file from your host filesystem, drop the -t flag so stdin can be redirected: docker exec -i my-mysql mysql -psecret database_name < path-to-file.sql
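If the target database does not exist yet, create it before importing; you can dump it back out the same way. A short sketch, reusing the secret password and the hypothetical database_name and file names from above:

```bash
# Create the database if it is missing (it must exist before the import)
docker exec my-mysql mysql -uroot -psecret -e "CREATE DATABASE IF NOT EXISTS database_name;"

# Import the dump from the host (note -i only: stdin is redirected, so no TTY is needed)
docker exec -i my-mysql mysql -uroot -psecret database_name < path-to-file.sql

# Export the database back out to the host
docker exec my-mysql mysqldump -uroot -psecret database_name > backup.sql
```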


Access MySQL from Host:

If you want to access MySQL from your host machine, set up a port binding:

Add the following to your docker-compose.yml file (if using Docker Compose):

services:
  mysql:
    ports:
      - 33060:3306


If not using Docker Compose, pass -p 33060:3306 to docker run.
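With the port published, any MySQL client on the host can reach the container through 127.0.0.1:33060. For example, assuming the mysql client tools are installed on the host:

```bash
# Connect from the host through the published port
mysql -h 127.0.0.1 -P 33060 -u root -p

# Or just check that the server is reachable
mysqladmin -h 127.0.0.1 -P 33060 -u root -p ping
```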

That’s it! You now have a MySQL database running in a Docker container.

Saturday

Kubernetes Ingress

Kubernetes Ingress is an API object that provides HTTP and HTTPS routing to services based on rules. It acts as an entry point for external traffic into the cluster, letting you define how HTTP/S requests should be processed and routed to the different services running inside it.

If you want to start from the beginning, see the earlier posts in this series.

Key components and concepts of Kubernetes Ingress include:


1. Ingress Resource:

   - An Ingress resource is created to define the rules for how external HTTP/S traffic should be handled.

2. Rules:

   - Rules define how requests should be routed based on the host and path specified in the incoming request.

3. Backend Services:

   - Ingress directs traffic to backend services based on the defined rules.

4. TLS Termination:

   - Ingress can handle TLS termination, allowing you to configure HTTPS for your services.

5. Annotations:

   - Annotations provide additional configuration options, allowing you to customize Ingress behavior.


Example Ingress YAML:


```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
  tls:
    - hosts:
        - example.com
      secretName: tls-secret
```


In this example:

- Requests to `example.com/app` are directed to the `app-service`.

- TLS termination is configured using the secret `tls-secret` for HTTPS.
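To try the example end to end, the referenced TLS secret has to exist in the same namespace as the Ingress. A rough sketch, assuming the manifest above is saved as my-ingress.yaml and you already have a certificate and key for example.com:

```bash
# Create the TLS secret the Ingress refers to (tls.crt and tls.key are assumed to exist)
kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key

# Apply the Ingress and confirm it was picked up
kubectl apply -f my-ingress.yaml
kubectl get ingress my-ingress
kubectl describe ingress my-ingress
```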


Key Benefits:

- Simplifies external access management.

- Allows centralized control of routing rules.

- Supports TLS termination for secure communication.


Kubernetes Ingress controllers (like NGINX Ingress Controller, Traefik, etc.) are responsible for implementing Ingress rules and managing traffic accordingly. The choice of Ingress controller may depend on specific requirements and features needed for your environment.

Kubernetes Ingress is similar to how a reverse proxy like NGINX works, but it operates at the Kubernetes cluster level. 

The Ingress resource itself is just an abstraction, though. To make it effective, you need an Ingress controller, and NGINX is one of the popular choices for that.


Here's the breakdown:

1. Ingress Resource:

   - Defines the rules for routing external HTTP/S traffic to different services within the Kubernetes cluster.

2. Ingress Controller:

   - Actively watches for changes to Ingress resources.

   - Implements the rules defined in Ingress resources.

   - Manages the actual routing and traffic processing.

3. NGINX Ingress Controller:

   - One of the many available Ingress controllers for Kubernetes.

   - Implements Ingress rules using NGINX as a reverse proxy.

   - Handles the actual HTTP/S traffic based on the defined rules.


So, in a sense, you can think of Kubernetes Ingress working in conjunction with an Ingress controller like NGINX to manage external access and routing within your Kubernetes cluster.
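If no controller is running in the cluster yet, the Ingress above will not route anything. One common way to install the NGINX Ingress Controller is via its Helm chart; a sketch (the exact flags depend on your environment):

```bash
# Add the ingress-nginx Helm repository and install the controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Check that the controller is running and see which external address it was given
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
```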

Friday

Introduction to Django, Celery, Nginx, Redis and Docker

 




Django: A High-Level Web Framework


Django is a high-level web framework for building robust web applications quickly and efficiently. Written in Python, it follows the Model-Template-View (MTV) architectural pattern (Django's take on MVC) and emphasizes the DRY principle (Don't Repeat Yourself). Django provides an ORM (Object-Relational Mapping) system for database interactions, an admin interface for easy content management, and a powerful templating engine.


When to Use Django:


- Building web applications with complex data models.

- Rapid development of scalable and maintainable web projects.

- Emphasizing clean and pragmatic design.


Docker: Containerization for Seamless Deployment


Docker is a platform that enables developers to automate the deployment of applications inside lightweight, portable containers. Containers encapsulate the application and its dependencies, ensuring consistency across different environments. Docker simplifies the deployment process, making it easier to move applications between development, testing, and production environments.


When to Use Docker:


- Achieving consistency in different development and production environments.

- Isolating applications and dependencies for portability.

- Streamlining the deployment process with containerization.


Celery: Distributed Task Queue for Asynchronous Processing


Celery is an asynchronous distributed task queue system that allows you to run tasks asynchronously in the background. It's particularly useful for handling time-consuming operations, such as sending emails, processing data, or running periodic tasks. Celery supports task scheduling, result storage, and can be integrated with various message brokers.


When to Use Celery:


- Handling background tasks to improve application responsiveness.

- Performing periodic or scheduled tasks.

- Scaling applications by offloading resource-intensive processes.


Redis: In-Memory Data Store for Performance


Redis is an open-source, in-memory data structure store that can be used as a cache, message broker, or real-time analytics database. It provides fast read and write operations, making it suitable for scenarios where low-latency access to data is crucial. Redis is often used as a message broker for Celery in Django applications.


When to Use Redis:


- Caching frequently accessed data for faster retrieval.

- Serving as a message broker for distributed systems.

- Handling real-time analytics and data processing.
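Since the rest of this stack usually runs in Docker, Redis often does too. A minimal sketch for running it locally and checking that it responds (the image tag and container name are just examples):

```bash
# Run Redis in a container and publish the default port
docker run --name my-redis -p 6379:6379 -d redis:7

# Quick sanity check: should print PONG
docker exec -it my-redis redis-cli ping
```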


Nginx: The Versatile Web Server and Reverse Proxy


Nginx is a versatile web server and reverse proxy server known for its efficiency and scalability. It excels in handling concurrent connections and balancing loads. In Django applications, Nginx often acts as a reverse proxy, forwarding requests to the Django server.


When to Incorporate Nginx:


- Enhancing performance by serving static files and handling concurrent connections.

- Acting as a reverse proxy to balance loads and forward requests to the Django server.


Sample Application: Django ToDo App


I have created a beginner-level ToDo application using Django, Docker, Celery, and Redis. You can find the source code on [GitHub](https://github.com/dhirajpatra/docker-django-celery-postgres). The application demonstrates the integration of these technologies to build a simple yet powerful task management system.


Future Updates:


Feel free to explore the provided GitHub repository, and I encourage you to contribute or extend the application. I will be creating new branches to introduce additional features and improvements. Stay tuned for updates!


GitHub Repository: https://github.com/dhirajpatra/docker-django-celery-postgres

I also have other, similar repositories from a few years back.

Tuesday

How To Get the MongoDB Version from Another Docker Container


To check the MongoDB version from the terminal, without opening the mongo shell, you can use the following commands:

For the shell (client) version:

mongo --version

For the server version:

mongod --version

These commands print the version of MongoDB that is installed on your system.

For example, if you are running MongoDB 5.0.13, mongod --version will report something like:

db version v5.0.13

If you are running the MongoDB Compass GUI, you can also get the MongoDB version by clicking on the Help menu and selecting About MongoDB Compass.

There are a few ways to get the MongoDB version from another container, for example when MongoDB runs in its own container and another container, based on a Debian image with Python, needs to query it:

  1. Use the mongo shell. You can use the mongo shell to connect to the MongoDB container and ask the server for its version. To do this, you will need to know the IP address of the MongoDB container. You can get the IP address of the container using the docker inspect command.
docker inspect mongodb

Once you have the IP address of the MongoDB container, you can query the server for its version (mongo --version on its own would only print the local client version, without connecting):

mongo --host <mongodb_container_ip_address> --quiet --eval 'db.version()'
  2. Use the mongod command inside the MongoDB container. The mongod binary reports the server version directly, so you can run it in the container with docker exec:

docker exec mongodb mongod --version
  3. Use the Python MongoClient. You can also use pymongo's MongoClient to get the version of MongoDB. To do this, you will need to install the pymongo package.
pip install pymongo

Once you have installed the pymongo package, you can use the following code to get the version of MongoDB:

Python
import pymongo

# Connect to the MongoDB container (placeholders for its IP address and port)
client = pymongo.MongoClient('mongodb://<mongodb_container_ip_address>:<mongodb_container_port>')

# server_info() returns the server's build information, including its version
print(client.server_info()['version'])

Which method you use to get the version of MongoDB depends on your specific needs and requirements.

It is also possible to get the MongoDB version from another container using a shell script. Here is a simple example:

#!/bin/bash

# Get the IP address of the MongoDB container
# (for containers on a user-defined network, look under .NetworkSettings.Networks instead)
mongodb_container_ip_address=$(docker inspect mongodb -f '{{.NetworkSettings.IPAddress}}')

# Query the server for its version (mongo --version alone would only print the client version)
mongodb_version=$(mongo --host "$mongodb_container_ip_address" --quiet --eval 'db.version()')

# Print the version of MongoDB
echo "MongoDB version: $mongodb_version"

This shell script will first get the IP address of the MongoDB container using the docker inspect command. Then, it will use the mongo command to connect to the MongoDB container and get the version. Finally, it will print the version of MongoDB to the console.

You can save this shell script as a file, such as get_mongodb_version.sh, and then run it using the following command:

bash get_mongodb_version.sh

This will print the version of MongoDB to the console.

You can also use this shell script in other scripts or applications to get the version of MongoDB from another container. For example, you could use this shell script in a CI/CD pipeline to verify that the correct version of MongoDB is running before deploying your application.
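As a hypothetical example of such a check, a pipeline step could compare the reported version against the one you expect and fail the build otherwise (the expected version below is just an example):

```bash
#!/bin/bash
# Hypothetical CI gate: fail the step if the running MongoDB version is not the expected one
expected_version="5.0.13"
actual_version=$(bash get_mongodb_version.sh | sed 's/^MongoDB version: //')

if [ "$actual_version" != "$expected_version" ]; then
  echo "Expected MongoDB $expected_version but found $actual_version" >&2
  exit 1
fi

echo "MongoDB version check passed: $actual_version"
```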
