

Pytest with Django

 


Steps and code to set up Django REST Framework (DRF) test cases that run against an isolated test database.


1. Set up Django and DRF


Install Django and DRF:

```sh
pip install django djangorestframework
```


Create a Django project and app:

```sh
django-admin startproject projectname
cd projectname
python manage.py startapp appname
```
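Register DRF and the new app in `INSTALLED_APPS` (projectname/settings.py); otherwise the router, serializers, and migrations below won't be picked up:

```python
# projectname/settings.py
INSTALLED_APPS = [
    # ... default Django apps ...
    'rest_framework',
    'appname',
]
```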


2. Define Models, Serializers, and Views


models.py (appname/models.py):

```python
from django.db import models


class Item(models.Model):
    name = models.CharField(max_length=100)
    description = models.TextField()
```


serializers.py (appname/serializers.py):

```python
from rest_framework import serializers

from .models import Item


class ItemSerializer(serializers.ModelSerializer):
    class Meta:
        model = Item
        fields = '__all__'
```
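As a quick sanity check (for example in `python manage.py shell`, after the migrations in step 3), the serializer turns model instances into plain dicts:

```python
# Run inside `python manage.py shell` once the database is migrated.
from appname.models import Item
from appname.serializers import ItemSerializer

item = Item.objects.create(name='Sample', description='A sample item')
print(ItemSerializer(item).data)
# e.g. {'id': 1, 'name': 'Sample', 'description': 'A sample item'}
```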


views.py (appname/views.py):

```python
from rest_framework import viewsets

from .models import Item
from .serializers import ItemSerializer


class ItemViewSet(viewsets.ModelViewSet):
    queryset = Item.objects.all()
    serializer_class = ItemSerializer
```


urls.py (appname/urls.py):

```python
from django.urls import path, include
from rest_framework.routers import DefaultRouter

from .views import ItemViewSet

router = DefaultRouter()
router.register(r'items', ItemViewSet)

urlpatterns = [
    path('', include(router.urls)),
]
```


projectname/urls.py:

```python
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('api/', include('appname.urls')),
]
```
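The `DefaultRouter` registration above is what generates the `item-list` and `item-detail` URL names used with `reverse()` in the tests below. A quick way to confirm the generated routes (e.g. in `python manage.py shell`):

```python
from django.urls import reverse

print(reverse('item-list'))                      # /api/items/
print(reverse('item-detail', kwargs={'pk': 1}))  # /api/items/1/
```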


3. Migrate Database and Create Superuser


```sh
python manage.py makemigrations appname
python manage.py migrate
python manage.py createsuperuser
python manage.py runserver
```


4. Write Test Cases


tests.py (appname/tests.py):

```python
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase

from .models import Item
from .serializers import ItemSerializer


class ItemTests(APITestCase):

    def setUp(self):
        self.item1 = Item.objects.create(name='Item 1', description='Description 1')
        self.item2 = Item.objects.create(name='Item 2', description='Description 2')

    def test_get_items(self):
        url = reverse('item-list')
        response = self.client.get(url, format='json')
        items = Item.objects.all()
        serializer = ItemSerializer(items, many=True)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data, serializer.data)

    def test_create_item(self):
        url = reverse('item-list')
        data = {'name': 'Item 3', 'description': 'Description 3'}
        response = self.client.post(url, data, format='json')
        self.assertEqual(response.status_code, status.HTTP_201_CREATED)
        self.assertEqual(Item.objects.count(), 3)
        # Look the new row up by name: primary keys are not guaranteed to
        # restart at 1 for every test, so Item.objects.get(id=3) is fragile.
        self.assertEqual(Item.objects.get(name='Item 3').description, 'Description 3')

    def test_update_item(self):
        url = reverse('item-detail', kwargs={'pk': self.item1.id})
        data = {'name': 'Updated Item 1', 'description': 'Updated Description 1'}
        response = self.client.put(url, data, format='json')
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.item1.refresh_from_db()
        self.assertEqual(self.item1.name, 'Updated Item 1')

    def test_delete_item(self):
        url = reverse('item-detail', kwargs={'pk': self.item2.id})
        response = self.client.delete(url, format='json')
        self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
        self.assertEqual(Item.objects.count(), 1)
```


5. Run Tests


```sh
python manage.py test
```


This setup provides a basic Django project with DRF and test cases for CRUD operations. The test cases run against a throwaway test database that Django creates before the run and destroys afterwards, which keeps tests isolated and repeatable.
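Since this post is about pytest with Django: the same checks can be written in pytest style with the `pytest-django` plugin (`pip install pytest pytest-django`, plus a `pytest.ini` that sets `DJANGO_SETTINGS_MODULE = projectname.settings`). A minimal sketch of the create test:

```python
# appname/test_items_pytest.py -- pytest-django version of the create test.
import pytest
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APIClient

from appname.models import Item


@pytest.fixture
def api_client():
    return APIClient()


@pytest.mark.django_db  # grants this test access to the test database
def test_create_item(api_client):
    response = api_client.post(
        reverse('item-list'),
        {'name': 'Item 3', 'description': 'Description 3'},
        format='json',
    )
    assert response.status_code == status.HTTP_201_CREATED
    assert Item.objects.filter(name='Item 3').exists()
```

Run it with `pytest` from the project root.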

Now let's dive into feature tests that use `Mock` and `patch`.

Here are the steps and code for writing Django REST Framework (DRF) test cases that mock and fake behavior in a scenario like credit card processing.


1. Set up Django and DRF

Project setup is the same as in the first walkthrough: install `django` and `djangorestframework`, create the project, and create the app.


2. Define Models, Serializers, and Views


models.py (appname/models.py):

```python
from django.db import models


class Payment(models.Model):
    card_number = models.CharField(max_length=16)
    card_holder = models.CharField(max_length=100)
    expiration_date = models.CharField(max_length=5)
    amount = models.DecimalField(max_digits=10, decimal_places=2)
    status = models.CharField(max_length=10)
```


serializers.py (appname/serializers.py):

```python
from rest_framework import serializers

from .models import Payment


class PaymentSerializer(serializers.ModelSerializer):
    class Meta:
        model = Payment
        fields = '__all__'
```


views.py (appname/views.py):

```python
from rest_framework import viewsets

from .models import Payment
from .serializers import PaymentSerializer


class PaymentViewSet(viewsets.ModelViewSet):
    queryset = Payment.objects.all()
    serializer_class = PaymentSerializer
```


urls.py (appname/urls.py):

```python
from django.urls import path, include
from rest_framework.routers import DefaultRouter

from .views import PaymentViewSet

router = DefaultRouter()
router.register(r'payments', PaymentViewSet)

urlpatterns = [
    path('', include(router.urls)),
]
```


projectname/urls.py is unchanged from the first walkthrough (the admin route plus `path('api/', include('appname.urls'))`).


3. Migrate Database and Create Superuser

Run the same `makemigrations`, `migrate`, `createsuperuser`, and `runserver` commands as in the first walkthrough.


4. Write Test Cases with Mocking and Faking


tests.py (appname/tests.py):

```python
from unittest.mock import patch

from django.urls import reverse
from rest_framework import status
from rest_framework.response import Response
from rest_framework.test import APITestCase

from .models import Payment
from .serializers import PaymentSerializer


class PaymentTests(APITestCase):

    def setUp(self):
        self.payment_data = {
            'card_number': '4111111111111111',
            'card_holder': 'John Doe',
            'expiration_date': '12/25',
            'amount': '100.00',
            'status': 'Pending',
        }
        self.payment = Payment.objects.create(**self.payment_data)

    @patch('appname.views.PaymentViewSet.create')
    def test_create_payment_with_mock(self, mock_create):
        # The patched handler must return a DRF Response, not a model instance.
        mock_create.return_value = Response(
            PaymentSerializer(self.payment).data, status=status.HTTP_201_CREATED
        )
        url = reverse('payment-list')
        response = self.client.post(url, self.payment_data, format='json')
        self.assertEqual(response.status_code, status.HTTP_201_CREATED)
        self.assertEqual(response.data['card_number'], self.payment_data['card_number'])

    @patch('appname.views.PaymentViewSet.perform_create')
    def test_create_payment_fake_response(self, mock_perform_create):
        # Fake the save step so every "processed" payment is marked Success.
        def fake_perform_create(serializer):
            serializer.save(status='Success')

        mock_perform_create.side_effect = fake_perform_create
        url = reverse('payment-list')
        response = self.client.post(url, self.payment_data, format='json')
        self.assertEqual(response.status_code, status.HTTP_201_CREATED)
        self.assertEqual(response.data['status'], 'Success')

    def test_get_payments(self):
        url = reverse('payment-list')
        response = self.client.get(url, format='json')
        payments = Payment.objects.all()
        serializer = PaymentSerializer(payments, many=True)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data, serializer.data)

    @patch('appname.views.PaymentViewSet.retrieve')
    def test_get_payment_with_mock(self, mock_retrieve):
        mock_retrieve.return_value = Response(
            PaymentSerializer(self.payment).data, status=status.HTTP_200_OK
        )
        url = reverse('payment-detail', kwargs={'pk': self.payment.id})
        response = self.client.get(url, format='json')
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data['card_number'], self.payment_data['card_number'])

    @patch('appname.views.PaymentViewSet.update')
    def test_update_payment_with_mock(self, mock_update):
        updated_data = self.payment_data.copy()
        updated_data['status'] = 'Completed'
        mock_update.return_value = Response(updated_data, status=status.HTTP_200_OK)
        url = reverse('payment-detail', kwargs={'pk': self.payment.id})
        response = self.client.put(url, updated_data, format='json')
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data['status'], 'Completed')

    @patch('appname.views.PaymentViewSet.destroy')
    def test_delete_payment_with_mock(self, mock_destroy):
        mock_destroy.return_value = Response(status=status.HTTP_204_NO_CONTENT)
        url = reverse('payment-detail', kwargs={'pk': self.payment.id})
        response = self.client.delete(url, format='json')
        self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
        # destroy() was mocked out, so the row is still in the database.
        self.assertEqual(Payment.objects.count(), 1)
```


5. Run Tests


```sh
python manage.py test
```


This setup uses `unittest.mock.patch` to mock the behavior of various DRF viewset methods, letting you simulate different responses without exercising the real view logic or calling external services.
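In a real credit card flow, the more useful target for `patch` is usually the gateway client rather than the viewset itself. A hedged sketch, assuming a hypothetical `appname/gateway.py` module with a `charge(card_number, amount)` function that `PaymentViewSet.perform_create()` calls before saving:

```python
# Hypothetical: appname/gateway.py defines charge(card_number, amount) -> str,
# and PaymentViewSet.perform_create() calls it to decide the payment status.
from unittest.mock import patch

from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase


class GatewayMockTests(APITestCase):
    @patch('appname.gateway.charge', return_value='Success')
    def test_create_payment_charges_card(self, mock_charge):
        data = {
            'card_number': '4111111111111111',
            'card_holder': 'John Doe',
            'expiration_date': '12/25',
            'amount': '100.00',
            'status': 'Pending',
        }
        response = self.client.post(reverse('payment-list'), data, format='json')
        self.assertEqual(response.status_code, status.HTTP_201_CREATED)
        mock_charge.assert_called_once()  # no real network call was made
```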


How to Test a Microservices Application

 


Debugging and testing microservices applications can be challenging due to their distributed nature. Here are some strategies to help you debug and test microservices effectively:

Debugging Microservices:

1. Centralized Logging:
   - Implement centralized logging using tools like ELK (Elasticsearch, Logstash, Kibana) or centralized logging services. This allows you to trace logs across multiple services.

2. Distributed Tracing:
   - Use distributed tracing tools like Jaeger or Zipkin. They help track requests as they travel through various microservices, providing insights into latency and errors.

3. Service Mesh:
   - Consider using a service mesh like Istio or Linkerd. Service meshes provide observability features, such as traffic monitoring, security, and telemetry.

4. Container Orchestration Tools:
   - Leverage container orchestration tools like Kubernetes to manage and monitor microservices. Kubernetes provides features for inspecting running containers, logs, and more.

5. API Gateway Monitoring:
   - Monitor and debug at the API gateway level. Many microservices architectures use an API gateway for managing external requests.

6. Unit Testing:
   - Write unit tests for each microservice independently. Mock external dependencies and services to isolate the microservice being tested.

7. Integration Testing:
   - Conduct integration tests to ensure that microservices interact correctly. Use tools like Docker Compose or Kubernetes for setting up a test environment.

8. Chaos Engineering:
   - Implement chaos engineering to proactively test the system's resilience to failures. Introduce controlled failures to observe how the system reacts.

Testing Microservices:

1. Containerization:
   - Use containerization (e.g., Docker) to package each microservice and its dependencies. This ensures consistent deployment across different environments.

2. Automated Testing:
   - Implement automated testing for each microservice, including unit tests, integration tests, and end-to-end tests. CI/CD pipelines can help automate the testing process.

3. Contract Testing:
   - Perform contract testing to ensure that the interfaces between microservices remain stable. Tools like Pact or Spring Cloud Contract can assist in this.

4. Mocking External Dependencies:
   - Mock external dependencies during testing to isolate microservices and focus on their specific functionality (a short sketch follows after this list).

5. Data Management:
   - Carefully manage test data. Consider using tools like TestContainers to spin up temporary databases for integration testing.

6. Performance Testing:
   - Conduct performance testing to evaluate the scalability and responsiveness of each microservice under different loads.

7. Security Testing:
   - Perform security testing, including vulnerability assessments and penetration testing, to identify and fix potential security issues.

8. Continuous Monitoring:
   - Implement continuous monitoring to keep track of microservices' health and performance in production.

Remember that a combination of these strategies is often necessary to ensure the reliability and stability of a microservices architecture. Regularly review and update your testing and debugging approaches as the application evolves.
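As a concrete illustration of mocking external dependencies (point 4 above), here is a minimal sketch that stubs out an HTTP call so a service's unit test never leaves the process. The `appname.pricing` module and its `get_price()` function are hypothetical:

```python
# Hypothetical module under test (appname/pricing.py):
#
#     import requests
#     def get_price(item_id):
#         resp = requests.get(f"https://pricing.example.com/items/{item_id}")
#         return resp.json()["price"]
#
import unittest
from unittest.mock import patch

from appname import pricing


class PricingTests(unittest.TestCase):
    @patch('appname.pricing.requests.get')
    def test_get_price_without_network(self, mock_get):
        mock_get.return_value.json.return_value = {'price': 42.0}
        self.assertEqual(pricing.get_price(1), 42.0)
        mock_get.assert_called_once()  # the remote service was never contacted
```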

The choice of testing tools depends on the specific requirements of your project, the programming languages used, and the types of tests you want to conduct. Here are some popular and relatively easy-to-use testing tools across various categories:

Unit Testing:

1. JUnit (Java):

   - Widely used for testing Java applications.

   - Simple annotations for writing test cases.

2. pytest (Python):

   - Python testing framework with concise syntax.

   - Supports test discovery and fixtures.

3. Mocha (JavaScript - Node.js):

   - Popular for testing Node.js applications.

   - Integrates well with asynchronous code.


Integration Testing:

4. TestNG (Java):

   - Inspired by JUnit, with a focus on flexible test configuration.

   - Supports parallel test execution.

5. pytest (Python):

   - Not only for unit tests but also supports integration testing.

   - Easy fixture setup for test dependencies.

6. Jest (JavaScript):

   - Popular for testing JavaScript applications.

   - Built-in test runner with code coverage support.


End-to-End Testing:

7. Selenium WebDriver:

   - Cross-browser automation tool for web applications.

   - Supports various programming languages.

8. Cypress:

   - JavaScript-based end-to-end testing tool.

   - Provides fast and reliable testing for web applications.


API Testing:

9. Postman:

   - User-friendly interface for API testing.

   - Supports automated testing and scripting.

10. RestAssured (Java):

    - Java library for testing RESTful APIs.

    - Integrates well with popular Java testing frameworks.


Performance Testing:

11. Apache JMeter:

    - Open-source tool for performance testing.

    - GUI-based and supports scripting for complex scenarios.

12. Locust:

    - Python-based tool for load testing.

    - Supports distributed testing and easy script creation.
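
Since Locust scripts are plain Python, a minimal load-test sketch looks like this (the `/api/items/` endpoint reuses the DRF example from earlier; adjust `--host` to your deployment):

```python
# locustfile.py -- run with: locust -f locustfile.py --host http://localhost:8000
from locust import HttpUser, between, task


class ApiUser(HttpUser):
    wait_time = between(1, 3)  # simulated users pause 1-3 s between tasks

    @task
    def list_items(self):
        self.client.get("/api/items/")  # endpoint from the DRF example above
```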


Security Testing:

13. OWASP ZAP (Zed Attack Proxy):

    - Open-source security testing tool.

    - Automated scanners and various tools for finding vulnerabilities.

14. Burp Suite:

    - Comprehensive toolkit for web application security testing.

    - Includes various tools for different aspects of security testing.


Continuous Integration/Continuous Deployment (CI/CD):

15. Jenkins:

    - Widely used for building, testing, and deploying code.

    - Extensive plugin support.

16. Travis CI:

    - Cloud-based CI/CD service with easy integration.

    - Supports GitHub repositories.


Test Automation Frameworks:

17. Robot Framework:

    - Generic test automation framework.

    - Supports keyword-driven testing.

18. TestNG (Java):

    - Not just for unit testing but also suitable for test automation.

    - Good for organizing and parallelizing tests.


Prometheus is an open-source systems monitoring and alerting toolkit. It is designed for reliability and scalability, making it a popular choice for monitoring containerized applications and microservices architectures. Here are some key features and components of Prometheus:

1. Data Model:
   - Prometheus uses a multi-dimensional data model with time series data identified by metric names and key-value pairs.
   - Metrics are collected at regular intervals and stored as time series.

2. Query Language:
   - PromQL (Prometheus Query Language) allows users to query and aggregate metrics data for analysis and visualization.
   - Supports various mathematical and statistical operations.

3. Scraping:
   - Prometheus uses a pull-based model for collecting metrics from monitored services.
   - Targets (services or endpoints) expose a /metrics endpoint, and Prometheus scrapes this endpoint at configured intervals.

4. Alerting:
   - Prometheus includes a powerful alerting system that can trigger alerts based on defined rules.
   - Alert notifications can be sent to various channels like email, Slack, or others.

5. Service Discovery:
   - Supports service discovery mechanisms, including static configuration files, DNS-based discovery, and integration with container orchestration tools like Kubernetes.

6. Storage and Retention:
   - Metrics data is stored locally in a time series database.
   - Retention policies can be configured to control how long data is retained.

7. Exporters:
   - Prometheus exporters are small services that collect metrics from third-party systems and expose them in a format Prometheus can scrape.
   - Exporters exist for various systems, databases, and applications.

8. Grafana Integration:
   - Often used in conjunction with Grafana for visualization and dashboard creation.
   - Grafana can query Prometheus and display metrics in interactive dashboards.

9. Alertmanager:
   - A separate component responsible for handling alerts sent by Prometheus.
   - Allows for additional routing, silencing, and inhibition of alerts.

10. Community and Ecosystem:
    - Prometheus has a vibrant and active community.
    - Extensive ecosystem with third-party integrations, exporters, and client libraries in various programming languages.

Prometheus is well-suited for monitoring dynamic, containerized environments and is a popular choice in cloud-native and DevOps ecosystems. Its flexibility, scalability, and active community make it a powerful tool for observability and monitoring.
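
Points 3 and 7 above are easy to see in code. A minimal sketch of an instrumented Python process using the official `prometheus_client` library (`pip install prometheus-client`); the metric names here are made up for illustration:

```python
# Expose a /metrics endpoint that Prometheus can scrape (pull model).
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical application metrics.
REQUESTS = Counter('app_requests_total', 'Total requests handled')
LATENCY = Histogram('app_request_latency_seconds', 'Request latency in seconds')

if __name__ == '__main__':
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        with LATENCY.time():                  # observe how long the "work" takes
            time.sleep(random.random() / 10)  # stand-in for real work
        REQUESTS.inc()
```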


Gini Index & Information Gain in Machine Learning


What is the Gini index?

The Gini index is a measure of impurity in a set of data. It is calculated as one minus the sum of the squared probabilities of each class. A lower Gini index indicates a purer set of data.

What is information gain?

Information gain is a measure of how much information is gained by splitting a set of data on a particular feature. It is calculated by comparing the entropy of the original set to the size-weighted entropy of the child sets produced by the split. A higher information gain indicates that the feature is more effective at splitting the data.

What is impurity?

Impurity is a measure of how mixed up the classes are in a set of data. A more impure set of data will have a higher Gini index.

How are Gini index and information gain related?

Gini index and information gain are both impurity-based split criteria, but they are calculated differently. Gini index is one minus the sum of the squared probabilities of each class, while information gain is the reduction in entropy from the original set to the child sets produced by a split.

When should you use Gini index and when should you use information gain?

Gini index and information gain can usually be used interchangeably and tend to pick similar splits, but there are cases where one may be preferred over the other. Gini index is typically preferred when the classes are balanced (it is also slightly cheaper to compute, since it avoids logarithms), while information gain is typically preferred when the classes are imbalanced.

How do you calculate the Gini index for a decision tree?

The Gini index of a split in a decision tree is calculated as the weighted sum of the Gini indices of the child nodes, where each child is weighted by the fraction of samples it receives. The Gini index of a child node is one minus the sum of the squared probabilities of each class in that node.
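
A small sketch of that computation (the churn labels are made up for illustration):

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class probabilities."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def split_gini(children):
    """Size-weighted Gini impurity of the child nodes of a split."""
    total = sum(len(child) for child in children)
    return sum(len(child) / total * gini(child) for child in children)

# Two candidate splits of the same four samples:
mixed = [['churn', 'stay'], ['churn', 'stay']]  # each child is 50/50 mixed
pure = [['churn', 'churn'], ['stay', 'stay']]   # each child is pure
print(split_gini(mixed))  # 0.5 -> worse
print(split_gini(pure))   # 0.0 -> better (lower means purer children)
```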

How do you calculate the information gain for a decision tree?

The information gain for a decision tree split is calculated by comparing the entropy of the original set of data to the size-weighted entropy of the child sets. The entropy of a set of data is the negative sum, over the classes present, of each class's probability multiplied by the log (base 2) of that probability.
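
The same idea in code, reusing made-up churn labels:

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy: -sum(p * log2(p)) over the classes present."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, children):
    """Parent entropy minus the size-weighted entropy of the children."""
    n = len(parent)
    return entropy(parent) - sum(len(ch) / n * entropy(ch) for ch in children)

parent = ['churn', 'churn', 'stay', 'stay']
print(entropy(parent))  # 1.0 (maximum uncertainty for two balanced classes)
print(information_gain(parent, [['churn', 'churn'], ['stay', 'stay']]))  # 1.0
```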

What are the advantages and disadvantages of Gini index and information gain?

The advantages of Gini index include:

  • It is simple to calculate.
  • It is interpretable.
  • It is robust to overfitting.

The disadvantages of Gini index include:

  • It is not as effective as information gain when the classes are imbalanced.
  • It can be sensitive to noise.

The advantages of information gain include:

  • It is more effective than Gini index when the classes are imbalanced.
  • It is less sensitive to noise.

The disadvantages of information gain include:

  • It is more complex to calculate.
  • It is less interpretable.

Can you give me an example of how Gini index and information gain are used in machine learning?

Gini index and information gain are used in machine learning algorithms such as decision trees and random forests. These algorithms use these measures to decide how to split the data into smaller and smaller subsets. The goal is to create subsets that are as pure as possible, meaning that they contain mostly instances of the same class.

Given a decision tree, explain how you would use Gini index to choose the best split.

To use Gini index to choose the best split in a decision tree, you would calculate, for each candidate feature, the weighted Gini index of the child nodes that splitting on it would produce. The feature whose split yields the lowest weighted Gini index is the best choice.

For example, let's say we have a decision tree that is trying to predict whether a customer will churn. The tree has two features: age and income. The weighted Gini index for splitting on age is 0.4 and for splitting on income is 0.2. Therefore, the best choice for the split is income.

Given a set of data, explain how you would use information gain to choose the best feature to split the data on.

To use information gain to choose the best feature to split a set of data, you would start by calculating the information gain for each of the features. The feature with the highest information gain is the best choice for the split.

For example, let's say we have a set of data about customers who have churned. The features in the data set are age, income, and location. The information gain for age is 0.2, the information gain for income is 0.4, and the information gain for location is 0.1. Therefore, the best choice for the split is income.

What are some of the challenges of using Gini index and information gain?

One challenge of using Gini index and information gain is that they can be sensitive to noise. This means that they can be fooled by small changes in the data.

Another challenge is that they can be computationally expensive to calculate. This is especially true for large datasets.

How can you address the challenges of using Gini index and information gain?

There are a few ways to address the challenges of using Gini index and information gain. One way is to use a technique called cross-validation. Cross-validation is a way of evaluating the performance of a machine learning model on unseen data. By using cross-validation, you can get a better idea of how well the model will perform on new data.

Another way to address the challenges of using Gini index and information gain is to use a technique called regularization. Regularization is a way of preventing a machine learning model from overfitting the training data. By using regularization, you can make the model more robust to noise and less likely to be fooled by small changes in the data.

Entropy is a measure of uncertainty or randomness in a system. It is often used in machine learning to measure the impurity of a data set. A high-entropy data set has a lot of uncertainty, while a low-entropy data set has a lot of certainty.

In information theory, entropy is defined as the expected value of the negative logarithm of the probabilities of possible events. For example, if there is a 50% chance of rain and a 50% chance of sunshine, then the entropy of the weather forecast (using log base 2) is:

H = -(0.5 * log2(0.5) + 0.5 * log2(0.5)) = 1 bit

The entropy of a data set can be used to measure how mixed its class labels are. A data set with high entropy has very mixed labels, while a data set with low entropy is dominated by a single class.

Entropy is used in machine learning algorithms such as decision trees and random forests. These algorithms use entropy to decide how to split the data into smaller and smaller subsets. The goal is to create subsets that are as pure as possible, meaning that they contain mostly instances of the same class.

Here are some of the applications of entropy in machine learning:

  • Decision trees: Entropy is used in decision trees to decide which feature to split the data on. The split that reduces entropy the most (that is, the one with the highest information gain) is the best choice.
  • Random forests: Each tree in a random forest applies the same entropy-based (or Gini-based) splitting criterion, typically over a random subset of features at each node.
  • Naive Bayes classifiers: These work directly with class probabilities; the class with the highest posterior probability is the predicted class. Cross-entropy (log loss) is a common way to evaluate how well such predicted probabilities match the true labels.