Abstract:
A solar tracking system is a device or mechanism designed to orient solar panels, solar collectors, or other solar energy harvesting equipment so that they continuously face the sun as it moves across the sky. The primary purpose of a solar tracking system is to maximize the efficiency and energy output of solar power generation systems by ensuring that they receive the maximum amount of sunlight throughout the day.
There are two main types of solar tracking systems:
1. Single-Axis Tracking: These systems track the sun's movement along one axis, typically either the east-west axis (horizontal tracking) or the north-south axis (vertical tracking). Single-axis trackers adjust the tilt angle of solar panels or collectors to follow the sun's path from sunrise to sunset. Horizontal single-axis trackers are more common and cost-effective for residential and commercial installations.
2. Dual-Axis Tracking: Dual-axis tracking systems are more complex and can track the sun's movement in both the east-west and north-south directions. This allows solar panels or collectors to follow the sun's path more accurately throughout the day, maximizing energy production. Dual-axis trackers are often used in concentrated solar power (CSP) plants and large-scale solar installations.
Key components of a solar tracking system typically include sensors, motors, and a controller. Sensors detect the sun's position, and the controller calculates the optimal angle and direction for the solar panels or collectors. Motors then adjust the position of the solar equipment accordingly.
Advantages of solar tracking systems include increased energy production and efficiency compared to fixed solar installations. However, they are more complex and expensive to install and maintain. The choice between fixed and tracking systems depends on factors like location, available space, budget, and energy production goals.
Solar tracking systems are commonly used in solar power generation applications, including solar photovoltaic (PV) systems, solar thermal systems, and concentrated solar power (CSP) plants. They help harness the sun's energy more effectively, making solar power a more viable and efficient renewable energy source.
A novel approach to solar tracking, leveraging deep learning techniques, is under exploration and experimentation using TensorFlow, an open-source machine learning framework. TensorFlow introduces flexibility to the implementation process and extends development capabilities. It enables the deployment of neural networks across a wide range of devices, including embedded systems, mobile devices, and mini-computers. Moreover, TensorFlow supports various types of neural networks that can be fine-tuned and retrained for specific applications. Initial findings are promising, as the retrained networks accurately identify the Sun and target objects, enabling precise tracking of the Sun's apparent trajectory without additional information.
Introduction:
Solar tracking systems (STSs) are crucial for optimizing system efficiency and minimizing expenses. Traditional STSs have drawbacks, such as limited tracking range and susceptibility to environmental conditions.
Computer vision-based control addresses these drawbacks by using a strategically positioned camera to identify the Sun and the target area.
This versatile approach can be applied to various solar technologies, irrespective of the solar tracker type. The camera is positioned at 0° and aligned with the collector's optical axis; the midpoint between the intersections of the solar and target vectors with an image plane is then computed and used as input for the STS controller.
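A minimal sketch of this geometry, assuming the Sun and the target have already been located as pixel coordinates on the image plane (all names and values below are illustrative):
```python
import numpy as np

def tracking_error(sun_px, target_px, image_size):
    """Midpoint between the Sun and target points on the image plane,
    expressed as an offset from the image centre in pixels."""
    sun = np.asarray(sun_px, dtype=float)        # (x, y) of the detected Sun
    target = np.asarray(target_px, dtype=float)  # (x, y) of the target area
    midpoint = (sun + target) / 2.0
    centre = np.asarray(image_size, dtype=float) / 2.0   # (width, height) / 2
    return midpoint - centre   # the STS controller drives this offset to zero

# Example: Sun at (820, 310), target at (640, 500) on a 1280x720 frame
print(tracking_error((820, 310), (640, 500), (1280, 720)))
```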
Neural Network Models:
Several pretrained neural network models available in TensorFlow and Keras, e.g. ResNet50, AlexNet, and VGG16, have been considered for this work; even a classical OpenCV (cv2) pipeline is enough to find the brightest spot in an image. The pretrained models have been fine-tuned to identify the Sun, target, heliostats, and clouds. The choice of model affects factors such as speed and precision: "SSD MobileNet V1 quantized" stands out as the most accurate model, while "SSD MobileNet V1 0.75 depth" and VGG16 are the fastest. Further optimization of model configurations and training parameters may yield improved results.
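For comparison with the neural-network models, the classical OpenCV route can be sketched as follows: blur the frame and take the brightest region as the Sun candidate (the file name is illustrative; this method fails under heavy cloud or strong reflections):
```python
import cv2

def find_brightest_spot(image_path, blur_radius=41):
    """Locate the brightest region of an image as a rough proxy for the Sun."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Blurring first avoids locking onto a single noisy pixel.
    blurred = cv2.GaussianBlur(gray, (blur_radius, blur_radius), 0)
    _, max_val, _, max_loc = cv2.minMaxLoc(blurred)
    return max_loc, max_val   # (x, y) of the brightest spot and its intensity

# Usage (path is illustrative):
# (x, y), intensity = find_brightest_spot("frame.jpg")
```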
To run inference on mobile and edge devices, all Keras models were converted into TensorFlow Lite (TFLite) models. Note that a TFLite model may not behave exactly like the original Keras model because of the optimizations and quantization performed during conversion.
We also went one step further and compiled the models for the Edge TPU so they can run on Coral-type hardware.
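A minimal sketch of the Keras-to-TFLite conversion, with default post-training optimizations enabled; the model and file names are placeholders, and the Edge TPU compilation is a separate step performed with Google's `edgetpu_compiler` command-line tool:
```python
import tensorflow as tf

# Load the trained Keras model (path is a placeholder).
model = tf.keras.models.load_model("sun_detector.h5")

# Convert to TensorFlow Lite with default post-training optimizations
# (full integer quantization for the Edge TPU also needs a representative dataset).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("sun_detector.tflite", "wb") as f:
    f.write(tflite_model)
```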
Training:
After creating different TF 2.0 models for a series of detections, those models needed to be converted to run on edge devices such as the Raspberry Pi, where processing power and memory are very limited compared to the environment in which the models were trained, e.g. Google Colab Pro with high-end TPU and GPU processing power available.
Neural network training is an iterative process, and the RMSPropOptimizer provided by TensorFlow has been used in this study. Training is computationally intensive, and a GPU cluster with CUDA support was employed to reduce training time. The average training time was approximately 24 hours.
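As an illustration only, and not the exact pipeline used in this study, fine-tuning a pretrained backbone with the RMSprop optimizer in Keras might look like the following; the dataset paths, class count, and hyperparameters are placeholders:
```python
import tensorflow as tf

# Placeholder datasets of labelled sky images (e.g. Sun / cloud / target).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=(224, 224), batch_size=32)

# Fine-tune a pretrained backbone with a small classification head.
base = tf.keras.applications.MobileNet(
    include_top=False, pooling="avg", input_shape=(224, 224, 3))
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects [-1, 1]
    base,
    tf.keras.layers.Dense(3, activation="softmax"),     # Sun, cloud, target
])

model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```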
Validation:
Validation involves an independent image set with labeled object classes (Sun and cloud). Metrics such as mean average precision (mAP) and inference time are used to evaluate neural network performance. The results indicate that "SSD MobileNet V1 quantized" is the most accurate model, while "SSD MobileNet V1 0.75 depth" is the fastest. "Mask R-CNN Inception V2" offers both object detection and semantic segmentation capabilities.
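mAP is typically computed with the evaluation tooling of the detection framework, while the inference-time side of the comparison can be approximated with a simple timing loop such as this sketch (the model path is a placeholder):
```python
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="sun_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# A random tensor of the right shape and dtype is enough for a rough latency estimate.
dummy = np.random.random_sample(inp["shape"]).astype(inp["dtype"])

times = []
for _ in range(50):
    interpreter.set_tensor(inp["index"], dummy)
    start = time.perf_counter()
    interpreter.invoke()
    times.append(time.perf_counter() - start)

print(f"mean inference time: {1000 * sum(times) / len(times):.1f} ms")
```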
Implementation:
The computer vision-based STS approach has been implemented for both low-end embedded and mobile devices, in addition to common computers. A Raspberry Pi 3 Model B+ with a Pi camera was used for embedded devices. TensorFlow serves as the underlying machine learning framework.
Examples:
Images taken from Raspberry Pi and mobile devices demonstrate the successful detection of objects (Sun, target, and heliostats) and the computation of tracking errors. These examples illustrate the practical application of the approach.
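On the Raspberry Pi itself, a frame can be grabbed from the camera and passed through the converted TFLite model; the sketch below assumes a camera exposed through V4L2 and an SSD-style detection model, and the exact output-tensor layout depends on the model that was converted:
```python
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter   # pip install tflite-runtime

interpreter = Interpreter(model_path="sun_detector.tflite")   # placeholder path
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
height, width = inp["shape"][1], inp["shape"][2]

camera = cv2.VideoCapture(0)   # Pi camera exposed as /dev/video0
ok, frame = camera.read()
camera.release()

if ok:
    resized = cv2.resize(frame, (width, height))
    interpreter.set_tensor(inp["index"],
                           np.expand_dims(resized, 0).astype(inp["dtype"]))
    interpreter.invoke()
    # For SSD-style models the first output usually holds the bounding boxes.
    boxes = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
    print(boxes[0][:3])   # inspect the top detections
```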
Conclusion and Future Work:
The adoption of TensorFlow and machine learning has improved the speed and accuracy of the solar tracking approach. The flexibility of TensorFlow enables implementation on various devices. Ongoing work includes expanding the image datasets for training and validation, evaluating new neural networks, and optimizing implementations. Future work involves autonomous heliostat control using embedded devices to assess control accuracy and determine optimal image resolutions.
Building a solar tracker system with machine learning, a camera, Raspberry Pi/Arduino, motors, and GPS involves several steps. Here's a high-level plan:
1. Hardware Setup:
- Camera: Connect a compatible camera module (e.g., Raspberry Pi Camera) to the Raspberry Pi.
- Motors: Use motors or servo motors to control the movement of solar panels. Ensure they can be interfaced with Raspberry Pi/Arduino.
- Raspberry Pi/Arduino: Choose the appropriate board for processing and control.
2. Solar Detection with Machine Learning:
- Train a machine learning model to detect the sun in images. You can use a pre-trained model for object detection or train your own model using a dataset of images with and without the sun.
- Integrate the trained model into your application running on Raspberry Pi for real-time sun detection.
3. Motor Control:
- Write code to control the motors/servo motors based on the sun's position. Ensure that the solar panels are oriented towards the sun for maximum efficiency (a minimal servo sketch is shown after this list).
- Implement a feedback loop to continuously adjust the position based on the machine learning model's output.
4. GPS Integration:
- Integrate a GPS module to provide location information.
- Implement a fail-safe mechanism to check for cloudy weather using weather APIs or local sensors.
- If cloudy, park the solar panels horizontally or in a safe position.
5. Power Management:
- Implement efficient power management to ensure the system runs on solar power or a combination of solar and battery power.
6. User Interface (Optional):
- Develop a user interface for monitoring the solar tracker's status and making manual adjustments if needed.
7. Testing and Calibration:
- Test the system in various weather conditions to ensure robust performance.
- Calibrate the system to improve accuracy and responsiveness.
8. Safety Considerations:
- Implement safety features to protect the system in case of malfunctions or unexpected events.
9. Documentation:
- Document the system architecture, wiring diagrams, code, and calibration procedures.
10. Deployment:
- Install the solar tracker in a location with optimal sunlight exposure.
- Monitor and maintain the system regularly.
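As a companion to step 3 above, here is a minimal sketch of driving a pan servo from the sun's angular offset using the gpiozero library; the GPIO pin, angle limits, dead band, and step size are assumptions about the hardware, not a definitive implementation:
```python
from time import sleep
from gpiozero import AngularServo

# Pan servo on GPIO 17; limits depend on the actual servo and mounting.
pan_servo = AngularServo(17, min_angle=-90, max_angle=90)

def move_towards_sun(error_angle_deg, step=2.0):
    """Nudge the panel towards the Sun by a small step each control cycle.

    error_angle_deg is the Sun's angular offset from the camera axis
    (positive = Sun to the right of centre)."""
    if abs(error_angle_deg) < 1.0:        # dead band to avoid jitter
        return
    current = pan_servo.angle or 0.0
    target = current + (step if error_angle_deg > 0 else -step)
    pan_servo.angle = max(-90.0, min(90.0, target))
    sleep(0.5)                            # give the servo time to move
```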
Calculating the movement of the solar tracker based on the sun's position in the image involves determining the sun's angular deviation from the center of the image. Here's a simplified approach using basic geometry:
1. Identify Sun Position:
- Use your trained machine learning model to detect the sun's position in the image. Get the coordinates (x, y) of the detected sun.
2. Calculate Deviation from Center:
- Calculate the horizontal deviation from the center of the image:
```python
image_width = ... # Width of the image in pixels
center_x = image_width / 2
deviation = sun_x - center_x
```
3. Convert Deviation to Angle:
- Convert the deviation to an angle by considering the field of view of the camera. If the camera has a known field of view (FOV), you can use it to calculate the angle:
```python
fov = ... # Field of view in degrees
pixels_per_degree = image_width / fov
angle = deviation / pixels_per_degree
```
4. Adjust Motor Position:
- Use the calculated angle to adjust the position of the solar tracker's motors. The angle will determine in which direction and how much the solar panels need to move.
5. Feedback Loop:
- Implement a feedback loop to continuously adjust the motor position based on the changing sun position. This can be done by periodically capturing images, detecting the sun, and making real-time adjustments.
Here's a simple Python function to calculate the angle based on the sun's position:
```python
def calculate_angle(image_width, sun_x, fov):
    """Convert the sun's horizontal pixel position into an angle in degrees."""
    center_x = image_width / 2
    deviation = sun_x - center_x
    pixels_per_degree = image_width / fov
    angle = deviation / pixels_per_degree
    return angle
```
Remember to adapt the values of `image_width` (width of the image in pixels) and `fov` (field of view) based on your specific camera specifications.
This approach assumes a linear relationship between pixel deviation and angle. For precise tracking, you may need to consider more sophisticated methods and calibration based on your specific setup.
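Putting the pieces together, the feedback loop might look like the following sketch; `detect_sun_x` stands in for whichever detector is used (the trained model or the brightest-spot method) and `move_towards_sun` for the motor-control sketch shown earlier, both of which are hypothetical names here:
```python
import time
import cv2

FOV_DEG = 62.2   # horizontal field of view; 62.2° for the Pi Camera v2 (adjust to your camera)

def tracking_loop(camera_index=0, period_s=30):
    camera = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = camera.read()
            if ok:
                image_width = frame.shape[1]              # width in pixels
                sun_x = detect_sun_x(frame)               # hypothetical detector
                if sun_x is not None:
                    angle = calculate_angle(image_width, sun_x, FOV_DEG)
                    move_towards_sun(angle)               # motor-control sketch above
            time.sleep(period_s)                          # the Sun moves slowly
    finally:
        camera.release()
```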
You can find more code here: https://github.com/dhirajpatra/jupyter_notebooks/blob/main/DataScienceProjects/image_processing/find_the_bright_spot.ipynb