Please use this identifier to cite or link to this item: http://localhost/xmlui/handle/1/132
Full metadata record
dc.contributor.author: Rai, Chitranjan Kumar
dc.date.accessioned: 2026-03-27T11:49:29Z
dc.date.available: 2026-03-27T11:49:29Z
dc.date.issued: 2024
dc.identifier.uri: http://localhost/xmlui/handle/1/132
dc.description.abstract [en_US]: Agricultural goods serve as the primary economic foundation of developing nations. Diseases, however, significantly reduce plant output, causing substantial financial losses and threatening food security. The conventional approach relies on visual inspection of plants by specialists and farmers, and early detection of disease remains a formidable challenge, particularly in regions where farming depends heavily on traditional practices and manual monitoring. Early detection means identifying the initial signs of pest activity, yet infected plants often exhibit subtle patterns and marks that are difficult to see with the naked eye, and the variability of diseases across plant species and development stages complicates the task further. Although many large-scale farms in developed countries use automation and controlled environments to manage pests and ensure high production rates, this is not universally applicable, so constant monitoring, which can be costly and resource-intensive, is often necessary. Moreover, many farmers lack knowledge about unusual diseases, and rural regions suffer from a scarcity of specialists, forcing residents to travel long distances in search of appropriate remedies.

Recent technological progress has opened up the possibility of identifying and categorizing diseases automatically with computer-aided methods. Several methods and systems have been created to help resolve or mitigate plant diseases, and analysis tools and sensors are increasingly being built into smart systems for diagnosing them. Many disease detection systems use image analysis to identify early signs of disease and abnormality in plants, fruits, and other vegetative products.

This study pioneers the use of digital image processing and deep learning algorithms to detect and estimate the eaten areas of mixed plant leaves, where "mixed plants" refers to leaves from different types of plants. It also identifies and categorizes diseases in three major cash crops: Cotton, Maize, and Rice. A state-of-the-art Wi-Fi-enabled plant vision network system has been developed to collect high-resolution images of maize plants along with other environmental factors; these images are analysed using cutting-edge deep-learning models, and the results are presented in an interactive graphical user interface (GUI) that makes the system accessible and user-friendly.

The primary purpose of this research is to create a Virtual Instrumentation (VI) system for detecting and estimating plant leaf-eaten areas. Detecting eaten leaves can reveal pest activity early, allowing timely, targeted pest management before the damage becomes widespread, thereby reducing crop loss and improving agricultural productivity. The research used a self-created dataset of leaves with and without holes, captured under both controlled and in-field conditions. Several image thresholding techniques were used to segment pest-eaten and healthy areas in leaf images, with Otsu thresholding giving the best results on the offline dataset: an accuracy, sensitivity, and specificity of 98.20%, 99.24%, and 94.34%, respectively. On the online image dataset, the same VI achieved an accuracy, sensitivity, and specificity of 97.23%, 97.25%, and 97.21%, respectively, using cluster-based thresholding.
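To make the thresholding step concrete, the following is a minimal Python sketch of Otsu-based hole segmentation, assuming OpenCV, a single leaf photographed against a bright uniform background, and an illustrative function name (eaten_area_fraction); it approximates the idea behind the VI, which itself was built in LabVIEW rather than Python.

    import cv2
    import numpy as np

    def eaten_area_fraction(image_path: str) -> float:
        """Estimate the fraction of a leaf that has been eaten.

        Sketch only: assumes one leaf against a bright, uniform
        background, so Otsu cleanly separates leaf from background
        and holes show up as background pixels inside the outline.
        """
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        blur = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress sensor noise

        # Otsu picks the global threshold that minimises intra-class
        # variance; BINARY_INV makes the darker leaf foreground white.
        _, leaf = cv2.threshold(blur, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

        # Filling the outer contour recovers the intact silhouette;
        # silhouette minus leaf mask leaves only the eaten holes.
        contours, _ = cv2.findContours(leaf, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        silhouette = np.zeros_like(leaf)
        cv2.drawContours(silhouette, contours, -1, 255, cv2.FILLED)

        holes = cv2.subtract(silhouette, leaf)
        return holes.sum() / max(silhouette.sum(), 1)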
Moreover, three distinct datasets comprising images of cash crops (Cotton, Maize, and Rice) were employed to facilitate the detection and classification of diseases. These crops are particularly susceptible to pest infestations, which can significantly reduce their yield and quality; by focusing on them, this study addresses a major concern in agriculture and provides solutions that benefit many farmers and consumers.

For the cotton image dataset, two distinct models were devised: an Improved AlexNet and an ensemble deep transfer learning model. The Improved AlexNet, trained with an 8:1:1 data split and a max-max-max-max pooling layer configuration over 500 epochs, demonstrated a peak accuracy of 97.98%, surpassing the original AlexNet. The ensemble model used a bagging approach over five transfer learning models (InceptionV3, Inception-ResNetV2, VGG16, MobileNet, and Xception) and attained an accuracy of 99.48% on the binary dataset and 98.52% on the multi-class dataset.

Deep learning-based Attention U-Net and Attention ResUNet models were created from scratch to extract the region of interest from diseased plants and leaves. The Attention U-Net model, trained and tested on the maize image dataset, demonstrated a disease segmentation accuracy of 98.97%; the Attention ResUNet model, trained and tested on the rice image dataset, showed a segmentation accuracy of 94.11%.

To assess the efficacy of the proposed models, a vision network system was developed to capture real-time images and perform spatial-temporal analysis of plant health. The network uses an embedded controller (NI myRIO-1900) as a sensor node operating over Wi-Fi. Each node comprises a USB camera and environmental sensors installed in a maize field to acquire and transmit real-time images from different regions of the field. Every vision sensor node was powered by a 12 V rechargeable power bank linked to a solar panel, which either recharged the node's battery or powered it directly; surplus solar energy was stored in the batteries for use during periods of limited sunlight. Two sensor nodes were strategically positioned across the maize field to capture images and record environmental characteristics, including temperature, humidity, light intensity, battery voltage, and battery status. A graphical user interface created in LabVIEW acquires the image data and environmental parameters; the collected data is preprocessed and fed into the deep-learning models for testing, and the findings, including the affected leaf area and the environmental characteristics of each location, are displayed on the GUI.

In the future, this study may be expanded by deploying more sensor nodes across different agricultural fields to enhance plant-health analysis, and LoRa-based (from "long range") wireless technology will be studied to extend the data-transmission range.
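As an illustration of the bagging ensemble described above, here is a minimal Keras sketch. It assumes TensorFlow/Keras, integer class labels, and 224x224 RGB inputs already scaled appropriately; the frozen backbones, single-Dense head, and training schedule are simplifying placeholders, not the thesis's exact configuration.

    import numpy as np
    import tensorflow as tf

    BACKBONES = [
        tf.keras.applications.InceptionV3,
        tf.keras.applications.InceptionResNetV2,
        tf.keras.applications.VGG16,
        tf.keras.applications.MobileNet,
        tf.keras.applications.Xception,
    ]

    def build_member(backbone_cls, num_classes, input_shape=(224, 224, 3)):
        """One ensemble member: frozen ImageNet backbone + softmax head."""
        base = backbone_cls(weights="imagenet", include_top=False,
                            input_shape=input_shape, pooling="avg")
        base.trainable = False  # transfer learning: reuse ImageNet features
        model = tf.keras.Sequential([
            base,
            tf.keras.layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    def train_bagged_ensemble(x, y, num_classes, epochs=10):
        """Bagging: each member trains on a bootstrap resample of the data."""
        rng = np.random.default_rng(0)
        members = []
        for cls in BACKBONES:
            idx = rng.integers(0, len(x), size=len(x))  # sample w/ replacement
            member = build_member(cls, num_classes)
            member.fit(x[idx], y[idx], epochs=epochs, verbose=0)
            members.append(member)
        return members

    def ensemble_predict(members, x):
        """Soft voting: average the members' class probabilities."""
        probs = np.mean([m.predict(x, verbose=0) for m in members], axis=0)
        return probs.argmax(axis=1)

In practice each backbone would be paired with its matching tf.keras.applications preprocess_input, and the members could be fine-tuned rather than frozen; both are omitted here for brevity.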
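The distinguishing component of the Attention U-Net is its additive attention gate, which uses a coarser decoder signal to weight skip-connection features before they are concatenated back in. The sketch below is a simplified Keras variant (1x1 projections, bilinear upsampling of the gating signal); it follows the general published formulation of the attention gate rather than the thesis's exact layer sizes.

    import tensorflow as tf
    from tensorflow.keras import layers

    def attention_gate(x, g, inter_channels):
        """Additive attention gate: g (decoder, coarse) gates x (skip)."""
        theta_x = layers.Conv2D(inter_channels, 1)(x)  # project skip features
        phi_g = layers.Conv2D(inter_channels, 1)(g)    # project gating signal
        # g comes from one level deeper, so upsample it to match x
        phi_g = layers.UpSampling2D(2, interpolation="bilinear")(phi_g)
        f = layers.Activation("relu")(layers.Add()([theta_x, phi_g]))
        psi = layers.Conv2D(1, 1, activation="sigmoid")(f)  # per-pixel weight
        return layers.Multiply()([x, psi])  # suppress irrelevant regions

    # Example shapes: skip features at 64x64, gating signal at 32x32.
    x = tf.keras.Input((64, 64, 64))
    g = tf.keras.Input((32, 32, 128))
    gated = attention_gate(x, g, inter_channels=32)  # -> (None, 64, 64, 64)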
dc.subject [en_US]: Department of Instrumentation & Control Engineering
dc.title [en_US]: Plant Health Monitoring and Analysis Using Image Processing and Vision Network
dc.type [en_US]: Thesis
Appears in Collections: PHD - Thesis

Files in This Item:
File: PhD Thesis Chitranjan Kumar Rai Merged.pdf
Size: 57.49 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.