Show simple item record

dc.contributor.authorOchago, Vincent M.
dc.contributor.authorWambugu, Geoffrey M.
dc.contributor.authorNdia, John G.
dc.date.accessioned2022-03-18T07:33:37Z
dc.date.available2022-03-18T07:33:37Z
dc.date.issued2022-03
dc.identifier.citationInternational Journal of Formal Sciences: Current and Future Research Trends (IJFSCFRT) (2022) Volume 13, No 1, pp 60-73en_US
dc.descriptionUsing captured images of crops to tell whether or not they are diseased through image classification with a machine learning algorithm, and, if they are diseased, having the algorithm identify the particular disease affecting the plant, is the solution to this problem [1]. The farmer can then purchase the right treatment for their plants. In this research, features are extracted from the images using the ORB, HOG, and KAZE methods; once extracted, they are passed to a machine learning image classification algorithm that identifies the particular maize disease affecting the crop [2]. A comparison of the three methods showed that the HOG feature extraction method performed best with the image classification algorithms, hence the researcher decided to work with HOG as the feature extraction method. The HOG feature descriptor extracts key points from images and discards information that is not useful, which amounts to dimensionality reduction [3]. These key points distinguish one image from another, since they are unique to each image. The feature descriptor converts an image to a feature vector (an array), and this feature vector is the input to the classification algorithms. Before the descriptor was computed, the image window was resized to an aspect ratio of 1:2, typically 64 × 128; this step is known as image preprocessing [3]. The main reason for resizing the image to 64 × 128 is that feature extraction divides the image into patches of 8 × 8 and 16 × 16 pixels. The histogram of oriented gradients was calculated by first computing the vertical and horizontal gradients, which was achieved by applying filters to the image. The gradient image removes a lot of unnecessary information, such as a coloured background, leaving only the shape and edges of the image. Other feature descriptors typically recognize only whether an element of an image is an edge; the chosen feature extraction method goes further and extracts the magnitude and direction of the edges, and so can provide the edge direction. Calculating the gradient means calculating the change in pixel values in the x and y directions. A patch was taken from the image and its gradient calculated, with a pixel matrix generated for each small patch [4]. For every pixel in the matrix, the researcher calculated the change in the x and y directions, denoted Gx and Gy respectively, giving two new matrices, one storing Gx and the other storing Gy. The step that followed was to find the magnitude and direction of all elements in the image, which was done by calculating the total gradient magnitude (T.G.M): T.G.M = √(Gx² + Gy²). The direction of each pixel was calculated as θ = arctan(Gy / Gx). Finally, a histogram was built for each patch using the magnitude and direction of its pixels [5]. The features extracted acted as the input to the proposed image classification model.en_US
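The paper does not publish its implementation, but the gradient and HOG steps described above can be illustrated with a minimal Python sketch, assuming scikit-image and NumPy are available. The file name maize_leaf.jpg is hypothetical, and the 64 × 128 window, 8 × 8 cells, and 16 × 16 (2 × 2-cell) blocks follow the description in the text.

import numpy as np
from skimage import color, io, transform
from skimage.feature import hog

# Load a leaf image (hypothetical path), convert to grayscale,
# and resize to a 1:2 window of 64 x 128 as described above.
img = color.rgb2gray(io.imread("maize_leaf.jpg"))
img = transform.resize(img, (128, 64))

# Horizontal and vertical gradients via simple [-1, 0, 1] filters.
gx = np.zeros_like(img)
gy = np.zeros_like(img)
gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
gy[1:-1, :] = img[2:, :] - img[:-2, :]

# Total gradient magnitude and direction per pixel:
# T.G.M = sqrt(Gx^2 + Gy^2), theta = arctan(Gy / Gx).
magnitude = np.sqrt(gx**2 + gy**2)
direction = np.degrees(np.arctan2(gy, gx))

# Full HOG descriptor over 8x8 cells grouped into 16x16 blocks;
# on a 64 x 128 window this yields the classic 3780-element vector.
features = hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

The resulting feature vector is what would be fed to the classifiers compared in the paper.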
dc.identifier.urihttps://ijfscfrtjournal.isrra.org/index.php/Formal_Sciences_Journal/article/view/625
dc.identifier.urihttp://hdl.handle.net/123456789/5546
dc.description.abstractAccuracy in image classification is the proportion of data points predicted correctly out of the total data points. Assessing accuracy is important because it compares the true classes of images with the classes assigned by the classification model. Accuracy is a challenge because classification models sometimes assign images to classes they do not belong to, producing a mismatch between the predicted and actual classes and a low accuracy score. There is therefore a need for a model that classifies the images with the highest possible accuracy. This paper presents image classification models, together with the feature extraction methods used, for classifying maize disease images. The researcher used an augmented maize leaf disease dataset obtained from the Kaggle website. Features are extracted from the maize disease images and passed to machine learning classification algorithms, which identify the likely disease based on the detected features. The images cover common rust, leaf spot, northern leaf blight, and healthy leaves. An evaluation of the feature extraction methods revealed that Histogram of Oriented Gradients performed best with the classifiers, compared to KAZE and Oriented FAST and Rotated BRIEF. The experimental results also indicated that the Artificial Neural Network model had the highest accuracy, 0.82, compared to Logistic Regression, K-Nearest Neighbors, Random Forest, Linear Support Vector Classifier, Decision Tree, and Support Vector Machine.en_US
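The model comparison summarized in the abstract can be sketched with scikit-learn, assuming HOG feature vectors and labels are already in hand. The random placeholder arrays below stand in for the paper's features, and the hyperparameters are illustrative, not taken from the paper; MLPClassifier plays the role of the Artificial Neural Network.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC, LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: 400 HOG vectors of length 3780, 4 classes
# (common rust, leaf spot, northern leaf blight, healthy).
rng = np.random.default_rng(0)
X = rng.random((400, 3780))
y = rng.integers(0, 4, 400)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Linear SVC": LinearSVC(),
    "Decision Tree": DecisionTreeClassifier(),
    "Support Vector Machine": SVC(),
    "Artificial Neural Network": MLPClassifier(max_iter=500),
}

# Fit each model and report accuracy on the held-out test split.
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))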
dc.language.isoenen_US
dc.subjectFeature Descriptor, Gradient Direction, Gradient Magnitude, Machine Learning, Cross-Validation, Overfitting, Artificial Neural Network, Support Vector Machineen_US
dc.titleComparative Analysis of Machine Learning Algorithms Accuracy for Maize Leaf Disease Identificationen_US
dc.typeArticleen_US

