Results: 9

(searched for: doi:10.13176/11.302)
Published: 18 September 2021 by MDPI
Future Internet, Volume 13; https://doi.org/10.3390/fi13090239

Abstract:
Hindi is the official language of India and is used by a large population for several public services such as postal services, banking, the judiciary, and public surveys. Efficient management of these services needs language-based automation. The proposed model addresses the problem of handwritten Hindi character recognition using a machine learning approach. Pre-trained DCNN models, namely InceptionV3-Net, VGG19-Net, and ResNet50, were used to extract salient features from the character images. A novel fusion approach is adopted in the proposed work: the DCNN-based features are fused with handcrafted features obtained from the bi-orthogonal discrete wavelet transform. The feature size was reduced by Principal Component Analysis. The hybrid features were evaluated with two popular classifiers, the Multi-Layer Perceptron (MLP) and the Support Vector Machine (SVM). The recognition cost was reduced by 84.37%. The model achieved precision, recall, and F1-measure of 98.78%, 98.67%, and 98.69%, respectively, with an overall recognition accuracy of 98.73%.
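A minimal sketch of the fusion idea this abstract describes, not the authors' exact setup: deep features from one of the three named backbones (ResNet50 here) are concatenated with biorthogonal-wavelet features, reduced with PCA, and fed to an SVM. Image size (64x64 grayscale), the `bior1.3` wavelet, and the PCA dimension are assumptions.

```python
import numpy as np
import pywt
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def deep_features(images):
    """Pooled ResNet50 activations (one of the three pre-trained backbones named above)."""
    model = ResNet50(weights="imagenet", include_top=False,
                     pooling="avg", input_shape=images.shape[1:] + (3,))
    rgb = np.repeat(images[..., None], 3, axis=-1).astype("float32")  # gray -> 3 channels
    return model.predict(preprocess_input(rgb), verbose=0)

def wavelet_features(images, wavelet="bior1.3"):
    """Handcrafted features: approximation coefficients of a biorthogonal DWT."""
    feats = []
    for im in images:
        cA, _ = pywt.dwt2(im.astype("float32"), wavelet)
        feats.append(cA.ravel())
    return np.array(feats)

def fused_classifier(X, y):
    fused = np.hstack([deep_features(X), wavelet_features(X)])       # feature-level fusion
    clf = make_pipeline(PCA(n_components=128), SVC(kernel="rbf"))    # reduce, then classify
    clf.fit(fused, y)
    return clf
```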
V. Amrutha Raj, R. L. Jyothi, A. Anilkumar
2017 International Conference on Computing Methodologies and Communication (ICCMC) pp 790-794; https://doi.org/10.1109/iccmc.2017.8282574

Abstract:
Grantha script was used for writing sacred texts in the Sanskrit language. Grantha documents contain valuable information, but these historical document images suffer from noise present in the original documents due to degradation, faint ink strokes, unwanted impurities, background marks, bleed-through, aging of the palm leaves, and so on. The documents contain handwritten characters, and the script is now extinct. The motivation behind this research work is to present a novel recognition system for modern Grantha script characters and to confirm the link between the Malayalam and Grantha scripts. After pre-processing the input image, the universe of discourse is selected. Feature extraction plays a vital role in the proposed recognition process. The proposed method uses HOOSC (Histogram of Orientation Shape Context) feature extraction, which is new to character recognition but has been used in other domains, and an ANN (Artificial Neural Network) for classification. Feature extraction methods used for other languages that can also be applied to Grantha script, namely HOG (Histogram of Oriented Gradients), Gabor features, Zoning, and Invariant Moments, provide classification accuracies of 84%, 76.3%, 76%, and 52%, respectively. The recognized characters are mapped to corresponding Malayalam characters, and the proposed method provides an accuracy of about 96.5%.
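The HOOSC descriptor used in this paper is not a stock library routine; as a hedged stand-in, the sketch below builds the HOG baseline the authors compare against (reported at 84%) and feeds it to an ANN classifier. Image size (64x64 binarised characters) and the network shape are assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier

def hog_features(images):
    """HOG descriptors for binarised character images of shape (N, 64, 64)."""
    return np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for im in images])

def train_ann(X_img, y):
    """Train a simple feedforward ANN on the HOG baseline features."""
    ann = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500)
    ann.fit(hog_features(X_img), y)
    return ann
```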
Published: 1 January 2016
Procedia Computer Science, Volume 79, pp 337-343; https://doi.org/10.1016/j.procs.2016.03.044

Abstract:
Devanagari script is widely used in the Indian subcontinent for several major languages such as Hindi, Sanskrit, Marathi, and Nepali. Recognition of unconstrained (handwritten) Devanagari characters is complex due to the shapes of the constituent strokes. Hence, character recognition (CR) has been an active area of research and continues to be a challenging topic because of its diverse application environments. As the size of the vocabulary increases, the complexity of the algorithms also increases linearly due to the need for a larger search space. Devanagari script recognition systems using Zernike moments, fuzzy rules, and quadratic classifiers provide lower accuracy and efficiency. Classification methods based on learning from examples have been widely applied to character recognition since the 1990s and have brought significant improvements in recognition accuracy. In this paper, techniques such as particle swarm optimization and support vector machines are implemented and compared. An Android phone is used to capture the input character and MATLAB software to display the recognized Devanagari character. The connection between the Android device and MATLAB is established using PHP. The particle swarm optimization technique provides accuracy of up to 90%.
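The abstract pairs particle swarm optimization with an SVM but does not spell out the coupling; in the hedged sketch below, PSO searches the SVM's (C, gamma) hyperparameters by cross-validated accuracy. Swarm size, search bounds, and the inertia/acceleration constants are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(params, X, y):
    """Cross-validated accuracy of an RBF SVM; params are log10(C), log10(gamma)."""
    C, gamma = 10.0 ** params
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

def pso_svm(X, y, n_particles=10, n_iter=20, bounds=(-2.0, 3.0)):
    rng = np.random.default_rng(0)
    pos = rng.uniform(*bounds, size=(n_particles, 2))   # particle positions in log10 space
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_val.argmax()]
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, *bounds)
        vals = np.array([fitness(p, X, y) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()]
    return 10.0 ** gbest                                 # best (C, gamma) found
```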
Saniya Ansari, Udaysingh Sutar
2015 International Conference on Information Processing (ICIP) pp 11-15; https://doi.org/10.1109/infop.2015.7489342

Abstract:
Online handwritten character recognition has wide application in real-life environments; therefore such systems should be more accurate, efficient, and faster. A great deal of research is still ongoing on handwritten character recognition across different languages and scripts. Any handwritten character recognition system involves three main tasks: image segmentation, feature extraction, and classification. Feature extraction is a very essential step for online handwritten character recognition, as the success rate of a recognition system often depends on a good feature extraction method. The feature extractor determines which properties of the preprocessed data are most significant and should be used in further stages. In this paper, different feature extraction methods related to the Devanagari script are discussed, and an efficient, optimized extraction method is proposed along with a comparative analysis. The accuracy of a recognition system depends largely on the feature extraction phase and on the type and size of the features. A hybrid, efficient, and optimized feature vector is used, combining geometrical features, regional features, distance-transform features, and gradient features; the feature vector length is 91. In existing approaches the time required for extracting geometrical features is very high; here the universe of discourse is used to speed up extraction. Practical analysis shows that the accuracy of the proposed feature vector set is improved compared with existing feature vectors.
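A hedged sketch of the kind of hybrid vector this abstract describes: the character is first cropped to its universe of discourse (tight bounding box), then distance-transform, gradient-direction, and zonal density features are concatenated. The exact 91-dimensional layout used by the authors is not reproduced; the zone grid and histogram bins are assumptions, and images are assumed larger than the zone grid.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def universe_of_discourse(binary_img):
    """Crop to the smallest box containing all foreground pixels."""
    ys, xs = np.nonzero(binary_img)
    return binary_img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def hybrid_features(binary_img, zones=4):
    img = universe_of_discourse(binary_img).astype(float)
    dt = distance_transform_edt(img)                        # distance-transform features
    gy, gx = np.gradient(img)
    grad_hist, _ = np.histogram(np.arctan2(gy, gx), bins=8, range=(-np.pi, np.pi),
                                weights=np.hypot(gx, gy))   # gradient-direction features
    h, w = img.shape
    zonal = [img[i * h // zones:(i + 1) * h // zones,
                 j * w // zones:(j + 1) * w // zones].mean()
             for i in range(zones) for j in range(zones)]   # regional (zoning) densities
    return np.concatenate([[dt.mean(), dt.max()], grad_hist, zonal])
```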
Gurpreet Singh, Chandan Jyoti Kumar, Rajneesh Rani
International Conference on Computing, Communication & Automation pp 1091-1095; https://doi.org/10.1109/ccaa.2015.7148568

Abstract:
The paper focuses on hybridizing multiple features with different classifiers for recognition of isolated handwritten Gurmukhi character images. Four types of features are tested: Histogram of Oriented Gradients (HOG), Distance Profile, Background Directional Distribution (BDD), and Zonal-Based Diagonal (ZBD). The HOG feature is computed from directional information obtained from the arctangent of the gradient. The Distance Profile is computed by counting pixels from the bounding box of the character image to the character edge in different directions. The BDD feature is computed from the directional distribution of background pixels relative to foreground pixels in eight directions. For the ZBD feature, the image is segmented into 100 equal zones, and a feature is calculated from the pixels of each zone by traversing its diagonals. Seven thousand isolated images of Gurmukhi characters were used in the experiments. A maximum recognition accuracy of 97.257% with 5-fold and 97.671% with 10-fold cross-validation is achieved by applying the hybrid features to an SVM classifier.
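A hedged sketch of the zonal-based diagonal (ZBD) feature plus the SVM cross-validation protocol mentioned in the abstract. The 10x10 = 100 zone grid follows the text; the diagonal-averaging detail is an assumption, and images are assumed to be binarised and larger than the zone grid.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def zbd_features(img, grid=10):
    """One value per zone: mean of the diagonal-wise averages inside the zone."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            zone = img[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            diags = [np.diag(zone, k).mean()                      # traverse every diagonal
                     for k in range(-zone.shape[0] + 1, zone.shape[1])]
            feats.append(np.mean(diags))
    return np.array(feats)

def evaluate(images, labels, folds=5):
    """k-fold cross-validated accuracy of an RBF SVM on the ZBD features."""
    X = np.array([zbd_features(im) for im in images])
    return cross_val_score(SVC(kernel="rbf"), X, labels, cv=folds).mean()
```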
Ankush Mittal
EURASIP Journal on Image and Video Processing, Volume 2014; https://doi.org/10.1186/1687-5281-2014-36

Abstract:
Stylistic text can be found on signboards, street and organization boards and logos, bulletin boards, announcements, advertisements, dangerous-goods plates, warning notices, etc. In stylistic text images, text-lines within an image may have different orientations, may be curved, or may not be parallel to each other. As a result, extraction and subsequent recognition of individual text-lines and words in such images is a difficult task. In this paper, we propose a novel scheme for straightening curved text-lines using the concepts of dilation, flood-fill, robust thinning, and B-spline curve fitting. In the proposed scheme, dilation is first applied to individual text-lines to cover the area within a certain boundary. Next, thinning is applied to obtain the path of the text; the path is approximated with a B-spline; the angle between the normal at each point on the curve and the vertical line is found; and finally each point of the text is visited and rotated by its corresponding angle. The proposed methodology is tested on a variety of text images containing text-lines in Devanagari, English, and Chinese scripts and is evaluated on the basis of visual perception and mean square error (MSE). MSE is calculated by applying line fitting to the input and output images. On the basis of the evaluation results obtained in our experiments, the proposed method is promising.
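A hedged sketch of the B-spline step of the straightening pipeline: fit a spline to the thinned text path, then for each sampled point compute the angle between the curve normal and the vertical axis, which is the local rotation the paper applies to straighten the line. Dilation, flood-fill, and thinning are assumed to have already produced the (x, y) path; the sampling density and smoothing factor are assumptions.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def local_rotation_angles(path_xy, n_samples=200, smoothing=5.0):
    """Return sampled curve points and the per-point rotation angle (radians)."""
    x, y = np.asarray(path_xy, dtype=float).T
    tck, _ = splprep([x, y], s=smoothing)        # parametric B-spline fit to the thinned path
    u = np.linspace(0.0, 1.0, n_samples)
    dx, dy = splev(u, tck, der=1)                # tangent along the curve
    # The normal is the tangent rotated by 90 degrees, so its angle to the
    # vertical equals the tangent's angle to the horizontal.
    angles = np.arctan2(dy, dx)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys]), angles
```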
Shruti R Kulkarni, Maryam Shojaei Baghini, Sanjeev R. Kulkarni
2013 Ninth International Conference on Natural Computation (ICNC) pp 194-199; https://doi.org/10.1109/icnc.2013.6817969

Abstract:
Spiking neural networks are recent models of artificial neural networks. These networks use biologically inspired neuron models as their basic computation units. This paper presents and compares a custom spiking neural network (SNN) with a conventional nearest neighbour classifier for handwritten character recognition. The classifiers are designed and simulated in 90 nm CMOS technology. The two algorithms are compared in terms of their success rates and their hardware requirements (based on area and power estimates). The classification performance of the SNN is also compared with that of a second-generation feedforward neural network on the same set of images. The robustness of the SNN is demonstrated by its ability to correctly classify 30 out of 32 noisy character images, compared with the nearest neighbour algorithm, which correctly classified only 20 of them. The feedforward neural network using the backpropagation algorithm correctly identified 29 out of 32 noisy images in MATLAB. In terms of hardware, the ASIC realizing the nearest neighbour classifier dissipates 1.2 mW and occupies an area of 380 μm × 380 μm, while the SNN dissipates 16.7 mW and occupies 1 mm × 1 mm. The higher area and power requirements of the SNN stem from its inherently parallel architecture. Earlier works focused on the realization of a single spiking neuron and its variants, whereas this work demonstrates an application using networks of these neurons and their suitability for custom realization.
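A hedged software sketch of the basic computation unit the paper realises in hardware: a leaky integrate-and-fire neuron driven by weighted input spikes. The time constant, threshold, and input statistics below are illustrative assumptions, not the 90 nm circuit values.

```python
import numpy as np

def lif_neuron(input_spikes, weights, dt=1e-3, tau=20e-3,
               v_thresh=1.0, v_reset=0.0):
    """input_spikes: (T, n_inputs) binary array; returns the output spike train (T,)."""
    v = 0.0
    out = np.zeros(input_spikes.shape[0], dtype=int)
    for t, spikes in enumerate(input_spikes):
        v += dt / tau * (-v) + np.dot(weights, spikes)   # membrane leak + synaptic input
        if v >= v_thresh:                                # fire and reset
            out[t] = 1
            v = v_reset
    return out

# Example: 100 ms of sparse random input to a neuron with random synaptic weights.
rng = np.random.default_rng(1)
spikes = (rng.random((100, 32)) < 0.05).astype(int)
print(lif_neuron(spikes, rng.normal(0.1, 0.02, 32)).sum(), "output spikes")
```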