Turkish Journal of Computer and Mathematics Education (TURCOMAT)

Journal Information
ISSN / EISSN : 1309-4653 / 1309-4653
Published by: Auricle Technologies Pvt. Ltd. (10.17762)
Total articles ≅ 2,230

Latest articles in this journal

Sarthika Dutt et al.
Turkish Journal of Computer and Mathematics Education (TURCOMAT), Volume 12, pp 1886-1891; https://doi.org/10.17762/turcomat.v12i11.6142

Dysgraphia is a disorder that affects writing skills. Identifying dysgraphia early in a child's development is a difficult task, but it can be identified through the problematic skills associated with the difficulty. In this study, motor ability, spatial knowledge, copying skill, and visual-spatial response are among the features included for dysgraphia identification. The features that affect the disability are analyzed using the Elastic Net (EN) feature selection technique, and the significant features are classified using machine learning techniques. The classification models compared on the dysgraphia dataset are KNN (K-Nearest Neighbors), Naïve Bayes, Decision Tree, Random Forest, and SVM (Support Vector Machine). Results indicate that the Random Forest classification model performs best for dysgraphia identification.
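The pipeline this abstract describes (Elastic Net feature selection followed by classification) could be sketched as below; the synthetic data, coefficient threshold, and hyperparameters are illustrative assumptions, not the authors' actual dysgraphia dataset or settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import ElasticNet
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for the dysgraphia dataset: 10 hypothetical skill features.
X, y = make_classification(n_samples=200, n_features=10, n_informative=4,
                           random_state=0)

# Elastic Net mixes L1 and L2 penalties; features whose coefficients are
# driven to (near) zero are treated as insignificant and dropped.
en = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X, y)
selected = np.flatnonzero(np.abs(en.coef_) > 1e-3)

# Classify on the selected features only, using Random Forest (the model
# the abstract reports as performing best).
X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```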
N. Bhaskar et al.
Turkish Journal of Computer and Mathematics Education (TURCOMAT), Volume 12, pp 1892-1897; https://doi.org/10.17762/turcomat.v12i11.6143

This paper is based on the requirement of users across domains for an effective hearing system. The system uses different devices to enhance the audio capability of different users in different application environments, employing IoT devices, sensors, and gateways. It enhances the hearing capability of users in conference, personal, or professional settings by allowing the volume of the source system to be increased or decreased according to the hearing capability of the participants. It can be used in any domain, such as conferences, home environments, educational institutions, and public locations. It also adopts a security algorithm to protect the customer's data in encrypted form on the cloud system, and it recognizes a candidate at any time based on the ID assigned to the individual.
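One piece of the described behaviour, adapting the source volume to the weakest hearing profile among participants, might look like the following sketch; the 0-10 hearing scale, step size, and volume cap are assumptions, not values from the paper.

```python
def required_volume(base_volume, hearing_levels, step=5):
    """Raise the source volume to suit the weakest listener.

    hearing_levels: per-participant scores, 0 (profound loss) .. 10 (typical).
    Each point below 10 on the weakest profile adds `step` volume units,
    capped at 100.
    """
    weakest = min(hearing_levels)
    return min(100, base_volume + step * (10 - weakest))

# A room with listeners scoring 10, 8 and 6 needs the volume raised for
# the participant scoring 6.
room_volume = required_volume(50, [10, 8, 6])
```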
Tamanna Siddiqui et al.
Turkish Journal of Computer and Mathematics Education (TURCOMAT), Volume 12, pp 1916-1924; https://doi.org/10.17762/turcomat.v12i11.6144

Sarcasm is commonly defined as a cutting, often ironic remark intended to express ridicule or contempt. Sarcasm detection is the task of correctly labeling a text as 'sarcasm' or 'non-sarcasm.' It is a challenging task owing to the absence of facial expressions and intonation in text. Social media and micro-blogging websites are extensively explored for extracting the opinions of a target, because a huge amount of text data is put into the open on social media such as Twitter. Such large, openly available text data can be used for a variety of research. Here we apply a text dataset for classifying sarcasm, with experiments performed on textual data extracted from Twitter. The dataset, downloaded from Kaggle, includes 1,984 tweets collected from Twitter, already labeled. In this paper, we use these data to train classifiers with different algorithms, to gauge the ability of machine learning models to recognize sarcasm and non-sarcasm. The process starts with text pre-processing and feature extraction (TF-IDF), followed by different classification algorithms: the Decision Tree classifier, the Multinomial Naïve Bayes classifier, Support Vector Machines, and the Logistic Regression classifier. After tuning each model for the best results, with TF-IDF features we achieve an accuracy of 0.94 with Multinomial Naïve Bayes, 0.93 with the Decision Tree classifier, 0.97 with Logistic Regression, and 0.42 with Support Vector Machines (SVM). All models improved except the SVM model, which has the lowest accuracy. The results were extracted and evaluated, showing good accuracy in identifying people's sarcastic impressions.
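The classification stage the abstract describes (TF-IDF features fed to, e.g., Logistic Regression) can be sketched as follows; the toy tweets and labels are invented for illustration and are not the Kaggle dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented toy corpus: 1 = sarcasm, 0 = non-sarcasm.
texts = ["oh great, another monday",
         "wow, what a surprise, it failed again",
         "the meeting starts at noon",
         "the dataset contains labeled tweets"]
labels = [1, 1, 0, 0]

# TF-IDF turns each text into a sparse weighted term vector.
vec = TfidfVectorizer(lowercase=True)
X = vec.fit_transform(texts)

# Any of the abstract's classifiers could be swapped in here.
clf = LogisticRegression().fit(X, labels)
pred = clf.predict(vec.transform(["oh great, it failed again"]))
```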
R. Hemalatha et al.
Turkish Journal of Computer and Mathematics Education (TURCOMAT), Volume 12, pp 1801-1814; https://doi.org/10.17762/turcomat.v12i11.6126

This study examines the mediating role of Nonaka's SECI knowledge spirals: whether they are related to the effect of social media on knowledge sharing, and whether this leads to effective learning. The instrument for the effect of social media on knowledge sharing constructed by Bock et al. (2005) has been used for measurement in this study. The mediating role of SECI between social media, knowledge sharing, and effective learning has been assessed on four dimensions; the SECI multi-dimensional questionnaire offered by Nonaka et al. (2000) has been used for this purpose. The results reveal which of Nonaka's four dimensions has a significant impact on effective learning through social media and knowledge sharing. The empirical findings of this study may enrich both theoretical and practical implications.
Ritu et al.
Turkish Journal of Computer and Mathematics Education (TURCOMAT), Volume 12, pp 807-817; https://doi.org/10.17762/turcomat.v12i11.5966

Software quality, the degree to which a system, technique, or factor meets particular requirements and conditions, is a key priority of today's marketplace and of software development organizations. Soft computing techniques play a vital role in developing software engineering applications. In this paper, we identify five parameters for assessing the level of software quality: Reliability, Efficiency, Usability, Maintainability, and Portability. A fuzzy-logic-based intelligent identification methodology is proposed to assess the quality of particular software based on these five parameters. The proposed scheme takes the five parameters as input and predicts the quality of the software using a fuzzy rule base generated from various studies. Since the scheme takes five inputs, each divided into three regions ('Low', 'Medium', 'High'), a total of 3^5 = 243 rules have been generated to analyze software quality. Furthermore, the Mamdani fuzzy model is used as the reference model. To show the effectiveness of the proposed methodology, simulations were performed in MATLAB, which show that the predicted software quality closely matches the actual one.
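A minimal Mamdani-style inference sketch of the idea is shown below, in Python rather than MATLAB; the triangular membership shapes and the collapsed rule logic are assumptions standing in for the paper's 243-rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    return 1.0 if c == b else (c - x) / (c - b)

# Each input is scored on [0, 10] and fuzzified into three sets.
SETS = {"Low": (0, 0, 5), "Medium": (0, 5, 10), "High": (5, 10, 10)}

def fuzzify(x):
    return {name: tri(x, *abc) for name, abc in SETS.items()}

def quality(reliability, efficiency, usability, maintainability, portability):
    inputs = [reliability, efficiency, usability, maintainability, portability]
    # Collapsed toy rules: quality is High only as far as ALL inputs are
    # High (min = fuzzy AND); it is Low as far as ANY input is Low
    # (max = fuzzy OR). A full 243-rule base is elided here.
    high = min(fuzzify(x)["High"] for x in inputs)
    low = max(fuzzify(x)["Low"] for x in inputs)
    med = 1.0 - max(high, low)
    # Centroid-style defuzzification over the output set peaks 0, 5, 10.
    den = low + med + high
    return (med * 5 + high * 10) / den if den else 5.0
```

Uniformly strong inputs defuzzify to a high quality score, and uniformly weak ones to a low score, mirroring the Mamdani fuzzify-infer-defuzzify cycle.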
Djumanova Aijan Baxtiyarovna et al.
Turkish Journal of Computer and Mathematics Education (TURCOMAT), Volume 12, pp 693-696; https://doi.org/10.17762/turcomat.v12i11.5951

The article deals with the problem of implementing management accounting in the domestic practice of logistics systems, which is of paramount importance due to the need for in-depth research into the economic nature, essence, and content of management accounting and its fundamental theoretical foundations, for making informed management decisions.
K. Ranga Narayana et al.
Turkish Journal of Computer and Mathematics Education (TURCOMAT), Volume 12, pp 697-703; https://doi.org/10.17762/turcomat.v12i11.5952

In the present scenario, tracking a target in low-resolution videos is a most important task. The problem arises from the lack of discriminative data, as moving objects have low visual visibility. Earlier detection methods often extract descriptors around interest points in space or exclude mathematical features in moving regions, resulting in a limited capability to detect good video features. To overcome this problem, this paper proposes a novel method that recognizes a person from low-resolution videos. A three-step process is implemented. In the first step, video data is acquired from a low-resolution video, i.e., from three different datasets; the acquired video is divided into frames and converted from RGB to grayscale. Secondly, background subtraction is performed using LBP, and thereafter Histogram of Optical Flow (HOF) descriptors are extracted from optical-flow images for motion estimation. In the third step, eigen features are extracted and optimized using a particle swarm optimization (PSO) model to eliminate redundant information and obtain optimized features from the video being processed. Finally, to find a person in low-resolution videos, the features are classified by a Support Vector Machine (SVM) and the parameters are evaluated. Experiments performed on the VIRAT, Soccer, and KTH datasets demonstrate that the proposed detection approach is superior to the previous method.
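The feature-optimization step can be illustrated with a minimal particle swarm optimizer; the sphere objective stands in for the paper's eigen-feature fitness, and the swarm parameters are conventional defaults, not the authors' settings.

```python
import random

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    """Minimize `objective` over R^dim with a basic particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy fitness: the sphere function, minimized at the origin.
best, val = pso(lambda p: sum(x * x for x in p), dim=3)
```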
Dr. Saikumari V et al.
Turkish Journal of Computer and Mathematics Education (TURCOMAT), Volume 12, pp 411-416; https://doi.org/10.17762/turcomat.v12i11.5892

In today's corporate world, many companies and organizations are increasingly focusing on human capital as a competitive advantage in a rapidly changing environment. Many successful companies realize that their employees are their greatest asset, and they are increasingly investing in educating their own employees so that they can grow and change within the company and make it more profitable. The range of training opportunities varies considerably from company to company, so when researching potential employers it is important for job seekers who care about this to investigate the level and type of training provided to employees. After employees have been selected for various positions in an organization, training them for the specific tasks to which they have been assigned assumes greater importance. This study suggests that the organization implement more modern training methodologies, provide practical training to employees, and provide specific learning assignments or projects for participants to close their competency gaps.
Viratkumar K. Kothari et al.
Turkish Journal of Computer and Mathematics Education (TURCOMAT), Volume 12, pp 1940-1953; https://doi.org/10.17762/turcomat.v12i11.6149

There is substantial archival data available in different forms, including manuscripts, printed papers, photographs, videos, audio recordings, artefacts, sculptures, buildings, and others. Media content such as photographs, audio, and video is crucial because it conveys information well. The digital version of such media data is essential, as it can easily be shared, made available online or offline, copied, transported, backed up, and kept in multiple copies at different places. The limitation of the digital version of media data is its lack of searchability, as it hardly contains any text that can be processed by OCR. This important data cannot be analysed and therefore cannot be used in a meaningful way. To make it meaningful, one has to manually identify the people in the images and tag them to create metadata; most photographs can be searched only on very basic metadata. When this data is hosted on a web platform, searching it becomes a challenge due to its formats. Improvement in the existing search functionality is required to improve the searchability of photographs in terms of ease of use, quick retrieval, and efficiency. The recent revolution in machine learning, deep learning, and artificial intelligence offers a variety of facilities to process media data and identify meaningful information in it. This research paper explains methods to process digital photographs to classify the people in them, tag them, and save that information in the metadata, tuning various hyperparameters to improve accuracy. Machine learning, deep learning, and artificial intelligence offer several benefits, including auto-identifying people, auto-tagging them, and providing insights; most importantly, they drastically improve the searchability of photographs.
It was envisaged that about 85% of the manual tagging activity might be eliminated and the searchability of photographs improved by 90%.
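The auto-tagging idea can be sketched as a nearest-neighbour match of face embeddings against known people, with the matched names written into searchable metadata; the embeddings, names, and distance threshold below are invented placeholders, not output of the paper's actual model.

```python
import math

# Hypothetical reference embeddings, as a real face-recognition model
# would produce for known individuals.
known = {
    "person_a": [0.9, 0.1, 0.0],
    "person_b": [0.1, 0.8, 0.2],
}

def tag_photo(photo_embeddings, threshold=0.5):
    """Return the known names nearest to each detected face embedding.

    A face is tagged only if its nearest reference lies within `threshold`
    (Euclidean distance); unknown faces are skipped.
    """
    tags = []
    for emb in photo_embeddings:
        name, dist = min(((n, math.dist(emb, ref)) for n, ref in known.items()),
                         key=lambda t: t[1])
        if dist <= threshold:
            tags.append(name)
    return tags

# The tags become searchable metadata alongside the file's basic fields.
metadata = {"filename": "archive_001.jpg",
            "people": tag_photo([[0.88, 0.12, 0.01]])}
```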
Hardik N. Soni et al.
Turkish Journal of Computer and Mathematics Education (TURCOMAT), Volume 12, pp 1954-1963; https://doi.org/10.17762/turcomat.v12i11.6150

It is generally observed that products lose their freshness over time, which depresses demand. In these circumstances, price discounts are necessary to raise the market. For this reason, we create an inventory model in which price reductions on the sale price are offered once the product's freshness index reaches a certain level. The main goal is to determine the optimal selling price and cycle time in order to maximize profit. Conditions for the existence and uniqueness of an optimal solution to the model are established, and a simple algorithm is then used to find it. Finally, a numerical example is presented, followed by a sensitivity analysis.
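A toy numerical version of the optimization over selling price and cycle time might look like this; the linear price response and freshness decay are assumed demand forms, not the paper's model.

```python
def profit(p, T, a=100.0, b=2.0, decay=0.1, cost=10.0, setup=50.0):
    """Cycle profit for selling price p and cycle time T.

    Assumed demand: falls linearly in price (a - b*p) and shrinks as
    freshness decays over the cycle (1 - decay*T). `setup` is a fixed
    cost per cycle.
    """
    demand = max(a - b * p, 0.0) * (1.0 - decay * T)
    return (p - cost) * demand * T - setup

# Simple algorithm: exhaustive search over a discrete price/cycle grid,
# standing in for the paper's optimality conditions.
best = max(((p, T) for p in range(11, 50) for T in (1, 2, 3, 4, 5)),
           key=lambda pt: profit(*pt))
```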