Results in Journal Inteligencia Artificial: 551

(searched for: journal_id:(2514979))
Jorge Herrera-Franklin, Alejandro Rosete, Milton García-Borroto
Inteligencia Artificial, Volume 24, pp 71-89;

The Variable Cost and Size Bin Packing Problem (VCSBPP) is a known NP-hard problem that consists in minimizing the cost of all bins used to pack a set of items. There are many real-life applications of the VCSBPP where the focus is on improving the efficiency of the solution method. In spite of the existence of fuzzy approaches adapting other optimization problems to real-life conditions, the VCSBPP has not been extensively studied in terms of relaxations of its crisp conditions. The existing fuzzy approaches to the VCSBPP range from relaxing the capacity of the bins to relaxing the item weights. In this paper we address an unexplored side: relaxing the set of items to be packed. Our main contribution is therefore a fuzzy version of the VCSBPP that allows incomplete packing. The proposed fuzzy VCSBPP is solved by a parametric approach. In particular, a fast heuristic algorithm is introduced that obtains a set of solutions with interesting trade-offs between cost and relaxation of the original crisp conditions. An experimental study is presented to explore the proposed fuzzy VCSBPP and its solution.
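The cost-versus-relaxation trade-off described above can be sketched as a small greedy heuristic. This is a hypothetical illustration, not the authors' algorithm: the first-fit-decreasing rule, the bin types, and the relaxation parameter `alpha` (the fraction of total item weight allowed to stay unpacked) are all assumptions.

```python
# Hypothetical greedy sketch of a relaxed VCSBPP: up to an `alpha`
# fraction of the total item weight may be left unpacked, trading
# completeness of the packing against the total cost of the bins.
def fuzzy_vcsbpp(items, bin_types, alpha):
    """items: item weights; bin_types: (capacity, cost) pairs;
    alpha: fraction of total weight that may remain unpacked."""
    items = sorted(items, reverse=True)        # first-fit decreasing
    slack = alpha * sum(items)                 # weight we may drop
    bins, total_cost = [], 0
    for w in items:
        # try to place the item in an already-open bin
        fit = next((i for i, (left, _) in enumerate(bins) if w <= left), None)
        if fit is not None:
            left, cost = bins[fit]
            bins[fit] = (left - w, cost)
            continue
        options = [(cost, cap) for cap, cost in bin_types if cap >= w]
        if options and slack >= w:
            slack -= w                         # relax: leave the item out
        elif options:
            cost, cap = min(options)           # open the cheapest fitting bin
            bins.append((cap - w, cost))
            total_cost += cost
        # (items larger than every bin type are ignored in this sketch)
    return total_cost, len(bins)
```

With `alpha = 0` the sketch reduces to plain first-fit decreasing over the cheapest fitting bin type; increasing `alpha` traces out cheaper, less complete packings, which is the kind of cost/relaxation trade-off a parametric approach explores.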
Jean Phelipe De Oliveira Lima, Carlos Maurício Seródio Figueiredo
Inteligencia Artificial, Volume 24, pp 40-50;

In modern smart cities, there is a quest for the highest level of integration and automation of services. In the surveillance sector, one of the main challenges is to automate the analysis of videos in real time to identify critical situations. This paper presents intelligent models based on Convolutional Neural Networks (using the MobileNet, InceptionV3 and VGG16 networks), LSTM networks, and feedforward networks for the task of classifying videos into the classes "Violence" and "Non-Violence", using the RLVS database. Different data representations were used according to the Temporal Fusion techniques. The best outcome achieved was an Accuracy and F1-Score of 0.91, a higher result than those found in similar research conducted on the same database.
Mariela Morveli Espinoza
Inteligencia Artificial, Volume 24, pp 36-39;

Rhetorical arguments are used in negotiation dialogues when a proponent agent tries to persuade his opponent to accept a proposal more readily. When more than one argument is generated, the proponent must compare them in order to select the most adequate for his interests. One way of comparing them is by means of their strength values. Related work proposes a calculation based only on the components of the rhetorical arguments, i.e., the importance of the opponent's goal and the certainty level of the beliefs that make up the argument. This work proposes a model for calculating the strength of rhetorical arguments, inspired by the pre-conditions of credibility and preferability stated by Guerini and Castelfranchi. Thus, we suggest adding two new criteria to the strength calculation: the credibility of the proponent and the status of the opponent's goal in the goal-processing cycle. The model is empirically evaluated, and the results demonstrate that it is more efficient than previous works in terms of the number of exchanged arguments and the number of reached agreements.
Otto Menegasso Pires, Eduardo Inacio Duzzioni, Jerusa Marchi, Rafael De Santiago
Inteligencia Artificial, Volume 24, pp 90-101;

Quantum Computing has been evolving in recent years. Although quantum algorithms have shown performance superior to their classical counterparts, quantum decoherence and the additional auxiliary qubits needed for error-tolerance routines have been huge barriers to the efficient use of quantum algorithms. These restrictions lead us to search for ways to minimize algorithm costs, i.e., the number of quantum logic gates and the depth of the circuit. For this, quantum circuit synthesis and quantum circuit optimization techniques are explored. We studied the viability of using Projective Simulation, a reinforcement learning technique, to tackle the problem of quantum circuit synthesis. The agent had the task of creating quantum circuits of up to 5 qubits. Our simulations demonstrated that the agent performed well, but its capacity for learning new circuits decreased as the number of qubits increased.
Flávio Arthur O. Santos, Thiago Dias Bispo, Hendrik Teixeira Macedo, Cleber Zanchettin
Inteligencia Artificial, Volume 24, pp 1-17;

Natural language processing systems have attracted much interest from industry. This branch of study comprises applications such as machine translation, sentiment analysis, named entity recognition, question answering, and others. Word embeddings (i.e., continuous word representations) are an essential module for those applications, generally used as the word representation fed to machine learning models. Popular methods to train word embeddings include GloVe and Word2Vec. They achieve good word representations, despite limitations: both ignore the morphological information of words and consider only one representation vector for each word. This implies that the word embeddings do not properly consider different word contexts and are unaware of a word's inner structure. To mitigate this problem, the FastText method represents each word as a bag of character n-grams: a continuous vector describes each n-gram, and the final word representation is the sum of its character n-gram vectors. Nevertheless, using all character n-grams of a word is a poor approach, since some n-grams have no semantic relation with their words and increase the amount of potentially useless information. This approach also increases the training time. In this work, we propose a new method for training word embeddings whose goal is to replace the FastText bag of character n-grams with a bag of word morphemes obtained through morphological analysis of the word. Thus, words with similar contexts and morphemes are represented by vectors close to each other. To evaluate our new approach, we performed intrinsic evaluations considering 15 different tasks, and the results show competitive performance compared to FastText. Moreover, the proposed model is 40% faster than FastText in the training phase. We also outperform the baseline approaches in extrinsic evaluations on hate speech detection and NER tasks using different scenarios.
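The morpheme-bag idea can be sketched in a few lines. Everything below is an illustrative stand-in: the suffix-stripping "analyzer" replaces a real morphological analyzer, and the randomly initialized vectors replace trained embeddings.

```python
# Toy sketch of the morpheme-bag idea: a word vector is the sum of the
# vectors of its morphemes (plus the word itself), instead of the sum
# over all of its character n-grams as in FastText.
import random

DIM = 8

def toy_morphemes(word):
    """Stand-in for a real morphological analyzer."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return [word[:-len(suffix)], "<" + suffix + ">"]
    return [word]

def word_vector(word, morpheme_vectors):
    """Embedding = sum of the vectors of the word and its morphemes."""
    vec = [0.0] * DIM
    for m in toy_morphemes(word) + [word]:
        if m not in morpheme_vectors:          # lazily init unseen units
            rng = random.Random(m)
            morpheme_vectors[m] = [rng.uniform(-1, 1) for _ in range(DIM)]
        vec = [a + b for a, b in zip(vec, morpheme_vectors[m])]
    return vec
```

Because "walking" and "walked" share the morpheme vector for "walk", their word vectors share a common component, which is the sharing effect the paper relies on (with far fewer units than all character n-grams).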
Varsha Bhole, Arun Kumar
Inteligencia Artificial, Volume 24, pp 102-120;

Shelf-life prediction for fruits based on visual inspection and on RGB imaging of external features is becoming pervasive in agriculture and the food business. In the proposed architecture, to enhance accuracy at low computational cost, we focus on two challenging tasks of shelf-life (remaining useful life) prediction: 1) detecting intrinsic features like internal defects, bruises, texture, and color of the fruits; and 2) classifying fruits according to their remaining useful life. To accomplish these tasks, we use thermal imaging as a baseline, a non-destructive approach to capturing the intrinsic state of fruits in terms of temperature. To further improve the classification task, we combine it with a transfer learning approach to forecast the shelf life of fruits. For this study, we chose 'Kesar' (Mangifera indica Linn cv. Kesar) mangoes, and for the purpose of classification, the images of our designed dataset are categorized into 19 classes, viz. RUL-1 (Remaining Useful Life-1) to RUL-18 (Remaining Useful Life-18) and No-Life, as the post-harvest storage span of 'Kesar' is about 19 days. A comparative analysis using SqueezeNet, ShuffleNet, and MobileNetv2 (prominent lightweight CNN-based models) has been performed in this study. The empirical results show a highest achievable accuracy of 98.15±0.44%, with almost a double speedup in training the entire process by using thermal images.
Gerardo Ernesto Rolong Agudelo, Carlos Enrique Montenegro Marin, Paulo Alonso Gaona-Garcia
Inteligencia Artificial, Volume 24, pp 121-128;

In the world, and in countries like Colombia in particular, the number of missing persons is a very worrying and growing phenomenon: every year, thousands of people are reported missing all over the world. The fact that this keeps happening might indicate that there are still analyses that have not been done and tools that have not been considered for finding patterns in missing-person information. This article presents a study of how informatics and computational tools can be used to help find missing persons and what patterns can be found in missing-person datasets, using as a case study open data about missing persons in Colombia in 2017. The goal of this study is to review how computational tools like data mining and image analysis can help find missing persons and draw patterns from the available information. First, a review of the state of the art of image analysis in real-world applications was made in order to explore the possibilities of studying the photos of missing persons; then a data mining process with data on missing persons in Colombia was conducted to produce a set of decision rules that can explain the cause of a disappearance. The generated decision rules suggest links between socioeconomic stratification, age, gender, specific locations in Colombia, and missing-person reports. In conclusion, this work reviews what information about missing persons is publicly available and what analyses can be made with it, showing that data mining and face recognition can be useful tools to extract and identify patterns in missing-person data.
Gildásio Lecchi Cravo, Dayan De Castro Bissoli, André Renato Sales Amaral
Inteligencia Artificial, Volume 24, pp 51-70;

The double-row layout problem (DRLP) consists of determining the location of facilities along both sides of a central corridor, with the objective of minimizing the weighted sum of the distances between all pairs of facilities. Facilities can be machines, work centers, manufacturing cells, departments of a building, or robots in manufacturing systems. This work proposes a purely heuristic approach based on the Particle Swarm Optimization (PSO) metaheuristic. To validate the proposed algorithm, it was subjected to computational tests with fifty-one instances, including instances considered large, and the results show the proposed PSO to be an excellent approach for the DRLP, improving the best known values for several instances available in the literature.
Amin Rezaeipanah, Rahmad Syah, Siswi Wulandari, A Arbansyah
Inteligencia Artificial, Volume 24, pp 147-156;

Nowadays, breast cancer is one of the leading causes of death among women worldwide. If breast cancer is detected at an early stage, long-term survival can be ensured. Numerous methods have been proposed for the early prediction of this cancer; however, efforts are still ongoing given the importance of the problem. Artificial Neural Networks (ANN) have been established as some of the most dominant machine learning algorithms and are very popular for prediction and classification work. In this paper, an Intelligent Ensemble Classification method based on a Multi-Layer Perceptron neural network (IEC-MLP) is proposed for breast cancer diagnosis. The proposed method is split into two stages: parameter optimization and ensemble classification. In the first stage, the MLP Neural Network (MLP-NN) parameters, including the optimal features, hidden layers, hidden nodes, and weights, are optimized with an Evolutionary Algorithm (EA) to maximize the classification accuracy. In the second stage, an ensemble classification algorithm of MLP-NNs with the optimized parameters is applied to classify the patient. The proposed IEC-MLP method not only helps reduce the complexity of the MLP-NN and effectively select the optimal feature subset, but also obtains the minimum misclassification cost. The classification results were evaluated using IEC-MLP on different breast cancer datasets, and the prediction results obtained were very promising (98.74% accuracy on the WBCD dataset). Meanwhile, the proposed method outperforms the GAANN and CAFS algorithms and other state-of-the-art classifiers. In addition, IEC-MLP could also be applied to the diagnosis of other cancers.
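The two-stage structure (optimize each member's parameters, then vote) can be illustrated with a deliberately tiny stand-in: one-feature threshold rules instead of real MLPs, and a crude random search instead of a full evolutionary algorithm. None of this is the IEC-MLP implementation; it only shows the optimize-then-ensemble shape.

```python
# Toy two-stage sketch: stage 1 "evolves" a parameter per member,
# stage 2 combines the members by majority vote.
import random

def train_member(data, rng):
    """Stage 1: pick the threshold that maximizes training accuracy."""
    best_t, best_acc = 0.0, -1.0
    for _ in range(50):                        # crude search, not a real EA
        t = rng.uniform(0, 1)
        acc = sum((x > t) == y for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def ensemble_predict(members, x):
    """Stage 2: majority vote over the optimized members."""
    votes = sum(x > t for t in members)
    return votes * 2 > len(members)

rng = random.Random(0)
data = [(0.1, False), (0.2, False), (0.8, True), (0.9, True)]
members = [train_member(data, rng) for _ in range(5)]
```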
Hicham Deghbouch, Fatima Debbat
Inteligencia Artificial, Volume 24, pp 18-35;

This work addresses the deployment problem in Wireless Sensor Networks (WSNs) by hybridizing two metaheuristics, namely the Bees Algorithm (BA) and the Grasshopper Optimization Algorithm (GOA). The BA is an optimization algorithm that has demonstrated promising results in solving many engineering problems. However, the local search process of the BA lacks efficient exploitation due to the random assignment of search agents inside the neighborhoods, which weakens the algorithm's accuracy and results in slow convergence, especially when solving higher-dimensional problems. To alleviate this shortcoming, this paper proposes a hybrid algorithm that utilizes the strength of the GOA to enhance the exploitation phase of the BA. To prove the effectiveness of the proposed algorithm, it is applied to WSN deployment optimization with various deployment settings. Results demonstrate that the proposed hybrid algorithm can optimize the deployment of WSNs and outperforms the state-of-the-art algorithms in terms of coverage, overlapping area, average moving distance, and energy consumption.
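The coverage objective that such deployment optimizers maximize has a standard discretized form: the fraction of grid points within sensing range of at least one sensor. The grid resolution, field size, and sensing radius below are illustrative choices, not values from the paper.

```python
# Grid-based coverage metric for a WSN deployment: the fraction of
# sample points covered by at least one sensor's sensing disk.
import math

def coverage(sensors, radius, grid=20, side=10.0):
    """sensors: list of (x, y) positions in a side x side field."""
    covered = 0
    for i in range(grid):
        for j in range(grid):
            # center of grid cell (i, j)
            px, py = (i + 0.5) * side / grid, (j + 0.5) * side / grid
            if any(math.hypot(px - sx, py - sy) <= radius
                   for sx, sy in sensors):
                covered += 1
    return covered / grid ** 2
```

A metaheuristic (BA, GOA, or their hybrid) would treat the sensor coordinate vector as the candidate solution and this function (possibly penalized by overlap and moving distance) as the fitness.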
Qing An, Xijiang Chen, Jupu Yuan
Inteligencia Artificial, Volume 23, pp 115-123;

In order to meet the needs of high-precision, high-availability, and high-safety positioning for automatic driving, and in view of the technical difficulties of automatic-driving positioning in complex urban environments, an inertial navigation model suitable for the dynamic characteristics of vehicles is established, and a tightly coupled Beidou/inertial high-precision positioning method is proposed, which solves the problem of rapid accumulation of positioning errors in weak-signal Beidou environments. The results show that when the Beidou signal is completely interrupted and the INS is tightly coupled, the positioning accuracy and continuity are improved significantly, with a maximum error of less than 0.5 m, enabling high-precision continuous navigation and positioning for automatic driving in complex urban environments.
Leonardo Luís Röpke, Manuel Osório Binelo
Inteligencia Artificial, Volume 23, pp 67-85;

This work presents the study and development of an Artificial Intelligence system, focused on K-means algorithms and Artificial Neural Networks, to assist fleet managers in the identification of routes and route deviations. The developed tool aims to modernize the process of identifying routes and route deviations. The results show that the Artificial Neural Networks obtained a 100% accuracy rate in the identification of routes, and in the identification of route deviations the ANNs were able to identify 61% of the routes presented. Therefore, ANNs are an excellent technique for identifying routes and route deviations. The K-means algorithm presented good results when applied to the discovery of similar routes, making it an important tool for monitoring vehicle routes.
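The route-grouping step can be sketched with plain K-means over 2-D GPS-like points. The points, the number of clusters, and the naive initialization are illustrative; real route clustering would use richer trajectory features.

```python
# Minimal K-means sketch for grouping similar route points.
def kmeans(points, k, iters=20):
    centers = points[:k]                       # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                       # assign to nearest center
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        centers = [(sum(x for x, _ in cl) / len(cl),
                    sum(y for _, y in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers
```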
Imane Guellil, , Faical Azouaou
Inteligencia Artificial, Volume 23, pp 124-135;

This paper presents an analytic study showing that it is entirely possible to analyze the sentiment of an Arabic dialect without constructing any resources. The idea of this work is to use the resources dedicated to a given dialect X for analyzing the sentiment of another dialect Y. The only condition is that X and Y belong to the same category of dialects. We apply this idea to the Algerian dialect, a Maghrebi Arabic dialect that suffers from limited availability of the tools and other resources required for automatic sentiment analysis. To do this analysis, we rely on Maghrebi dialect resources and two manually annotated sentiment corpora, for the Tunisian and Moroccan dialects respectively. We also use a large corpus for the Maghrebi dialect. We use a state-of-the-art system and propose a new deep learning architecture to automatically classify the sentiment of an Arabic dialect (Algerian). Experimental results show an F1-score of up to 83%, achieved by a Multilayer Perceptron (MLP) with the Tunisian corpus and by Long Short-Term Memory (LSTM) with the combination of Tunisian and Moroccan. An improvement of 15% compared to the closest competitor was observed through this study. Ongoing work aims at manually constructing an annotated sentiment corpus for the Algerian dialect and comparing the results.
, Luis Alberto López Rodríguez, Marlén Pérez-Díaz
Inteligencia Artificial, Volume 23, pp 56-66;

Breast cancer is the most frequent cancer in women. Mammography has proven to be the most effective method for the early detection of this type of cancer. Mammographic images are sometimes difficult to interpret, due to the nature of the anomalies, the low contrast of the images, and the composition of the mammary tissues, as well as various technological factors such as the spatial resolution of the image or noise. Computer-aided diagnostic systems have been developed to increase the accuracy of mammographic examinations and to serve physicians as a second opinion in reaching the final diagnosis, thus reducing human error. Convolutional neural networks are a current trend in computer vision tasks, due to the great performance they have achieved. The present investigation is based on this type of network to classify mammograms into three classes: normal, benign tumour, and malignant tumour. Because the miniMIAS database used has a small number of images, the transfer learning technique was applied to the pre-trained Inception v3 network. Two convolutional neural network architectures were implemented, the three-class architecture reaching 86.05% accuracy. On the other hand, the architecture with two neural networks in series reached an accuracy of 88.2%.
Maged Mamdouh, Mostafa Ezzat, Hesham A. Hefny
Inteligencia Artificial, Volume 23, pp 19-32;

Airport ground handling shows a global trend toward meeting Service Level Agreement (SLA) requirements, which implies resource allocation with more restrictions according to flights. That can be achieved by predicting future resource demands. This research presents a comparison of the machine learning techniques most used across many different fields for demand prediction and resource allocation. The prediction model nominated and used in this research is the Support Vector Machine (SVM), which predicts the required resources for each flight despite the restrictions imposed by airlines when contracting their services in the Service Level Agreement. The approach has been trained and tested using real data from Cairo International Airport. The proposed SVM technique is implemented and explained with varying resource-allocation prediction accuracy, showing that even with varying prediction accuracy in different scenarios, the Support Vector Machine technique can produce good resource-allocation performance at the airport.
Supoj Hengpraprohm, Suwimol Jungjit
Inteligencia Artificial, Volume 23, pp 100-114;

For breast cancer data classification, we propose an ensemble filter feature selection approach named 'EnSNR'. Entropy and SNR evaluation functions are used to find the features (genes) for the EnSNR subset. A Genetic Algorithm (GA) generates the classification 'model'. The efficiency of the 'model' is validated using 10-fold cross-validation re-sampling. The microarray dataset used in our experiments contains 50,739 genes for each of 32 patients. When our proposed 'EnSNR' subset of features is used, as well as an enhanced degree of prediction accuracy and a reduced number of irrelevant features (genes), there is also a small saving in computer processing time.
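The SNR side of such a filter has a standard form: the absolute difference of the class means divided by the sum of the class standard deviations, computed per gene. The sketch below is a generic illustration of that score; the labels, data, and the exact way EnSNR fuses the entropy and SNR rankings are assumptions.

```python
# SNR-style filter score for one feature (gene) across two classes.
import statistics

def snr_score(values, labels):
    """values: one expression value per patient; labels: 0/1 classes."""
    a = [v for v, y in zip(values, labels) if y == 0]
    b = [v for v, y in zip(values, labels) if y == 1]
    spread = statistics.pstdev(a) + statistics.pstdev(b)
    # guard against zero spread with a tiny epsilon
    return abs(statistics.mean(a) - statistics.mean(b)) / (spread or 1e-9)
```

An ensemble filter would rank all genes by this score and by an entropy-based score, then keep the genes favored by both rankings as the candidate subset for the GA stage.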
UshaDevi G, Gokulnath Bv
Inteligencia Artificial, Volume 23, pp 136-154;

The major agricultural products in India are rice, wheat, pulses, and spices. As the population increases rapidly, the demand for agricultural products is also increasing alarmingly. A huge amount of data is generated from various fields of agriculture. Analysis of these data helps in predicting crop yield, analyzing soil quality, predicting disease in a plant, and determining how meteorological factors affect crop productivity. Crop protection plays a vital role in maintaining agricultural production. Pathogens, pests, weeds, and animals are responsible for productivity losses in agricultural products. Machine learning techniques like Random Forest, Bayesian Networks, Decision Trees, Support Vector Machines, etc. help in the automatic detection of plant disease from visual symptoms on the plant. A survey of different existing machine learning techniques used for plant disease prediction is presented in this paper. Automatic detection of disease in plants helps in early diagnosis and prevention, which leads to an increase in agricultural productivity.
Suresh K, Karthik S, Hanumanthappa M
Inteligencia Artificial, Volume 23, pp 86-99;

With the progress of Information and Communication Technology (ICT), innumerable electronic devices (like smart sensors) and several software applications can make notable contributions to the challenges that exist in monitoring plants. In prevailing work, the segmentation accuracy and classification accuracy of the Disease Monitoring System (DMS) are low, so the system does not properly monitor plant diseases. To overcome such drawbacks, this paper proposes an efficient monitoring system for paddy leaves based on big data mining. The proposed model comprises 5 phases: 1) image acquisition, 2) segmentation, 3) feature extraction, 4) feature selection, and 5) classification and validation. Primarily, a paddy leaf image taken from the dataset is considered as the input. Then, the image acquisition phase is executed in 3 steps: i) transforming the RGB image to a grayscale image, ii) normalization for high intensity, and iii) preprocessing using an alpha-trimmed mean filter (ATMF), a hybrid of the mean and median filters, through which noise is eradicated. Next, the resulting image is segmented using the Fuzzy C-Means (FCM) clustering algorithm, which segments the diseased portion of the paddy leaves. In the next phase, features are extracted, and the resulting features are then selected using the Multi-Verse Optimization (MVO) algorithm. After feature selection, the chosen features are classified using ANFIS (Adaptive Neuro-Fuzzy Inference System). Experimental results are contrasted with the former SVM (Support Vector Machine) classifier and prevailing methods in respect of precision, recall, F-measure, sensitivity, accuracy, and specificity. In accuracy, the proposed method reaches 97.28%, while the prevailing techniques only offer 91.2% for the SVM classifier, 85.3% for KNN, and 88.78% for ANN. Hence, the proposed DMS has a more accurate detection and classification process than the other methods.
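The FCM segmentation step named in the pipeline can be sketched on 1-D pixel intensities (two clusters, e.g. healthy vs. diseased tissue). The fuzzifier m, the iteration count, and the toy data are illustrative choices, not the paper's settings.

```python
# Fuzzy C-Means on 1-D intensities: soft memberships drive the
# membership-weighted update of the two cluster centers.
def fcm(xs, m=2.0, iters=50):
    centers = [min(xs), max(xs)]               # 2 clusters
    c = len(centers)
    for _ in range(iters):
        # fuzzy membership of every point in every cluster
        u = []
        for x in xs:
            d = [abs(x - ck) + 1e-9 for ck in centers]
            u.append([1.0 / sum((d[k] / d[l]) ** (2.0 / (m - 1.0))
                                for l in range(c))
                      for k in range(c)])
        # update centers as membership-weighted means
        centers = [sum((u[i][k] ** m) * xs[i] for i in range(len(xs)))
                   / sum(u[i][k] ** m for i in range(len(xs)))
                   for k in range(c)]
    return centers
```

In the 2-D image case the same update runs over pixel feature vectors, and the pixels with high membership in the "diseased" cluster form the segmented region passed to feature extraction.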
Raul Cesar Alves, Josué Silva de Morais, Keiji Yamanaka
Inteligencia Artificial, Volume 23, pp 33-55;

Indoor localization has been considered to be the most fundamental problem when it comes to providing a robot with autonomous capabilities. Although many algorithms and sensors have been proposed, none have proven to work perfectly under all situations. Also, in order to improve the localization quality, some approaches use expensive devices either mounted on the robots or attached to the environment that don't naturally belong to human environments. This paper presents a novel approach that combines the benefits of two localization techniques, WiFi and Kinect, into a single algorithm using low-cost sensors. It uses separate Particle Filters (PFs). The WiFi PF gives the global location of the robot using signals of Access Point devices from different parts of the environment while it bounds particles of the Kinect PF, which determines the robot's pose locally. Our algorithm also tackles the Initialization/Kidnapped Robot Problem by detecting divergence on WiFi signals, which starts a localization recovering process. Furthermore, new methods for WiFi mapping and localization are introduced.
Levan Uridia, Dirk Walther
Inteligencia Artificial, Volume 23, pp 1-18;

We investigate a variant of the epistemic logic S5 for reasoning about knowledge under hypotheses. The logic is equipped with a modal operator of necessity that can be parameterized with a hypothesis representing background assumptions. The modal operator can be described as relative necessity, and the resulting logic turns out to be a variant of Chellas' Conditional Logic. We present an axiomatization of the logic and of its extensions with the common knowledge and distributed knowledge operators. We show that the logics are decidable and complete with respect to Kripke as well as topological structures. The topological completeness results are obtained by utilizing the Alexandroff connection between preorders and Alexandroff spaces.
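The hypothesis-parameterized necessity has a simple Kripke reading that can be executed directly: [h]phi holds at a world w iff phi holds at every world indistinguishable from w that also satisfies the hypothesis h. The toy three-world model below is an illustration of that semantics, not the paper's formal system.

```python
# Hypothesis-relative necessity on a toy S5 (equivalence-relation)
# Kripke model.
def box(equiv, w, hypothesis, phi):
    """equiv maps each world to its equivalence class (a set of worlds).
    Returns True iff phi holds at every h-world equivalent to w."""
    return all(phi(v) for v in equiv[w] if hypothesis(v))

# three mutually indistinguishable worlds
worlds = {1, 2, 3}
equiv = {w: worlds for w in worlds}
```

Under the hypothesis "not world 3", the agent knows "world < 3" at world 1, even though it does not know this unconditionally; with the trivial hypothesis, the operator collapses to ordinary S5 necessity.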
Zhongshan Chen, Juxiao Zhang, Xiaoyan Jiang, Zuojin Hu, Xue Han, Mengyang Xu, Savitha V, G.N. Vivekananda
Inteligencia Artificial, Volume 23, pp 124-137;

Nowadays, predicting students' performance is one of the most important topics for learning environments such as universities and schools, since it leads to the development of effective mechanisms that can enhance academic outcomes and avoid destruction. In Education 4.0, Artificial Intelligence (AI) can play a key role in identifying new factors in student performance and in implementing personalized learning, answering routine student questions, using learning analytics, and predictive modeling. It is a new challenge to redefine Education 4.0 to recognize creative and innovative intelligent students, and it is difficult to determine students' outcomes. Hence, in this paper, a Hybridized Deep Neural Network (HDNN) is proposed to predict student performance in Education 4.0. The proposed HDNN method is utilized to determine the dynamics that likely influence the student's performance. The deep neural network monitors, predicts, and evaluates the student's performance in an Education 4.0 environment. The findings show that the proposed HDNN method achieved better prediction accuracy when compared to other popular methods.
Adriana Villa-Murillo, Andrés Carrión, Antonio Sozzi
Inteligencia Artificial, Volume 23, pp 9-25;

We propose a methodology for the improvement of the parameter design that consists of the combination of Random Forest (RF) with Genetic Algorithms (GA) in 3 phases: normalization, modelling and optimization. The first phase corresponds to the previous preparation of the data set by using normalization functions. In the second phase, we designed a modelling scheme adjusted to multiple quality characteristics, which we have called Multivariate Random Forest (MRF), for the determination of the objective function. Finally, in the third phase, we obtained the optimal combination of parameter levels with the integration of properties of our modelling scheme and desirability functions in the establishment of the corresponding GA. Two illustrative cases allow us to compare and validate the virtues of our methodology versus other proposals involving Artificial Neural Networks (ANN) and Simulated Annealing (SA).
Yang Cui, Cheng Liu, Yanming Cheng, Jing Niu
Inteligencia Artificial, Volume 23, pp 26-35;

According to the nonlinear output characteristics of photovoltaic cells, and combined with an artificial intelligence algorithm, an MPPT (Maximum Power Point Tracking) control algorithm based on a fuzzy variable step size is proposed, which enables the system to quickly track the maximum power point and improve the energy conversion efficiency of the photovoltaic system. This paper designs a small-scale photovoltaic power generation system. The main circuit of the system consists of perovskite solar panels, a DC voltage regulator circuit, a storage battery, and a one-way full-bridge inverter circuit. The control circuit consists of sun-seeking, inverter, and constant-voltage maximum-power-tracking units. Proteus simulation software is used to simulate the sun-seeking part, the inverting part, the general control unit, the keys, and the display interface. The results indicate that the functions of the small-scale photovoltaic power generation system can be achieved very well.
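The variable-step idea can be sketched as a perturb-and-observe loop whose step shrinks as the observed power slope |dP/dV| falls: big steps far from the peak, small steps near it, which is the behavior a fuzzy rule base encodes. The toy P-V curve, gains, and clipping bounds below are illustrative, not the authors' controller.

```python
# Variable-step perturb-and-observe MPPT on a toy P-V curve.
def pv_power(v):
    """Toy photovoltaic P-V curve with its maximum at v = 50/3."""
    return max(0.0, v * (3.0 - 0.09 * v))

def mppt(v=5.0, steps=80):
    p = pv_power(v)
    dv = 1.0
    for _ in range(steps):
        v_new = v + dv
        p_new = pv_power(v_new)
        dp = p_new - p
        # variable step: proportional to the observed slope, clipped
        step = max(0.02, min(1.0, 0.5 * abs(dp / dv)))
        # keep direction while power rises, reverse when it falls
        dv = step if dp * dv > 0 else -step
        v, p = v_new, p_new
    return v
```

Far from the peak the loop takes near-maximal steps; close to it the step collapses to the floor value, so the operating point oscillates tightly around the maximum power point instead of hunting widely as a fixed large step would.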
Shruthi P, Anil Kumar K M
Inteligencia Artificial, Volume 23, pp 97-111;

Automating the detection of hate speech or inappropriate text in social media and other internet platforms has been gaining a lot of interest and becoming a valuable research topic for both industry and academia in recent years. It is important for applications to identify disruptive content, perform sentiment analysis, and detect cyberbullying, flames, threats, and hatred towards people or particular communities or groups. Text classification is a very challenging task due to the nature and complexities of language, especially its context, micro-words, emojis, typos, and the sarcasm present in text. In this paper, we propose a model with a novel approach for generating hybrid features for an effective feature representation to classify hate speech. We combine features learned by deep learning methods with semantic features like word n-grams and tweet-specific syntactic features to form hybrid feature sets. We also improve the preprocessing steps to reduce the number of missing embeddings and increase the vocabulary for efficient feature learning. We experimented with various neural networks for feature learning and with machine learning models using hybrid features for classification. Our work delivers hybrid features and appropriate preprocessing techniques for an efficient classification of a standard dataset of 16k annotated hate-speech tweets. The combination of Long Short-Term Memory (LSTM) trained on random embeddings for deep feature extraction and Logistic Regression (LR) as a classifier with the hybrid features is found to be the best model, and it outperforms the state of the art reported in the literature.
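The hybrid-feature construction itself is just a concatenation of a dense learned block with a sparse n-gram block, which can be sketched directly. The stub encoder below stands in for the LSTM features, and the vocabulary is an illustrative assumption, not the paper's feature set.

```python
# Hybrid features: stubbed dense "deep" features concatenated with
# sparse word n-gram counts, ready for a linear classifier such as LR.
from collections import Counter

def ngrams(tokens, n):
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def hybrid_features(text, vocab, deep_encoder):
    tokens = text.lower().split()
    counts = Counter(ngrams(tokens, 1) + ngrams(tokens, 2))
    sparse = [counts[v] for v in vocab]        # word n-gram block
    return deep_encoder(text) + sparse         # concatenated feature vector

vocab = ["hate", "you", "hate you"]
stub_encoder = lambda t: [float(len(t)), float(t.count("!"))]  # LSTM stand-in
```

A logistic regression trained on these concatenated vectors sees both the learned representation and the interpretable n-gram evidence, which is the complementarity the hybrid approach exploits.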
Bruno Rover Dal Prá, Roberto Navarro de Mesquita, Mário Olímpio de Menezes, Delvonei Alves de Andrade
Inteligencia Artificial, Volume 23, pp 85-96;

The identification of nutritional stress in plants based on visual symptoms is predominantly manual and is performed by specialists trained to identify such anomalies. In addition, this process tends to be very time-consuming, varies between cultivation areas, and is frequently needed at several points on the property. This work proposes an image recognition system that analyzes the nutritional status of the plant to help solve these problems. The methodology uses deep learning to automate the process of identifying and classifying the nutritional stress of Brachiaria brizantha cv. marandu. An image recognition system was built that analyzes the nutritional status of the plant using digital images of its leaves. The system identifies and classifies nitrogen and potassium deficiencies. Upon receiving the image of the pasture leaf, after classification by a convolutional neural network (CNN), the system presents the diagnosed nutritional status. The tests performed to identify the nutritional status of the leaves showed an accuracy of 96%. We are working to expand the image database in order to increase accuracy levels, aiming at training with a greater amount of information presented to the CNN and thus obtaining more expressive results.
Wei Cao, Qinan Wang, Asma Sbeih, Fha. Shibly
Inteligencia Artificial, Volume 23, pp 112-123;

A smart learning environment is equipped with personal digital devices, wireless communication, learning platforms, and sensors that together provide input to artificial intelligence systems. Artificial intelligence makes decisions about regulating the physical aspects of the environment or the learning systems. These requirements may be identified by analyzing learning performance, behaviors, and the real-world and online settings in which students are situated. There are several challenges in implementing smart learning environments: high cost, connectivity (internet) issues, possible impairment of students' problem-solving capacity, and technical problems such as malfunctioning electronic gadgets. Hence, in this paper, an Artificial Intelligence based Efficient Smart Learning Framework (AI-ESLF) is proposed to overcome the challenges faced by a smart learning environment. This study aims to characterize the current concept of the smart learning environment based on AI applications, to examine its fundamental criteria, and to demonstrate through case studies how tests can be performed in this environment. The experimental results show that the suggested system enhances the prediction ratio of students' learning behavior when compared to other existing approaches.
Marilyn Minicucci Ibañez, Reinaldo Roberto Rosa, Lamartine N. F. Guimarães
Inteligencia Artificial, Volume 23, pp 66-84;

In the last few decades, the growth in the use of the Internet has generated a substantial increase in the circulation of information on social media. Due to the high interest of several areas of society in the analysis of these data, the study of better techniques for the manipulation and understanding of this type of data is of great importance, so that this enormous volume of information can be interpreted quickly and accurately. Based on this context, this study presents two sentiment analysis approaches to gauge the emotion of the population in different contexts. The first approach analyzes the 2018 presidential elections in Brazil using data from the Twitter social network. The second analyzes social media data to identify the threat level of armed conflicts, considering data from the conflict between Syria and the USA in 2017. To achieve this goal, machine learning techniques such as autoencoders and deep learning are used in conjunction with NLP text analysis techniques. The results obtained show the effectiveness of both approaches in classifying sentiment within the chosen domains, according to the methodology developed for this work.
Jean Phelipe De Oliveira Lima, Carlos Maurício Seródio Figueiredo
Inteligencia Artificial, Volume 23, pp 36-50;

Energy monitoring is a crucial activity in energy efficiency, involving the study of techniques to supervise the energy consumption in a power grid; the main purpose is to assure a good level of detail, achieving consumption quotas for each connected device, at a low infrastructure cost. This paper presents the evaluation of different machine learning models that classify electric current patterns to identify and monitor the electric loads present in circuits with a single sensing device. The models were trained and validated on a database created from signal samples of four electrical devices: notebook charger, refrigerator, blender and fan. The models that presented the best metrics achieved, respectively, 97% and 100% accuracy and 98% and 100% F1-score, surpassing results obtained in related research.
Yun Shi, Yanyan Zhu
Inteligencia Artificial, Volume 23, pp 1-8;

Considering the need for a large number of samples and the long training time, this paper uses deep and transfer learning to identify motion-blurred Chinese character coded targets (CCTs). Firstly, a set of CCTs is designed, and a motion-blur image generation system provides samples for the recognition network. Secondly, the OTSU algorithm, dilation, and the Canny operator are applied to the real-shot blurred images, and the target area is segmented by its minimum bounding box. Thirdly, samples are split 4:1 into training and test sets. Under the TensorFlow framework, the convolutional layers of the AlexNet model are fixed and the fully connected layers are trained for transfer learning. Finally, experiments on simulated and real motion-blurred images are carried out. The results show that network training and testing take only 30 minutes and two seconds, respectively, with recognition accuracies of 98.6% and 93.58%. Our method thus achieves high recognition accuracy without requiring a large number of training samples, takes less time, and can serve as a reference for the recognition of motion-blurred CCTs.
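The transfer-learning setup above fixes the convolutional layers and retrains only the fully connected ones. A minimal sketch of that partitioning, using illustrative layer names rather than AlexNet's real parameter objects:

```python
def split_trainable(layers, freeze_prefix):
    """Partition layer names into frozen (convolutional) and trainable
    (fully connected) groups, mirroring the transfer-learning setup in
    which AlexNet's conv layers are fixed. Names are illustrative."""
    frozen = [n for n in layers if n.startswith(freeze_prefix)]
    trainable = [n for n in layers if not n.startswith(freeze_prefix)]
    return frozen, trainable

layers = ["conv1", "conv2", "conv3", "conv4", "conv5", "fc6", "fc7", "fc8"]
frozen, trainable = split_trainable(layers, "conv")
```

In a real framework the same split is expressed by setting a per-layer trainable flag (or `requires_grad`) before the optimizer is built.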
Mingyong Li, Ziye An, Miaomiao Ren
Inteligencia Artificial, Volume 23, pp 51-65;

With the rapid development of Internet technology, traditional online learning can no longer meet students' adaptive learning needs, and smart education concepts have sprung up. Using the data generated by traditional platforms, artificial intelligence technology based on machine learning and deep learning analysis has gradually become a new research hotspot. How to further use these big data resources for adaptive learning and content push, so as to improve the quality of student training, has become an important issue in the current research field. To protect students' learning during COVID-19 epidemic prevention and control, universities, primary schools and secondary schools nationwide addressed the problem of "Classes Suspended but Learning Continues" through online teaching. Students learn online at home, and the family, as a special classroom, plays a vital role. Based on an analysis of the factors affecting home study, this article compares live broadcast platforms and constructs a student-centered "live broadcast + home learning" model for the epidemic situation. A follow-up survey of the implementation showed good evaluation results. It is hoped that this model can provide a reference for teachers and students in the new situation and solve some of the problems currently facing online teaching at home.
Marco Javier Suárez-Barón, José Fdo. López, Carlos Enrique Montenegro Marin, , Franklin Guillermo Montenegro-Marin
Inteligencia Artificial, Volume 22;

This work describes the design of a computational model focused on organizational learning in R&D centers. We explain the first stage of this architecture, which enables extracting, retrieving and integrating lessons learned in the areas of innovation and technological development that have been registered by R&D researchers and personnel in research-oriented corporate social networks. In addition, this article provides details about the design and construction of an organizational memory as a computational learning mechanism within an organization. Finally, it discusses the management of information extraction and retrieval as a technological knowledge management mechanism, with the goal of consolidating the organizational memory.
Gustavo Martins, Paulo Urbano, Anders Lyhne Christensen
Inteligencia Artificial, Volume 22;

In evolutionary robotics role allocation studies, it is common that the role assumed by each robot is strongly associated with specific local conditions, which may compromise scalability and robustness because of the dependency on those conditions. To increase scalability, communication has been proposed as a means for robots to exchange signals that represent roles. This idea was successfully applied to evolve communication-based role allocation for a two-role task. However, it was necessary to reward signal differentiation in the fitness function, which is a serious limitation as it does not generalize to tasks where the number of roles is unknown a priori. In this paper, we show that rewarding signal differentiation is not necessary to evolve communication-based role allocation strategies for the given task, and we improve reported scalability while requiring less a priori knowledge. Our approach for the two-role task puts fewer constraints on the evolutionary process and enhances the potential of evolving communication-based role allocation for more complex tasks. Furthermore, we conduct experiments for a three-role task where we compare two different cognitive architectures and several fitness functions, and we show how scalable controllers might be evolved.
Antonio Jiménez Márquez, Gabriel Beltrán Maza
Inteligencia Artificial, Volume 22, pp 135-142;

This paper presents the results of processing digitized images, taken with a smartphone, of 56 samples of crushed olives, using the gray-level co-occurrence matrix (GLCM) methodology. Appropriate values of the direction (θ) and distance (D) at which two gray-level pixels are considered neighbours are defined in order to extract the parameters Contrast, Correlation, Energy and Homogeneity. The values of these parameters are correlated with characteristic components of the olive mass: oil content (RGH) and water content (HUM), whose values lie in the ranges usual during processing to obtain virgin olive oil in mills, and which contribute to generating different mechanical textures in the mass according to the HUM/RGH ratio. The results indicate significant correlations of the Contrast, Energy and Homogeneity parameters with RGH and HUM, which allowed us to obtain, by means of multiple linear regression (MLR), mathematical equations that predict both components with high correlation coefficients, r = 0.861 and r = 0.872 for RGH and HUM respectively. These results suggest the feasibility of textural analysis using the GLCM to extract features of interest from digital images of the olive mass, quickly and non-destructively, as an aid in decision making to optimize the production process of virgin olive oil.
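A minimal pure-Python sketch of the GLCM construction and the Contrast parameter mentioned above; the 3x3 two-level image and the horizontal offset (θ = 0°, D = 1) are toy values chosen for illustration.

```python
def glcm(img, dx, dy, levels):
    """Gray-level co-occurrence matrix for pixel offset (dx, dy)."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y][x]][img[y2][x2]] += 1
    return m

def contrast(m):
    """Contrast = sum over (i, j) of (i - j)^2 * p(i, j), p normalised."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * v / total
               for i, row in enumerate(m) for j, v in enumerate(row))

img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 1]]
m = glcm(img, 1, 0, 2)  # horizontal neighbours: θ = 0°, D = 1
```

Energy, Homogeneity and Correlation are computed from the same normalised matrix with different per-cell weights.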
Mohamed Amine Nemmich, Fatima Debbat, Mohamed Slimane
Inteligencia Artificial, Volume 22;

In this paper, we propose a novel efficient model based on the Bees Algorithm (BA) for the Resource-Constrained Project Scheduling Problem (RCPSP). The RCPSP is an NP-hard combinatorial optimization problem involving resource, precedence, and temporal constraints, and it arises in many applications. The main objective is to minimize the expected makespan of the project. The proposed model, named Enhanced Discrete Bees Algorithm (EDBA), iteratively solves the RCPSP by utilizing the intelligent foraging behaviors of honey bees. A potential solution is represented by a multidimensional bee, using the activity list (AL) representation. This projection uses the Serial Schedule Generation Scheme (SSGS) as the decoding procedure to construct active schedules. In addition, the conventional local search of the basic BA is replaced by a neighboring technique, based on the swap operator, which takes into account the specificity of the solution space of project scheduling problems and reduces the number of parameters to be tuned. The proposed EDBA is tested on well-known benchmark instance sets from the Project Scheduling Problem Library (PSPLIB) and compared with other approaches from the literature. The promising computational results reveal the effectiveness of the proposed approach for solving RCPSP problems of various scales.
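A sketch of the kind of swap-based neighborhood move described above, applied to an activity list while respecting precedence. The `preds` mapping and activity numbering are illustrative, not taken from PSPLIB.

```python
import random

def swap_neighbor(activity_list, preds, rng):
    """Swap two adjacent activities if precedence still holds -- a
    simple neighbourhood move in place of BA's generic local search.
    `preds[a]` is the set of predecessors of activity a (illustrative)."""
    al = list(activity_list)
    i = rng.randrange(len(al) - 1)
    a, b = al[i], al[i + 1]
    # b may move before a only if a is not a predecessor of b
    if a not in preds.get(b, set()):
        al[i], al[i + 1] = b, a
    return al

preds = {2: {1}, 3: {1}, 4: {2, 3}}
rng = random.Random(0)
neighbor = swap_neighbor([1, 2, 3, 4], preds, rng)
```

Because only the relative order of the two adjacent activities changes, feasibility of the whole list is preserved whenever the guard holds.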
, Mohamed Sadgal, Aziz El Fazziki
Inteligencia Artificial, Volume 22, pp 102-122;

It is generally accepted that segmentation is a critical problem that influences subsequent tasks in image processing. Often, the proposed approaches are effective only for a limited class of images, with a significant lack of a global solution. The difficulty of segmentation lies in the complexity of providing a global solution with acceptable accuracy within a reasonable time. To overcome this problem, some solutions combine several methods. This paper presents a method for segmenting 2D/3D images by merging regions and solving the problems encountered during the process using a multi-agent system (MAS). We use the strengths of MAS by opting for a compromise achieved through the agents' actions. Regions with high similarity are merged immediately, while those with low similarity are ignored. The remaining ones, with ambiguous similarity, are resolved in a coalition by negotiation. In our system, the agents make decisions according to utility functions adopting Pareto optimality from game theory. Unlike hierarchical merging methods, the MAS performs hypothetical merger planning and then negotiates subsets of agreements to merge all regions at once.
Omar Andres Carmona Cortes, Leticia De Fátima Corrêa Costa, João Pedro Augusto Costa
Inteligencia Artificial, Volume 22, pp 85-101;

This article describes a new adaptive metaheuristic based on a vector-evaluated approach for solving multiobjective problems, which we call the Vector Evaluated Meta-Heuristic. Its main idea is to evolve two populations independently while exchanging information between them, i.e., the first population evolves according to the best individual of the second population and vice versa. The choice of which algorithm is executed in each generation is made stochastically among three evolutionary algorithms well known in the literature: PSO, DE and ABC. To evaluate the results, we used an established metric in multiobjective evolutionary algorithms called hypervolume. Tests have shown that the adaptive metaheuristic reaches the best hypervolumes on three of the ZDT benchmark functions and also on two portfolios of a real-world problem called portfolio investment optimization. The results show that our algorithm improves the Pareto curve when compared to the hypervolumes of each heuristic separately.
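The hypervolume metric used above measures, for a minimization problem, the area (in 2-D) dominated by a Pareto front with respect to a reference point. A minimal sketch, with a toy front and an assumed reference point:

```python
def hypervolume_2d(front, ref):
    """Hypervolume (dominated area w.r.t. reference point `ref`) of a
    2-D Pareto front, both objectives minimised. Assumes the front is
    non-dominated; the reference point here is illustrative."""
    pts = sorted(front)                 # ascending in f1, so f2 descends
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # horizontal slab
        prev_f2 = f2
    return hv

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
hv = hypervolume_2d(front, ref=(4.0, 4.0))
```

Larger hypervolume means the front dominates more of the objective space, which is how the metaheuristics are compared.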
JanapatyI Naga Muneiah, Ch D V Subbarao
Inteligencia Artificial, Volume 22, pp 63-84;

Enterprises often classify their customers based on the degree of profitability in decreasing order, like C1, C2, ..., Cn. Generally, customers in class Cn yield zero profit since they migrate to the competitor. They are called attritors (or churners) and are the prime reason for the huge losses of the enterprises. Customers of the other intermediary classes are reluctant, offer insignificant amounts of profit in different degrees, and lead to uncertainty. Various data mining models, like decision trees, built using customers' profiles are limited to classifying customers as attritors or non-attritors only and do not provide profitable actionable knowledge. In this paper, we present an efficient algorithm for the automatic extraction of profit-maximizing knowledge for business applications with multi-class customers by postprocessing the probability estimation decision tree (PET). When the PET predicts a customer as belonging to one of the less profitable classes, our algorithm suggests cost-sensitive actions to move her/him to the highest attainable profitable status. In the proposed novel approach, the PET is represented in compressed form as a bit-pattern matrix, and the postprocessing task is performed on the bit patterns by applying bitwise AND operations. The computational performance of the proposed method is strong due to the employment of effective data structures. Substantial experiments conducted on UCI datasets, real mobile phone service data and other benchmark datasets demonstrate that the proposed method remarkably outperforms the state-of-the-art methods.
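To give a feel for the bitwise postprocessing idea, the sketch below tests whether a customer's attribute bit pattern satisfies a tree path encoded as a (mask, bits) pair using a single AND. The attribute encoding and names are illustrative; the paper's bit-pattern matrix is more elaborate.

```python
def matches(customer_bits, path_mask, path_bits):
    """A tree path matches when the customer's bits agree with the
    path's required bits on every attribute the path tests.
    Encoding is illustrative, not the paper's exact scheme."""
    return customer_bits & path_mask == path_bits

# attribute bits: bit0 = high usage, bit1 = long tenure, bit2 = has plan
leaf_to_profitable = {"mask": 0b011, "bits": 0b011}  # tests bits 0 and 1
customer = 0b111
ok = matches(customer, leaf_to_profitable["mask"], leaf_to_profitable["bits"])
```

Scanning candidate actions then amounts to cheap integer operations rather than repeated tree traversals.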
Mariela Morveli Espinoza, Juan Carlos Nieves, Ayslan Possebom, Cesar Augusto Tacla
Inteligencia Artificial, Volume 22, pp 47-62;

Considering rational agents, we focus on the problem of selecting goals out of a set of incompatible ones. We consider three forms of incompatibility introduced by Castelfranchi and Paglieri, namely the terminal, the instrumental (or resource-based), and the superfluity. We represent the agent's plans by means of structured arguments whose premises are pervaded with uncertainty, and we measure the strength of these arguments in order to determine the set of compatible goals. We propose two novel ways of calculating the strength of these arguments, depending on the kind of incompatibility that exists between them. The first is the logical strength value, denoted by a three-dimensional vector calculated from a probabilistic interval associated with each argument. The vector represents the precision of the interval, its location, and the combination of precision and location. This type of representation and treatment of the strength of a structured argument has not been defined before in the state of the art. The second way of calculating the strength of an argument is based on the cost of the plans (regarding the necessary resources) and the preference of the goals associated with the plans. Building on this novel approach for measuring the strength of structured arguments, we propose a semantics for the selection of plans and goals based on Dung's abstract argumentation theory. Finally, we make a theoretical evaluation of our proposal.
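One plausible reading of the three-dimensional strength vector described above, sketched with assumed formulas (the paper's exact definitions of precision, location and their combination may differ):

```python
def strength(lower, upper):
    """Three-dimensional strength of an argument with probabilistic
    interval [lower, upper]: interval precision, location, and their
    product. The formulas are assumptions for illustration only."""
    precision = 1.0 - (upper - lower)   # narrower interval = more precise
    location = (lower + upper) / 2.0    # where the interval sits in [0, 1]
    return (precision, location, precision * location)

s = strength(0.6, 0.8)
```

Vectors like `s` could then be compared component-wise or lexicographically when deciding which incompatible goal to keep.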
Inteligencia Artificial, Volume 22, pp 36-46;

The performance of classification algorithms can be enhanced by selecting many representative points for the training sample. In this paper, a new border and rare biased sampling (BRBS) scheme is proposed that assigns each point in the dataset an importance factor. The importance factor of border points and rare points (i.e., points belonging to rare classes) is higher than that of other points, and points are then selected for the training sample depending on these factors. Including such points in the training sample improves classifier training. The results of experiments on 10 UCI Machine Learning Repository datasets show that the BRBS algorithm outperforms many sampling algorithms and enhanced the performance of several classification algorithms by about 8%. BRBS is designed to be easy to configure, to cover the whole point space, and to generate a unique sample every time it is executed.
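A toy sketch of the importance-factor idea on 1-D points: rare-class points and border points (whose nearest neighbour belongs to another class) receive higher weights. The boost values and the rarity/border tests are assumptions, not BRBS's actual definitions.

```python
from collections import Counter

def importance_factors(points, labels, rare_boost=2.0, border_boost=2.0):
    """Assign each point a weight; rare-class and border points get
    more. Boosts and thresholds are illustrative, not the paper's."""
    freq = Counter(labels)
    n = len(points)
    weights = []
    for i in range(n):
        w = 1.0
        if freq[labels[i]] < n / len(freq):      # smaller-than-average class
            w *= rare_boost
        # border test: the closest other point belongs to another class
        nearest = min((j for j in range(n) if j != i),
                      key=lambda j: abs(points[j] - points[i]))
        if labels[nearest] != labels[i]:
            w *= border_boost
        weights.append(w)
    return weights

pts = [0.0, 0.1, 0.2, 1.0, 1.1, 5.0]
labs = ["a", "a", "a", "b", "b", "c"]
w = importance_factors(pts, labs)
```

Sampling for the training set would then draw points with probability proportional to these weights.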
José Menezes, Giordano Cabral, Bruno Gomes, Paulo Pereira
Inteligencia Artificial, Volume 22, pp 14-35;

Choosing audio features has long been an interesting topic for audio classification experts, who have found that this is probably the most important step in solving the classification problem. In this sense, there are feature learning techniques for generating new features better suited to the classification model than conventional features. These techniques generally do not depend on domain knowledge and can be applied to various types of raw data, whereas less agnostic approaches learn knowledge restricted to the area studied; audio data requires a specific type of knowledge. Many techniques seek to improve the performance of this new generation of acoustic features, among which stands out the use of evolutionary algorithms to explore the analytical function space. However, the efforts made so far leave room for improvement. The purpose of this work is to propose and evaluate a multi-objective alternative for the exploitation of analytical audio features. Experiments were arranged to validate the method, with the help of a computational prototype implementing the proposed solution. The results confirm the effectiveness of the model while showing that there is still opportunity for improvement in the chosen segment.
Inteligencia Artificial, Volume 22, pp 1-13;

Nowadays, many approaches to Sentiment Analysis (SA) rely on affective lexicons to identify the emotions conveyed in opinions. However, most of these lexicons do not consider that a word can express different sentiments in different predication domains, introducing errors into the sentiment inference. To address this problem, we present a model based on a context graph which can be used to build domain-specific sentiment lexicons (DL: Dynamic Lexicons) by propagating the valence of a few seed words. For different corpora, we compare the results of a simple rule-based sentiment classifier using the corresponding DL with the results obtained using a general affective lexicon. For most corpora containing domain-specific opinions, the DL achieves better results than the general lexicon.
Inteligencia Artificial, Volume 22, pp 121-134;

In this paper, we use data from the Microsoft Kinect sensor, which processes the captured image of a person by extracting the joint information in every frame. We then propose the creation of an image derived from all the sequential frames of a gesture movement, which facilitates training in a convolutional neural network. We trained a CNN using two strategies: combined training and individual training. The strategies were evaluated on the convolutional neural network (CNN) using the MSRC-12 dataset, obtaining an accuracy rate of 86.67% for combined training and 90.78% for individual training. The trained neural network was then used to classify data obtained from the Kinect with a person, obtaining an accuracy rate of 72.08% for combined training and 81.25% for individual training. Finally, we use the system to send commands to a mobile robot in order to control it.
Antonela Tommasel, Juan Manuel Rodriguez, Daniela Godoy
Inteligencia Artificial, Volume 22, pp 81-100;

With the widespread adoption of modern technologies and social media networks, a new form of bullying occurring anytime and anywhere has emerged. This new phenomenon, known as cyberaggression or cyberbullying, refers to aggressive and intentional acts aimed at repeatedly causing harm to another person through rude, insulting, offensive, teasing or demoralising comments on online social media. As these aggressions represent a threatening experience for Internet users, especially kids and teens who are still shaping their identities, social relations and well-being, it is crucial to understand how cyberbullying occurs in order to prevent it from escalating. Considering the massive amount of information on the Web, the development of intelligent techniques for automatically detecting harmful content is gaining importance, allowing the monitoring of large-scale social media and the early detection of unwanted and aggressive situations. Even though several approaches based on both traditional and deep learning techniques have been developed over the last few years, several concerns arise over the duplication of research and the difficulty of comparing results. Moreover, there is no agreement regarding either which type of technique is better suited for the task or the type of features on which learning should be based. The goal of this work is to shed some light on the effects of learning paradigms and feature engineering approaches for detecting aggressions in social media texts. In this context, this work provides an evaluation of diverse traditional and deep learning techniques based on diverse sets of features, across multiple social media sites.
, Lamartine Nogueira Frutuoso Guimarães,
Inteligencia Artificial, Volume 22, pp 162-195;

Nowadays, there is a remarkable worldwide trend of employing UAVs and drones for diverse applications. The main reasons are that they may cost a fraction of manned aircraft and avoid exposing human lives to risk. Nevertheless, they depend on positioning systems that may be vulnerable, so it is necessary to make these systems as accurate as possible in order to improve navigation. To this end, conventional data fusion techniques can be employed, but their computational cost may be prohibitive due to the low payload of some UAVs. This paper proposes a multisensor data fusion application based on hybrid adaptive computational intelligence - the cascaded use of the Fuzzy C-Means Clustering (FCM) and Adaptive-Network-Based Fuzzy Inference System (ANFIS) algorithms - which has been shown to improve the accuracy of current positioning estimation systems for real-time UAV autonomous navigation. In addition, the proposed methodology outperformed two other computational intelligence techniques.
Inteligencia Artificial, Volume 22, pp 150-161;

For proper attitude control of spacecraft, conventional optimal Linear Quadratic (LQ) controllers are designed via trial-and-error selection of the weighting matrices. This time-consuming method is inefficient and usually results in a high-order, complex controller. This work therefore proposes a genetic algorithm (GA) for the search problem of the attitude controller gains of a satellite launcher. The GA's fitness function considers control features such as eigenstructure, control goals and constraints. According to simulation results, searching for controller parameters with evolutionary algorithms was faster than the usual approaches, and the designed controller met all the specifications with satisfactory time responses. These results can improve engineering practice by speeding up the design process and reducing costs.
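A minimal sketch of the GA-based gain search idea on a toy first-order system x' = (a - b k) x: the fitness penalises instability (an eigenvalue-placement stand-in for the paper's eigenstructure criteria) plus control effort. System constants, population sizes and the mutation scale are all assumptions.

```python
import random

def fitness(k, a=1.0, b=2.0):
    """Penalise instability of x' = (a - b*k)x plus control effort.
    A stand-in for the paper's eigenstructure/constraint criteria."""
    pole = a - b * k
    return (1e3 if pole >= 0 else -1.0 / pole) + 0.1 * k * k

def ga_search(rng, pop_size=20, gens=40):
    """Elitist GA over a scalar gain: keep the best half, refill with
    Gaussian mutations of elite individuals."""
    pop = [rng.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]
        pop = elite + [rng.choice(elite) + rng.gauss(0, 0.3) for _ in elite]
    return min(pop, key=fitness)

k = ga_search(random.Random(1))
```

For a real launcher the gene would be the LQ weighting matrices (or the gain matrix) and the fitness would evaluate the closed-loop eigenstructure and time-response constraints.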
Dan Ezequiel Kröhling, Omar Chiotti, Ernesto Martínez
Inteligencia Artificial, Volume 22, pp 135-149;

Automated negotiation between artificial agents is essential to deploy cognitive computing and the Internet of Things. The behavior of a negotiation agent depends significantly on the influence of environmental conditions or contextual variables, since they affect not only a given agent's preferences and strategies, but also those of other agents. Despite this, the existing literature on automated negotiation says little about how to properly account for the effect of context-relevant variables in learning and evolving strategies. In this paper, a novel context-driven representation for automated negotiation is introduced. Also proposed is a simple negotiation agent that queries available information from its environment, internally models contextual variables, and learns how to take advantage of this knowledge by playing against itself using reinforcement learning. Through a set of episodes against other negotiation agents from the existing literature, it is shown with our context-aware agent that it makes no sense to negotiate without taking context-relevant variables into account. Our context-aware negotiation agent has been implemented in the GENIUS environment, and the results obtained are significant and quite revealing.
Mariano Maisonnave, , , Ana Gabriela Maguitman
Inteligencia Artificial, Volume 22, pp 61-80;

Successful modeling and prediction depend on effective methods for the extraction of domain-relevant variables. This paper proposes a methodology for identifying domain-specific terms. The proposed methodology relies on a collection of documents labeled as relevant or irrelevant to the domain under analysis. Based on the labeled document collection, we propose a supervised technique that weights terms based on their descriptive and discriminating power. Finally, the descriptive and discriminating values are combined into a general measure that, through an adjustable parameter, allows one to independently favor different aspects of retrieval, such as maximizing precision or recall or achieving a balance between them. The proposed technique is applied to the economic domain and is empirically evaluated through a human-subject experiment involving experts and non-experts in economics. It is also evaluated as a term-weighting technique for query-term selection, showing promising results. We finally illustrate the applicability of the proposed technique to diverse problems such as building prediction models, supporting knowledge modeling, and achieving total recall.
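A sketch of the weighting scheme described above, with assumed formulas: descriptive power as the term's rate in relevant documents, discriminating power as its rate advantage over irrelevant documents, combined through an adjustable parameter. The corpora and the exact formulas are illustrative, not the paper's.

```python
def term_weight(term, relevant, irrelevant, lam=0.5):
    """Combine descriptive power (rate in relevant docs) and
    discriminating power (rate difference vs irrelevant docs) via the
    adjustable parameter `lam`. Formulas are illustrative assumptions."""
    r = sum(term in d for d in relevant) / len(relevant)
    i = sum(term in d for d in irrelevant) / len(irrelevant)
    descriptive, discriminating = r, max(r - i, 0.0)
    return lam * descriptive + (1 - lam) * discriminating

rel = [{"inflation", "rates"}, {"inflation", "gdp"}]
irr = [{"football"}, {"inflation", "music"}]
w = term_weight("inflation", rel, irr)
```

Setting `lam` near 1 favors terms that describe the domain broadly (recall); near 0, terms that separate it from other domains (precision).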
Gabriel Dario Caffaratti, Martín Gonzalo Marchetta, Raymundo Quilez Forradellas
Inteligencia Artificial, Volume 22, pp 16-38;

Visual depth recognition through Stereo Matching is an active field of research due to the numerous applications in robotics, autonomous driving, user interfaces, etc. Multiple techniques have been developed in the last two decades to achieve accurate disparity maps in short time. With the arrival of Deep Leaning architectures, different fields of Artificial Vision, but mainly on image recognition, have achieved a great progress due to their easier training capabilities and reduction of parameters. This type of networks brought the attention of the Stereo Matching researchers who successfully applied the same concept to generate disparity maps. Even though multiple approaches have been taken towards the minimization of the execution time and errors in the results, most of the time the number of parameters of the networks is neither taken into consideration nor optimized. Inspired on the Squeeze-Nets developed for image recognition, we developed a Stereo Matching Squeeze neural network architecture capable of providing disparity maps with a highly reduced network size without a significant impact on quality and execution time compared with state of the art architectures. In addition, with the purpose of improving the quality of the solution and get solutions closer to real time, an extra refinement module is proposed and several tests are performed using different input size reductions.
James L. Cox, Stephen Lucci, Tayfun Pay
Inteligencia Artificial, Volume 22, pp 1-15;

We carry out a detailed analysis of the effects of different dynamic variable and value ordering heuristics on the search space of Sudoku when the encoding method and the filtering algorithm are fixed. Our study starts by examining lexicographical variable and value ordering and evaluates different combinations of dynamic variable and value ordering heuristics. We eventually build up to a dynamic variable ordering heuristic that has two rounds of tie-breakers, where the second tie-breaker is a dynamic value ordering heuristic. We show that our method that uses this interlinked heuristic outperforms the previously studied ones with the same experimental setup. Overall, we conclude that constructing insightful dynamic variable ordering heuristics that also utilize a dynamic value ordering heuristic in their decision making process could drastically improve the search effort for some constraint satisfaction problems.
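As a simple instance of chaining dynamic ordering heuristics like those studied above, the sketch below picks the next variable by minimum remaining values with a degree tie-breaker (the paper's second tie-breaker additionally involves a dynamic value ordering heuristic; the domains and degrees here are toy values).

```python
def pick_variable(domains, degree):
    """Minimum-remaining-values choice with a degree tie-breaker --
    one simple instance of chained dynamic ordering heuristics."""
    unassigned = [v for v, d in domains.items() if len(d) > 1]
    # primary key: fewest remaining values; tie-break: highest degree
    return min(unassigned, key=lambda v: (len(domains[v]), -degree[v]))

domains = {"a": {1, 2}, "b": {1, 2, 3}, "c": {4, 5}}
degree = {"a": 1, "b": 2, "c": 3}
choice = pick_variable(domains, degree)
```

Here "a" and "c" tie on domain size 2, and "c" wins on the higher degree.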
Paulo Vitor De Campos Souza, Augusto Junio Guimaraes, Vanessa Souza Araújo, Thiago Silva Rezende, Vinicius Jonathan Silva Araújo
Inteligencia Artificial, Volume 21, pp 114-133;

This paper presents a novel learning algorithm for fuzzy logic neurons, based on neural networks and fuzzy systems, that is able to generate accurate and transparent models. The learning algorithm draws on ideas from the Extreme Learning Machine [36], to achieve low time complexity, and from regularization theory, resulting in sparse and accurate models. A compact set of incomplete fuzzy rules can be extracted from the resulting network topology. Experiments on regression problems are detailed. The results suggest the proposed approach is a promising alternative for pattern recognition, with good accuracy and some level of interpretability.
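The Extreme Learning Machine idea the algorithm above borrows from, in a minimal form: a random fixed hidden layer followed by a regularized least-squares solve for the output weights (the regularization echoing the paper's sparsity theme). This sketch uses plain tanh units rather than fuzzy logic neurons, and all sizes and constants are assumptions.

```python
import random, math

def elm_fit(X, y, hidden, rng, reg=1e-3):
    """ELM-style fit: random fixed hidden layer, then a regularised
    least-squares solve for the output weights (illustrative sketch)."""
    W = [[rng.uniform(-1, 1) for _ in X[0]] for _ in range(hidden)]
    H = [[math.tanh(sum(w * x for w, x in zip(Wh, row))) for Wh in W]
         for row in X]
    # normal equations (H^T H + reg*I) beta = H^T y, Gaussian elimination
    n = hidden
    A = [[sum(H[k][i] * H[k][j] for k in range(len(H)))
          + (reg if i == j else 0.0) for j in range(n)] for i in range(n)]
    b = [sum(H[k][i] * y[k] for k in range(len(H))) for i in range(n)]
    for col in range(n):                      # forward elimination, pivoting
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [u - f * v for u, v in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * n                          # back substitution
    for i in reversed(range(n)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, n))) / A[i][i]
    return W, beta

def elm_predict(W, beta, row):
    return sum(bt * math.tanh(sum(w * x for w, x in zip(Wh, row)))
               for bt, Wh in zip(beta, W))

rng = random.Random(0)
X = [[x / 10] for x in range(10)]
y = [2 * r[0] for r in X]                     # learn y = 2x
W, beta = elm_fit(X, y, 5, rng)
pred = elm_predict(W, beta, [0.5])
```

Only the output weights are learned; the random hidden layer stays fixed, which is what gives the approach its low time complexity.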