Results in Journal Inteligencia Artificial: 542

(searched for: journal_id:(2514979))
Mariela Morveli Espinoza
Inteligencia Artificial, Volume 24, pp 36-39; doi:10.4114/intartif.vol24iss67pp36-39

Rhetorical arguments are used in negotiation dialogues when a proponent agent tries to persuade his opponent to accept a proposal more readily. When more than one argument is generated, the proponent must compare them in order to select the most adequate for his interests. One way of comparing them is by means of their strength values. Related works propose a calculation based only on the components of the rhetorical arguments, i.e., the importance of the opponent's goal and the certainty level of the beliefs that make up the argument. This work proposes a model for calculating the strength of rhetorical arguments, inspired by the pre-conditions of credibility and preferability stated by Guerini and Castelfranchi. Thus, we suggest two new criteria for the strength calculation: the credibility of the proponent and the status of the opponent's goal in the goal processing cycle. The model is empirically evaluated, and the results demonstrate that it is more efficient than previous works in terms of the number of exchanged arguments and the number of agreements reached.
UshaDevi G, Gokulnath Bv
Inteligencia Artificial, Volume 23, pp 136-154; doi:10.4114/intartif.vol23iss65pp136-154

The major agricultural products in India are rice, wheat, pulses, and spices. As the population increases rapidly, the demand for agricultural products is also increasing alarmingly. A huge amount of data is generated from various fields of agriculture. Analysis of this data helps in predicting crop yield, analyzing soil quality, predicting plant disease, and understanding how meteorological factors affect crop productivity. Crop protection plays a vital role in maintaining agricultural production. Pathogens, pests, weeds, and animals are responsible for productivity losses in agricultural products. Machine learning techniques like Random Forest, Bayesian Networks, Decision Trees, Support Vector Machines, etc. help in the automatic detection of plant diseases from visual symptoms on the plant. A survey of different existing machine learning techniques used for plant disease prediction is presented in this paper. Automatic detection of disease in plants helps in early diagnosis and prevention, which leads to an increase in agricultural productivity.
Suresh K, Karthik S, Hanumanthappa M
Inteligencia Artificial, Volume 23, pp 86-99; doi:10.4114/intartif.vol23iss65pp86-99

With the progressions in Information and Communication Technology (ICT), innumerable electronic devices (like smart sensors) and several software applications can make notable contributions to the challenges existent in monitoring plants. In prevailing work, the segmentation accuracy and classification accuracy of the Disease Monitoring System (DMS) are low, so the system does not properly monitor plant diseases. To overcome such drawbacks, this paper proposes an efficient monitoring system for paddy leaves based on big data mining. The proposed model comprises 5 phases: 1) image acquisition, 2) segmentation, 3) feature extraction, 4) feature selection, and 5) classification and validation. Primarily, a paddy leaf image taken from the dataset is considered as the input. Then, the image acquisition phase is executed, in which 3 steps are performed: i) converting the RGB image to a grayscale image, ii) normalization for high intensity, and iii) preprocessing using the Alpha-Trimmed Mean Filter (ATMF), a hybrid of the mean and median filters, through which noise is eradicated. Next, the resulting image is segmented using the Fuzzy C-Means (FCM) clustering algorithm, which segments the diseased portion of the paddy leaves. In the next phase, features are extracted, and the resulting features are selected using the Multi-Verse Optimization (MVO) algorithm. After feature selection, the chosen features are classified using ANFIS (Adaptive Neuro-Fuzzy Inference System). Experimental results are contrasted with the former SVM (Support Vector Machine) classifier and the prevailing methods in respect of precision, recall, F-measure, sensitivity, accuracy, and specificity. In accuracy, the proposed method reaches 97.28%, while the prevailing techniques offer only 91.2% for the SVM classifier, 85.3% for KNN, and 88.78% for ANN. Hence, the proposed DMS has a more accurate detection and classification process than the other methods and evinces better accuracy when contrasted with the prevailing methods.
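The alpha-trimmed mean filtering step mentioned in the abstract can be illustrated with a minimal sketch (a pure-Python 3×3 version written for this listing; the window size, alpha value, and border handling are assumptions, not the authors' implementation):

```python
def alpha_trimmed_mean_filter(img, alpha=2):
    """Filter a 2D grayscale image with a 3x3 alpha-trimmed mean:
    sort each 3x3 window, drop the `alpha` lowest and `alpha` highest
    values, and average the rest. With alpha=0 this reduces to the mean
    filter; with alpha=4 it degenerates to the median filter, which is
    why the ATMF is described as a hybrid of the two."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # borders are left untouched
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            trimmed = window[alpha:len(window) - alpha]
            out[i][j] = sum(trimmed) / len(trimmed)
    return out
```

With alpha=2, an isolated impulse-noise pixel is fully rejected because the extreme values of the window are discarded before averaging.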
Raul Cesar Alves, Josué Silva de Morais, Keiji Yamanaka
Inteligencia Artificial, Volume 23, pp 33-55; doi:10.4114/intartif.vol23iss65pp33-55

Indoor localization is considered one of the most fundamental problems in providing a robot with autonomous capabilities. Although many algorithms and sensors have been proposed, none has proven to work perfectly under all situations. Also, in order to improve localization quality, some approaches use expensive devices, either mounted on the robots or attached to the environment, that do not naturally belong in human environments. This paper presents a novel approach that combines the benefits of two localization techniques, WiFi and Kinect, into a single algorithm using low-cost sensors. It uses two separate Particle Filters (PFs). The WiFi PF gives the global location of the robot using signals from Access Point devices in different parts of the environment, while it bounds the particles of the Kinect PF, which determines the robot's pose locally. Our algorithm also tackles the Initialization/Kidnapped Robot Problem by detecting divergence in WiFi signals, which starts a localization recovery process. Furthermore, new methods for WiFi mapping and localization are introduced.
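The measurement update that both the WiFi and Kinect particle filters rely on can be sketched generically (an illustrative one-dimensional step; the Gaussian sensor model and the `sense_fn` interface are assumptions, not the paper's formulation):

```python
import math
import random

def pf_update(particles, measurement, sense_fn, noise_std=0.5):
    """One measurement-update + resampling step of a particle filter:
    weight each particle by the Gaussian likelihood of `measurement`
    given sense_fn(particle), then resample proportionally to weight,
    so particles concentrate around states consistent with the sensor."""
    weights = [math.exp(-((measurement - sense_fn(p)) ** 2)
                        / (2 * noise_std ** 2)) for p in particles]
    total = sum(weights) or 1e-12          # guard against all-zero weights
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))
```

In the paper's scheme the WiFi filter would additionally bound where the Kinect filter's particles are allowed to live; that coupling is omitted here.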
Supoj Hengpraprohm, Suwimol Jungjit
Inteligencia Artificial, Volume 23, pp 100-114; doi:10.4114/intartif.vol23iss65pp100-114

For breast cancer data classification, we propose an ensemble filter feature selection approach named ‘EnSNR’. Entropy and SNR evaluation functions are used to find the features (genes) for the EnSNR subset. A Genetic Algorithm (GA) generates the classification ‘model’. The efficiency of the ‘model’ is validated using 10-fold cross-validation re-sampling. The microarray dataset used in our experiments contains 50,739 genes for each of 32 patients. When our proposed ‘EnSNR’ feature subset is used, as well as giving an enhanced degree of prediction accuracy and reducing the number of irrelevant features (genes), there is also a small saving in computer processing time.
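An SNR evaluation function of the kind used for filter feature selection can be sketched as follows (an illustrative Golub-style signal-to-noise ratio; the exact formula and the entropy component used by EnSNR may differ):

```python
from statistics import mean, pstdev

def snr_score(values, labels):
    """Golub-style signal-to-noise ratio for one gene:
    |mu1 - mu0| / (sigma1 + sigma0) over the two class labels.
    Higher scores mark genes whose expression separates the classes."""
    g0 = [v for v, y in zip(values, labels) if y == 0]
    g1 = [v for v, y in zip(values, labels) if y == 1]
    return abs(mean(g1) - mean(g0)) / (pstdev(g1) + pstdev(g0) + 1e-12)

def top_k_genes(matrix, labels, k):
    """Rank genes (rows of `matrix`) by SNR and keep the top-k indices,
    discarding the irrelevant remainder before the GA builds a model."""
    scores = [(snr_score(row, labels), i) for i, row in enumerate(matrix)]
    return [i for _, i in sorted(scores, reverse=True)[:k]]
```

A filter score like this is computed once per gene, which is what makes it tractable for a 50,739-gene microarray before any wrapper search runs.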
Qing An, Xijiang Chen, Jupu Yuan
Inteligencia Artificial, Volume 23, pp 115-123; doi:10.4114/intartif.vol23iss65pp115-123

In order to meet the needs of high-precision, high-availability and high-safety positioning for automatic driving, and to address the technical difficulties of automatic driving positioning in complex urban environments, an inertial navigation model suited to the dynamic characteristics of vehicles is established, and a tightly coupled Beidou/inertial high-precision positioning method is proposed, which solves the problem of the rapid accumulation of positioning errors in weak Beidou signal environments. The results show that when the Beidou signal is completely interrupted and the INS is tightly coupled, the positioning accuracy and continuity are improved significantly, with a maximum error of less than 0.5 m, enabling high-precision continuous navigation and positioning for automatic driving in complex urban environments.
Leonardo Luís Röpke, Manuel Osório Binelo
Inteligencia Artificial, Volume 23, pp 67-85; doi:10.4114/intartif.vol23iss65pp67-85

This work presents the study and development of an Artificial Intelligence system, focused on the K-means algorithm and Artificial Neural Networks (ANNs), to assist fleet managers in the identification of routes and route deviations. The developed tool aims to modernize the process of identifying routes and route deviations. The results show that the ANNs obtained a 100% accuracy rate in the identification of routes, and in the identification of route deviations they were able to identify 61% of the routes presented. Therefore, ANNs are an excellent technique for identifying routes and route deviations. The K-means algorithm presented good results when applied to the discovery of similar routes, thus becoming an important tool for the work of monitoring vehicle routes.
Imane Guellil, Marcelo Mendoza, Faical Azouaou
Inteligencia Artificial, Volume 23, pp 124-135; doi:10.4114/intartif.vol23iss65pp124-135

This paper presents an analytic study showing that it is entirely possible to analyze the sentiment of an Arabic dialect without constructing any resources. The idea of this work is to use the resources dedicated to a given dialect X for analyzing the sentiment of another dialect Y; the only condition is that X and Y belong to the same category of dialects. We apply this idea to the Algerian dialect, a Maghrebi Arabic dialect that suffers from limited available tools and other resources required for automatic sentiment analysis. For this analysis, we rely on Maghrebi dialect resources and two manually annotated sentiment corpora for the Tunisian and Moroccan dialects, respectively. We also use a large corpus for the Maghrebi dialect. We use a state-of-the-art system and propose a new deep learning architecture for automatically classifying the sentiment of an Arabic dialect (the Algerian dialect). Experimental results show that an F1-score of up to 83% is achieved, by a Multilayer Perceptron (MLP) with the Tunisian corpus and by a Long Short-Term Memory (LSTM) network with the combination of the Tunisian and Moroccan corpora. An improvement of 15% compared to the closest competitor was observed through this study. Ongoing work is aimed at manually constructing an annotated sentiment corpus for the Algerian dialect and comparing the results.
José Daniel López-Cabrera, Luis Alberto López Rodríguez, Marlén Pérez-Díaz
Inteligencia Artificial, Volume 23, pp 56-66; doi:10.4114/intartif.vol23iss65pp56-66

Breast cancer is the most frequent cancer in females. Mammography has proven to be the most effective method for the early detection of this type of cancer. Mammographic images are sometimes difficult to interpret, due to the nature of the anomalies, low image contrast and the composition of the mammary tissues, as well as various technological factors such as the spatial resolution of the image or noise. Computer-aided diagnosis systems have been developed to increase the accuracy of mammographic examinations, to be used by physicians as a second opinion in reaching the final diagnosis and thus reduce human error. Convolutional neural networks are a current trend in computer vision tasks, due to the great performance they have achieved. The present investigation was based on this type of network to classify images into three classes: normal, benign and malignant tumour. Because the mini-MIAS database used has a small number of images, the transfer learning technique was applied to the pre-trained Inception v3 network. Two convolutional neural network architectures were implemented; the three-class architecture obtained 86.05% accuracy, while the architecture with two neural networks in series reached an accuracy of 88.2%.
Maged Mamdouh, Mostafa Ezzat, Hesham A. Hefny
Inteligencia Artificial, Volume 23, pp 19-32; doi:10.4114/intartif.vol23iss65pp19-32

Airport ground handling has a global trend to meet Service Level Agreement (SLA) requirements, which represent resource allocation with more restrictions according to flights. That can be achieved by predicting future resource demands. This research presents a comparison of the machine learning techniques most used, in many different fields, for demand prediction and resource allocation. The prediction model nominated and used in this research is the Support Vector Machine (SVM), which predicts the required resources for each flight despite the restrictions imposed by airlines when contracting their services in the Service Level Agreement. The approach has been trained and tested using real data from Cairo International Airport. The proposed SVM technique is implemented and explained with varying accuracy of resource allocation prediction, showing that even with varying resource prediction accuracy in different scenarios, the Support Vector Machine technique can produce good performance for resource allocation in the airport.
Levan Uridia, Dirk Walther
Inteligencia Artificial, Volume 23, pp 1-18; doi:10.4114/intartif.vol23iss65pp1-18

We investigate a variant of the epistemic logic S5 for reasoning about knowledge under hypotheses. The logic is equipped with a modal operator of necessity that can be parameterized with a hypothesis representing background assumptions. The modal operator can be described as relative necessity, and the resulting logic turns out to be a variant of Chellas' Conditional Logic. We present an axiomatization of the logic and of its extensions with the common knowledge operator and the distributed knowledge operator. We show that the logics are decidable and complete with respect to Kripke structures as well as topological structures. The topological completeness results are obtained by utilizing the Alexandroff connection between preorders and Alexandroff spaces.
Yun Shi, Yanyan Zhu
INTELIGENCIA ARTIFICIAL, Volume 23, pp 1-8; doi:10.4114/intartif.vol23iss66pp1-8

Considering the need for a large number of samples and the long training time, this paper uses deep learning and transfer learning to identify motion-blurred Chinese character coded targets (CCTs). Firstly, a set of CCTs is designed, and a motion-blur image generation system is used to provide samples for the recognition network. Secondly, the OTSU algorithm, dilation, and the Canny operator are applied to the real-shot blurred images, where the target area is segmented by its minimum bounding box. Thirdly, samples are selected from the sample set at a 4:1 ratio to form the training and test sets. Under the TensorFlow framework, the convolutional layers of the AlexNet model are fixed, and the fully connected layers are retrained for transfer learning. Finally, experiments on simulated and real motion-blurred images are carried out. The results show that network training and testing take only 30 minutes and 2 seconds, respectively, and that the recognition accuracy reaches 98.6% on simulated images and 93.58% on real ones. Our method thus has high recognition accuracy, does not require a large number of training samples, takes less time, and can provide a reference for the recognition of motion-blurred CCTs.
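The OTSU thresholding step used before segmenting the target area can be sketched in a few lines (a standard textbook formulation over a grayscale histogram, not the paper's code):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the threshold t that maximizes the
    between-class variance w0*w1*(mu0-mu1)^2 of the histogram split,
    separating foreground from background without a manual threshold."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]                 # pixels at or below t
        if w0 == 0:
            continue
        w1 = total - w0               # pixels above t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal histogram (dark characters on a bright background, or vice versa) the returned threshold falls between the two modes.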
Mingyong Li, Ziye An, Miaomiao Ren
INTELIGENCIA ARTIFICIAL, Volume 23, pp 51-65; doi:10.4114/intartif.vol23iss66pp51-65

With the rapid development of Internet technology, traditional online learning can no longer meet the adaptive learning needs of students, and smart education concepts have sprung up like mushrooms; artificial intelligence techniques such as machine learning and deep learning, applied to the data generated by traditional platforms, have gradually become a new research hotspot. How to further use these big data resources for adaptive learning and content push, so as to improve the quality of student training, has become an important issue in the current research field. To protect students' learning during COVID-19 epidemic prevention and control, universities, primary schools and secondary schools nationwide addressed the problem of "Classes Suspended but Learning Continues" through online teaching. Students learn online at home, and the family, as a special classroom, plays a vital role. Based on an analysis of the factors affecting home study, this article compares live broadcast platforms and constructs a student-centered "live broadcast + home learning" model under the epidemic situation. A survey of the implementation shows a good evaluation effect. It is hoped that this model can provide a reference for teachers and students in the new situation and solve some of the problems currently facing online teaching at home.
Jean Phelipe De Oliveira Lima, Carlos Maurício Seródio Figueiredo
INTELIGENCIA ARTIFICIAL, Volume 23, pp 36-50; doi:10.4114/intartif.vol23iss66pp36-50

Energy monitoring is a crucial activity in energy efficiency. It involves techniques to supervise the energy consumption in a power grid, with the main purpose of assuring a good level of detail, so that consumption figures are obtained for each connected device at a low infrastructure cost. This paper presents the evaluation of different Machine Learning models that classify electric current patterns to identify and monitor the electric loads present in circuits with a single sensing device. The models were trained and validated on a database created from signal samples of 4 electrical devices: notebook charger, refrigerator, blender and fan. The models with the best metrics achieved, respectively, 97% and 100% accuracy and 98% and 100% F1-score, surpassing results obtained in related research.
Marilyn Minicucci Ibañez, Reinaldo Roberto Rosa, Lamartine N. F. Guimarães
INTELIGENCIA ARTIFICIAL, Volume 23, pp 66-84; doi:10.4114/intartif.vol23iss66pp66-84

In the last few decades, the growth in the use of the Internet has generated a substantial increase in the circulation of information on social media. Due to the high interest of several areas of society in the analysis of these data, the study of better techniques for manipulating and understanding this type of data is of great importance, so that this enormous volume of information can be interpreted quickly and accurately. In this context, this study presents two sentiment analysis approaches to verify the emotions of the population in different contexts. The first approach analyzes the 2018 presidential elections in Brazil using data from the Twitter social network. The second approach analyzes social media data to identify the threat level of armed conflicts, considering data from the conflict between Syria and the USA in 2017. To achieve this goal, machine learning techniques such as auto-encoders and deep learning are used in conjunction with NLP text analysis techniques. The results obtained show the effectiveness of the approaches in classifying sentiments within the studied domains, according to the methodology developed for this work.
Yang Cui, Cheng Liu, Yanming Cheng, Jing Niu
INTELIGENCIA ARTIFICIAL, Volume 23, pp 26-35; doi:10.4114/intartif.vol23iss66pp26-35

According to the nonlinear output characteristics of photovoltaic cells, and combined with an artificial intelligence algorithm, an MPPT (Maximum Power Point Tracking) control algorithm based on a fuzzy variable step size is proposed, which enables the system to quickly track the maximum power point and improves the energy conversion efficiency of the photovoltaic system. This paper designs a small-scale photovoltaic power generation system. The main circuit of the system consists of perovskite solar panels, a DC voltage regulator circuit, a storage battery and a one-way full-bridge inverter circuit. The control circuit handles sun-seeking, inversion and maximum power tracking at constant voltage. Proteus simulation software is used to simulate the sun-seeking part, the inverting part, the general control unit, the keys and the display interface. The results indicate that the functions of the small-scale photovoltaic power generation system are achieved very well.
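The variable-step MPPT idea can be illustrated with a simple perturb-and-observe loop whose step size scales with the observed power gradient (a stand-in for the paper's fuzzy step-size rules; the step bounds, the gain and the quadratic PV curve are assumptions made for this sketch):

```python
def mppt_variable_step(power_at, v0=10.0, base_step=0.5, gain=0.05, iters=60):
    """Perturb-and-observe MPPT with a variable step: the step size
    scales with |dP/dV|, so tracking is fast far from the maximum
    power point and fine-grained near it. `power_at(v)` models the
    panel's power-voltage curve."""
    v, p = v0, power_at(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * base_step
        p_new = power_at(v_new)
        if p_new < p:                   # power dropped: reverse the perturbation
            direction = -direction
        slope = abs(p_new - p) / base_step
        base_step = max(0.05, min(1.0, gain * slope))   # variable step size
        v, p = v_new, p_new
    return v, p
```

A fixed-step P&O loop must trade tracking speed against steady-state oscillation; letting the step follow the power gradient is the essence of the fuzzy variable-step approach described in the abstract.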
Shruthi P, Anil Kumar K M
INTELIGENCIA ARTIFICIAL, Volume 23, pp 97-111; doi:10.4114/intartif.vol23iss66pp97-111

Automating the detection of hate speech or inappropriate text in social media and other internet platforms is gaining a lot of interest and has become a valuable research topic for both industry and academia in recent years. It is important for applications to identify disruptive content, perform sentiment analysis, and detect cyberbullying, flames, threats, and hatred towards people or particular communities or groups. Text classification is a very challenging task due to the nature and complexities of language, especially the context, micro-words, emojis, typos and sarcasm present in the text. In this paper, we propose a model with a novel approach for generating hybrid features for an effective feature representation to classify hate speech. We combine features learned by deep learning methods with semantic features like word n-grams and tweet-specific syntactic features to form hybrid feature sets. We also improve the preprocessing steps to reduce the number of missing embeddings and increase the vocabulary for efficient feature learning. We experiment with various neural networks for feature learning and machine learning models with hybrid features for classification. Our work delivers hybrid features and appropriate preprocessing techniques for an efficient classification of the standard dataset of 16k annotated hate speech tweets. The combination of a Long Short-Term Memory (LSTM) network trained on random embeddings for deep feature extraction and Logistic Regression (LR) as a classifier with the hybrid features is found to be the best model, and it outperforms the state of the art reported in the literature.
Bruno Rover Dal Prá, Roberto Navarro de Mesquita, Mário Olímpio de Menezes, Delvonei Alves de Andrade
INTELIGENCIA ARTIFICIAL, Volume 23, pp 85-96; doi:10.4114/intartif.vol23iss66pp85-96

The identification of plant nutritional stress based on visual symptoms is predominantly manual and is performed by specialists trained to identify such anomalies. In addition, this process tends to be very time-consuming, varies between cultivation areas, and is often required at several points across the property. This work proposes an image recognition system that analyzes the nutritional status of the plant to help solve these problems. The methodology uses deep learning to automate the identification and classification of nutritional stress in Brachiaria brizantha cv. Marandu. An image recognition system was built that analyzes the plant's nutritional status using digital images of its leaves. The system identifies and classifies nitrogen and potassium deficiencies. Upon receiving the pasture leaf image, after classification by a convolutional neural network (CNN), the system presents the diagnosed nutritional status. Tests carried out to identify the nutritional status of the leaves showed an accuracy of 96%. We are working to expand the image database in order to increase accuracy, aiming at training with a larger amount of information presented to the CNN and thus obtaining more expressive results.
Wei Cao, Qinan Wang, Asma Sbeih, Fha. Shibly
INTELIGENCIA ARTIFICIAL, Volume 23, pp 112-123; doi:10.4114/intartif.vol23iss66pp112-123

A smart learning environment is equipped with personal digital devices, wireless communication, learning platforms, and sensors that together provide input to artificial intelligence systems. Artificial intelligence makes decisions about regulating the physical aspects of the environment or the learning systems. These requirements may be identified by analyzing learning performance, behaviors, and the real-world and online settings in which students are situated. There are several challenges in implementing smart learning environments: high cost, connectivity (internet) issues, impaired problem-solving capacity of students, and technical challenges such as the malfunctioning of electronic gadgets. Hence, in this paper, an Artificial Intelligence based Efficient Smart Learning Framework (AI-ESLF) is proposed to overcome the challenges faced by smart learning environments. This study aims to describe the current concept of the smart learning environment based on AI applications, to examine its fundamental criteria, and to demonstrate through case studies how tests can be performed in this smart learning environment. The experimental results show that the suggested system enhances the prediction ratio for students' learning behavior when compared to other existing approaches.
Adriana Villa-Murillo, Andrés Carrión, Antonio Sozzi
INTELIGENCIA ARTIFICIAL, Volume 23, pp 9-25; doi:10.4114/intartif.vol23iss66pp9-25

We propose a methodology for the improvement of parameter design that consists of the combination of Random Forest (RF) with Genetic Algorithms (GA) in 3 phases: normalization, modelling and optimization. The first phase corresponds to the prior preparation of the data set by using normalization functions. In the second phase, we designed a modelling scheme adjusted to multiple quality characteristics, which we have called Multivariate Random Forest (MRF), for the determination of the objective function. Finally, in the third phase, we obtained the optimal combination of parameter levels by integrating properties of our modelling scheme and desirability functions in the establishment of the corresponding GA. Two illustrative cases allow us to compare and validate the virtues of our methodology versus other proposals involving Artificial Neural Networks (ANN) and Simulated Annealing (SA).
Zhongshan Chen, Juxiao Zhang, Xiaoyan Jiang, Zuojin Hu, Xue Han, Mengyang Xu, Savitha V, G.N. Vivekananda
INTELIGENCIA ARTIFICIAL, Volume 23, pp 124-137; doi:10.4114/intartif.vol23iss66pp124-137

Nowadays, predicting students' performance is one of the most significant topics for learning environments, such as universities and schools, since it leads to the development of effective mechanisms that can enhance academic outcomes and avoid failure. In Education 4.0, Artificial Intelligence (AI) can play a key role in identifying new factors in students' performance and implementing personalized learning, answering routine student questions, using learning analytics, and predictive modeling. It is a new challenge to redefine Education 4.0 to recognize creative and innovative intelligent students, and it is difficult to determine students' outcomes. Hence, in this paper, a Hybridized Deep Neural Network (HDNN) is proposed to predict student performance in Education 4.0. The proposed HDNN method is utilized to determine the dynamics that likely influence the student's performance. The deep neural network monitors, predicts, and evaluates the student's performance in an Education 4.0 environment. The findings show that the proposed HDNN method achieved better prediction accuracy when compared to other popular methods.
Gustavo Martins, Paulo Urbano, Anders Lyhne Christensen
INTELIGENCIA ARTIFICIAL, Volume 22; doi:10.4114/intartif.vol22iss64pp152-165

In evolutionary robotics role allocation studies, it is common that the role assumed by each robot is strongly associated with specific local conditions, which may compromise scalability and robustness because of the dependency on those conditions. To increase scalability, communication has been proposed as a means for robots to exchange signals that represent roles. This idea was successfully applied to evolve communication-based role allocation for a two-role task. However, it was necessary to reward signal differentiation in the fitness function, which is a serious limitation, as it does not generalize to tasks where the number of roles is unknown a priori. In this paper, we show that rewarding signal differentiation is not necessary to evolve communication-based role allocation strategies for the given task, and we improve reported scalability while requiring less a priori knowledge. Our approach to the two-role task puts fewer constraints on the evolutionary process and enhances the potential of evolving communication-based role allocation for more complex tasks. Furthermore, we conduct experiments on a three-role task where we compare two different cognitive architectures and several fitness functions, and we show how scalable controllers might be evolved.
Antonio Jiménez Márquez, Gabriel Beltrán Maza
INTELIGENCIA ARTIFICIAL, Volume 22, pp 135-142; doi:10.4114/intartif.vol22iss64pp135-142

This paper shows the results obtained from processing digitized images, taken with a smartphone, of 56 samples of crushed olives, using the gray-level co-occurrence matrix (GLCM) methodology. Appropriate values of the direction (θ) and distance (D) at which two gray-tone pixels are considered neighbors are defined in order to extract the parameters Contrast, Correlation, Energy and Homogeneity. The values of these parameters are correlated with two characteristic components of the olive mass: oil content (RGH) and water content (HUM), whose values lie in the usual ranges found during processing to obtain virgin olive oil in mills, and which contribute to generating different mechanical textures in the mass according to the HUM/RGH ratio. The results indicate significant correlations of the parameters Contrast, Energy and Homogeneity with RGH and HUM, which have made it possible to obtain, by means of multiple linear regression (MLR), mathematical equations that predict both components with high correlation coefficients, r = 0.861 and r = 0.872 for RGH and HUM respectively. These results suggest the feasibility of textural analysis using the GLCM to extract features of interest from digital images of the olive mass, quickly and non-destructively, as an aid in decision making to optimize the production process of virgin olive oil.
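The GLCM descriptors the paper correlates with oil and water content can be computed as follows (an illustrative pure-Python version for a small number of gray levels and a single (D, θ) offset; scikit-image's `graycomatrix`/`graycoprops` provide the same descriptors for full 8-bit images):

```python
def glcm_features(img, dy=0, dx=1, levels=4):
    """Build the gray-level co-occurrence matrix for offset (dy, dx)
    (dy=0, dx=1 is D=1 at theta=0 degrees), normalize it, and compute
    three of the Haralick descriptors used in the paper:
    contrast, energy and homogeneity."""
    glcm = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    pairs = 0
    for i in range(h):
        for j in range(w):
            i2, j2 = i + dy, j + dx
            if 0 <= i2 < h and 0 <= j2 < w:
                glcm[img[i][j]][img[i2][j2]] += 1
                pairs += 1
    for a in range(levels):
        for b in range(levels):
            glcm[a][b] /= pairs                     # normalize to probabilities
    contrast = sum((a - b) ** 2 * glcm[a][b]
                   for a in range(levels) for b in range(levels))
    energy = sum(p * p for row in glcm for p in row)
    homogeneity = sum(glcm[a][b] / (1 + abs(a - b))
                      for a in range(levels) for b in range(levels))
    return {"contrast": contrast, "energy": energy, "homogeneity": homogeneity}
```

A perfectly uniform patch yields contrast 0 and energy 1, while alternating gray tones drive contrast up and homogeneity down, which is the texture signal the MLR models then regress against RGH and HUM.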
Marco Javier Suárez-Barón, José Fdo. López, Carlos Enrique Montenegro Marin, Franklin Guillermo Montenegro-Marin
INTELIGENCIA ARTIFICIAL, Volume 22; doi:10.4114/intartif.vol22iss64pp143-151

This work describes the design of a computational model focused on organizational learning in R&D centers. We explain the first stage of this architecture, which enables the extraction, retrieval and integration of lessons learned in the areas of innovation and technological development that have been registered by R&D researchers and personnel in research-oriented corporate social networks. In addition, this article provides details about the design and construction of an organizational memory as a computational learning mechanism within an organization. Finally, the article discusses the management of information extraction and retrieval as a technological knowledge management mechanism, with the goal of consolidating the Organizational Memory.
Mohamed Amine Nemmich, Fatima Debbat, Mohamed Slimane
INTELIGENCIA ARTIFICIAL, Volume 22; doi:10.4114/intartif.vol22iss64pp123-134

In this paper, we propose a novel efficient model based on the Bees Algorithm (BA) for the Resource-Constrained Project Scheduling Problem (RCPSP). The studied RCPSP is an NP-hard combinatorial optimization problem involving resource, precedence, and temporal constraints, and it has many applications. The main objective is to minimize the expected makespan of the project. The proposed model, named Enhanced Discrete Bees Algorithm (EDBA), iteratively solves the RCPSP by utilizing the intelligent foraging behaviors of honey bees. A potential solution is represented by a multidimensional bee, using the activity list (AL) representation. This projection involves using the Serial Schedule Generation Scheme (SSGS) as the decoding procedure to construct active schedules. In addition, the conventional local search of the basic BA is replaced by a neighboring technique based on the swap operator, which takes into account the specificity of the solution space of project scheduling problems and reduces the number of parameters to be tuned. The proposed EDBA is tested on well-known benchmark instance sets from the Project Scheduling Problem Library (PSPLIB) and compared with other approaches from the literature. The promising computational results reveal the effectiveness of the proposed approach for solving RCPSP problems of various scales.
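A swap-based neighborhood over activity lists can be sketched as follows (an illustrative adjacent-swap version that preserves precedence feasibility; EDBA's actual operator and its interaction with the SSGS decoder may differ):

```python
def swap_neighbors(activity_list, preds):
    """Generate precedence-feasible neighbors of an activity list by
    swapping adjacent activities: the swap (i, i+1) is allowed only
    when activity_list[i] is not a predecessor of activity_list[i+1],
    so every neighbor remains a valid precedence-ordered list that
    an SSGS decoder can turn into an active schedule."""
    neighbors = []
    for i in range(len(activity_list) - 1):
        a, b = activity_list[i], activity_list[i + 1]
        if a not in preds.get(b, set()):   # swapping would violate a -> b
            n = activity_list[:]
            n[i], n[i + 1] = b, a
            neighbors.append(n)
    return neighbors
```

For adjacent positions, checking direct predecessors is enough: any transitive predecessor of the later activity would have to sit between the two swapped positions, which is impossible.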
Hanane Allioui, Mohamed Sadgal, Aziz El Fazziki
INTELIGENCIA ARTIFICIAL, Volume 22, pp 102-122; doi:10.4114/intartif.vol22iss64pp102-122

It is generally accepted that segmentation is a critical problem that influences subsequent tasks during image processing. Often, the proposed approaches provide effectiveness for a limited type of images with a significant lack of a global solution. The difficulty of segmentation lies in the complexity of providing a global solution with acceptable accuracy within a reasonable time. To overcome this problem, some solutions combined several methods. This paper presents a method for segmenting 2D/3D images by merging regions and solving problems encountered during the process using a multi-agent system (MAS). We are using the strengths of MAS by opting for a compromise that satisfies segmentation by agents’ acts. Regions with high similarity are merged immediately, while the others with low similarity are ignored. The remaining ones, with ambiguous similarity, are solved in a coalition by negotiation. In our system, the agents make decisions according to the utility functions adopting the Pareto optimal in Game theory. Unlike hierarchical merging methods, MAS performs a hypothetical merger planning then negotiates the agreements' subsets to merge all regions at once.
Omar Andres Carmona Cortes, Leticia De Fátima Corrêa Costa, João Pedro Augusto Costa
INTELIGENCIA ARTIFICIAL, Volume 22, pp 85-101; doi:10.4114/intartif.vol22iss64pp85-101

This article describes a new adaptive metaheuristic based on a vector evaluated approach for solving multiobjective problems, which we call the Vector Evaluated Meta-Heuristic. Its main idea is to evolve two populations independently, exchanging information between them, i.e., the first population evolves according to the best individual of the second population and vice versa. The choice of which algorithm is executed in each generation is made stochastically among three evolutionary algorithms well known in the literature: PSO, DE, and ABC. In order to evaluate the results, we used an established metric for multiobjective evolutionary algorithms called hypervolume. Tests have shown that the adaptive metaheuristic reaches the best hypervolumes in three of the ZDT benchmark functions and also in two portfolios of a real-world problem called portfolio investment optimization. The results show that our algorithm improves the Pareto curve when compared to the hypervolumes of each heuristic separately.
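For reference, the hypervolume metric used for such comparisons can be computed for a two-objective minimization front as follows. This is a minimal sketch; the front and reference point are made-up examples.

```python
# 2D hypervolume (minimization): area dominated by a Pareto front,
# bounded by a reference point that every front point dominates.
def hypervolume_2d(front, ref):
    pts = sorted(front)                  # ascending in the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                 # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

A larger hypervolume indicates a front that is closer to the ideal point and better spread.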
JanapatyI Naga Muneiah, Ch D V Subbarao
INTELIGENCIA ARTIFICIAL, Volume 22, pp 63-84; doi:10.4114/intartif.vol22iss64pp63-84

Enterprises often classify their customers based on degree of profitability in decreasing order, like C1, C2, ..., Cn. Generally, customers of class Cn yield zero profit since they migrate to a competitor. They are called attritors (or churners) and are the prime reason for the huge losses of the enterprises. Customers of the intermediate classes, in turn, are hesitant, offer insignificant amounts of profit in different degrees, and introduce uncertainty. Various data mining models like decision trees, which are built using the customers' profiles, are limited to classifying customers as attritors or non-attritors only, and do not provide profitable actionable knowledge. In this paper, we present an efficient algorithm for the automatic extraction of profit-maximizing knowledge for business applications with multi-class customers by postprocessing the probability estimation decision tree (PET). When the PET predicts a customer as belonging to one of the less profitable classes, our algorithm suggests cost-sensitive actions to move her/him to the highest possible profitable status. In the proposed novel approach, the PET is represented in compressed form as a bit-pattern matrix, and the postprocessing task is performed on the bit patterns by applying bitwise AND operations. The computational performance of the proposed method is strong due to the employment of effective data structures. Substantial experiments conducted on UCI datasets, real mobile phone service data and other benchmark datasets demonstrate that the proposed method remarkably outperforms the state-of-the-art methods.
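The bit-pattern idea can be illustrated roughly as follows: each tree path becomes a bitmask over binary attribute tests, and checking whether a customer profile satisfies a path reduces to a bitwise AND. The toy encoding below is invented for illustration and is not the paper's exact data structure.

```python
# Encode the set of satisfied attribute tests on a tree path as a bitmask.
def path_mask(tests):
    mask = 0
    for t in tests:
        mask |= 1 << t
    return mask

# Among tree paths given as (mask, profit) pairs, pick the most profitable
# one whose tests are all satisfied by the customer profile.
def best_action(profile_tests, paths):
    profile = path_mask(profile_tests)
    feasible = [(profit, mask) for mask, profit in paths
                if profile & mask == mask]       # bitwise AND check
    return max(feasible, default=None)
```

The AND check makes path matching a constant-time word operation instead of a tree traversal.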
Mariela Morveli Espinoza, Juan Carlos Nieves, Ayslan Possebom, Cesar Augusto Tacla
INTELIGENCIA ARTIFICIAL, Volume 22, pp 47-62; doi:10.4114/intartif.vol22iss64pp47-62

By considering rational agents, we focus on the problem of selecting goals out of a set of incompatible ones. We consider three forms of incompatibility introduced by Castelfranchi and Paglieri, namely the terminal, the instrumental (or resource-based), and the superfluity. We represent the agent's plans by means of structured arguments whose premises are pervaded with uncertainty. We measure the strength of these arguments in order to determine the set of compatible goals. We propose two novel ways of calculating the strength of these arguments, depending on the kind of incompatibility that exists between them. The first one, the logical strength value, is denoted by a three-dimensional vector calculated from a probabilistic interval associated with each argument. The vector represents the precision of the interval, its location, and the combination of precision and location. This type of representation and treatment of the strength of a structured argument has not been defined before in the state of the art. The second way of calculating the strength of an argument is based on the cost of the plans (regarding the necessary resources) and the preference of the goals associated with the plans. Considering our novel approach for measuring the strength of structured arguments, we propose a semantics for the selection of plans and goals that is based on Dung's abstract argumentation theory. Finally, we make a theoretical evaluation of our proposal.
INTELIGENCIA ARTIFICIAL, Volume 22, pp 36-46; doi:10.4114/intartif.vol22iss64pp36-46

The performance of classification algorithms can be enhanced by selecting many representative points for inclusion in the training sample. In this paper, a new border and rare biased sampling (BRBS) scheme is proposed that assigns each point in the dataset an importance factor. The importance factor of border points and rare points (i.e., points belonging to rare classes) is higher than that of other points. Points are then selected for the training sample depending on these factors; including them in the training sample enhances the classifiers' experience. The results of experiments on 10 UCI machine learning repository datasets show that the BRBS algorithm outperforms many sampling algorithms and enhances the performance of several classification algorithms by about 8%. BRBS is designed to be easy to configure, to cover the whole point space, and to generate a unique sample every time it is executed.
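The importance-factor idea can be sketched as follows, under one plausible reading in which rarity is measured by inverse class frequency and borderness by the class of the nearest neighbour. The exact BRBS factors are not reproduced here; the data and the deterministic top-k selection are illustrative only.

```python
from collections import Counter

# Importance factors: rare-class points get inverse-frequency weight, and
# border points (nearest neighbour of another class) get a further boost.
def brbs_weights(points, labels):
    counts = Counter(labels)
    weights = []
    for i, (p, y) in enumerate(zip(points, labels)):
        w = len(labels) / counts[y]                      # rare-class bias
        nn = min((j for j in range(len(points)) if j != i),
                 key=lambda j: sum((a - b) ** 2 for a, b in zip(p, points[j])))
        if labels[nn] != y:                              # border bias
            w *= 2.0
        weights.append(w)
    return weights

# Deterministic variant: keep the k highest-importance points.
def brbs_select(points, labels, k):
    w = brbs_weights(points, labels)
    return sorted(range(len(points)), key=lambda i: (-w[i], i))[:k]
```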
José Antonio Alves Menezes, Giordano Cabral, Bruno Gomes, Paulo Pereira
INTELIGENCIA ARTIFICIAL, Volume 22, pp 14-35; doi:10.4114/intartif.vol22iss64pp14-35

Choosing audio features has long been an interesting theme for audio classification experts, who regard this process as probably the most important effort in solving the classification problem. In this sense, there are feature learning techniques that generate new features better suited to the classification model than conventional features. These techniques generally do not depend on domain knowledge and can be applied to various types of raw data, whereas less agnostic approaches learn a type of knowledge restricted to the area studied; audio data requires a specific kind of knowledge. Many techniques seek to improve the performance of this new generation of acoustic features, among which stands out the use of evolutionary algorithms to explore the space of analytical functions. However, these efforts still leave room for improvement. The purpose of this work is to propose and evaluate a multi-objective alternative for the exploitation of analytical audio features. In addition, experiments were arranged to validate the method, with the help of a computational prototype implementing the proposed solution. The results confirm the effectiveness of the model, while showing that there is still opportunity for improvement in the chosen segment.
INTELIGENCIA ARTIFICIAL, Volume 22, pp 1-13; doi:10.4114/intartif.vol22iss64pp1-13

Nowadays, many approaches for Sentiment Analysis (SA) rely on affective lexicons to identify the emotions transmitted in opinions. However, most of these lexicons do not consider that a word can express different sentiments in different predication domains, introducing errors into the sentiment inference. Due to this problem, we present a model based on a context graph which can be used for building domain-specific sentiment lexicons (DL: Dynamic Lexicons) by propagating the valence of a few seed words. For different corpora, we compare the results of a simple rule-based sentiment classifier using the corresponding DL with the results obtained using a general affective lexicon. For most corpora containing domain-specific opinions, the DL reaches better results than the general lexicon.
Mariano Maisonnave, Ana Gabriela Maguitman
INTELIGENCIA ARTIFICIAL, Volume 22, pp 61-80; doi:10.4114/intartif.vol22iss63pp61-80

Successful modeling and prediction depend on effective methods for the extraction of domain-relevant variables. This paper proposes a methodology for identifying domain-specific terms. The proposed methodology relies on a collection of documents labeled as relevant or irrelevant to the domain under analysis. Based on the labeled document collection, we propose a supervised technique that weights terms based on their descriptive and discriminating power. Finally, the descriptive and discriminating values are combined into a general measure that, through an adjustable parameter, allows independently favoring different aspects of retrieval, such as maximizing precision or recall, or achieving a balance between the two. The proposed technique is applied to the economic domain and is empirically evaluated through a human-subject experiment involving experts and non-experts in Economics. It is also evaluated as a term-weighting technique for query-term selection, showing promising results. We finally illustrate the applicability of the proposed technique to diverse problems such as building prediction models, supporting knowledge modeling, and achieving total recall.
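One plausible instantiation of such a combined measure is a weighted harmonic mean of the descriptive and discriminating values, with beta as the adjustable parameter. The sketch below uses simple document-frequency estimates and is only illustrative of the idea, not the paper's exact formulas.

```python
# Fraction of in-domain documents containing the term.
def descriptive_power(term, relevant):
    return sum(term in d for d in relevant) / len(relevant)

# How much more often the term occurs in-domain than out-of-domain.
def discriminating_power(term, relevant, irrelevant):
    r = sum(term in d for d in relevant) / len(relevant)
    i = sum(term in d for d in irrelevant) / len(irrelevant)
    return max(r - i, 0.0)

# Weighted harmonic combination; beta tunes the precision/recall trade-off.
def term_score(term, relevant, irrelevant, beta=1.0):
    d = descriptive_power(term, relevant)
    s = discriminating_power(term, relevant, irrelevant)
    if d == 0.0 or s == 0.0:
        return 0.0
    return (1 + beta ** 2) * d * s / (beta ** 2 * d + s)
```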
Raymundo Quilez Forradellas
INTELIGENCIA ARTIFICIAL, Volume 22, pp 16-38; doi:10.4114/intartif.vol22iss63pp16-38

Visual depth recognition through Stereo Matching is an active field of research due to its numerous applications in robotics, autonomous driving, user interfaces, etc. Multiple techniques have been developed in the last two decades to achieve accurate disparity maps in a short time. With the arrival of Deep Learning architectures, different fields of Artificial Vision, but mainly image recognition, have achieved great progress due to their easier training capabilities and reduced number of parameters. This type of network drew the attention of Stereo Matching researchers, who successfully applied the same concept to generate disparity maps. Even though multiple approaches have been taken towards minimizing execution time and errors in the results, most of the time the number of parameters of the networks is neither taken into consideration nor optimized. Inspired by the Squeeze-Nets developed for image recognition, we developed a Stereo Matching Squeeze neural network architecture capable of providing disparity maps with a highly reduced network size, without a significant impact on quality and execution time compared with state-of-the-art architectures. In addition, with the purpose of improving the quality of the solution and getting closer to real time, an extra refinement module is proposed and several tests are performed using different input size reductions.
Lamartine Nogueira Frutuoso Guimarães
INTELIGENCIA ARTIFICIAL, Volume 22, pp 162-195; doi:10.4114/intartif.vol22iss63pp162-195

Nowadays, there is a remarkable world trend of employing UAVs and drones for diverse applications. The main reasons are that they may cost a fraction of a manned aircraft and avoid exposing human lives to risk. Nevertheless, they depend on positioning systems that may be vulnerable, so it is necessary to ensure that these systems are as accurate as possible in order to improve navigation. To this end, conventional Data Fusion techniques can be employed, but their computational cost may be prohibitive due to the low payload of some UAVs. This paper proposes a Multisensor Data Fusion application based on Hybrid Adaptive Computational Intelligence - the cascaded use of the Fuzzy C-Means Clustering (FCM) and Adaptive-Network-Based Fuzzy Inference System (ANFIS) algorithms - which has been shown to improve the accuracy of current positioning estimation systems for real-time UAV autonomous navigation. In addition, the proposed methodology outperformed two other Computational Intelligence techniques.
Dan Ezequiel Kröhling, Omar Chiotti, Ernesto Martínez
INTELIGENCIA ARTIFICIAL, Volume 22, pp 135-149; doi:10.4114/intartif.vol22iss63pp135-149

Automated negotiation between artificial agents is essential to deploy Cognitive Computing and the Internet of Things. The behavior of a negotiation agent depends significantly on the influence of environmental conditions or contextual variables, since they affect not only a given agent's preferences and strategies, but also those of other agents. Despite this, the existing literature on automated negotiation is scarce on how to properly account for the effect of context-relevant variables in learning and evolving strategies. In this paper, a novel context-driven representation for automated negotiation is introduced. Also, we propose a simple negotiation agent that queries available information from its environment, internally models contextual variables, and learns how to take advantage of this knowledge by playing against itself using reinforcement learning. Through a set of episodes against other negotiation agents from the existing literature, we show with our context-aware agent that it makes no sense to negotiate without taking context-relevant variables into account. Our context-aware negotiation agent has been implemented in the GENIUS environment, and the results obtained are significant and quite revealing.
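The self-play learning step can be illustrated with a plain tabular Q-learning update in which the contextual variables are folded into the state tuple. This is a generic sketch with invented state names, not the agent's actual implementation in GENIUS.

```python
from collections import defaultdict

# One tabular Q-learning step; the state tuple carries the contextual
# variables observed from the environment alongside the negotiation state.
def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.95):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    return Q[(state, action)]
```

In self-play, the same table is updated from both sides of the negotiation, so the agent learns how context shifts both its own and its opponent's best responses.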
INTELIGENCIA ARTIFICIAL, Volume 22, pp 150-161; doi:10.4114/intartif.vol22iss63pp150-161

For proper attitude control of spacecraft, conventional optimal Linear Quadratic (LQ) controllers are designed via trial-and-error selection of the weighting matrices. This time-consuming method is inefficient and usually results in a high-order, complex controller. Therefore, this work proposes a genetic algorithm (GA) for the search problem of the attitude controller gains of a satellite launcher. The GA's fitness function considers control features such as eigenstructure, control goals and constraints. According to simulation results, the search for controller parameters with evolutionary algorithms was faster than usual approaches, and the designed controller met all the specifications with satisfactory time responses. These results could improve engineering tasks by speeding up the design process and reducing costs.
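The gain-search loop can be sketched as a tiny real-coded GA. The fitness function in the usage below is a stand-in quadratic, whereas the paper's fitness encodes eigenstructure, control goals and constraints; all names here are illustrative.

```python
import random

# Tiny real-coded GA: elitist selection, midpoint crossover, and Gaussian
# mutation clipped to the gain bounds.
def ga_tune(fitness, bounds, pop_size=20, generations=60, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # minimize fitness
        elite = pop[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # midpoint crossover
            i = rng.randrange(dim)                        # mutate one gain
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```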
INTELIGENCIA ARTIFICIAL, Volume 22, pp 121-134; doi:10.4114/intartif.vol22iss63pp121-134

In this paper, we use data from the Microsoft Kinect sensor, which processes the captured image of a person by extracting the joint information in every frame. We then propose the creation of an image derived from all the sequential frames of a gesture movement, which facilitates training a convolutional neural network (CNN). We trained a CNN using two strategies, combined training and individual training, experimenting on the MSRC-12 dataset and obtaining accuracy rates of 86.67% with combined training and 90.78% with individual training. The trained neural network was then used to classify data obtained from the Kinect with a person, obtaining accuracy rates of 72.08% with combined training and 81.25% with individual training. Finally, we use the system to send commands to a mobile robot in order to control it.
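The frame-stacking step can be sketched as follows: flattening each frame's joint coordinates into one row and stacking the rows over time yields the 2D array fed to the CNN. The joint values below are invented for illustration.

```python
# Flatten each frame's (x, y, z) joint coordinates into one row; stacking
# the rows over time yields the 2D "image" representing the whole gesture.
def gesture_image(frames):
    return [[c for joint in frame for c in joint] for frame in frames]
```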
Antonela Tommasel, Juan Manuel Rodriguez, Daniela Godoy
INTELIGENCIA ARTIFICIAL, Volume 22, pp 81-100; doi:10.4114/intartif.vol22iss63pp81-100

With the widespread use of modern technologies and social media networks, a new form of bullying occurring anytime and anywhere has emerged. This new phenomenon, known as cyberaggression or cyberbullying, refers to aggressive and intentional acts aimed at repeatedly causing harm to another person through rude, insulting, offensive, teasing or demoralising comments on online social media. As these aggressions represent a threatening experience for Internet users, especially kids and teens who are still shaping their identities, social relations and well-being, it is crucial to understand how cyberbullying occurs in order to prevent it from escalating. Considering the massive amount of information on the Web, the development of intelligent techniques for automatically detecting harmful content is gaining importance, allowing the monitoring of large-scale social media and the early detection of unwanted and aggressive situations. Even though several approaches have been developed over the last few years, based both on traditional and deep learning techniques, several concerns arise over the duplication of research and the difficulty of comparing results. Moreover, there is no agreement regarding either which type of technique is better suited for the task, or the type of features on which learning should be based. The goal of this work is to shed some light on the effects of learning paradigms and feature engineering approaches for detecting aggressions in social media texts. In this context, this work provides an evaluation of diverse traditional and deep learning techniques based on diverse sets of features, across multiple social media sites.
INTELIGENCIA ARTIFICIAL, Volume 22, pp 1-15; doi:10.4114/intartif.vol22iss63pp1-15

We carry out a detailed analysis of the effects of different dynamic variable and value ordering heuristics on the search space of Sudoku when the encoding method and the filtering algorithm are fixed. Our study starts by examining lexicographical variable and value ordering and evaluates different combinations of dynamic variable and value ordering heuristics. We eventually build up to a dynamic variable ordering heuristic that has two rounds of tie-breakers, where the second tie-breaker is a dynamic value ordering heuristic. We show that our method that uses this interlinked heuristic outperforms the previously studied ones with the same experimental setup. Overall, we conclude that constructing insightful dynamic variable ordering heuristics that also utilize a dynamic value ordering heuristic in their decision making process could drastically improve the search effort for some constraint satisfaction problems.
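A minimal sketch of an interlinked dynamic ordering of this kind, assuming (as one common combination) minimum-remaining-values with a degree tie-breaker for variables, and a least-constraining-value ordering for values; the paper's exact heuristics are not reproduced here.

```python
# Dynamic variable ordering: smallest remaining domain first (MRV), then
# higher degree, then lexicographic order as the final tie-breaker.
def select_variable(domains, unassigned, degree):
    return min(unassigned, key=lambda v: (len(domains[v]), -degree[v], v))

# Dynamic value ordering: least-constraining value first, given a callable
# that counts the conflicts a value would cause.
def order_values(domain, conflicts):
    return sorted(domain, key=lambda val: (conflicts(val), val))
```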
INTELIGENCIA ARTIFICIAL, Volume 21, pp 134-144; doi:10.4114/intartif.vol21iss62pp134-144

Artificial Neural Networks (ANNs) have continued to be efficient models in solving classification problems. In this paper, we explore the use of an ANN that can accurately classify whether Filipino call center agents’ pronunciations are neutral or not based on their employer’s standards. Isolated utterances of the ten most commonly used words in the call center were recorded from eleven agents creating a dataset of 110 utterances. Two learning specialists were consulted to establish ground truths and Cohen’s Kappa was computed as 0.82, validating the reliability of the dataset. The first thirteen Mel-Frequency Cepstral Coefficients (MFCCs) were then extracted from each word and an ANN was trained with Ten-fold Stratified Cross Validation. Experimental results on the model recorded a classification accuracy of 89.60% supported by an overall F-Score of 0.92.
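The ten-fold stratified split used in the evaluation can be sketched as follows; round-robin fold assignment within each class is one common way to preserve class proportions, and the labels below are invented.

```python
from collections import defaultdict

# Assign sample indices to k folds, preserving class proportions by
# distributing each class's samples round-robin across the folds.
def stratified_folds(labels, k=10):
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    return [sorted(f) for f in folds]
```

Each fold then serves once as the test set while the remaining folds train the ANN.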
INTELIGENCIA ARTIFICIAL, Volume 21, pp 114-133; doi:10.4114/intartif.vol22iss63pp114-133

This paper presents a novel learning algorithm for fuzzy logic neuron based on neural networks and fuzzy systems able to generate accurate and transparent models. The learning algorithm is based on ideas from Extreme Learning Machine [36], to achieve a low time complexity, and regularization theory, resulting in sparse and accurate models. A compact set of incomplete fuzzy rules can be extracted from the resulting network topology. Experiments considering regression problems are detailed. Results suggest the proposed approach as a promising alternative for pattern recognition with a good accuracy and some level of interpretability.
Christian Muise
INTELIGENCIA ARTIFICIAL, Volume 21, pp 67-74; doi:10.4114/intartif.vol21iss62pp67-74

Dead-end detection is a key challenge in automated planning, and it is rapidly growing in popularity. Effective dead-end detection techniques can have a large impact on the strength of a planner, and so the effective computation of dead-ends is central to many planning approaches. One of the better understood techniques for detecting dead-ends is to focus on the delete relaxation of a planning problem, where dead-end detection is a polynomial-time operation. In this work, we provide a logical characterization for not just a single dead-end, but for every delete-relaxed dead-end in a planning problem. With a logical representation in hand, one could compile the representation into a form amenable to effective reasoning. We lay the groundwork for this larger vision and provide a preliminary evaluation to this end.
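The polynomial-time delete-relaxed check underlying this line of work can be sketched as a reachability fixpoint over add effects. This is a textbook illustration of the relaxation, not the paper's logical characterization.

```python
# A state is a delete-relaxed dead-end iff the goal is unreachable when
# delete effects are ignored; relaxed reachability is a simple fixpoint.
def delete_relaxed_dead_end(init, goal, actions):
    facts = set(init)
    changed = True
    while changed:
        changed = False
        for pre, add in actions:        # actions as (preconditions, add effects)
            if set(pre) <= facts and not set(add) <= facts:
                facts |= set(add)
                changed = True
    return not set(goal) <= facts
```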
Frédéric Maris, Pierre Régnier, Maël Valais
INTELIGENCIA ARTIFICIAL, Volume 21, pp 103-113; doi:10.4114/intartif.vol21iss62pp103-114

Considerable improvements in the technology and performance of SAT solvers have made their use possible for solving various problems in artificial intelligence, among them plan generation. Recently, promising Quantified Boolean Formula (QBF) solvers have been developed, and we may expect that in the near future they will become as efficient as SAT solvers. It is therefore interesting to use the QBF language, which allows us to produce more compact encodings. We present in this article a translation from STRIPS planning problems into quantified propositional formulas. We introduce two new Compact Tree Encodings: CTE-EFA, based on explanatory frame axioms, and CTE-OPEN, based on causal links. We then compare both of them to CTE-NOOP, based on no-op actions, proposed in [Cashmore et al. 2012]. In terms of execution time over benchmark problems, CTE-EFA and CTE-OPEN always performed better than CTE-NOOP.
Anastasios Alexiadis, Ioannis Refanidis
INTELIGENCIA ARTIFICIAL, Volume 21, pp 53-66; doi:10.4114/intartif.vol21iss62pp53-66

Automated meeting scheduling is the task of reaching an agreement on a time slot to schedule a new meeting, taking into account the participants' preferences over various aspects of the problem. Such a negotiation is commonly performed in a non-automated manner, that is, the users decide whether they can reschedule existing individual activities and, in some cases, already scheduled meetings in order to accommodate the new meeting request in a particular time slot, by inspecting their schedules. In this work, we take advantage of SelfPlanner, an automated system that employs greedy stochastic optimization algorithms to schedule individual activities under a rich model of preferences and constraints, and we extend that work to accommodate meetings. For each new meeting request, participants decide whether they can accommodate the meeting in a particular time slot by employing SelfPlanner's underlying algorithms to automatically reschedule existing individual activities. Time slots are prioritized in terms of the number of users that need to reschedule existing activities. An agreement is reached as soon as all agents can schedule the meeting at a particular time slot, without any of them experiencing an overall utility loss, that is, taking into account also the utility gain from the meeting. This dynamic multi-agent meeting scheduling approach has been tested on a variety of test problems with very promising results.
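The slot-prioritization and agreement test described above can be sketched as follows, with invented participant and utility data; SelfPlanner's actual rescheduling machinery is of course much richer.

```python
# Order candidate slots by how many participants must reschedule existing
# activities to attend; fewest reschedules first, slot name breaks ties.
def prioritize_slots(slots, needs_reschedule):
    return sorted(slots, key=lambda s: (len(needs_reschedule[s]), s))

# First slot, in priority order, where no participant loses utility overall
# (the utility gain from the meeting itself is assumed already included).
def agreed_slot(slots, needs_reschedule, utility_change):
    for s in prioritize_slots(slots, needs_reschedule):
        if all(utility_change[s][p] >= 0 for p in utility_change[s]):
            return s
    return None
```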
Filip Dvorak, Maxwell Micali, Mathias Mathieug
INTELIGENCIA ARTIFICIAL, Volume 21, pp 40-52; doi:10.4114/intartif.vol21iss62pp40-52

Recent advances in additive manufacturing (AM) and 3D printing technologies have led to significant growth in the use of additive manufacturing in industry, which allows for the physical realization of previously difficult to manufacture designs. However, in certain cases AM can also involve higher production costs and unique in-process physical complications, motivating the need to solve new optimization challenges. Optimization for additive manufacturing is relevant for and involves multiple fields including mechanical engineering, materials science, operations research, and production engineering, and interdisciplinary interactions must be accounted for in the optimization framework. In this paper we investigate a problem in which a set of parts with unique configurations and deadlines must be printed by a set of machines while minimizing time and satisfying deadlines, bringing together bin packing, nesting (two-dimensional bin packing), job shop scheduling, and constraints satisfaction. We first describe the real-world industrial motivation for solving the problem. Subsequently, we encapsulate the problem within constraints and graph theory, create a formal model of the problem, discuss nesting as a subproblem, and describe the search algorithm. Finally, we present the datasets, the experimental approach, and the preliminary results.
INTELIGENCIA ARTIFICIAL, Volume 21, pp 91-102; doi:10.4114/intartif.vol21iss62pp91-102

Deep Learning has been successfully applied in hard-to-solve areas, such as image recognition and audio classification. However, Deep Learning has not yet reached the same performance when employed on textual data, including Opinion Mining. In models that implement a deep architecture, Deep Learning is characterized by the automatic feature selection step. The impact of prior data refinement in the pre-processing step, before the application of Deep Learning, is investigated for identifying opinion polarity. This refinement includes the use of a classical treatment of textual content and a popular feature selection technique. The results of the experiments surpass those of the current literature on the application of Deep Belief Networks to opinion classification. In addition to surpassing those results, their presentation is broader than in related works, considering the variation of parameter variables. We show that combining feature selection with a basic preprocessing step, aiming to increase data quality, can achieve promising results with a Deep Belief Network implementation.
, Susanne Biundo
INTELIGENCIA ARTIFICIAL, Volume 21, pp 75-90; doi:10.4114/intartif.vol21iss62pp75-90

Linear temporal logic (LTL) provides expressive means to specify temporally extended goals as well as preferences. Recent research has focused on compilation techniques, i.e., methods to alter the domain ensuring that every solution adheres to the temporally extended goals. This requires either new actions or a construction that is exponential in the size of the formula. A translation into boolean satisfiability (SAT), on the other hand, requires neither. So far only one such encoding exists, which is based on the parallel $\exists$-step encoding for classical planning. We show a connection between it and recently developed compilation techniques for LTL, which may be exploited in the future. The major drawback of the encoding is that it is limited to LTL without the X operator. We show how to integrate X and describe two new encodings, which allow for more parallelism than the original encoding. An empirical evaluation shows that the new encodings outperform the current state-of-the-art encoding.
Thomas M Roehr
INTELIGENCIA ARTIFICIAL, Volume 21, pp 25-39; doi:10.4114/intartif.vol21iss62pp25-39

The application of reconfigurable multi-robot systems introduces additional degrees of freedom for designing robotic missions compared to classical multi-robot systems. To allow for autonomous operation of such systems, planning approaches have to be investigated that can not only cope with the combinatorial challenge arising from the increased flexibility of modular systems, but also exploit this flexibility to improve, for example, the safety of operation. While the problem originates from the domain of robotics, it is of a general nature and significantly intersects with operations research. This paper suggests a constraint-based mission planning approach, and presents a set of revised definitions for reconfigurable multi-robot systems, including the representation of the planning problem using spatially and temporally qualified resource constraints. Planning is performed using a multi-stage approach and a combined use of knowledge-based reasoning, constraint-based programming and integer linear programming. The paper concludes with an illustration of the solution of a planned example mission.
Jorge E. Camargo, Vladimir Vargas-Calderon, Nelson Vargas, Liliana Calderón-Benavides
INTELIGENCIA ARTIFICIAL, Volume 21, pp 1-12; doi:10.4114/intartif.vol21iss62pp1-12

With the purpose of classifying text based on its sentiment polarity (positive or negative), we proposed an extension of a 68,000-tweet corpus through the inclusion of word definitions from the dictionary of the Real Academia Española de la Lengua (RAE). A set of 28,000 combinations of six Word2Vec and support vector machine parameters was considered in order to evaluate how the inclusion of the RAE dictionary definitions would affect classification performance. We found that such a corpus extension significantly improves classification accuracy. Therefore, we conclude that the inclusion of the RAE dictionary enriches the semantic relations learned by Word2Vec, allowing better classification accuracy.